
Every time Big Tech dips its fingers into the hardest problems in society, it messes things up big time, says DeepMind Professor Neil Lawrence. (Photo by Dan Kitwood/Getty Images)


“Computers bring interesting solutions, and there are many areas where we can deploy them,” says the DeepMind Professor of Machine Learning at the University of Cambridge, Neil Lawrence. And then he adds:

“But the people who are in control of the deployment of these things are perhaps the least socially intelligent people we have on the planet.”

In my conversation with Lawrence, he offered four pieces of advice on how to save humanity from the big tech companies, which he says have never delivered anything toward solving the hardest problems in society — “in fact, every time they dip their fingers into it, they mess it up big time.”

Like other international experts, Lawrence is concerned that Big Tech is hijacking democratic processes. He says that repairing the damage is easier said than done. Especially since the only people who can do it have been undermined by the very tech companies they are now expected to regulate.

“Oh my God”, Lawrence says when talking about people who work in the public sector. “They are left with the hardest problems in society: health, social security, education, defense. Their confidence in who they are and what they are doing is being undermined. And now we’re turning around and saying, ‘Well, fix it!’”

Realize Where We Went Wrong With Tech

The best tech analyst I have ever come across is the German philosopher Martin Heidegger. In 1954, he predicted that unless we get a better grip on what he called the essence of technology, we will not only lose touch with technology, we will also lose touch with reality, and eventually we will lose touch with ourselves.

Basically, he said three things would happen:

● First, we think technology can solve problems it cannot solve

● Then we forget how to distinguish between true and false, and

● Finally, we stop trusting our own ability to think

Looking at society 70 years later, Lawrence agrees that Heidegger’s prediction was impressively accurate. But he thinks it’s important to distinguish between the people who develop the technology and the people who deploy it.

Drawing on a variant of The Sorcerer’s Apprentice, he says that the technology is developed by people like himself and his academic colleagues, “who actually have a quite deep understanding of it and what its pitfalls might be.”

“But then ‘the apprentice’ are these big tech companies, who read the spell book and go ‘Oh, this is the recipe for achieving this thing that they worked out the complexity of’. And then they cast the spell to do their own chores.”

Neil Lawrence says tech CEOs’ “I’ll take the profit, you tell me the rules” strategy is mind-blowingly shocking. Photo: Meta CEO Mark Zuckerberg testifies in the Rayburn House Office Building on Capitol Hill, Washington, DC, October 23, 2019. (Photo by Chip Somodevilla/Getty Images)


Using the example of Meta’s Mark Zuckerberg, Lawrence says tech CEOs’ call for regulation is completely misguided in terms of how society works. “It’s like, ‘I’ll take the profit, you tell me the rules.'” Although he calls this strategy mind-blowingly shocking, he emphasizes that it does not come from any evil nature. “It’s just the naivety of people who have no idea how society works.”

Respect How AI Regulation Works

What Zuckerberg and others don’t understand is that ‘the sorcerer’ in our story is Karl Popper’s open society. This means that there is not one powerful person or institution that can go back and undo the mess that the careless apprentice has caused.

“The way that we regulate these things is distributed across different experts with different capabilities. And the mechanism through which we do that is information technology — historically the book and the printed word.”

However, with the casting of the social media spell, and now the AI spell, the experts and the mechanisms they use to regulate are undermined.

The consequence is, as Heidegger predicted, that we are losing touch with technology, reality and ourselves – confusing the sorcerer and the apprentice and systematically undermining the only people who know how society works and what it takes to regulate an industry that has gone rogue.

Remember Who We Are As Humans

In his new book, The Atomic Human, Lawrence argues that it is not our capabilities that make us unique, but our limitations and vulnerabilities. While machines can communicate at the speed of light, our embodied intelligence is limited to communicating at the speed of sound.

To overcome this limitation, we make a lot of shortcuts when trying to understand and communicate with our surroundings. And one of them is to anthropomorphize. We simply think of all other intelligent entities as human, projecting our own intelligence onto them and assuming they have the same motivations as us.

Hear how Lawrence’s notion of embodied human intelligence leads to a conversation about the French philosopher Merleau-Ponty and his description of the human body as a unique self that enables rather than prevents our understanding of and communication with our surroundings.

Lawrence compares the way naive tech solutionists relate to AI with the way the Greek legends relate to their gods. In the Greek legends, the gods have very human behaviors: “It’s like someone is saying, ‘This is what I would do if I had that power’. Zeus, for example, is just typical male bad behavior. It’s like, ‘I’d definitely be a bloke with a beard, and I’d definitely try to have sex with anything that moves and then hide it from my wife.’”

In contrast, Lawrence says that practitioners of modern religions, for example Orthodox Jews, Reform Jews, Hindus, and Muslims, introspect in the face of God. That is, they use the idea of another intelligence to better understand and adjust their own cultures and ways of behaving.

“It made me realize that a route to move the conversation to somewhere more productive was to follow that path and shift from narcissism to introspection.”

Re-empower The Open Society

According to Lawrence, “We’re at the moment where the apprentice is surrounded by water and going ‘Oh, my goodness’, chopping up the broomstick, trying to do things to fix the situation. But it’s too late.”

When someone like OpenAI founder Sam Altman gives thousands of people free money to demonstrate the benefits of universal basic income, he is not helping to solve the problems of a future where AI takes everyone’s jobs.

One person or one company wading into the social conversation and telling us they have the answer is not the solution. It’s the very problem. And every time these “tech bros” try to fix the hardest problems in society with technology, they make it harder for the people working in government and the public sector to succeed.

“In the Goethe poem the sorcerer returns and says ‘Broom, broom, that’s enough’ and it’s all fixed. But we’ve been disempowered. The academics have been trying to say, ‘there’s a problem here, there’s a problem there’, but in the very act of casting the spell, the people representing the open society have been undermined.”

The solution is to listen to the people who are carrying out the true duties in society and support them in putting things back in place. Lawrence says the mindset shift required to re-empower the open society will take time. And he emphasizes that the public sector doesn’t have all the answers.

“But we have a number of people who are working to improve things for all of us. They are human beings, and they have mechanisms that support their attempt to bring out the best versions of themselves.”

Summarizing our conversation and the “Guide to Saving Humanity from Big Tech” that can be derived from it, Lawrence concludes: “Our culture is not perfect, we have big problems to solve. But have some damn respect for it!”

For more videos of my conversation with Neil Lawrence, check out my Big Questions for Big Thinkers series on YouTube.