Welcome to A-Mail. I’m Anna Codrea-Rado, a business, tech and culture writer. This newsletter is my attempt to make sense of how work makes us feel. If you like my writing, consider supporting my work by making a contribution. (You can cancel anytime, including immediately after subscribing 😜)
When I first watched I, Robot, I cried.
Yes, the sci-fi movie with Will Smith as a technophobic detective who investigates whether a sentient robot, Sonny, has broken the laws of robotics and is behind the murder of the very scientist who created him.
In the scene that makes me teary, Sonny has been hauled in for questioning in connection with the death of Dr Alfred Lanning (that’s “Father” to Sonny). Just before Smith’s character, Del Spooner, comes into the interrogation room, Sonny clocks him winking at a colleague.
After making an endearingly awkward attempt at a wink himself, Sonny asks Spooner what it means.
“It’s a sign of trust,” he says. “It’s a human thing. You wouldn’t understand.”
Asking about the wink is just one of several bids for connection Sonny makes in the scene. Spooner shoots each one down by emphasising the traits that separate humans from robots (“Your designer, [not your father]”; “Human beings have dreams”; “Robots don’t feel anything”), until finally telling him: “You’re just a machine.”
In asserting his humanity over Sonny, Spooner treats him as the less-than-human robot that he is.
The cruel irony of showing what makes one human by othering another spears my heart.
Nearly 20 years after the movie came out, I, Robot feels like the cautionary tale of AI ethics we all need right now. It’s not yet 2035, the year the movie is set, but the AI scientists are already running from their labs in fear.
This week, the “godfather of AI”, Dr Geoffrey Hinton, quit his job at Google in order to sound the alarm about the dangers of AI.
It’s pretty intense when the guy who played an instrumental role in creating the technology behind ChatGPT and similar chatbots flip-flops and becomes an AI naysayer. (Hinton’s warnings are just the latest in a string to come from AI designers and pioneers.)
So the folks who created this tech are now freaking out. I think we need to pay attention, but not for the reasons you might assume.
Despite this being a newsletter about work and careers, I’ve not engaged with the “OMG the robots are going to steal our jobs!!” debate yet. That’s because while I do think AI will displace jobs (including significant chunks of my own), I don’t think that robots will rise up and take over the planet. I’m also not personally all that worried about chatbots being able to, say, write this newsletter in my place. (They wouldn’t be able to make as many spelling mistakes as I can.)
The discussion around AI often revolves around its terrifying potential and the catastrophic consequences that could arise if it falls into the wrong hands. While there’s undoubtedly truth to that, I think we should be cautious rather than fearful.
AI is a mirror, albeit a distorted one.
The story of the development of AI isn’t one of tech taking on a life of its own, it’s one of our complicated relationship with work and the contradictory nature of being human.
At least in the media that I consume, the framing of the AI discussion is completely divorced from the human hands that created it. It’s not like robots fell from the skies and invaded Earth, people created them.
That’s what I’m interested in: why people wanted to create artificial intelligence in the first place. And that’s not a question of computer science, but of human nature.
What is it that drives someone like Hinton to dedicate their life to work that could be (is!) dangerous? Why are they unable (or unwilling?) to stop until they reach their goal? What in their lives drove them to this kind of work?
And as for that work itself, at its core, it’s about probing the limits of human consciousness. My head hurts just thinking about that.
Why? Why does someone want to do that?
Those are the questions I’d ask if I were interviewing someone like Hinton.
I’d also ask: are you OK? Because I don’t think the AI scientists are alright right now.
Is Geoffrey Hinton burned out?
After working there for more than a decade, Hinton left Google citing worries about how fast generative AI is developing.
According to a New York Times interview, he quit so that he could speak freely about the dangers of AI. (He later clarified that he didn’t mean he wanted to criticise Google, which he says has always acted responsibly).
He also told the NYT’s Cade Metz that “a part of him now regrets his life’s work.”
So he’s disillusioned with his job, dissatisfied with his achievements and feeling cynical? That sounds an awful lot like burnout to me!!
(In an interview with Technology Review, Hinton alluded to being “mildly depressed”; it’s unclear whether he meant that in a clinical sense or not. Burnout and depression have many of the same symptoms and the former can result in the latter. But hey, let’s all remember that I’m just a person writing on the internet and not a doctor 🧂).
OK, back to the NYT interview, because if I’m right and Hinton IS burned out, there are some hints in there that might explain why.
Amid the scary stuff about the disaster scenario that Hinton now imagines (weaponising AI, fake information and job losses), there are a few details buried in there that gave me a faint whiff of hubris.
To illustrate my point, I’ve annotated key parts of the NYT interview with Hinton:
In 1972, as a graduate student at the University of Edinburgh, Dr. Hinton embraced an idea called a neural network. A neural network is a mathematical system that learns skills by analyzing data. At the time, few researchers believed in the idea. But it became his life’s work.
Throughout his career, he worked tirelessly on an idea that few people believed in, toiling away in obscurity. In hindsight, it’s clear he was a visionary ahead of his time. But such single-minded dedication can be risky: it means pouring an enormous amount of energy into one goal. In other words, that kind of hubris is a recipe for burnout.
Dr. Hinton said that when people used to ask him how he could work on technology that was potentially dangerous, he would paraphrase Robert Oppenheimer, who led the U.S. effort to build the atomic bomb: “When you see something that is technically sweet, you go ahead and do it.”
I think it’s utterly fascinating that Hinton would quote Oppenheimer when asked to justify his work. Oppenheimer’s legacy is a complicated one that continues to be debated today, but it’s generally agreed that his life was full of contradictions: he built the atomic bomb, yet later opposed the nuclear arms race. Perhaps Hinton was always conflicted about his work, too. (I get it – I’m a journalist, I believe in the fourth estate and at the same time, I hate the media industry. Two things can be true.)
“The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
This is the most damning quote in the entire interview, imho. What I took from this is that Hinton DID think that AI would eventually outsmart humans, he just didn’t think it would happen in his lifetime.
I don’t mean to drag Hinton. Far from it: having hubristic personality traits is not only what makes him a brilliant scientist, but also what makes him human. And ultimately, his resignation shows us that another of his emotions took over: humility.
A scientist’s single-minded focus on finding an answer, followed by the folly of thinking they can control the outcome, is an ancient story.
In fact, the Greeks can teach us a lot about how to think about AI in the modern world. You can read many Greek myths as depicting a complex and nuanced view of artificial intelligence. A scientist’s work getting out of hand is a tale as old as Prometheus stealing fire from the gods or Icarus flying too close to the sun.
As the Stanford classics scholar, Adrienne Mayor, wrote back in 2016, “The beloved myths of Hercules, Jason and the Argonauts, the sorceress Medea, the engineer Daedalus, the inventor-god Hephaestus, and the tragically inquisitive Pandora all raised the basic question of the boundaries between human and machine.”
I didn’t ask Hinton if he’s burned out. Instead, I did the next best thing and put the question to one of his godchildren.
I asked Bard, Google’s AI chatbot if it thought Hinton was burned out.
This is what it said (also, sorry, but please note that a ROBOT told me that I’d asked a great question):
That's a great question.
Burnout is a state of physical, emotional, and mental exhaustion caused by excessive and prolonged stress. It can lead to a number of problems, including fatigue, anxiety, depression, and physical health problems.
In the case of Geoffrey Hinton, it's possible that he was experiencing burnout due to the high demands of his work. He is one of the most respected figures in the field of AI, and he has been under a lot of pressure to publish papers in top journals and to develop new AI technologies that can be commercialized. This can lead to long hours and a lot of stress.
In addition, Hinton may have been feeling anxiety about the potential negative impacts of his work. AI is a rapidly evolving field, and it has the potential to be used for both good and bad. Hinton may have been worried about the potential for AI to be used to create harmful weapons or to automate jobs, which could lead to unemployment.
Finally, it's possible that Hinton was simply feeling burnt out from the uncertainty of the field. AI is a rapidly changing field, and it can be difficult to keep up with the latest developments. This can lead to feelings of anxiety and stress, as Hinton worried about being left behind.
It's important to note that burnout is not just caused by work. It can also be caused by personal problems, such as relationship problems or financial problems. However, in the case of Geoffrey Hinton, it's likely that his work was a major contributing factor to his burnout.
In the end, it’s a human thing
In one of the pivotal scenes in I, Robot, Will Smith works out that the real baddie is VIKI, the Siri-esque supercomputer who rules all the robots.
She justifies her plan of controlling the human race as one of safeguarding.
“You charge us with your safekeeping, yet despite our best efforts, your countries wage wars, you toxify your Earth and pursue ever more imaginative means of self-destruction. You cannot be trusted with your own survival.”
Spooner and Sonny are in a bind – VIKI has duped them and is about to instruct the robots to kill people, including Dr Calvin, the kindly robopsychologist.
And it’s with a wink that Sonny is able to relay a secret code to Spooner, letting him know that he can trust him. Together they save the day.
The pursuit of building a machine that can surpass people is a profound reckoning with what it means to be human. It’s a philosophical tussle with whether a machine can truly understand the essence of humanity and ever surpass us. And if it can, where does that leave us?
As I watch the fear, anxiety and hope around AI swell, that’s where I see the real humanity. We are constantly striving to create something greater than ourselves, even if that means risking a part of ourselves. After all, we all know that feeling of going all in on something only to have it not turn out as we’d hoped. That maddening contradiction, the vast unknowable – surely that’s the human thing?
A-MAIL IS READER-FUNDED. ADDITIONAL SUPPORT COMES FROM:
BetterHelp. Manage your mental health with 25% off your first month on BetterHelp. Millions of people have joined BetterHelp to find the right therapist for them, without having to leave home. The affordable online platform offers live video and phone sessions, plus unlimited messaging. Start with 25% off your first month. >> Start therapy online now <<
Deel. Compliantly hire anyone, anywhere, in 5 minutes with Deel
Deel is your one-stop shop for hiring, paying, and managing your remote team. We stay on top of local labor laws across the world to ensure compliance and mitigate risk so that you don't have to. >> Get Started <<
Aura. Are you tired of unwanted emails, texts, and calls from scammers and spammers? Aura uses cutting-edge AI technology to scan the internet to identify where your personal information is exposed and being sold. Try Aura free for 2 weeks and see if your information has been compromised. >> Start Free Trial <<
To reach 17K+ readers and support A-Mail through sponsored messages, please reply directly to this email.