Tech pioneer Geoffrey Hinton doesn’t deny there’s plenty to fear in the AI revolution.
Like Daedalus watching his son fall from melting wings, the University of Toronto computer scientist and oft-dubbed “Godfather of AI” took the stage at Toronto’s Collision Conference Wednesday to tell of the dangers his industry’s invention could bring to everything from business, to warfare, to democracy.
“[There’s] surveillance to help authoritarian states stay in power; there’s lethal autonomous weapons [that] you’re going to see very soon; there’s fake videos corrupting elections; there’s job losses,” Hinton listed like a lunch order, adding cybercrime, bioterrorism and “finally, the existential threat that it will go rogue and take over.”
“So,” replied novelist and session moderator Stephen Marche. “Which one do you want to handle first?”
The ‘A’ is for… Authoritarian?
It’s easy for the mind to jump to literal robot overlords, but Hinton stressed that the most urgent and sinister AI applications are those that serve living, breathing leaders: mass surveillance and weaponization run amok.
As described in an open letter from the Future of Life Institute, which lists Hinton as a signatory, lethal autonomous weapons are those that “select and engage targets without human intervention.” Think drones that attack based on pre-determined criteria, with no case-by-case permission from a human pilot.
“If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable,” the letter reads. “Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group.”
AI pioneer Geoffrey Hinton is seen backstage before speaking at the Collision Conference, in Toronto, on Wednesday, June 19, 2024. (THE CANADIAN PRESS/Chris Young)
Governments haven’t totally neglected efforts to rein in AI, but as Hinton notes, the devil’s in the caveats.
The European Union, for example, adopted its comprehensive AI regulation in March, primed to bring restrictions across the bloc’s 27 member nations on everything from children’s toys, to Chinese-style social credit systems, to facial recognition technology.
Stern as the regulation sounds, and though Future of Life has described it as the hope for a global AI standard, its enforcement is limited: military applications, public and private, are exempt.
“Could you tell me what that would look like?” Marche asked Hinton of the weapons.
“Just imagine something creeping up behind you, that’s intent on killing you,” Hinton said.
Afraid I can’t do that, Dave
But even a dystopian government armed with killer robots first needs those robots to do what it asks, or, more fundamentally, to understand what the order means.
Hinton next pointed to the issue of alignment — that an artificial intelligence might take on a task with different assumptions and priorities than a human’s.
One example: an AI tasked with stopping climate change might wipe out all life on Earth, technically removing the source of the pollutants, but at a plainly ludicrous cost.
“[AI] may not be smart enough to realize that’s not what you meant,” Hinton said, going on to explain that ignorance about the details is mutual.
“We certainly can’t say, when it makes a decision, why it made the decision.”
And that’s still assuming that AI listens to its human counterparts at all.
Hinton’s present stance on the AI future is a fairly recent change, born of his research comparing the efficiency of digital computation, as in traditional computers, with “analogue” computing, more akin to how humans think.
The experiments involved computers that grew from a shared intelligence model, each learning separately from the examples fed into it and becoming increasingly distinct from its fellow computers over time, despite sharing the same origins.
Hinton says he discovered that while analogue computing could run on less energy by finding its own efficiencies, that uniqueness made it difficult to share information between computers, as each one completed tasks in different ways. The only solution was knowledge translation: essentially a literal conversation between computers, much as humans have with one another. This proved drastically less efficient.
By making computers more human, Hinton found, he was also making them less powerful.
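For readers who want a concrete picture, here is a minimal sketch, not drawn from Hinton’s actual experiments, of the two sharing mechanisms he contrasts. It assumes two tiny linear models standing in for the learning computers: identical “digital” copies can pool what they know by averaging their weights outright, while “analogue,” hardware-unique models can only teach each other through examples, a far narrower channel.

    # Hypothetical illustration: weight-sharing vs. teaching-by-example
    import numpy as np

    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -3.0, 0.5])  # the "knowledge" both machines try to learn

    def make_data(n):
        X = rng.normal(size=(n, 3))
        return X, X @ true_w + rng.normal(scale=0.1, size=n)

    def fit(X, y):
        # least-squares fit: each model learns only from its own examples
        return np.linalg.lstsq(X, y, rcond=None)[0]

    # Two copies trained separately on different data, as in the shared-origin models
    w_a = fit(*make_data(200))
    w_b = fit(*make_data(200))

    # "Digital" sharing: identical hardware lets the copies pool knowledge by
    # averaging every weight directly, in a single step
    w_digital = (w_a + w_b) / 2

    # "Analogue" sharing: hardware-unique models can't swap weights, so one must
    # teach the other through examples (a "conversation"); here only two examples
    # are exchanged, fewer than the model's three weights, so knowledge is lost
    X_probe = rng.normal(size=(2, 3))
    w_analogue = fit(X_probe, X_probe @ w_a)

    print("digital sharing error: ", np.linalg.norm(w_digital - true_w))
    print("analogue sharing error:", np.linalg.norm(w_analogue - true_w))

The gap in bandwidth is the point: averaging moves every parameter at once, while teaching example by example, like human conversation, transfers only a trickle of information per exchange.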
“That’s what got me scared that these things might be better than us,” he said. “The threat that in the long run, these things get smarter than us, and might go rogue; that’s not science fiction.”
Hinton says it could start from an innocent place.
Give an AI bot a complicated task, and it may learn that it can work faster if it has more control. If it then concludes that it’s a better decision-maker than people are, those people suddenly become an inefficiency.
“If these things are much smarter than us, they’ll realize: ‘Just take the control away from the people, and we’ll be able to achieve what they want much more efficiently,'” he said.
“That seems to me like a very slippery problem.”
Digital doomsaying
Hinton’s view of the AI future isn’t unanimous.
In the Collision talk Wednesday, Marche pushed back on his slippery-slope scenario, noting the numerous assumptions it required.
“That the best avenue to an outcome is control … seems to me like a very human understanding,” he pointed out.
And the day before, fellow Collision speaker Aidan Gomez, co-founder of AI firm Cohere, leaned away from the apocalyptic scenario and highlighted what he called a growing “philosophical divide” in the field.
“I’m of the opinion that it’s going to take us a while to exceed human capabilities uniformly,” said Gomez, who has previously received funding from Hinton for his work, and interned for him at Google.
“We [and Hinton] have stayed, over the years, pretty divergent on this viewpoint,” said Nick Frosst, Gomez’s co-founder and a protégé of Hinton’s, in an interview with BNN Bloomberg Thursday. “We don’t think that this technology presents a doomsday scenario.”
Patch Notes
Amid troubling predictions, Hinton’s Wednesday talk did point to one of an “equal number of wonderful things” when it comes to AI innovations: medicine.
“You’re going to be able to go to a doctor who’s seen 100 million patients, and knows your whole genome and the results of all your tests — and the results of all your relatives’ tests, and has seen thousands of cases of this extremely rare disease you have,” he explained.
As for warding off the dark side, Hinton says that while he doesn’t know the solution to many of these issues, the path forward could involve a blend of strict safety-testing requirements for AI companies and robust public education, especially ahead of the coming tidal wave of AI-powered disinformation in future elections.
“Pay for a lot of advertisements where you have a very convincing fake video, and then right at the end of it, it says: ‘This was a fake video; that was not [former U.S. president Donald] Trump, and he never said anything like that,’” Hinton suggested.
“It’s just like inoculation … so that people can build up resistance.”
In the end, Hinton doesn’t describe the battle for a better AI future as taking up arms against the killbots: He says it’s a matter of politics.
“The problem is, in our political system, to turn those huge increases in productivity into benefits for everybody,” he said.
“So, the problem is us?” Marche asked.
“The problem is always us,” Hinton said.
With files from The Canadian Press
Edited by CTVNews.ca Special Projects Producer Phil Hahn