Why the ‘godfather’ of artificial intelligence, Geoffrey Hinton, quit Google to speak out about the stakes

When Geoffrey Hinton had a moral objection to Google’s work with the US military in 2018, he didn’t take part in public protests or put his name to the open letter of complaint signed by more than 4,000 of his colleagues.

Instead, he just spoke to Sergey Brin, the co-founder of Google. “He said he was a little upset about it, too,” Hinton said in an interview at the time.

The incident is emblematic of Hinton’s quiet influence in the world of artificial intelligence. The 75-year-old professor is revered as one of the “godfathers” of artificial intelligence for his formative work in deep learning, the area of the field that has driven the huge advances now happening in the sector.

But the tale also reflects Hinton’s loyalty, according to those who know him best. On principle, he has never publicly aired any institutional grievances, ethical or otherwise.

It was this belief that led him to resign as vice-president and engineering fellow at Google last week, so that he could speak more freely about his growing concerns over the dangers artificial intelligence poses to humanity.

His longtime collaborator and friend Yoshua Bengio, who won the Turing Award alongside Hinton and Yann LeCun in 2018, said he saw the resignation coming. “He could have stayed at Google and talked, but his sense of loyalty would not have allowed it,” Bengio said.

Hinton’s resignation follows a series of groundbreaking AI launches over the past six months, starting with the Microsoft-backed OpenAI’s ChatGPT in November, followed by Google’s own chatbot, Bard, in March.

Hinton expressed concern that the race between Microsoft and Google would push the development of artificial intelligence forward without appropriate guardrails and regulations in place.

“I think Google was very responsible in the beginning,” he said in a speech at an EmTech Digital event on Wednesday, after announcing his resignation. “Once OpenAI had built similar things . . . ”

Since the 1970s, Hinton has pioneered the development of “neural nets,” a technology that attempts to mimic how the brain works. It now supports most of the AI tools and products we use today, from Google Translate and Bard to ChatGPT and self-driving cars.
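To give a concrete flavour of what a neural net is, here is a minimal sketch in Python (an illustration added for this piece, not Hinton’s own code; the layer sizes and random weights are made up): layers of simple units joined by adjustable weights, loosely analogous to neurons and synapses, turn an input into an output.

```python
import numpy as np

# Minimal sketch of a neural net: layers of simple units joined by adjustable
# weights, loosely analogous to neurons and synapses. The sizes and random
# weights below are purely illustrative.

rng = np.random.default_rng(0)

# A two-layer network: 4 inputs -> 8 hidden units -> 2 outputs.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)

def forward(x):
    """One forward pass: weighted sums followed by a nonlinearity."""
    hidden = np.tanh(x @ W1 + b1)   # each hidden unit responds to a weighted mix of inputs
    return hidden @ W2 + b2         # the output layer combines the hidden activity

print(forward(np.array([0.5, -1.0, 0.3, 0.8])))
```

Learning, in this picture, is nothing more than nudging the numbers in those weight matrices until the outputs become useful; the backpropagation technique discussed below is the standard recipe for doing that.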

But this week he voiced concerns about its rapid development, which could lead to misinformation flooding the public domain and to artificial intelligence usurping more human jobs than expected.

“What worries me is that this will [make] the rich richer and the poor poorer. As you do that . . . society gets more violent,” Hinton said. “This technology, which ought to be wonderful . . . is being developed in a society that is not designed to use it for the benefit of all.”

Hinton also sounded the alarm about the long-term threats that AI systems pose to humans if the technology is given too much autonomy. He had always considered this existential danger to be far off, but he has recently revised his thinking about its urgency.

“It is quite plausible,” he said, “that humanity is a passing stage in the evolution of intelligence.” Hinton’s decision to leave Google after a decade was prompted by an academic colleague who persuaded him to speak out about these risks.

Born in London, Hinton comes from a famous line of scholars. He is the great-great-grandson of the British mathematicians Mary and George Boole, the latter of whom invented Boolean logic, the theory that underpins modern computing.

A cognitive psychologist by training, Hinton has aimed in his work on artificial intelligence to approximate human intelligence, not just to build AI technology but to shed light on the way our brains work.

His background means he’s “not the most mathematical person you’ll find in the machine learning community,” said Stuart Russell, a professor of artificial intelligence at the University of California, Berkeley and an academic peer of Hinton’s.

He pointed to Hinton’s breakthrough in 1986, when he co-authored a paper on a technique called “backpropagation,” which showed how multilayer neural networks can learn from their errors over time.

“It was clearly a key advance,” Russell said. “But he didn’t derive the . . . rule the way a mathematician might. He used his intuition to figure out a method that would work.”
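As a rough illustration of what that method does, here is a minimal sketch in plain Python and NumPy (again an illustration for this piece, with made-up sizes, not the 1986 paper’s setup): a tiny two-layer network learns the XOR function by propagating its error backwards through the layers and nudging the weights downhill.

```python
import numpy as np

# Hand-rolled backpropagation on a tiny network learning XOR. This is a
# generic illustration of the technique, not the original 1986 formulation.

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(scale=0.5, size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(scale=0.5, size=(8, 1)), np.zeros(1)
lr = 1.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: the chain rule applied layer by layer
    d_out = (out - y) * out * (1 - out)   # error gradient at the output
    d_h = (d_out @ W2.T) * h * (1 - h)    # gradient pushed back to the hidden layer

    # Gradient-descent updates
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # predictions after training; they should approach 0, 1, 1, 0
```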

Hinton was not always outspoken in public about his moral views, but he did make them clear in private.

In 1987, when he was an associate professor at Carnegie Mellon University in the United States, he decided to leave his position and move to Canada.

One of the reasons he gave, according to Bengio, was a moral one: he was concerned about the use of technology, especially artificial intelligence, in warfare, and much of his funding there came from the US military.

“He wanted to feel good about the funding he was getting and the work he was doing,” Bengio said. “He and I share values about society. That human beings matter, that the dignity of all human beings is essential. And everyone should benefit from the progress that science makes.”

In 2012, Hinton and his graduate students at the University of Toronto (including Ilya Sutskever, now a co-founder of OpenAI) made a breakthrough in the field of computer vision. They built neural networks that could recognize objects in images far more accurately than had previously been possible. Based on this work, they founded their first startup, DNNresearch.
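The kind of model behind that 2012 result can be sketched in a few lines of modern code. The example below uses PyTorch (a present-day library, not what the team used at the time) and purely illustrative layer sizes to define a small convolutional network that maps an image to scores over object categories.

```python
import torch
from torch import nn

# A small convolutional network for object recognition, in the spirit of the
# 2012 breakthrough but far smaller; sizes and class count are illustrative.

class TinyConvNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):                      # x: a batch of 32x32 RGB images
        x = self.features(x)
        return self.classifier(x.flatten(1))   # one score per object category

model = TinyConvNet()
scores = model(torch.randn(1, 3, 32, 32))      # a random tensor stands in for a real image
print(scores.shape)                            # torch.Size([1, 10])
```

Trained on a large labelled image collection with backpropagation, networks built along these lines were what pushed object-recognition accuracy far beyond earlier approaches.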

Their company, which never made any products, was sold to Google for $44 million in 2013, after a competitive auction in which the Chinese firm Baidu, Microsoft and DeepMind bid for the trio’s expertise.

Since then, Hinton has spent half his time at Google and the other half as a professor at the University of Toronto.

According to Russell, Hinton is constantly coming up with new ideas and trying things out. “Every time he had a new idea, he would say at the end of his talk: ‘And this is how the brain works!’”

When asked on stage whether he regretted his life’s work, given that it may have contributed to the myriad harms he had outlined, Hinton said he had been thinking about it.

“This stage [of AI] was not expected. Until very recently, I thought this existential crisis was far-fetched,” he said. “So I really don’t have any regrets about what I’m doing.”