
Fears for Humanity and the Dangers of Artificial Intelligence According to Geoffrey Hinton


The cryptocurrency industry has been widely impacted by advances in artificial intelligence technology. More specifically, the increasing popularity of AI-based chatbots like ChatGPT and Google Bard has spawned a subsection of cryptocurrencies themed around that field.

There are myriad coins in the market that have seen exponential growth because of the soaring interest in AI.

We’ve also tapped ChatGPT on numerous topics ourselves.

And while it’s exciting to chat with an AI on topics that you would normally discuss with your peers, there’s also another side to it, and Geoffrey Hinton speaks of it loud and clear.

The Godfather of Artificial Intelligence

Geoffrey Hinton is a British-Canadian cognitive psychologist and computer scientist who was born in 1947 in Wimbledon, London.

He is most noted for his work on artificial neural networks and is a former employee of Google. He left the company in May 2023 in a highly publicized exit, voicing concerns about the risks of artificial intelligence (AI) technology.

Geoffrey Hinton. Source: Technology Review via Linda Nylind / Eyevine via Redux

Hinton was the first winner of the Rumelhart Prize in 2001 and is internationally renowned for his work on artificial neural nets, especially in relation to how they can be designed to learn without the need for a human teacher. He also received the 2018 ACM A.M. Turing Award for conceptual and engineering breakthroughs in the field.

He’s also commonly referred to as one of the godfathers of AI. And now, he has some concerns.

We’ve Discovered the Secret of Immortality, but There’s a Catch

In an interview for The Guardian, Hinton made it clear that he left Google on good terms and that he has no objections to what the company is doing or has done.

In the article, he compares biological intelligence (the human brain) to digital intelligence, outlining the inefficiencies people face. He says that our brains run at low power but are quite inefficient at transferring information. Digital intelligence, on the other hand, is different.

You pay an enormous cost in terms of energy, but when one of them learns something, all of them know it, and you can easily store more copies. So the good news is, we’ve discovered the secret of immortality. The bad news is, it’s not for us.

In essence, Hinton came to the conclusion that humans are building intelligence that has the potential to outthink humanity.

I thought it would happen eventually, but we had plenty of time: 30 to 50 years. I don’t think that anymore. And I don’t know any examples of more intelligent things being controlled by less intelligent things.

To make the comparison more comprehensible, the computer scientist pitted us (humans) against frogs while also adding:

And it (AI) is going to learn from the web, it’s going to have read every single book that’s ever been written on how to manipulate people, and also seen it in practice.

Still from the movie Ex Machina (2015). Source: Movie House Memories

Fears for Humanity

Reports citing Hinton’s recent statements outline the potential dangers of so-called superintelligence.

The Godfather of AI outlines scenarios where an AI may seek to gain control over numerous aspects of its environment in pursuit of solving complex problems, including the manipulation of humans. The scientist believes that the AI wouldn’t even need an explicit goal of power or destruction to leverage its ability to mimic human behavior.

It’s not all doom and gloom, though. He believes there are ways to mitigate catastrophic scenarios, but he’s also of the opinion that we’ve passed the point of no return: stopping AI development is downright impossible, nor does he think it should be stopped.

I think we should continue to develop it because it could do wonderful things. But we should put equal effort into mitigating or preventing the possible bad consequences.

How close are we to those bad consequences? Closer than you might think.

I’ve got huge uncertainty at present. It is possible that large language models, having consumed all the documents on the web, won’t be able to go much further unless they can get access to all our private data as well. I don’t want to rule things like that out – I think people who are confident in this situation are crazy.

The right way to think about the odds of a disaster is closer to a simple coin toss than we might like.

Closing Thoughts

Artificial Intelligence is likely to play an increasing role in our lives. In fact, according to ChatGPT (oh, the irony) itself, some of the fields it will impact in the next 7 years include:

  • Healthcare
  • Automation
  • Transportation
  • Education
  • Smart Homes
  • Customer Service

Since voicing his concerns, Hinton has come under fire from many for not following some of his colleagues who quit earlier. It’s easy to come to that conclusion, but it’s also easy to overlook the complexity of the problem, which touches on multiple technological, philosophical, and ethical principles.

I guess all Hinton is trying to say is that AI’s impact on humanity, whether good or bad, is likely much closer than most of us seem to think.

Featured image courtesy of CBC, submitted to them by Geoffrey Hinton. 
