
As Bill Gates hails a ‘Stunning New Technology Age’, here’s why some caution against Generative AI

Generative AI tools are pushing the boundaries of technological possibilities, but we need to avoid pushing too hard, too far, too fast

“The age of A.I. has begun.” 

With these words, Bill Gates penned a mostly hopeful missive for the audience tuned into his thoughts via his Gates Notes blog and email list, signalling that far bigger changes are coming in the wake of GPT-4.

The Microsoft co-founder should know better than most, having had a close-quarters view as OpenAI navigated the challenges he set for its then-nascent AI. Having challenged it to pass an Advanced Placement Biology exam, chosen in part because it requires critical thinking rather than mere regurgitation of facts, he was stunned to see the AI learn the ropes in a few months rather than the few years he expected, and then clear the exam with a near-perfect score.

Calling generative AI “the most important advance in technology since the graphical user interface”, Gates put forth a number of ways in which AI could rewrite the future of industry, work, and society. And he’s not alone in that assessment; apps such as ChatGPT, Bard, and the new AI-powered Bing are grabbing headlines aplenty. In fact, ChatGPT is already yesterday’s news, with its successor, GPT-4, now released.

Amazon Web Services (AWS) has announced an expansive partnership with AI startup Hugging Face. Apple is reportedly exploring AI across its business, including with Siri, while Mark Zuckerberg aims to “turbocharge” Meta’s products with AI. Few stand to win or lose as much from the AI turf war as these giants, with the technology underpinning how they sell and deliver products, information, assistance, and advertisements, and even how they manage data.

As a glut of AI models floods the market, it is worth understanding two things: first, that the race for AI is heating up; and second, that the positive use cases for generative AI need to be taken with a grain of salt, given its potential for misuse and harm.

The danger of AI

Given the incredible speed with which we have seen generative AI advance in recent months, it is not surprising to see the overwhelming optimism surrounding the space. 

But it’s not all sunshine and rainbows; Sam Altman, CEO of OpenAI, has publicly admitted that he’s scared of the tech his company is cooking up. “I think it’s weird when people think it’s like a big dunk that I say I’m a little bit afraid,” he told podcaster Lex Fridman in an interview. “I think it’d be crazy not to be a little bit afraid, and I empathize with people who are a lot afraid.” His biggest worry, he added, is that “there will be other people who don’t put some of the safety limits that we put on it”.

He’s not alone. Other seasoned veterans in the AI field are asking that we pause and take a step back from the breakneck development we have seen. In a letter published on the website of the Future of Life Institute, more than 1,000 prominent signatories – including renowned tech leaders and personalities such as Apple co-founder Steve Wozniak, Twitter CEO Elon Musk, and author Yuval Noah Harari – called for a tech truce.

The Institute, whose stated mission is “steering transformative technology towards benefitting life and away from extreme large-scale risks”, called for every company working on AI models that are more powerful than the recently released GPT-4 to immediately halt work for at least six months. This moratorium should be “public and verifiable” and would allow time to “jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts.”

The letter says this is necessary because “AI systems with human-competitive intelligence can pose profound risks to society and humanity.” Those risks include the spread of propaganda, the destruction of jobs, the potential replacement and obsolescence of human life, and the “loss of control of our civilization.” The authors add that the decision over whether to press ahead into this future should not be left to “unelected tech leaders.”

In particular, the letter poses four loaded questions, some of which presume hypothetical scenarios that are highly controversial in some quarters of the AI community, including the loss of “all the jobs” to AI and “loss of control” of civilization:

  • “Should we let machines flood our information channels with propaganda and untruth?”
  • “Should we automate away all the jobs, including the fulfilling ones?”
  • “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us?”
  • “Should we risk loss of control of our civilization?”

The letter comes in the wake of claims that GPT-5 could achieve artificial general intelligence, which in layman’s terms is the ability to understand and learn anything a human can. That could make it incredibly powerful in ways we haven’t yet explored, or even begun to comprehend, which alone calls for responsible planning and management around the development of AI systems. That, the open letter claims, is not happening, with “AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control.”

Instead, the letter asserts that new governance systems must be created to regulate AI development, help people distinguish AI-created from human-created content, hold AI labs like OpenAI responsible for any harm they cause, enable society to cope with the disruption AI could bring (especially to democracy), and address copyright infringement, malicious misuse, and more.

The Institute believes that the time to step back is now, and that if “all key actors” don’t agree to slow AI research soon, “governments should step in and institute a moratorium.” 

This seems like a definitive turning point for humanity and technology, much like the emergence of the internet or the smartphone, making it critical that we strike a balance between caution and progress. Regardless of which side of the AI debate you stand on, the disruptive power of these models means the debate, and the tug-of-war surrounding it, will continue to rage for some time yet.

Karan Karayi
https://in-focusindia.com/
A part-time car enthusiast and full-time food aficionado, Karan is forever chasing his next big creative thrill. He also doesn’t enjoy writing in third-person.