
AI: How 'Freaked Out' Should We Be?


  • By JC Martin


4 Sept 2023


At SXSW, Amy Webb outlined her vision for where artificial intelligence could be headed in the next 10 years


Artificial intelligence has the awesome power to change the way we live our lives, in both good and dangerous ways. Experts have little confidence that those in power are prepared for what's coming.


Back in 2019, a research group called OpenAI created a software program that could generate paragraphs of coherent text and perform rudimentary reading comprehension and analysis without specific instruction.


OpenAI initially decided not to make its creation, called GPT-2, fully available to the public out of fear that people with malicious intent could use it to generate massive amounts of disinformation and propaganda. In a press release announcing its decision, the group called the program "too dangerous".


Fast forward three years, and artificial intelligence capabilities have increased exponentially.


In contrast to that earlier limited release, the next offering, GPT-3.5, was made readily available in November 2022. The ChatGPT interface derived from that model was the service that launched a thousand news articles and social media posts, as reporters and experts tested its capabilities - often with eye-popping results.


ChatGPT scripted stand-up routines about the Silicon Valley Bank failure in the style of the late comedian George Carlin. It opined on Christian theology. It wrote poetry. It explained quantum physics to a child as though it were rapper Snoop Dogg. Other AI models, like DALL-E, generated visuals so compelling that they sparked controversy over their inclusion on art websites.


Machines, at least to the naked eye, have achieved creativity.


On Tuesday, OpenAI debuted the latest iteration of its program, GPT-4, which it says has robust limits on abusive uses. Early clients include Microsoft, Merrill Lynch and the government of Iceland. And at the South by Southwest Interactive conference in Austin, Texas, this week - a global gathering of tech policymakers, investors and executives - the hottest topic of conversation was the potential, and power, of artificial intelligence programs.


Arati Prabhakar, director of the White House's Office of Science and Technology Policy, says she is excited about the possibilities of AI, but she also has a warning.


"What we are all seeing is the emergence of this extremely powerful technology. This is an inflection point," she told a conference panel audience. "All of history shows that these kinds of powerful new technologies can and will be used for good and for ill."


Her co-panelist, Austin Carson, was a bit more blunt.


"If in six months you are not completely freaked the (expletive) out, then I will buy you dinner," the founder of SeedAI, an artificial intelligence policy advisory group, told the audience.


WATCH: Microsoft's Brad Smith says AI will affect generations to come


"Freaked out" is one way of putting it. Amy Webb, head of the Future Today Institute and a New York University business professor, tried to quantify the potential outcomes in her SXSW presentation. She said artificial intelligence could go in one of two directions over the next 10 years.


In an optimistic scenario, AI development is focused on the common good, with transparency in AI system design and an ability for individuals to opt in to whether their publicly available information on the internet is included in the AI's knowledge base. The technology serves as a tool that makes life easier and more seamless, as AI features on consumer products can anticipate user needs and help accomplish virtually any task.


Ms Webb's catastrophic scenario involves less data privacy, more centralisation of power in a handful of companies and AI that anticipates user needs - and gets them wrong or, at least, stifles choices.


She gives the optimistic scenario only a 20% chance.


Which direction the technology goes, Ms Webb told the BBC, ultimately depends in large part on the responsibility with which companies develop it. Do they do so transparently, revealing and policing the sources from which the chatbots - built on what scientists call large language models - draw their information?


The other factor, she said, is whether government - federal regulators and Congress - can move quickly to establish legal guardrails to guide the technological developments and prevent their misuse.


In this regard, government's experience with social media companies - Facebook, Twitter, Google and the like - is illustrative. And the experience is not encouraging.


"What I heard in a lot of conversations was concern that there aren't any guardrails," Melanie Subin, managing director of the Future Today Institute, says of her time at South by Southwest. "There is a sense that something needs to be done. And I think that social media as a cautionary tale is what's in people's minds when they see how quickly generative AI is developing."