Artificial intelligence is not a new technology, as Vivian Schiller — executive director of the Aspen Institute’s Aspen Digital — pointed out several times during the Aspen Ideas Festival, which ended on Saturday.

AI models have existed for decades, but they have recently captured the public imagination in a new way after one model, OpenAI’s ChatGPT, made the tech accessible and usable for the public. The result has been a “hype cycle,” as Airbnb co-founder/CEO and Ideas guest Brian Chesky called it, which has grabbed more attention than the advent of the internet.

Chesky said the hype cycle might be overblown, explaining that AI isn’t even an essential component of most phone apps. Likewise, Schiller frequently mentioned “Amara’s law,” which states that humans tend to overestimate the short-term effects of new technology and underestimate the long-term effects.

But with the breakneck pace of AI development and the acceleration of technology innovation overall, experts at Ideas said long-term effects may not be so distant.

During a panel discussion on Monday, Schiller asked University of Manchester professor and historian David Olusoga how fast new technologies typically lead to large-scale disruption in society. Olusoga agreed that technologies can take a long time to reach the public and change the world — James Watt’s steam engine, for instance, was invented in the 1760s but didn’t change the world until the 1830s. Now, however, Olusoga said new technologies tend to be adopted faster.

“We can see that gap between innovation and disruption shortening in the 20th century,” Olusoga said, arguing that the adoption of electricity and the internet moved faster than that of the steam engine.

Despite his skepticism of the hype, Chesky pointed out in his own panels that 21st century internet platforms have moved quickly from innovation to widespread disruption, changing the way Silicon Valley operates. Chesky argued that attitudes around the tech revolutions of the 2000s have already swung from starry-eyed naivete to sober caution.

Changes in attitude

When they first met each other at Silicon Valley startup accelerator Y Combinator in 2008, Chesky said he and OpenAI co-founder and CEO Sam Altman were part of a fast-paced, move-first-and-think-later culture that was largely naive to the negative impacts large tech companies might have.

“When I came to Silicon Valley, the word ‘technology’ might as well have been a dictionary definition of the word ‘good,’” Chesky said. “Facebook was a way to share photos with your friends, YouTube was cat videos, Twitter was talking about what you’ve been doing today. I think there was this general innocence.”

Now, Chesky said, that culture has changed. In the years since the two tech titans' time at Y Combinator, the world has watched social media facilitate government overthrows in the Middle East and election meddling in the United States. American politicians speak regularly about the mental health effects of social media on today's children, and governments have passed sweeping regulations on large tech firms.

“I think over time we’ve realized … that when you put a tool in the hands of hundreds of millions of people, they’re going to use it in ways you didn’t intend,” Chesky said.

Hard-nosed tech journalist Kara Swisher agreed in her own panel that attitudes in Silicon Valley appear to be changing. Swisher said she has enjoyed meeting younger tech entrepreneurs in recent years who often tend to have “a better idea about the danger of the world we live in.”

Those attitudes have translated into nervousness and controversy around the advent of publicly accessible large language models.

Altman, who spoke during the “Afternoon of Conversation” on Wednesday, was fired from OpenAI in November because then-board members were concerned about how fast their AI was progressing. Former board members have since said that Altman lied to them multiple times about the company’s safety processes. Altman later returned to the company, which now has a new board.

He described the ordeal as "super painful" while addressing the Ideas audience on Wednesday, but said he understood the former board members, describing them as "nervous about the continued development of AI." Altman did not agree that the technology was developing too fast.

“Although I super strongly disagree with what they think, what they’ve said since and how they’ve acted, I think they are generally good people who are nervous about the future,” Altman said.

‘A lot of trust to earn’

Whether “too” fast or not, experts at Ideas certainly agreed that the technology is moving quickly. Government officials and private sector actors alike claimed that the technology is moving faster than governments can regulate it.

“Policy just doesn’t move at the same pace as technology does,” said Karen McCluskie, the deputy director of technology at the United Kingdom’s Department for Business and Trade. “If tech is all about moving fast and breaking things, then diplomacy is all about moving slow and mending things. Those are opposite ideas. But that’s going to have to change.”

The tech is moving so fast, some experts said, that many technologists are concerned they will run out of data to train the AI models (Altman doubts this will be a major problem). The dilemma is serious enough that some experts have proposed using "synthetic data" to train the models. And while the computing power and electricity required to run the models make them prohibitively expensive, experts say those costs will certainly drop in the near future, potentially making development faster and more competitive.

Tech leaders claim they are meeting unprecedented speed with unprecedented caution. Rather than fighting to accelerate a sluggish acceptance of their new tech, executives at Ideas said they are intentionally postponing product releases while they run safety checks. Altman said OpenAI has sometimes not released products or taken “long periods of time” to assess them before releasing them.

“What are our lives going to be like when it’s not just that the computer understands us and gets to know us and helps us do these things, but we can tell it to discover physics or start a great company?” Altman said. “That’s a lot of trust we have to earn as the stewards of this technology. And we’re proud of our track record. If you look at the systems that we’ve put out and the time and care we’ve taken to get them to a level of generally accepted robustness and safety, that’s well beyond what people thought we were going to be able to do.”

Chesky compared the acceleration in tech to driving.

“If you imagine you’re in a car, the faster the car goes, the more you need to look ahead and you need to anticipate the corners,” he said.

Government officials at Ideas said some of those corners are already flying by the window. In a session on AI’s role in elections, Schiller pointed to several examples of attempted voter deception or election interference using AI-generated fake information and media. So-called “bad actors” have used AI to deceive voters in Slovakia, Bangladesh and New Hampshire.

Ginny Badanes, general manager of Microsoft's Democracy Forward program, said the Russian government also has used AI to produce a fake documentary ad ridiculing the Olympic Committee and the upcoming Paris Olympics, from which Russia has been banned. The video uses a simulation of Tom Cruise's voice as a narrator.

NBC anchor Lester Holt — who interviewed Chesky and Altman — used a different vehicle metaphor from Chesky's, saying "most of us are just passengers on this bus, watching you guys do these incredible things and listening to you compare it to the Manhattan Project and wondering, 'Where is this going?'"

Michigan Secretary of State Jocelyn Benson discusses the role of artificial intelligence in elections at the Aspen Ideas Festival on Friday. Michigan has begun a campaign to educate voters about the possibility of bad actors using fake videos and images to influence elections.

Some successes

Despite its fast development, experts say AI is still far from the revolution it promises to be.

While the successes have been groundbreaking — one company, New York-based EvolutionaryScale, can now use AI to generate specialized proteins for personalized cancer care — AI still doesn’t play a critical role in most of our lives. For a technology that has been compared to the internet and even the taming of fire, experts say we are only seeing the beginning of its possible impacts.

“If you look at your phone, and you look at your home screen and ask which apps are fundamentally different because of generative AI, I would say basically none. Maybe the algorithms are a little different,” Chesky said.

But while AI may not have changed the world just yet, executives did say it has certainly changed the world for some individuals.

“One of the most fun parts of the job is getting an email every day from people who are using these tools in amazing ways,” Altman said. “People saying, like, ‘I was able to diagnose this health problem I’ve had for years but couldn’t figure out, and it was making my life miserable, and I just typed my symptoms into ChatGPT and got this idea, went to see a doctor and now I’m totally cured.’”

Holt asked Altman where he would like to be in the next five years.

“Further along the same path,” he answered.

For more information, visit Aspen Daily News.