Hello and welcome to Eye on AI.
This week, Waymo co-CEO Tekedra Mawakana announced that the self-driving vehicle company is now providing 100,000 paid robo-taxi rides per week across Los Angeles, San Francisco, and Phoenix. That's double the 50,000 paid weekly rides the company reported just a few months ago in June, and a significant milestone for the company. It's also a reminder of just how much AI is already on our roads, operating alongside human drivers every day.
Autonomous vehicles, from robo-taxis to self-driving trucks, rely heavily on AI. Some, like Waymo’s, combine extremely detailed maps with rules-based systems and real-time machine learning to navigate roads, while others are moving toward systems that depend even more on AI to assess the driving environment in real time. AI not only enables self-driving technology; it also exemplifies the many challenges associated with the technology. Issues like hallucinations and accuracy, for instance, dominate discussions around leading commercial AI models. In the context of self-driving cars, however, accuracy is literally a matter of life or death.
Despite this, self-driving cars are not a major topic in today’s AI discussions, and autonomous vehicle manufacturers are not emphasizing AI in their narratives. A decade ago, self-driving cars were seen as the ne plus ultra of AI, yet Waymo’s website does not mention AI even once, only briefly referencing machine learning at the bottom of the page. This stands in stark contrast to nearly every other industry—from enterprise software to healthcare—where companies are vocally promoting their use of AI. This discrepancy is notable because while many sectors are hastily adopting AI without a clear strategy, self-driving cars represent a truly disruptive technology that fundamentally relies on AI to operate. So, why aren’t self-driving car companies capitalizing on the AI trend?
“It’s because of regulation. It’s a closed community,” said Eran Ofir, CEO of Imagry, a company that develops mapless AI autonomous driving software, emphasizing that “AI is happening every day on the road with hundreds of thousands of vehicles.”
Long before generative AI models like ChatGPT took center stage in AI discussions, autonomous vehicle manufacturers were navigating regulatory challenges to obtain permits for testing and operating their vehicles on public roads. They have faced significant setbacks (such as Cruise being banned and recalling its cars after an accident last year), but have made considerable progress overall, as evidenced by Waymo’s recent achievements and ongoing expansion. The emergence of generative AI, a different type of AI model, has introduced new complexities. Although many issues surrounding generative AI—like copyright and deepfakes—do not apply to the AI systems powering self-driving vehicles, they have sparked heated debates and increased skepticism among consumers and regulators regarding the technology and its implications. For the self-driving vehicle industry, aligning with the AI hype may pose more risks than benefits.
“The discussions with the regulators are challenging, and they want to avoid the negative consequences and debates surrounding the regulation of AI,” Ofir noted. “No one wants that aspect to be imposed on autonomous vehicles.”
Another intriguing distinction is that while the goal of self-driving vehicles is to make safer an activity we already perform (driving), generative AI primarily aims to make tasks faster, sometimes at the expense of safety. Generative AI has accelerated scams, introduced new cybersecurity threats, facilitated the creation of nonconsensual explicit deepfakes, and fostered a general mistrust of real information.
This raises important questions about how we assess the risks and impacts of various AI applications—questions that lawmakers are currently grappling with as they seek to regulate AI. Are self-driving cars an AI technology, or is AI merely a tool used in self-driving cars? Is it both? Does it matter?
Different types of AI can function very differently, yield distinct impacts, and pose varying risks. One thing remains clear: AI is already integrated into our daily lives, even operating in the next lane.
And with that, here’s more of today’s AI news.
Sage Lazzaro
[email protected]
sagelazzaro.com
AI IN THE NEWS
U.S. political campaigns are steering clear of AI tools. Over 30 tech companies have pitched their AI tools to U.S. political campaigns ahead of November’s election, but the vast majority of campaigns aren’t biting, according to the New York Times. The few that have tried the tools didn’t want to admit it and found that the technology fell flat. The pitched products include tools that reorganize voter rolls, recreate candidates’ likenesses, and place tens of thousands of personalized phone calls to voters. “People just didn’t want to be on the phone, and they especially didn’t want to be on the phone when they heard they were talking to an AI program,” said Matthew Diemer, a Democrat running for election in Ohio’s Seventh Congressional District, reflecting on his experience with a tool called Civox. His campaign used the AI program to make personalized voter calls that incorporated his talking points and sense of humor. The program made almost 1,000 calls in just five minutes, but nearly all recipients hung up upon realizing they were speaking to an “AI volunteer,” according to the Times.
Microsoft schedules a limited rollout of its controversial Recall AI tool for October. That’s according to The Verge. The software giant planned to launch the feature—which captures screenshots of user activity and lets users search through the archive—to testers in June, but delayed the launch amid backlash over security concerns. Recall relies on local models integrated into the Windows operating system, but security researchers discovered that its database of captured activity was unencrypted and easily accessible to attackers. While no computer connected to the internet can be entirely secure, the amount of sensitive data Recall retains and the ease of accessing it raise significant concerns, experts warn.
NIST wants you to help red-team AI office software. AI and algorithmic assessment nonprofit Humane Intelligence has called for U.S. residents—both software developers and everyday users—to participate in the qualifying round of a nationwide challenge to red-team (or attack) AI productivity software products to identify flaws. In his AI executive order last year, President Joe Biden called for a series of such challenges to be conducted through the U.S. National Institute of Standards and Technology (NIST); this one is the first of several Humane Intelligence initiatives undertaken in collaboration with government agencies, according to Wired. Participants who advance through the online round will join an in-person red-teaming event at the end of October at the Conference on Applied Machine Learning in Information Security (CAMLIS) in Virginia.
FORTUNE ON AI
A new web crawler launched by Meta last month is quietly scraping the internet for AI training data —by Kali Hays
Andreessen Horowitz leads $80 million bet on startup seeking to tame AI with copyright —by Jeff John Roberts
LVMH’s Bernard Arnault has quietly invested in 5 AI startups this year via his family office —by Prarthana Prakash
AI CALENDAR
Aug. 28: Nvidia earnings
Sept. 10-11: The AI Conference, San Francisco
Sept. 10-12: AI Hardware and AI Edge Summit, San Jose, Calif.
Sept. 17-19: Dreamforce, San Francisco
Sept. 25-26: Meta Connect in Menlo Park, Calif.
Oct. 22-23: TedAI, San Francisco
Oct. 28-30: Voice & AI, Arlington, Va.
Nov. 19-22: Microsoft Ignite, Chicago, Ill.
Dec. 2-6: AWS re:Invent, Las Vegas, Nev.
Dec. 8-12: Neural Information Processing Systems (NeurIPS) 2024 in Vancouver, British Columbia
Dec. 9-10: Fortune Brainstorm AI San Francisco (register here)
EYE ON AI NUMBERS
50 million
That’s how many views just one video of Chubby, a cat people are generating using AI, has garnered thus far. The BBC reports that videos of Chubby are drawing millions of views and a devoted following online, “blurring the line between spam and art.” Chubby is distinctly “rotund” and “ginger,” and depicted in sad situations like facing a schoolyard bully or being addicted to cigarettes, which makes it easy for users to create their own versions using any of the text-to-image models readily available online. It’s no surprise that a cat marks one of the first memes of the AI content age (save for maybe Shrimp Jesus). From Nyan Cat to Grumpy Cat, cats have always been on the frontlines of every era of internet content.
This is the online version of Eye on AI, Fortune’s weekly newsletter on how AI is shaping the future of business. Sign up for free.