Annual reports from the biggest U.S. corporations are increasingly highlighting artificial intelligence as a possible risk factor.
According to a report from research firm Arize AI, the number of Fortune 500 companies that cited AI as a risk hit 281. That represents 56.2% of the companies and a 473.5% increase from the prior year, when just 49 companies flagged AI risks.
“If annual reports of the Fortune 500 make one thing clear, it’s that the impact of generative AI is being felt across a wide array of industries—even those not yet embracing the technology,” the report said. “Given that most mentions of AI are as a risk factor, there is a real opportunity for enterprises to stand out by highlighting their innovation and providing context on how they are using generative AI.”
To be sure, the jump in warnings also coincides with the explosion of awareness and interest in AI after OpenAI’s release of ChatGPT in late 2022. The number of companies that made any mention of AI leapt 152% to 323.
Now that AI is fully on corporate America’s radar, the risks and opportunities are coming into focus, with companies disclosing where they see potential downside coming from.
But certain companies are more worried than others. Leading the way was the media and entertainment industry, with 91.7% of Fortune 500 companies in that sector citing AI risks, according to Arize. The technology has rippled through the industry as performers and companies look to guard against it.
“New technological developments, including the development and use of generative artificial intelligence, are rapidly evolving,” streaming leader Netflix said in its annual report. “If our competitors gain an advantage by using such technologies, our ability to compete effectively and our results of operations could be adversely impacted.”
Hollywood giant Disney said rules governing new technologies like generative AI are “unsettled,” and eventually could affect revenue streams for the use of its intellectual property and how it creates entertainment products.
Arize said 86.4% of software and tech companies, 70% of telecoms, 65.1% of healthcare companies, 62.7% of financials, and 60% of retailers also flagged AI risks.
By contrast, just 18.8% of automotive companies flagged AI risks, along with 37.3% of energy firms and 39.7% of manufacturers.
The warnings also came from companies that are incorporating AI into their products. Motorola said “AI may not always operate as intended and datasets may be insufficient or contain illegal, biased, harmful or offensive information, which could negatively impact our results of operations, business reputation or customers’ acceptance of our AI offerings.”
Salesforce pointed to AI and its Customer 360 platform, which gives its business customers information about their own customers: “If we enable or offer solutions that draw controversy due to their perceived or actual impact on human rights, privacy, employment, or in other social contexts, we may experience new or enhanced governmental or regulatory scrutiny, brand or reputational harm, competitive harm or legal liability.”
AI was also flagged as a risk when it comes to cybersecurity and data leaks. In fact, the recent Def Con security conference highlighted the importance of AI in cybersecurity.
Meanwhile, a study published in the Journal of Hospitality Marketing & Management in June found consumers were less interested in purchasing an item if it was labeled with the term “AI.”
Consumers need to be convinced of AI’s benefits in a particular product, according to Dogan Gursoy, hospitality management professor at Washington State University’s Carson College of Business and one of the study’s authors.
“Many people question, ‘Why do I need AI in my coffee maker, or why do I need AI in my refrigerator or my vacuum cleaner?’” he told Fortune earlier this month.