AI needs governance

There has been no end to talk of embracing artificial intelligence — or any other new technology, for that matter — before one’s competitors take hold of it. The consequences, it is always said, are dire: slow-adopting companies become dinosaurs, waiting to suffer the effects of an asteroid hit. But business leaders are apparently not buying into this notion. Governance, security, and corporate culture, they say, need to be factored into the equation before AI becomes a significant part of operations and decision-making.

If anything, nearly all business leaders (98%) say they are “willing to forgo being the first to use AI” if that ensures they deliver it safely and securely, according to a survey of 205 executives, published in MIT Technology Review and underwritten by Boomi.

AI ambitions are substantial, but few have scaled beyond pilots, the survey showed. Fully 95% of companies surveyed are already using AI and 99% expect to in the future. However, the majority, 76%, have only deployed AI in one to three use cases.

Welcome to the trailing edge of AI. What will bring organizations to get AI past the finish line — the point at which it is in full production and delivering the returns promised?

Focusing on Results

Industry leaders agree that it’s time to look past the hype and hopes of AI and focus on solid results. “It’s understandable to be skeptical of headlines claiming that every new tech breakthrough will change everything,” said Raj Sharma, global managing partner for growth and innovation at EY.

Sharma advises caution but is optimistic about what AI can ultimately deliver. “Generative AI and AI-driven large language models appear poised to fulfill this promise. With last year considered the year of AI hype, we’re witnessing 2024 as the year of AI reality. Businesses are exploring large-scale transformation while regulators focus on implementing new AI codes and regulations.”

Governance and Risk Management

To that end, governance, security, and privacy are the biggest brakes on the speed of AI deployment, cited by 45% of respondents to the MIT-Boomi survey.

“The fake-it-until-you-make-it approach ceases to be a viable strategy when it comes to AI,” Mrinal Manohar, CEO of Casper Labs, pointed out. “The primary difficulty is determining how to adopt the appropriate governance frameworks for high-risk applications while still driving responsible AI innovation and adoption.”

“People drive faster when they have seatbelts on,” Manohar added. Governance and risk management can help unleash AI’s full potential and get it across the finish line. However, the technology has yet to deliver on its potential, “largely due to a lack of governance and unified standards. Strong AI governance will drive faster innovation and more reliable real-world deployment, setting AI up for success.”

Corporate Culture’s Role

Corporate culture is a significant factor in AI governance and risk management. “While at this point many organizations have adopted an internal policy on the use of AI tools, such policies are only helpful if they are actually implemented and followed throughout an organization, not stashed away in a drawer or on a rarely visited intranet site,” said Anna Westfelt, partner and head of the data privacy practice at law firm Gunderson Dettmer.

“Build a culture of accountability and compliance when it comes to privacy and security, and train employees to apply caution when using AI tools,” Westfelt urged. This includes “consistent monitoring of AI by the organization’s IT team, together with regular training to make sure employees are aware of the boundaries around the use of AI tools.”

At this point, “many organizations have been able to obtain more secure and restricted walled-off commercial versions of popular and publicly available GenAI tools,” Westfelt continued. “Employees should be discouraged from using any tool that is not vetted by the organization.” An inventory of tools used and training provided should also be tracked, she said.

Conclusion

Ultimately, the goal of AI governance and risk management is the responsible and ethical use of the technology as it develops within organizations. “The way forward is to integrate proactive risk management throughout every stage of the AI transformation,” Sharma said. “This can help to build confidence, foster agility, and navigate disruption effectively.”