
Training is critical for the ethical use of AI (image: Getty)

Artificial intelligence (AI) technologies are now being widely used by organizations. In fact, 72% of organizations surveyed by consultancy McKinsey earlier this year had adopted AI in at least one business function. What’s more, 65% of respondents reported that their organizations were regularly using generative AI (genAI) technologies, while three-quarters predicted that genAI would lead to significant or disruptive change in their industries over the coming years.

AI can be applied to a host of business use cases that boost efficiency, productivity and competitiveness, from the automation of routine tasks through to the sophisticated analysis of large datasets. At the same time, however, the technology presents some significant ethical risks, particularly in relation to bias, intellectual property and privacy. So, how can leaders help their organizations to navigate the ethical risks associated with AI?

1. Exercise sound human judgment

“Developing and deploying new technologies in an ethical, responsible way will depend on human judgment,” says Rob Hayward, chief strategy officer at organizational ethics advisory firm Principia. “AI is not simply good or evil, but every new AI solution will involve thousands of micro decisions on the ground throughout the innovation lifecycle. Those decisions will not only depend on legal and regulatory parameters, but on individual and collective judgement on the right thing to do.”

Hayward emphasizes that exercising sound judgment will require people at every level of the organization, from designers and engineers to marketers and product managers, “to identify and reflect on the ethical challenges presented by new technologies, and to understand their decisions through the lens of the impact that they will have on the world.”

The most important thing that leaders can do, Hayward argues, is to engage their people in open and honest dialogue about how AI can help or hinder the organization’s ethical aspirations and commitments. To drive ethical AI, he adds, leaders must also strengthen organizational systems, policies and governance mechanisms.

2. Promote a culture of responsible innovation

Culture is key to success in almost every organizational endeavor. So, with AI, it’s essential to promote a culture of responsible innovation, explains Nell Watson, an AI expert, ethicist and author of Taming the Machine: Ethically Harness the Power of AI. She advises leaders to conduct regular audits for biases and unintended consequences, especially in high-stakes domains like hiring and performance evaluation, and to prioritize data privacy and security with robust access safeguards and explicit consent protocols.

Watson recommends that leaders implement clear monitoring for AI decision-making, ensuring accountability and preserving the right to challenge algorithmic outcomes. They should also consider the long-term implications of deploying AI, including potential job displacement, and proactively invest in reskilling initiatives to future-proof the workforce.
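For teams wondering where the bias audits and monitoring Watson describes might begin, the sketch below shows one deliberately simplified first step: comparing selection rates across groups in the outcomes of an AI-assisted screening tool. It is illustrative only; the column names, the sample data and the 80% flagging threshold are assumptions made for this example, and a real audit would involve statistical testing, larger and intersectional groups, and human domain review.

```python
# Illustrative sketch only: a minimal check that compares selection rates across
# groups in hypothetical hiring data. Column names ("group", "selected") and the
# 80% threshold are assumptions for this example, not a standard.
import pandas as pd

def selection_rate_report(df: pd.DataFrame, group_col: str = "group",
                          outcome_col: str = "selected") -> pd.DataFrame:
    """Return per-group selection rates and each group's ratio to the highest rate."""
    rates = df.groupby(group_col)[outcome_col].mean().rename("selection_rate")
    report = rates.to_frame()
    report["ratio_to_max"] = report["selection_rate"] / report["selection_rate"].max()
    return report.sort_values("ratio_to_max")

if __name__ == "__main__":
    # Hypothetical outcomes from an AI-assisted screening tool.
    data = pd.DataFrame({
        "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
        "selected": [1,   1,   1,   0,   1,   0,   0,   0],
    })
    report = selection_rate_report(data)
    print(report)
    # Flag groups whose selection rate falls below 80% of the highest group's rate,
    # a common (but not universal) rule of thumb for spotting potential disparity.
    flagged = report[report["ratio_to_max"] < 0.8]
    if not flagged.empty:
        print("Potential disparity worth human review:", list(flagged.index))
```

Even a basic report like this gives the "right to challenge algorithmic outcomes" somewhere concrete to start: a number a human reviewer can question before the tool's decisions are acted on.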

“Remember that ethical AI is a journey, not a destination,” Watson says. “Foster open dialogue with stakeholders, including employees and the public, to address reasonable concerns and build trust. By balancing the efficiency from innovation with ethical considerations, leaders can harness AI’s incredible potential without causing scandal or putting unfair burdens onto others.”

3. Have a valid business case for using AI

AI is just one technology among many that make up the Fourth Industrial Revolution or Industry 4.0. Other technologies in the mix include advanced analytics, blockchain and the cloud. “These emerging digital technologies all come with complex trade-offs around ethics, sustainability and ‘tech for good,’” says Richard Markoff, supply chain management professor at ESCP Business School in Paris and co-author of The Digital Supply Chain Challenge.

Markoff argues that, as with other technologies, any deployment of AI should stem from genuine business drivers, have a robust business case, and be subject to “a careful implementation with deep engagement and commitment from company leadership.” Citing the example of driverless vehicles, he says that while much debate has focused on passenger cars and taxis, the most likely near-term application could be “driverless trucks that carry freight within the supply chains of most businesses.”

4. See AI as a force for good

Such is the controversy around AI that it’s easy to forget the technology is here to help, not to take over. “AI can handle the mundane tasks so you can focus on what truly matters,” emphasizes Chris Griffiths, co-author of The Focus Fix: Finding Clarity, Creativity and Resilience in an Overwhelming World. “By offloading repetitive chores to AI, you free up your team’s brainpower for strategic and creative thinking.”

Griffiths believes that we need to embrace AI as our ally, using it to lighten our cognitive load while ensuring that we’re ethically sound in our approach. “This way, we not only enhance our productivity, but also find greater clarity, creativity and joy in our everyday work,” he says.

Training in the ethical use of AI is critical. “This isn’t just a matter of teaching people to push the right buttons, or use the right AI model,” Griffiths explains. “It’s about understanding how to ethically harness AI’s full potential. Leaders need to cultivate an environment where teams see AI as a tool for good – a way to boost productivity without sacrificing mental wellbeing.”
