AI Technology and the Debate Over Accountability
As artificial intelligence technology continues to advance, it has become increasingly easy for individuals to exploit its capabilities for malicious purposes. This trend has sparked a vigorous debate over accountability, specifically whether developers of AI technology or individual users should face penalties for misconduct.

The issue grows more complex when minors are involved. During a Senate hearing on November 19, chaired by Colorado's U.S. Senator John Hickenlooper, Hany Farid, a professor at the University of California, Berkeley, described a disturbing example involving a 12-year-old boy who used AI tools to create non-consensual fake nude images of classmates.

While acknowledging the boy’s accountability, Farid emphasized a critical point: the government needs to impose significant penalties on the AI companies responsible for providing the tools that enabled such behavior. He asserted that punitive measures against minors may not yield substantial national impact, but holding AI companies accountable could inspire change.

Alvin McBorrough, founder and managing partner of OGx, a Denver consulting firm, elaborated, saying, ‘It does become the responsibility for the developer and the deployer to put trusted safeguards in place.’

Amid these discussions, the rapid expansion of the AI industry raises alarm among advocates for regulation, who argue that both state and federal governments are falling behind in crafting legislation to hold both creators and users accountable for the misuse of AI technologies.

Concerns are escalating as realistic-looking videos and images are increasingly used to victimize teenagers and adults. Schools currently lack policies to effectively discipline students who create fake pornographic videos or inappropriate images of classmates. Consequently, those students face few repercussions, a dilemma exacerbated by the absence of regulatory measures at the state and federal levels.

Moreover, scams targeting vulnerable populations, particularly the elderly, are proliferating, facilitated by advanced AI technologies. In 2021, consumers lost a staggering $10 billion to scams and fraud, a sharp rise from $3.5 billion in 2020, according to the Federal Trade Commission.

Justin Brookman, Director of Technology Policy for Consumer Reports, noted that the costs associated with producing believable fake images have plummeted, stating, ‘What used to cost a scammer about $4 is now only 12 cents.’

Farid highlighted another pressing issue: the persistence of discrimination in AI applications, driven by the continued use of outdated algorithms that are simply updated with AI technology rather than re-engineered. U.S. Representative Brittany Pettersen pointed out that the housing and finance sectors are particularly affected by this bias.

McBorrough commended the Colorado legislature for passing Senate Bill 205, aimed at mitigating bias in AI-driven decisions by establishing applicable frameworks. The bill is set to take effect in February 2026.

Critics, however, argue that such regulations may stifle innovation. Concerns have been raised about provisions deemed ineffective or impractical.

Meanwhile, the Attorney General’s Office has been tasked with implementing the new law, including the establishment of auditing policies and identifying high-risk AI practices, alongside forming a task force to address legislative shortcomings in the forthcoming session.

McBorrough, who collaborates with major AI developers like Google, asserted that the industry is committed to the public’s safety against AI misuse. He stated, ‘The intention is good, and some developers are being cautious in making decisions.’

Farid cautioned that without stringent legal measures targeting large AI firms, the misuse of AI technology will only proliferate. He stressed that financial penalties must be considerable to change the behavior of tech giants. ‘If major technology companies develop AI technology that allows scams, the misuse will continue,’ he added.

Conclusion

The debate over accountability in AI technologies is far from settled. As the technology continues to evolve, so too must the policies governing its use. Only through careful regulation and stringent penalties can we hope to mitigate its misuse while fostering innovation that benefits society.