
“Our job is to build tools to help artists and help broadcasters and help engineers do their jobs better. And so as we’re building these types of tools, and we’re integrating this type of technology, we also have to make sure that we are being ethical in what we are putting together,” said SMPTE President Renard Jenkins during a recent session focused on Ethics and Regulation in AI.

The conversation featured representatives from SMPTE’s Joint Task Force on AI and Video, who shared their perspective on AI ethics and regulatory approaches. You can watch the full video below or read on for highlights.

The task force was formed in 2020. ETC AI and Neuroscience in Media Director Yves Bergquist stated that the group found “both an issue and an opportunity was… all the ethical and legal questions around deployment of artificial intelligence in the media industry.”

Jenkins emphasized that those in the media industry "are consumers of this technology." That alone gives the industry a stake in the ethics debate, but, he said, "We also have a great responsibility in ourselves because we are able to touch millions with a single program or a single piece of content."

Bergquist, who also serves as CEO of Corto AI, remarked, “I love looking at artificial intelligence from within the media industry because the media industry is a technology industry.”

He explained that M&E “has a massive track record in marrying human creativity with technology. It’s also not a producer of artificial intelligence. It’s a consumer of artificial intelligence products.”

The Good(?) News

Bergquist noted, "The practice of ethical AI is identical to the practice of good, methodologically sound AI. You need to know biases in your data. You need to have a culturally and intellectually diverse team."

In fact, he stated, “I have yet to see a requirement of ethical AI that isn’t also a requirement of rigorous AI practice.”

To be both ethical and intellectually rigorous, Bergquist said, “You need to understand the impact …of your models on your organization, on society at large.”

AMD Fellow Frederick Walls concurred, adding, “Transparency and explainability…they’re part-and-parcel of making sure that your model does what it’s supposed to do.”

Understanding Bias in AI

“The issue of transparency is critical,” Bergquist said. “It’s also something that we have tools to address.”

He cited IBM researcher Kush R. Varshney's "Trustworthy Machine Learning" (downloadable as a PDF), which lays out the "food label model" to detail important elements such as "how those models were trained, what data they were trained on, what biases were identified in the training, what are the variables that are participating the most in the model."

Bergquist also mentioned that Google researchers have proposed "model cards" to pair with trained models, featuring "metadata about how the model was trained, how much data was trained, how it performs, what methodologies are baked in the model, which biases are based on the data."
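To make the idea concrete, here is a minimal sketch of what such a model card's metadata might look like in code. The field names and values below are purely illustrative, not taken from the Model Cards specification or from any real model.

```python
# An illustrative "model card": machine-readable metadata shipped
# alongside a model, recording how it was built and evaluated.
# All names and values here are hypothetical.
import json

model_card = {
    "model_name": "shot-classifier-v2",          # hypothetical model
    "training_data": "10k hand-labeled frames",  # what it was trained on
    "methodology": "fine-tuned image classifier",  # what's baked in
    "known_biases": [
        "under-represents low-light footage",
        "English-language metadata only",
    ],
    "eval_metrics": {"accuracy": 0.91, "f1": 0.89},
}

# Serializing the card keeps it reviewable and diffable in version control.
print(json.dumps(model_card, indent=2))
```

The point is less the format than the habit: the card travels with the model, so downstream users can see its training data and known biases before deploying it.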

After all, Jenkins pointed out, "As we know, you have to actually input some bias into your model because if not, it can go off the rails. And we have to think of bias … essentially from its original definition, which is to show a predilection."

Walls added, “There are sources of bias everywhere in an AI model, and I don’t think there’s a way to really get rid of it.”

“But I think there’s definitely responsibility for those who are … implementing a model to understand what those biases are, and where they might be coming from.” He noted that documentation (logging) is also critical.
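One simple form that documentation can take is an automated audit of the training data itself. The sketch below, a toy example with made-up labels, logs a dataset's class distribution so that known skews are written down rather than discovered in production.

```python
from collections import Counter

def audit_label_balance(labels):
    """Return the class distribution of a training set, so known
    skews are documented rather than discovered in production."""
    counts = Counter(labels)
    total = len(labels)
    return {label: round(n / total, 3) for label, n in counts.items()}

# Toy training labels: heavily skewed toward "studio" footage.
labels = ["studio"] * 80 + ["field"] * 15 + ["archival"] * 5
print(audit_label_balance(labels))
# {'studio': 0.8, 'field': 0.15, 'archival': 0.05}
```

A real pipeline would log this report alongside the trained model, in the spirit of the model cards discussed above, so the bias is on record even if it cannot be removed.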

The Human Element & Policy

Bergquist emphasized that AI “is not independent of humans. It is built by humans and reflective of human biases.”

He believes we need to dial down the Silicon Valley hype that casts AI as "this magical technology that is going to take over our lives."

This false advertising is damaging to progress because, Bergquist said, “Eighty-seven percent of all AI initiatives in large organizations fail because either people think that it’s magic and [will] solve all their problems, or they think that it’s just really completely incapable and can do nothing and therefore shouldn’t be looked at.”

Jenkins said, “Most of the time, the reason that those types of things fail is because individuals have not taken the time to put in the proper infrastructure, or taken the time to figure out who should be the right person leading these types of things internally.”

Walls advised that organizations start with the NIST AI Risk Management Framework when they begin to develop “a corporate strategy around mitigating risks with using AI.” He described it as “an excellent tool” and recognized that policies will differ among organizations.

He also referenced C2PA, an organization “that’s working on standards related to ensuring that you can verify the provenance and authenticity of content.”
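C2PA itself defines a manifest format built on signed claims and X.509 certificates; the toy sketch below only illustrates the underlying principle of binding content to a tamper-evident tag, using a shared-secret HMAC rather than the real specification's certificate-based signatures.

```python
import hashlib
import hmac

SECRET_KEY = b"publisher-signing-key"  # stand-in for a real private key

def sign_content(content: bytes) -> str:
    """Produce a tamper-evident tag for a piece of content.
    Real C2PA manifests use certificate-based signatures over
    structured claims; this HMAC only illustrates the principle."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str) -> bool:
    """Check that content still matches the tag issued for it."""
    return hmac.compare_digest(sign_content(content), tag)

original = b"frame data from the camera"
tag = sign_content(original)
print(verify_content(original, tag))              # True: untouched
print(verify_content(b"edited frame data", tag))  # False: altered
```

The same verify-against-a-signed-record idea is what lets a viewer confirm where a clip came from and whether it has been modified since publication.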

Jenkins suggested that SMPTE’s own AI report provides “a good foundation” or perhaps “a roadmap” for organizations to create their own AI working groups to determine internal policies.