Pointing to an AI robot poster during the 2022 World Robot Conference in Beijing (photo credit: LINTAO ZHANG/GETTY IMAGES)

Why education should lead the charge in managing AI risks

As Generative Artificial Intelligence (Gen-AI) evolves at an unprecedented rate, the debate over how to manage its potential dangers intensifies. Globally, regulation is often proposed as a solution. In the US, multiple bills have been introduced at both the state and federal levels.

Similarly, in the EU, we’ve seen a slew of acts, including the AI Act and Digital Single Market (DSM) Act, bringing regulation to the forefront of the discussion in ways unseen since the introduction of the GDPR privacy regulation. While legislation is important, there are three key reasons why we should prioritize education in addressing AI’s risks.

Critical Thinking as a Tool

First and foremost, in the era of AI, our ability to think critically, adapt, and learn independently may well be our most important tool. These qualities will only become more essential as AI reshapes the world around us. Through education, we can instill a culture of vigilance and proactive engagement, encouraging people to stay informed about the latest developments and potential threats.

Integrating AI Tools in Education

Education, however, doesn't end with critical thinking; it is an ongoing mindset required in the age of Gen-AI. For instance, when we released LTX Studio, our latest AI-first product, we clearly communicated to users that while we as a company were responsible for various aspects of the product, such as privacy, we also expected them to take responsibility for how they used it. Abuse could result in removal from the platform.

Adapting to New Technologies

The second reason to prioritize education is that AI-based tools are rapidly changing the way we work. Simply put, to best prepare ourselves and our children for the future workplace, we need to start integrating such tools into our daily routines, from school to home to work. As a high school student in the early 2000s, I remember how some of my teachers viewed drawing on online resources as problematic.

When I mentioned it to my mother, she recalled how calculators were frowned upon by her math teachers. It took time for the education system to recognize that calculators, computers, and the internet were not something to fight but something to embrace. More than once, teachers reached this conclusion only after their students had adopted the new tools faster than they did.

Today, calculators are taken for granted and students research online regularly, yet the emergence of AI has rekindled the debate about adopting new tools and technologies. As these historical examples show, debate has its place, but ultimately the technology becomes part of our lives, so it is better to educate about it sooner rather than later.

The Challenges of Regulation

The third and final reason to prioritize education is that the questions around regulation are simply too big. As the deputy chairman of Israel's Regulation Authority recently wrote, the rapid pace of AI development calls into question regulators' ability to keep up. For example, the EU's AI Act includes measures aimed at restricting outlier AI models that pose a potential systemic risk.

These models are defined by the computing power used to train them, measured in floating-point operations (FLOPs), with the threshold set at 10^25. However, since this threshold was introduced, a number of mainstream models have crossed it. One of them is Meta's open-source Llama model, and Meta has said it won't release its multimodal model in the EU. Epoch AI's data shows that Meta's model isn't alone: Google's Gemini, OpenAI's GPT-4, and others have also crossed the computing threshold. This isn't to say regulation is obsolete, but rather that in the fast-paced technological environment we live in, it simply isn't enough.

Conclusion

As mentioned, the emergence of applied AI tools is already changing the way we work, and it will continue to shape our world in the years to come. Like any new technology, Gen-AI brings risks and dangers, but also immense promise to improve our productivity, creativity, and health. As regulators develop frameworks to mitigate the risks, we must focus on educating ourselves and our children about these tools, ensuring we harness their potential to enhance our lives and society.

The writer is chief of staff at Lightricks.