
Microsoft Advocates for Responsible AI Use in Europe

As a long-standing technology partner to European governments, businesses, and citizens, Microsoft seeks to ensure that the continent benefits from digital technologies and artificial intelligence (AI) while still respecting the rights of EU citizens online.

AI is no longer a distant prospect but a reality, reshaping businesses, revolutionizing healthcare, and driving scientific discovery across the EU. However, alongside these advancements come significant challenges that demand responsible navigation. Microsoft, a major provider of AI services, emphasizes its role in fostering public trust through its solutions.

The new EU mandate provides an opportunity to explore how technology can benefit people, driving innovation and competitiveness while implementing proportionate measures to shield people from potential abuse. Microsoft looks forward to collaborating with the new leadership in European institutions during the 2024-2029 mandate.

Strong political leadership is crucial at this juncture. Ursula von der Leyen, President of the European Commission, stated: ‘Europe is leading the way in making AI safer and more trustworthy, and tackling the risks stemming from its misuse.’ However, the EU must also recognize AI’s potential to drive digital transformation and economic growth. As President von der Leyen highlighted, the focus should be on ‘becoming a global leader in AI innovation.’ She committed to strengthening the EU’s approach to AI-generated content during her tenure.

To achieve both innovation and safety, a balanced, inclusive approach is necessary, involving government, civil society, and industry. The EU is making strides in establishing accountable legal frameworks for safe online products, including AI. Microsoft recognizes recent legislative developments and is ready to engage with EU stakeholders on effective implementation. The company also sees a pressing need for updated legal measures to counter the misuse of AI technologies.

Research on online safety reveals that some societal groups are disproportionately at risk from intentional technology misuse. Therefore, Microsoft advocates for concrete steps to protect vulnerable populations, including children, women, and older adults, from harm caused by abusive AI-generated content.

In a recent white paper, Microsoft outlines actions it is taking to mitigate these risks, along with policy recommendations to bolster existing efforts. Central to its stance is the establishment of clear and balanced rules that protect individuals while fostering innovation in Europe. The company recommends that the EU integrate provenance tools, strengthen existing legal frameworks, and enhance victim-centered decision-making processes.

Microsoft emphasizes the necessity of a robust safety architecture for its services, grounded in safety by design and incorporating durable media provenance and watermarking techniques. Collaboration with industry, governments, and civil society is essential to safeguard services from abusive content while promoting awareness and education about AI. Building trust in AI across society is critical to realizing its advantages.
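The paragraph above names media provenance and watermarking as building blocks but does not describe a mechanism. As a minimal, illustrative sketch only (not Microsoft's implementation, and far simpler than industry standards such as C2PA Content Credentials, which add cryptographic signing and standardized schemas), the Python below shows the basic idea behind a provenance manifest: record a cryptographic hash of a generated media file alongside metadata about how it was produced, so that later tampering with the file can be detected. All file names, field names, and the ExampleImageService / example-model-v1 identifiers are hypothetical.

import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path


def build_provenance_manifest(media_path: str, generator: str, model: str) -> dict:
    # Illustrative only: real provenance systems also sign the manifest
    # and follow standardized schemas such as C2PA.
    data = Path(media_path).read_bytes()
    return {
        "asset": Path(media_path).name,
        "sha256": hashlib.sha256(data).hexdigest(),  # binds the manifest to the exact bytes
        "generator": generator,                      # hypothetical service that produced the asset
        "model": model,                              # hypothetical model version
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }


def verify_manifest(media_path: str, manifest: dict) -> bool:
    # Returns True only if the media file still matches the recorded hash.
    data = Path(media_path).read_bytes()
    return hashlib.sha256(data).hexdigest() == manifest["sha256"]


if __name__ == "__main__":
    # Hypothetical example: a locally saved AI-generated image.
    sample = Path("generated_image.png")
    sample.write_bytes(b"placeholder image bytes")

    manifest = build_provenance_manifest(
        str(sample), generator="ExampleImageService", model="example-model-v1"
    )
    Path("generated_image.provenance.json").write_text(json.dumps(manifest, indent=2))
    print("manifest valid:", verify_manifest(str(sample), manifest))

Durable watermarking goes a step further than this sketch by embedding the signal in the media itself, so it survives re-encoding and cropping; the manifest approach above only detects changes to the original file.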

As part of its strategy, Microsoft focuses on three key risk areas: protecting children from online exploitation, safeguarding women from non-consensual intimate imagery, and defending older adults from AI-enabled fraud.

Although the challenges posed by AI are substantial, the potential reward is equally significant. Addressing these issues proactively will help pave the way for a future where AI enhances creativity, upholds privacy, and fortifies democratic principles.

Microsoft remains dedicated to playing its part but acknowledges that collaboration with stakeholders across the EU digital ecosystem is vital. The company aims for technology to serve as a positive force in society, consistent with its mission to empower every person and organization to achieve more. The time for action is now.

For further details, read the full white paper at aka.ms/SyntheticMediaEUWhitepaper.