
[Image: A camera looks down on four students seated at desks, grouped together. On the desks are laptops, spoons and wires.]

Introduction

New federal guidance outlines how ed tech providers can build trust with district leaders as they integrate artificial intelligence into products and platforms.

Key Risks

The Education Department outlined nine categories of risk for using AI in schools:

  • Race to release
  • Bias and fairness
  • Data privacy and security
  • Harmful content
  • Ineffective systems
  • Malicious uses
  • Misinformation management
  • Transparency and explainability
  • Underprepared users

Expert Opinion

Reflecting current distrust of AI, Patrick Gittisriboongul, assistant superintendent of Lynwood Unified School District in California, stated:

Would I buy a generative AI product? Yes! But there’s none I am ready to adopt today because of unresolved issues of equity of access, data privacy, bias in the models, security, safety, and a lack of a clear research base and evidence of efficacy.

Shared Responsibility

A call for ed tech providers to share responsibility with schools in introducing AI into the classroom came from President Joe Biden in a 2023 executive order. Harnessing AI while mitigating its risks requires “a society-wide effort that includes government, the private sector, academia and civil society,” according to the executive order.

Guidance for Ed Tech Providers

The agency outlined five key areas for ed tech providers to consider in developing this shared responsibility with schools:

  • Designing for education: Developers should understand specific education values and challenges. Educator and student feedback should be included in all aspects of product development.
  • Providing evidence of rationale and impact: Educational institutions need evidence of an ed tech tool’s advertised solutions.
  • Advancing equity and protecting civil rights: Ed tech providers should be aware of representation and bias in data sets, algorithmic discrimination, and how to ensure accessibility for students with disabilities.
  • Ensuring safety and security: Ed tech providers need to lay out how they will protect the safety and security of users of AI tools.
  • Promoting transparency and earning trust: To build trust with district leaders, ed tech providers, educators and other stakeholders need to collaborate.

Conclusion

The department’s guidance noted that states and school districts are also developing their own AI use guidelines. As of June, 15 states had released resources for integrating AI in education. Ed tech providers, the agency added, should review relevant school and state AI guidance as they look to work with school districts.
