Co-author: Matthew Tikhonovsky
President Biden’s October 2023 Executive Order on AI directed various agencies to take certain actions by June 26, 2024, which was 240 days after the EO’s issuance. These actions included steps to strengthen data privacy, identify techniques for labeling and authenticating AI-generated content, and curb the dissemination of AI-generated explicit content.
NSF Launches Funding Program for Privacy-Enhancing Technologies
As AI technologies have evolved, concerns about data privacy have been top of mind for regulators and policymakers. In March 2023, the National Science and Technology Council (NSTC) published its “National Strategy to Advance Privacy-Preserving Data Sharing and Analytics (PPDSA).” This strategy creates a framework for mitigating privacy-related risks associated with technologies used for data analysis, including AI.
Building on this strategy and pursuant to the AI EO, on June 26, 2024, the NSF launched the Privacy-Preserving Data Sharing in Practice (PDaSP) program. The program’s solicitation seeks proposals across three main tracks of project funding:
- Track 1: “Advancing key technologies to enable practical PPDSA solutions” – Focuses on maturing PPDSA technologies and transitioning theory to practice.
- Track 2: “Integrated and comprehensive solutions for trustworthy data sharing in application settings” – Supports integrated privacy management solutions for different use cases and contexts.
- Track 3: “Usable tools and testbeds for trustworthy sharing of private or otherwise confidential data” – Emphasizes developing tools and testbeds to support the adoption of PPDSA technologies.
The PDaSP program is supported by partnerships with other federal agencies and industry. Current funding partners include Intel Corporation, VMware LLC, the Federal Highway Administration, the Department of Transportation, and the Department of Commerce. Project funding is expected to range from $500,000 to $1.5 million for up to three years.
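To give a sense of what the practical PPDSA solutions targeted by Track 1 might build on, the sketch below shows one widely used privacy-preserving technique: the Laplace mechanism for differential privacy. It is purely illustrative and is not drawn from the PDaSP solicitation or any funded project; the function name and parameter values are assumptions made for this example.

```python
import numpy as np

def laplace_release(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release a noisy statistic satisfying epsilon-differential privacy.

    Noise is drawn from a Laplace distribution with scale sensitivity/epsilon,
    the standard calibration for the Laplace mechanism.
    """
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Hypothetical example: share the count of records matching a query.
# A counting query has sensitivity 1, because adding or removing one
# person changes the count by at most 1.
true_count = 1280
private_count = laplace_release(true_count, sensitivity=1.0, epsilon=0.5)
print(f"true count: {true_count}, privately released value: {private_count:.1f}")
```

Smaller values of epsilon add more noise and give stronger privacy guarantees at the cost of accuracy; managing that trade-off in real data-sharing settings is the kind of practical challenge involved in moving PPDSA techniques from theory to practice.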
NIST Issues Draft Guidance on Synthetic Content
Policymakers have also focused on concerns related to synthetic content, meaning audio, visual, or textual information generated or significantly altered by AI. The AI EO directed the Secretary of Commerce, along with other relevant agencies, to identify standards and techniques for authenticating content, labeling synthetic content, and preventing AI from producing harmful materials.
On April 29, 2024, the Department of Commerce’s National Institute of Standards and Technology (NIST) published a draft report on “Reducing Risks Posed by Synthetic Content.” The draft report covers three main areas:
- Data tracking techniques for disclosing AI-generated content, including digital watermarking and metadata recording (a simple illustration of metadata recording appears after this list).
- Best practices for testing and evaluating data tracking and synthetic content detection technologies.
- Techniques for preventing harm from Child Sexual Abuse Material (CSAM) and Non-Consensual Intimate Imagery (NCII) created or disseminated by AI.
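As a loose illustration of what “metadata recording” can look like in practice (this example is not taken from the NIST draft, and the field names are hypothetical), the short Python sketch below uses the Pillow imaging library to attach a provenance tag to a generated PNG file and read it back:

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Placeholder image standing in for AI-generated output.
image = Image.new("RGB", (64, 64), color="white")

# Record provenance fields as PNG text chunks (field names are illustrative).
metadata = PngInfo()
metadata.add_text("ai_generated", "true")
metadata.add_text("generator", "example-model-v1")

image.save("labeled_output.png", pnginfo=metadata)

# A downstream consumer can read the disclosure back out of the file.
with Image.open("labeled_output.png") as reopened:
    print(reopened.text)  # e.g. {'ai_generated': 'true', 'generator': 'example-model-v1'}
```

Plain text fields like these can be stripped or edited trivially; techniques discussed in standards work, such as digital watermarking and cryptographically signed provenance metadata, are designed to be more robust.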
Comments on the draft report were due by June 2, 2024. While the final report was due on June 26, 2024, it has not yet been made publicly available.