Federal Agencies Continue to Take Action on AI Under AI Executive Order — AI: The Washington Report | Mintz – Antitrust Viewpoints

(co-author: Matthew Tikhonovsky)

  1. President Biden’s October 2023 Executive Order on Artificial Intelligence directed various agencies to take specific actions by June 26, 2024—240 days after the executive order was issued.
  2. Actions taken by the agencies over the 240 days included steps to strengthen data privacy protections, specify techniques for labeling and authenticating AI-generated content, and restrict the dissemination of AI-generated pornographic content.
  3. More specifically, on June 26, 2024, the National Science Foundation (NSF) launched a program to fund projects aimed at advancing data privacy protections.
  4. The National Institute of Standards and Technology (NIST) published a draft report on techniques for labeling and authenticating AI-generated content and for restricting AI-generated child sexual abuse material and non-consensual intimate images.

President Joe Biden’s October 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (AI EO) directed various federal agencies to take certain actions related to AI. As we discussed in our AI EO timeline, June 26, 2024—240 days after the EO was issued—was the deadline for agencies to take action to strengthen data privacy and to label and authenticate AI-generated content. In this bulletin, we discuss two major actions taken under that 240-day deadline.

NSF Launches Privacy-Enhancing Technologies Funding Program

With the development and proliferation of AI technology in recent years, data privacy has become a priority for regulators and policymakers. In March 2023, the National Science and Technology Council (NSTC) published the “National Strategy to Advance Privacy-Preserving Data Sharing and Analytics.” According to the strategy document, privacy-preserving data sharing and analytics (PPDSA) methods are “methodological, technical, and sociotechnical approaches that leverage privacy-enhancing technologies to derive value from data and enable its analysis to drive innovation while ensuring privacy and security.” The NSTC strategy establishes a framework for mitigating the privacy risks associated with data analytics technologies, including AI.

Biden’s October 2023 AI EO also emphasized the need to protect data privacy. The EO directed NSF to “work with agencies to identify ongoing work and potential opportunities to incorporate [privacy-enhancing technologies (PETs)] into their operations.” PETs encompass a broad range of tools designed to protect privacy, from differential privacy to end-to-end encryption. The EO also directed NSF, “to the extent possible and appropriate,” to “prioritize research—including efforts to translate research discoveries into practical applications—that encourages the adoption of cutting-edge PET solutions for agency use, including through engagement in research.”
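
To make one of the PETs named above concrete: the sketch below is a minimal illustration of differential privacy, supplied by us for readers unfamiliar with the technique; it is not drawn from the EO, the NSTC strategy, or any NSF material, and all names and values in it are hypothetical. It releases a counting query after adding Laplace noise calibrated to the query’s sensitivity.

```python
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise.

    The difference of two i.i.d. exponential variables with rate
    1/scale is Laplace-distributed with that scale.
    """
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def private_count(records, predicate, epsilon: float) -> float:
    """Release a counting query with epsilon-differential privacy.

    A count has sensitivity 1 (adding or removing one record changes
    it by at most 1), so Laplace noise with scale 1/epsilon suffices.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical example: count records above a threshold without
# exposing the exact total. Smaller epsilon = stronger privacy, more noise.
data = [12, 45, 7, 33, 90, 21, 67]
print(private_count(data, lambda x: x > 30, epsilon=0.5))
```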

Building on the NSTC strategy document and consistent with the AI EO, NSF launched the Privacy-Preserving Data Sharing in Practice (PDaSP) program on June 26, 2024. The program invites proposals under three funding tracks:

  • Track 1: “Developing Key Technologies to Enable Practical PPDSA Solutions” – This track focuses on maturing PPDSA technologies, and combinations of such technologies, with an emphasis on “translating theory into practice for the key PPDSA techniques under consideration.”
  • Track 2: “Integrated and Comprehensive Solutions for Trustworthy Data Sharing in Application Settings” – This track supports integrated privacy management solutions across a range of use cases and application contexts, including varied technological, legal, and regulatory settings.
  • Track 3: “Usable Tools and Testbeds for Trustworthy Sharing of Private or Otherwise Confidential Data” – This track emphasizes the need to “develop tools and testbeds to support and accelerate the adoption of PPDSA technologies.” Stakeholders currently face a variety of barriers to adopting such technologies, including “the lack of effective and easy-to-use tools,” which this track seeks to address.

The PDaSP program is supported by partnerships with other federal agencies and with industry. Current funding partners include Intel Corporation, VMware LLC, the Federal Highway Administration, the Department of Transportation, and the Department of Commerce. NSF also welcomes collaboration with other agencies and organizations interested in co-funding projects. Individual awards are expected to range from $500,000 to $1.5 million over periods of up to three years.

NIST Issues Draft Guidance on Synthetic Content

Policymakers have also focused on concerns about synthetic content—audio, visual, or textual information that has been generated or significantly altered by AI. The AI EO explicitly directed the Secretary of Commerce, along with other relevant agencies, to identify, within 240 days of the EO, “existing standards, tools, methods, and practices, as well as the potential development of additional science-backed standards and techniques, for” authenticating content, labeling synthetic content, and “preventing generative AI from producing child sexual abuse material or producing non-consensual intimate imagery of real individuals.”

On April 29, 2024, pursuant to the AI EO, the Department of Commerce’s National Institute of Standards and Technology (NIST) published a draft report, “Reducing Risks Posed by Synthetic Content.” The draft report covers three main thematic areas, discussed below.

First, the report describes two provenance data tracking techniques for revealing that content has been generated or modified by AI: digital watermarking and metadata recording. While digital watermarking “involves embedding information into the content (image, text, audio, video)” to indicate that the content is synthetic, metadata recording stores information about the properties of the content and makes it available so that an interested party can “verify the origin of the content and how the history of the content may have [changed] over time,” according to the report.
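
To illustrate the metadata recording idea, here is a minimal sketch of our own; it is not taken from the NIST draft, and the field names and the generator label are hypothetical. It binds a provenance record to content with a cryptographic hash so that any later edit to the bytes is detectable. Deployed provenance schemes (for example, C2PA manifests) additionally sign the record so it cannot be forged, which this toy example omits.

```python
import hashlib
import json
from datetime import datetime, timezone

def record_provenance(content: bytes, generator: str) -> dict:
    """Create a provenance record for a piece of content.

    The SHA-256 hash binds the record to the exact bytes; the other
    fields describe how and when the content was produced.
    """
    return {
        "sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,  # hypothetical: the tool or model name
        "created": datetime.now(timezone.utc).isoformat(),
        "synthetic": True,
    }

def verify_provenance(content: bytes, record: dict) -> bool:
    """Check content against its record; any edit breaks the hash."""
    return hashlib.sha256(content).hexdigest() == record["sha256"]

image_bytes = b"...synthetic image bytes..."
record = record_provenance(image_bytes, generator="example-image-model")
print(json.dumps(record, indent=2))
print(verify_provenance(image_bytes, record))          # True
print(verify_provenance(image_bytes + b"!", record))   # False: altered
```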

Second, the report describes best practices for testing and evaluating provenance data tracking and synthetic content detection technologies, including methods for testing digital watermarking and metadata recording techniques, as well as automated content-based detection techniques.

Finally, the report discusses specific techniques to prevent harm from child sexual abuse material (CSAM) and non-consensual intimate images (NCII) created or distributed using AI. These include filtering CSAM and NCII out of the data used to train AI systems, blocking AI-generated image outputs that potentially contain CSAM or NCII, and hashing confirmed synthetic CSAM and NCII to prevent further distribution.
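
The hash-and-block approach mentioned above can be sketched in a few lines. The example below is a deliberate simplification we supply for illustration, not NIST’s method: it rejects uploads whose cryptographic hash matches a blocklist of confirmed material. Production systems instead use perceptual hashes (PhotoDNA-style), which survive re-encoding and cropping where exact hashing does not; the blocklist contents here are hypothetical placeholders.

```python
import hashlib
from typing import Optional

# Hypothetical blocklist of SHA-256 hashes of confirmed harmful content.
# Real deployments use perceptual hashing, which tolerates re-encoding;
# exact cryptographic hashing is the simplest possible sketch.
BLOCKED_HASHES = {
    hashlib.sha256(b"known-bad-example-bytes").hexdigest(),
}

def is_blocked(content: bytes) -> bool:
    """Return True if the content exactly matches a blocklisted hash."""
    return hashlib.sha256(content).hexdigest() in BLOCKED_HASHES

def filter_upload(content: bytes) -> Optional[bytes]:
    """Drop uploads that match the blocklist; pass everything else."""
    return None if is_blocked(content) else content

print(filter_upload(b"harmless example") is not None)         # True
print(filter_upload(b"known-bad-example-bytes") is not None)  # False
```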

Comments on the draft report were accepted until June 2, 2024. Although the final report was due by June 26, 2024, it has not yet been made publicly available.
