U.S. Military AI Strategy: Transforming Testing and Evaluation

The U.S. Military’s AI Strategy: A Necessary Overhaul

As the U.S. military moves into the era of artificial intelligence (AI), significant changes are required in how it tests and evaluates AI-driven technologies. Retired Lt. Gen. John ‘Jack’ Shanahan, the first director of the Department of Defense’s Joint Artificial Intelligence Center (JAIC), emphasized that the military is currently ‘not well postured’ for effective testing and evaluation (T&E) of AI systems.

Shanahan explained that while the military has a strong historical record in T&E (‘we’ve been doing it forever’), the rapidly evolving nature of AI necessitates a paradigm shift in how these tests are conducted. Unlike traditional systems, which can undergo extensive testing at fixed intervals, AI requires a ‘continuous integration/continuous deployment’ model that keeps pace with updates occurring ‘in hours and days, not months and years.’

Continuous Testing is Key

In his remarks during the launch of a report at the Center for a New American Security, Shanahan noted the growing stakes in military conflict and the need for dynamic testing environments. ‘For continuous integration/continuous deployment, I think we ought to be thinking about it down at unit level,’ he proposed, suggesting that testing may not always need to funnel back to centralized facilities, particularly in times of warfare.

He warned of the risks associated with prioritizing rapid deployment of AI technologies for fear of falling behind rivals such as China. ‘If we start saying we’re going to lose the competition against China unless we put this out in the field as fast as possible, that’s risky,’ he cautioned.

Challenges in AI Testing

The challenges in integrating AI into military frameworks are compounded by user acceptance issues. A study of DARPA’s autonomous dogfighting program revealed discrepancies in AI performance between simulations and actual operational conditions. ‘The test pilots would shut off the autonomy and kill the test right from the beginning,’ noted report author Josh Wallin, highlighting the importance of involving operators earlier in the testing process.

Wallin emphasized that the Defense Department must address how autonomous AI systems will interact with both friendly and enemy forces, stressing that the unpredictability of AI behavior in real-world scenarios poses a significant hurdle.

A Roadmap for the Future

To support a successful transition to effective AI integration, Shanahan called for a comprehensive approach to T&E that spans the entire life cycle of a system. ‘We do have to look at this as a full life cycle approach,’ he stated. The military’s response to AI challenges must not only emphasize deployment speed but also ensure that fielded systems are effective and reliable under battlefield conditions.

As the military explores these new frontiers, the challenges it faces will parallel those confronting industries across the globe. Robust testing, early operator involvement, and a clear strategy will drive success in the adoption of AI technologies.