Pentagon adds ChatGPT to its official AI toolkit
Experts told Air & Space Forces Magazine that while the Pentagon’s adoption of generative AI tools, including ChatGPT, could lead to more efficient work for Department of Defense personnel, it also presents risks if users do not stay vigilant.
The department announced Feb. 9 that it was adding OpenAI’s ChatGPT to its GenAI.mil platform, whose tools use machine learning on large data sets to function as chatbots and produce text, images, or software code based on unclassified information, News.Az reports, citing foreign media.
GenAI.mil launched in December with Google’s Gemini for Government and later added xAI’s government suite, based on its Grok model. The platform has already surpassed one million unique users.
The addition of ChatGPT may fuel further interest and growth. OpenAI is largely credited with launching a boom in generative AI, and ChatGPT remains the most popular version of the technology. According to a January study of web traffic, ChatGPT accounted for nearly 65 percent of generative AI chatbot site visits among the general public, triple that of Google’s Gemini.
Gregory Touhill, a retired Air Force brigadier general who now directs the CERT Division at Carnegie Mellon’s Software Engineering Institute, told Air & Space Forces Magazine that expanded access to AI is important because Airmen and Guardians need it to stay competitive.
“I think it’s important for our Airmen today, we want our Airmen to be well prepared for the future, and the future is racing toward us now,” Touhill said. “AI is a tool that our Airmen and our Guardians can use to obtain decisive capabilities in the cyber domain.”
Touhill said he and SEI are working with the Pentagon and other agencies to develop risk management processes for using AI in government settings.
Touhill added that he is confident AI can help service members automate or eliminate tasks and get more done faster. More important, he said, AI can free Airmen and Guardians from lower-level work so they can devote more time to higher-order tasks.
Caleb Withers, a research assistant in the Technology and National Security Program at the Center for a New American Security, foresees AI benefiting prototyping, wargaming, research, and bureaucratic paper-pushing. “With wide adoption, I imagine these tools will quickly become some of the most used,” Withers said.
But Touhill and Withers cautioned that AI also poses real risks as its use grows. “The security challenge is the fusion of hardware, software, and wetware,” Touhill said, the last term referring to the humans using AI. “We don’t want our Airmen and Guardians disclosing information into a system not designed to process that information. Once it’s in, it’s part of the system; it’s not like you can back away and wipe it clean.”
Using official defense applications rather than open commercial tools is one step toward preventing information loss. Good training, common sense, and clear protocols also help, Withers said, as does a skeptical approach to AI use.
“These systems are not yet fully reliable, and in some cases can be quite unreliable or fail,” Withers said. “There’s a risk of overconfidence in them.”