Defense AI Technology: Worlds Apart From Commercial AI
By: Brooks McKinney, APR
Artificial intelligence (AI) is all around us. If you own a smartphone, use navigational software or shop online, you are familiar with AI technology — whether you know it or not. Digital assistants (think Alexa or Siri), chatbots and auto-correcting software are all based on computer algorithms that use your online behavior to continuously refine their models of your personal interests, shopping habits and lifestyle choices. Their well-informed “suggestions” are designed to coax you to buy additional related commercial products and services.
But in the world of aerospace and defense, AI technologies — and their underlying rules of engagement — have some important differences.
Defense AI: Doing More With Less
At its core, artificial intelligence focuses on developing computer systems that can perform tasks that typically require human intelligence, such as visual perception, speech recognition, decision-making and language translation.
“In commercial AI, the goal is to teach an algorithm how to do a specific task,” explained Shivani Desai, an AI systems architect with Northrop Grumman. “Those algorithms, however, require lots of data — lots of examples of what to do and what not to do.” Aerospace and defense AI applications, she continued, rarely have access to balanced data sets.
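One common mitigation when balanced example sets are unavailable is to weight rare classes more heavily during training. The sketch below is illustrative only (the class names and weighting heuristic are assumptions, not Northrop Grumman practice); it shows the standard inverse-frequency weighting many learning frameworks use.

```python
from collections import Counter

def class_weights(labels):
    """Compute per-class weights inversely proportional to class frequency.

    When balanced training data is unavailable, rare classes receive
    larger weights so the learner is not dominated by the majority class.
    """
    counts = Counter(labels)
    total = len(labels)
    n_classes = len(counts)
    # weight = total / (n_classes * count); a perfectly balanced
    # data set yields a weight of 1.0 for every class.
    return {cls: total / (n_classes * count) for cls, count in counts.items()}

# Hypothetical imbalanced set: 90 "benign" examples, 10 "threat" examples
weights = class_weights(["benign"] * 90 + ["threat"] * 10)
print(weights["threat"])  # 5.0 — rare class weighted 9x the common one
```

With this weighting, each misclassified "threat" example costs the model nine times as much as a misclassified "benign" one, partially compensating for the skewed data.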
What Happens When Rules of the Road Don’t Apply?
According to Justin Vivirito, AI partnerships lead for Northrop Grumman, commercial AI also counts on having access to an orderly, well-defined environment. He points to the evolving world of autonomous vehicles as an example.
“For driverless cars, the physical environment is fully mapped or can be updated rapidly in real-time,” he said. “Well-defined rules govern the interaction of participants, common signs and control systems regulate traffic flow, and everyone’s location is well-known through GPS navigation systems.”
By contrast, a typical defense AI application must contend with buildings, roads and other obstacles that differ from how they appear on maps, an absence of “rules of the road,” and unreliable or nonexistent communications or GPS navigation references.
AI Recommendations Combine with Human Decision Making
In aerospace and defense, AI technology can aid warfighter decision-making in several ways.
“AI can be as simple as what we call a recommender system,” said Desai. “It might suggest to a pilot that the weather looks too cloudy and that he should not fly. Or it might advise a warfighter on what action he or she should take next.”
She refers to these systems as “man on the loop” systems.
“They don’t necessarily take over and perform the action for you,” she said. “The human remains the ultimate decision-maker.”
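The "man on the loop" pattern Desai describes can be sketched in a few lines. Everything here is invented for illustration (the weather thresholds, function names and signals are assumptions): the system produces a recommendation, but the human's decision is always the one that stands.

```python
def recommend_go_no_go(cloud_cover_pct, visibility_km):
    """Advise a pilot on flight conditions. Thresholds are illustrative."""
    if cloud_cover_pct > 80 or visibility_km < 5:
        return "RECOMMEND NO-GO: weather below minimums"
    return "RECOMMEND GO"

def final_decision(recommendation, pilot_decision):
    """Human-on-the-loop: the AI only advises; the pilot decides."""
    # The recommendation is logged for traceability but never overrides
    # the human operator.
    print(f"AI advised: {recommendation}")
    return pilot_decision

# The pilot may accept or reject the system's advice.
outcome = final_decision(recommend_go_no_go(90, 3), "NO-GO")
```

The key design choice is that `final_decision` returns the pilot's input unconditionally; the algorithm's output informs, but never replaces, human judgment.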
Reducing Workload, Improving Clarity
AI technology can also augment the work of defense imagery analysts.
“A good AI algorithm can help analysts looking through large amounts of video or still imagery by identifying anomalies for them to adjudicate,” Vivirito said. “It can also help to reduce the rate of human fatigue-induced errors.”
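A minimal sketch of the triage workflow Vivirito describes might look like the following. The scoring method (mean absolute frame-to-frame pixel difference) and threshold are assumptions chosen for simplicity, not a production change-detection technique; the point is that the algorithm only queues candidates for a human analyst to adjudicate.

```python
def flag_anomalous_frames(frames, threshold):
    """Flag video frames that differ sharply from the preceding frame.

    Each frame is a flat list of pixel intensities. Frames whose mean
    absolute difference from the previous frame exceeds the threshold
    are queued for a human analyst to adjudicate -- the algorithm
    narrows the search; it does not make the call.
    """
    flagged = []
    for i in range(1, len(frames)):
        diff = sum(abs(a - b) for a, b in zip(frames[i], frames[i - 1]))
        score = diff / len(frames[i])  # mean absolute pixel difference
        if score > threshold:
            flagged.append(i)
    return flagged

# Three quiet frames, then a sudden change at index 3
frames = [[0, 0, 0, 0]] * 3 + [[100, 100, 100, 100]] + [[100, 100, 100, 100]]
print(flag_anomalous_frames(frames, threshold=10))  # [3]
```

Rather than replacing the analyst, the function reduces the volume of imagery a human must scan, which is where the fatigue-error reduction comes from.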
What’s important to realize about defense AI, he added, is that it’s not simply a “bolt-on” capability that magically makes a system smarter or more capable than before. It has to be integrated into a system from the ground up.
Responsible AI Focuses on Outcomes
Regardless of how defense AI is used, it must be secure and ethical.
“We always start by asking ourselves and our customers: Why do we need AI? What purpose will it serve? What does it give us from a machine perspective that we don’t have today? And most importantly, what is the cost of the algorithm making a mistake?” Desai explained.
This last question speaks to perhaps the most significant difference between commercial and defense AI: the impact of an algorithm malfunction.
“If the AI used by an unmanned aerial vehicle to surveil and analyze enemy missile sites misinterprets what it sees, for example, it could result in a loss of life or irreparable damage to international relationships,” Desai said. “We work to avoid such outcomes by subjecting every AI algorithm to rigorous verification and validation processes.”
“If there’s a chance that an algorithm could create unintended consequences, it will never make it out the door,” she emphasized.
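The release discipline Desai describes can be pictured as a gate that an algorithm must pass before deployment. The metric names and thresholds below are hypothetical, and real verification and validation involves far more than metric checks; the sketch only shows the gating logic of "it never makes it out the door" unless every requirement is met.

```python
def release_gate(metrics, requirements):
    """Approve an algorithm for release only if every metric meets its bar.

    metrics:      measured values from verification and validation testing
    requirements: the minimum acceptable value for each metric
    Returns (approved, failures); any single failure blocks release.
    """
    failures = [name for name, required in requirements.items()
                if metrics.get(name, 0.0) < required]
    return (len(failures) == 0, failures)

# Hypothetical V&V results against hypothetical requirements
metrics = {"accuracy": 0.97, "recall": 0.88}
requirements = {"accuracy": 0.95, "recall": 0.90}
approved, failures = release_gate(metrics, requirements)
print(approved, failures)  # False ['recall'] -- one miss blocks release
```

A missing metric counts as a failure (it defaults to 0.0), reflecting the conservative stance in the quote: unproven behavior is treated as unacceptable behavior.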
Ensuring System Compliance and Precision
According to Jackson Bursch, an AI software engineer for Northrop Grumman, defense AI requires a diverse skill set spanning more disciplines than software engineering alone.
“We’re not just developing software, we’re developing complex systems that work in every domain,” he explained. “So, we need people who specialize in specific sensors for data collection, others who can build AI software and still others who can handle the network engineering that connects those sensors to our software. Every part of the system has to be precisely integrated for the AI to function properly.”
But it’s more than that, Vivirito noted. “We coordinate with our government customers’ legal and policy compliance experts to help ensure that our systems are governed properly, that they are being used effectively, that they’re traceable and reliable and that they continue to produce equitable, unbiased results throughout their mission life. This builds justified confidence in our systems,” he said.
At the end of the day, as Bursch observed, both commercial AI and defense AI use information to complement the human decision-making process. However, the traceability of that information remains a recurring difference between the two worlds.
“Before we deploy any new AI to our customers, we document exactly how and where we plan to obtain the information, ensure that we have permission to use that data and demand that our algorithm complies with all relevant privacy requirements, ensuring that we are secure and ethical for aerospace and defense,” he said.