The Domain Handover Dilemma: Where to Trust AI First (and Last)

Last week’s AI for Defense Summit brought together leaders from across the defense, intelligence, and homeland security communities to tackle some of the most pressing questions facing AI implementation in national security. Two standout panels—one moderated by Seekr’s Senior Advisor Dr. Lisa Costa and another featuring Seekr’s Director of Strategic Partnerships Mark Fedeli—offered insights into the future of AI-powered warfare and infrastructure protection.
Let’s dive into the top takeaways from these powerhouse sessions and what they mean for defense leaders navigating the AI transformation.
Drawing the line: Where defense leaders trust AI today
When Dr. Lisa Costa posed the provocative question of which operational domain panelists would hand over to autonomous control first, the response was unanimous: cybersecurity. The reasoning was clear: cyber offers cleaner data and well-defined boundaries for AI systems to operate within, and its threats move fast enough that human speed becomes the limiting factor.
“The speed of cyber would be probably the most attractive, because the signal-to-noise ratio is so high,” explained Justin Fanelli, Tech Director for the Navy’s Program Executive Office Digital.
But what about the last domain they’d surrender to AI?
Here opinions diverged between maritime operations and nuclear capabilities as the last holdout, though all agreed that nuclear should remain firmly under human control. Dr. Costa’s strategic questioning revealed the maritime domain as particularly challenging due to its “uncertain physical environment” and the continued need for general-purpose forces.
The hypersonic imperative: When AI becomes mission-critical
The discussion revealed scenarios where AI isn’t just helpful but absolutely necessary. Hypersonic payloads present the ultimate defensive challenge: their compressed timelines make traditional human-in-the-loop decision-making impossible. In Golden Dome scenarios, defensive operations against incoming threats, the speed of hypersonic vehicles fundamentally eliminates the option for human confirmation before engagement.
This reality drives home that some defensive operations will require AI not as an enhancement but as the only viable answer to timelines no human can meet. As Dr. Lisa Costa noted, the challenge grows even more complex given that shooting down hypersonic vehicles is “like trying to shoot a bullet out of the sky with a BB.”
From guardrails to governance: The “Keep Summer Safe” problem
The panel introduced a memorable framework for AI limitations through a Rick and Morty reference: the episode where a car is told to “keep Summer safe” and proceeds to take increasingly extreme measures to do so. The point is critical: guardrails only work when humans understand both the intent a system has inferred and the constraints it is operating under.
As Brian Stensrud from CAE noted: “If I don’t understand why a system is doing what it’s doing, if I don’t understand what its Commander’s Intent is interpreted to be, and I don’t understand what the constraints are—that is where I would say it’s unusable.”
Critical infrastructure vulnerability: The 85% private sector challenge
The infrastructure protection panel highlighted a sobering reality: 85% of U.S. critical infrastructure sits in private hands, creating unique challenges for AI implementation. These operators are reluctant to modify operational systems, even ones with known cyber vulnerabilities, for fear the systems won’t come back online.
Meanwhile, ransomware attacks on the defense industrial base have seen “a drastic increase over at least the last eight months,” according to Lesley Bernys from the DoD Cyber Crime Center. And the challenge extends beyond encrypted files: attackers are repositioning assets and creating complex scenarios that demand sophisticated AI-powered detection.
The China challenge: Beyond technological competition
Mark Fedeli offered a stark perspective on U.S.-China AI competition, noting that Chinese models like Qwen are trained on fundamentally different information environments. The risk extends beyond mere technological capabilities to the foundational information systems that train AI models.
“The risk is we’re not looking at all of the threats that China presents… if we do get outpaced in AI, are we going to overreach?” Fedeli warned. “What’s at risk is self-evident truth versus propaganda defining the information environment.”
Breaking through the innovation adoption gap
The future warfare panel revealed a critical challenge in defense AI implementation: organizations trapped by legacy thinking even when freed from legacy constraints.
The solution, Fanelli argued, lies in focusing on adoption rather than just innovation:
“We don’t want necessarily just innovators in the government. The private sector is so good at innovating that we want to do more adoption and less competing with industry.”
This shift requires moving beyond incremental improvements to fundamental re-engineering of processes. The panel outlined a hierarchy of AI impact: automating existing processes yields modest gains of roughly 5%, while streamlining workflows can achieve 15-40% improvements. True transformation, delivering gains above 40%, demands complete re-engineering of both operational value chains and tactical kill chains.
Final takeaways
The AI for Defense Summit made clear that we’re past the point of asking whether AI belongs in national security. We’re now focused on how to implement it safely, effectively, and at scale. The challenges are as much cultural and procedural as they are technical.
Success will require moving beyond traditional approaches to embrace new paradigms: from human-centered design to abstracted interfaces, from committee-driven standards to AI-enabled translation, from centralized processing to distributed agentic systems.
Most importantly, it will require maintaining human judgment and values at the center of systems that can operate at superhuman speed and scale. The future of AI in defense isn’t about replacing human decision-making—it’s about augmenting human capability while preserving human accountability.
As both panels demonstrated, the organizations that succeed will be those that can balance the urgent need for AI capabilities with the equally urgent need for trustworthy, explainable, and ethically grounded systems. The stakes couldn’t be higher, and the timeline couldn’t be shorter.