Top Takeaways from INSA’s AI Panel: Culture, Competition, and Data-Driven Decision Advantage

Seekr Team
April 25, 2025

This week’s Intelligence & National Security Alliance (INSA) Spring Symposium featured a thought-provoking AI panel with industry experts, including former Chief Technology & Innovation Officer of the U.S. Space Force Dr. Lisa Costa.

Dr. Costa took a tour through the biggest questions facing the intelligence community and its partners in the AI arms race. The discussion was not just about technology; it was about trust, speed, culture, and the urgent need to adapt both human and technical systems for an AI-powered future.

Seekr Advisor Dr. Lisa Costa at the INSA Spring Symposium expert panel

Let’s dig into the biggest takeaways from this powerhouse session—and what they mean for IC leaders trying to get ahead of the AI curve.

1. Culture is the biggest barrier—not technology

When asked what is holding back AI acceleration in national security, Dr. Costa did not hesitate: “Culture, culture, culture,” she said.

It’s not just about having the best algorithms or the most efficient, modern computing resources. The biggest blocker is a mindset frozen in legacy thinking. According to Dr. Costa, the challenge is moving from “that’s the way we’ve always done it” to “yes, and.”

“We do have a lot of processes that are 30, 40, 50 years old… Implementing AI to just replicate that process? Sure, it’ll be faster—but that process shouldn’t exist in the first place.”

Dr. Costa emphasized the importance of seeing AI as an opportunity for human reengineering—rethinking how people, not just machines, engage with intelligence and decision-making. The goal is for AI to help refactor outdated processes that no longer work in modern contexts.

2. Push AI to the edge—or get left behind

Dr. Costa made the case for pushing AI capability as close to the point of data collection as possible. Whether it’s satellites collecting intelligence in space or sensors on a special operations mission, the future is about actionable insights at the edge.

“I want to put AI at the farthest edge I possibly can…I may not have to download anything to a ground station. I may immediately take action in space or terrestrially based on what is collected.”

This means not only reducing latency and bandwidth demands but enabling a level of responsiveness that centralized systems simply can’t deliver.

3. Provenance and bias: Know your data, trust your insights

When it comes to adversarial AI and data validation, Dr. Costa did not mince words: “People lie. They lie again. They lie some more. And sensors—also, not knowingly—lie.”

She stressed the critical need for AI systems that can track data provenance, understand error rates, and detect bias across diverse sources. Using blockchain for data integrity is a step, but real decision advantage comes from layered trust frameworks that incorporate transparency at every level.

Her favorite rule of thumb?

“Never accept a black box from a gray zone.”
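To make the provenance point concrete, here is a minimal Python sketch of the kind of metadata a pipeline could carry alongside each piece of collected data. The field names, the custody-chain idea, and the trust-weight heuristic are illustrative assumptions, not an IC standard or any vendor's implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ProvenanceRecord:
    """Metadata carried alongside a collected data item (illustrative fields only)."""
    source_id: str            # sensor, platform, or human source identifier
    collected_at: datetime    # collection timestamp, in UTC
    error_rate: float         # known or estimated error rate of the source
    chain: list = field(default_factory=list)  # every system the data passed through

    def add_hop(self, system: str) -> None:
        # Record each hop so downstream analysts can audit the full custody chain.
        self.chain.append(system)

    def trust_weight(self) -> float:
        # Toy heuristic: discount by source error rate and by custody-chain length.
        return max(0.0, 1.0 - self.error_rate) / (1 + 0.1 * len(self.chain))

record = ProvenanceRecord(
    source_id="sensor-17",
    collected_at=datetime.now(timezone.utc),
    error_rate=0.05,
)
record.add_hop("edge-gateway")
record.add_hop("analysis-node")
print(f"trust weight: {record.trust_weight():.2f}")  # 0.95 / 1.2 ≈ 0.79
```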

4. Infrastructure must be as trustworthy as the AI it runs

Infrastructure is not just pipes and wires—it’s the bedrock of security, performance, and trust.

Dr. Costa contrasted her time leading technology for special operations with her tenure at the Space Force, noting the stark difference in infrastructure readiness between the two.

“You’re not going to run modern AI capabilities on 40-year-old networks…Your AI provider needs an infrastructure solution that’s as trustworthy as the AI itself.”

She also highlighted the promise of agentic AI—decentralized agents that can operate independently, carry less data, and execute micro-missions securely and efficiently.

5. Collaboration without common definitions is chaos

Cross-agency collaboration remains tough, largely because no one speaks the same data language. Dr. Costa pointed to challenges like the wildly different definitions of “weapons of mass destruction” across FBI, SOCOM, and state agencies.

The solution? AI agents that automatically translate and synonymize definitions based on mission context.

“If the answer is we have to form a group and create a standard—that’s death. That’s never going to work. But incorporating it into AI from the beginning? That’s a force multiplier.”
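As a toy illustration of what that could look like, the sketch below resolves a term to a mission-specific definition from a static lookup table. A real system would use an AI agent (for example, a language model with retrieval over each agency's doctrine) rather than a hard-coded dictionary, and the placeholder definitions here are not any agency's actual language.

```python
# Placeholder definitions for illustration only; not actual agency language.
DEFINITIONS = {
    "weapons of mass destruction": {
        "law_enforcement": "Emphasis on destructive devices and criminal statutes.",
        "special_operations": "Emphasis on CBRN threats to military operations.",
        "state_emergency_management": "Emphasis on public-health impact and response thresholds.",
    },
}

def resolve_term(term: str, mission_context: str) -> str:
    """Return how a term is understood in a given mission context, if a mapping exists."""
    contexts = DEFINITIONS.get(term.lower(), {})
    return contexts.get(
        mission_context,
        f"No mapping for '{term}' in context '{mission_context}'",
    )

print(resolve_term("Weapons of Mass Destruction", "special_operations"))
```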

6. Hybrid AI architectures are the future

As AI becomes more compute-hungry, centralized data centers are hitting their limits. The solution, according to Dr. Costa, lies in hybrid models—combining local edge processing with centralized power when necessary.

“We have not made good use of the Internet of Things. There are so many sensors out there… We can perform a lot of analysis locally and only push computation-heavy tasks centrally.”

Her vision aligns with a broader move toward distributed, resilient AI architectures—less vulnerable, more scalable, and better tuned to dynamic mission environments.
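One way to picture the hybrid model is a routing rule that keeps lightweight analysis on the local device and hands off only compute-heavy work to a central cluster. The sketch below uses made-up capacity numbers and task fields purely to illustrate the decision, not to describe a real deployment.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    estimated_gflops: float   # rough compute cost of the task
    payload_mb: float         # data that would have to move if sent centrally

EDGE_GFLOPS_BUDGET = 50.0     # assumed capacity of the local edge device

def route(task: Task) -> str:
    """Run a task at the edge when it fits the local budget; otherwise send it centrally."""
    if task.estimated_gflops <= EDGE_GFLOPS_BUDGET:
        return "edge"      # low latency, no bandwidth spent moving the payload
    return "central"       # reserve the data center for work the edge cannot handle

tasks = [
    Task("detect-anomaly", estimated_gflops=5, payload_mb=0.2),
    Task("retrain-model", estimated_gflops=5000, payload_mb=800),
]
for t in tasks:
    print(f"{t.name}: run at {route(t)}")
```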

7. Industry’s role: Build trustworthy, verifiable AI with provenance

Finally, Dr. Costa challenged industry leaders to deliver AI that is not just powerful but provable.

“You want people to understand the ramifications of the questions they ask, and where they’re asking them… When they get a response back, they need to understand the sources and not just take it as the answer.”

Trust is not optional. It is foundational. Especially when decisions might involve life, death, or the stability of nations.

Final takeaways

The INSA AI panel was a wake-up call: AI is not a someday problem—it is a today imperative. However, to unlock its full value, we need to overcome legacy cultural barriers, rethink infrastructure, bring AI to the edge, run it cost-effectively, and embed transparency and trust at every step.

As Dr. Lisa Costa made clear, the future of AI in national security depends not just on how smart our machines become—but how fast we evolve our mindsets, our systems, and our partnerships to match.

Explore trusted AI solutions for government
