Cognitive Warfare: Those with Superior AI Win

Seekr Chairman and CEO Pat Condo
June 16, 2025

Adapted from a presentation at SNG Live: Defense Innovation on June 11, 2025

Throughout the 1990s, I led the development of some of the largest search systems for the Defense and Intelligence communities: targeting and tracking systems, investigative platforms, the open-source platform FBIS, PrISM, Total Information Awareness, and several other search solutions for Homeland Defense.

From my years in the search industry, two factors have always mattered: the quantity of data and, more importantly, its quality. Quality is critical because it is the foundation for effective AI.

Social engineering changed everything

In the search business, quality has always mattered. For us, quality meant understanding the provenance, lineage, and intent of information. Without that, you couldn't be sure you were getting the right answers to the right questions. Fast forward to 2016–2022, when two pivotal events exposed how data could be weaponized. First, Cambridge Analytica mined as many as 87 million Facebook accounts, proving that the openness of a democracy, where nearly everything is published, also leaves its data exposed to adversaries. By manipulating that data, they learned they could influence opinions, shape narratives, and even sway votes.

Our adversaries took note. The possibilities of social engineering were not lost on them. Soon after, Brexit unfolded, driven by exaggerated fears around immigration, fiscal policy, and currency stability. The outcome: Britain voted to leave the European Union (EU), only to discover months later that much of the influence campaign had been fueled by adversarial information operations designed to poison public discourse and change minds.

From 2016 to 2022, we witnessed unprecedented social engineering campaigns, some of the largest in the history of information. Nation-states like China, Russia, North Korea, Iran, and Venezuela flooded social media with a constant stream of manipulative content. Every major debate in America seemed to end in division, and sometimes violence. Why? Because that was the objective. For every real American profile online, there were thousands of fake ones created by adversaries. These inauthentic voices often dominated the conversation, drowning out genuine debate. Over time, Americans came to understand the meaning of polarization, not just politically but across business, culture, and daily life. This wasn't accidental; it was a deliberate strategy of cognitive warfare.

The age of cognitive warfare is born

In the 2020s, AI made a major leap forward, largely because models need vast amounts of data to train, and the most readily available source was, and still is, the web. Why? First, it's free. Second, much of it is low value. And third, there were few enforceable rules preventing its use. While discussions around fair use and privacy laws like the Privacy Act surfaced, they were rarely enforced in practice. This open environment fueled the rapid development of AI engines, leading to the emergence of foundation models by 2022. The launch of ChatGPT marked a turning point: it was a massive success, quickly adopted by millions. With that success, however, came a new challenge, and the first time we started hearing the term hallucination. A hallucination is an inexplicable, often non-repeatable error produced by a reasoning engine trained on imperfect data: a wrong answer generated with confidence.
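That non-repeatability suggests one practical, if partial, check: ask a model the same question several times and measure how often its answers agree. Below is a minimal sketch of that self-consistency idea in Python; ask_model is a hypothetical stand-in for whatever LLM client you use, not a reference to any specific product or to Seekr's platform.

```python
# Minimal self-consistency sketch; `ask_model` is a hypothetical stand-in
# for a real LLM client and must be wired up before use.
from collections import Counter

def ask_model(question: str, temperature: float = 0.8) -> str:
    """Hypothetical LLM call; replace with your provider's client."""
    raise NotImplementedError("connect this to a real model API")

def consistency_score(question: str, samples: int = 5) -> float:
    """Sample the same question several times and measure agreement.

    Hallucinations tend to be non-repeatable, so answers that vary
    across samples deserve less confidence than stable ones.
    """
    answers = [ask_model(question).strip().lower() for _ in range(samples)]
    top_count = Counter(answers).most_common(1)[0][1]
    return top_count / samples  # 1.0 means every sample agreed

# Usage (illustrative): flag low-agreement answers for human review.
# if consistency_score("When was FBIS founded?") < 0.6:
#     print("Low self-consistency; treat this answer as suspect.")
```

This catches only one class of failure (a confidently repeated error will sail through), but it illustrates why repeatability belongs in any measure of trust.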

Now we come to the second major impact of the events of 2016 to 2022. Our adversaries saw what Cambridge Analytica achieved, witnessed the influence of social engineering on Brexit, and observed the chaos that polarization caused across Western democracies. Then came the rise of AI. As ChatGPT and other foundation models gained massive adoption, our adversaries recognized that AI could spread coordinated, inauthentic narratives at scale. What emerged is what I call cognitive warfare: a new form of conflict in which adversaries build and deploy large language models to shape global discourse. These models won't just answer questions; they'll provide the answers adversaries want you to believe.

What is cognitive warfare, and what can we do to fight it? Watch the video:

Right now, as Americans wake up and start their day, Beijing is waging war, not with missiles, but with a more intricate kind of warfare: information and influence.

The introduction of DeepSeek drove hundreds of millions of downloads in record time, worldwide. Now, developers are creating AI applications using DeepSeek’s foundation model, unaware of the potential consequences.

Before we know it, these DeepSeek derivative works will be in the hands of everyday Americans…our largest corporations, and even our military and defense operations…all looking to push AI to its boundaries.

It’s capitalism versus security, but at what cost? A direct line of influence from China to control our industries, our defense, and our civilians.

Welcome to the age of information-driven cognitive warfare.

It took nearly 20 years to ban Kaspersky software and Huawei hardware in the United States. We don't have 20 years.

The time is now to protect and advance. Introducing Seekr, an AI platform that detects the adversary before the damage is done.

The war we don’t see coming will be the war we’ll have to win.

The launch of DeepSeek was described as an extinction-level event. Within days, the stock market lost $600 billion in value. People began to doubt the value of all the AI being developed in America. They began to doubt whether data centers were really needed, and they began to doubt the financial structures that support AI. But then, just days later, the truth came out: DeepSeek's story was a lie. It hadn't been built with $8 million in funding; it was powered by 50,000 stolen GPUs. The entire project had been developed in the shadows and distributed globally through free sites, bypassing every conventional safeguard.

What did we learn? We learned that there were no guardrails for AI. Once DeepSeek was released into the wild, it couldn’t be contained. People adopted it quickly because it was free, powerful, and accessible. Soon, hundreds, then thousands of developers began building applications on top of it.

Now, more Chinese companies are distributing similar large language models, all built on the same principles: untested, unverifiable, and unchecked. These models are already in use, integrated into systems around the world. Now imagine you're operating in a critical sector such as the military, finance, or healthcare. Your data is being viewed, used, manipulated, and potentially sent back for exploitation. How will you know?
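Part of the answer, at least on the supply-chain side, is to refuse to deploy any model weights that haven't been vetted and pinned. The Python sketch below checks downloaded model files against a manifest of hashes a security team has already approved; the file names and manifest format are illustrative assumptions, not any vendor's actual API.

```python
# Minimal sketch: verify model weights against a trusted manifest before
# deployment. File names and the manifest format are illustrative
# assumptions, not a specific vendor's tooling.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file so large weight shards don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model(model_dir: Path, manifest_path: Path) -> bool:
    """Compare every weight file to the hashes your security team approved."""
    # Manifest shape (assumed): {"model.safetensors": "<sha256 hex>", ...}
    manifest = json.loads(manifest_path.read_text())
    for name, expected in manifest.items():
        if sha256_of(model_dir / name) != expected:
            print(f"REJECT {name}: hash mismatch")
            return False
    return True

# Usage (illustrative):
# if not verify_model(Path("models/vetted"), Path("trusted_manifest.json")):
#     raise SystemExit("Model failed provenance check; do not deploy.")
```

A hash check only proves you received the exact artifact you vetted; answering the "sent back for exploitation" half of the question still requires behavioral evaluation and monitoring of what the deployed system transmits.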

As a nation, we need to ban adversarial AI, vet every model, and deploy trustworthy AI that defends democracy.
