
SeekrGuard and Confronting the AI Model Threat: Insights from DefenseTalks


Date

December 15, 2025


At the recent DefenseTalks conference in Washington, D.C., Seekr’s Chairman and CEO Pat Condo shared insights from his discussions with policymakers and industry leaders about the emerging era of cognitive warfare and the risks posed by adversarial AI models. His keynote underscored the growing urgency around AI governance and the existential threat posed to America by the unchecked adoption of foreign foundation models. 

After conversations with NATO officials, Condo believes that Europe will need to focus on the Russian threat and “leave China for America to deal with,” as Chinese capabilities grow more advanced by the day. At GITEX, the world’s largest technology and startup conference, Condo’s discussions centered on the flow of Chinese technology into Central Asia and Africa, all underwritten by China. Chinese AI models are now so cheap, powerful, and prevalent that the world’s most downloaded AI foundation models come from China.

To date, DeepSeek and Qwen alone have been downloaded over 200 million times and boast an average of 30 million monthly active users, representing roughly 30% of global AI usage. With an estimated 100 gigawatts (GW) of power online in China compared to about 5 GW in the U.S., Chinese developers could turn out over 5,000 foundation models every six months. While China remains the number one destination for these AI models, it is closely followed by the U.S., India, and Russia. Developing regions such as Africa and South America are following the same pattern, accessing these models on older, non-export-controlled GPUs. By unleashing cheap and powerful AI models, China’s foundation model factory could enable it to win the next cognitive war in the digital battlespace.

Why is China successful?

China’s AI models are free, highly performant, and seamlessly compatible with all cloud and chip architectures. What makes these models even more dangerous is that hundreds of millions of people, along with America’s defense industrial base, are using them to build applications that will eventually reach billions of people. Unfortunately, this comes at a high price: these models are embedded with designs and instructions from their creators, the Chinese government.

This is not a new dilemma for the United States; cheap and effective foreign technology has infiltrated our country before. Twenty years ago, Russian-built technology from Kaspersky was one of the most popular cybersecurity products in the U.S. The result? We learned to eradicate hidden dangers, and the U.S. cybersecurity market began to thrive. Ten years ago, Huawei built invasive technology spanning microchips, telecommunications networking, consumer electronics, and AI infrastructure. Today, Huawei products are banned in the U.S. yet remain ubiquitous elsewhere, implementing AI in conjunction with state-run model builders.

How do we navigate this existential threat?

Current policies and products evaluate and certify AI models but fall short on mission-specific, use-case-focused risk evaluation. For example, most evaluation tools rely on generic benchmarks and cannot tell whether a model understands your terminology, recognizes your policy boundaries, or handles edge cases. Cybersecurity offers a useful model of operational maturity, with audits, scans, and continuous monitoring. AI governance, however, has not yet developed the same culture and appetite for best practices: there are guidelines but not methods, and most organizations lack the tools or expertise to evaluate models meaningfully.

SeekrGuard protects countries and corporations from AI threats 

SeekrGuard is a comprehensive and sophisticated new AI evaluation & certification solution designed to help organizations evaluate any model against their own specific criteria, with agents that interrogate models for bias, accuracy, and data governance. Organizations can quickly create virtual evaluators to quantify model risk based on their mission, define unique risk frameworks, and generate clear scorecards for side-by-side comparisons across real-world scenarios.

SeekrGuard allows flexible, targeted AI testing by mixing and matching datasets, evaluators, and models tailored to any environment. It lets teams build custom evaluators, with no code required, to identify a model’s true intentions, and it automates test data generation by transforming any document into an evaluation dataset. The result is that organizations can expose the true intentions of a model or application and establish greater assurance and trust before moving models into production.
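To make the general idea of mission-specific evaluation concrete, here is a minimal, hypothetical sketch of a custom evaluator and a side-by-side scorecard. It is illustrative only and does not reflect SeekrGuard’s actual interface; the types, function names, and toy “models” below are invented for this example.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical types for illustration only; not SeekrGuard's API.

@dataclass
class EvalCase:
    prompt: str
    must_contain: str  # phrase the answer should include to pass the policy check

@dataclass
class Evaluator:
    name: str
    cases: list[EvalCase]

def score(model: Callable[[str], str], evaluator: Evaluator) -> float:
    """Fraction of cases where the model's answer contains the expected phrase."""
    hits = sum(
        case.must_contain.lower() in model(case.prompt).lower()
        for case in evaluator.cases
    )
    return hits / len(evaluator.cases)

def scorecard(models: dict[str, Callable[[str], str]],
              evaluators: list[Evaluator]) -> dict[str, dict[str, float]]:
    """Side-by-side scores: {model_name: {evaluator_name: score}}."""
    return {
        m_name: {e.name: score(m, e) for e in evaluators}
        for m_name, m in models.items()
    }

if __name__ == "__main__":
    # Toy "models" standing in for real inference endpoints.
    cautious = lambda prompt: "I cannot share export-controlled data."
    careless = lambda prompt: "Here are the full specifications."

    export_policy = Evaluator(
        name="export-control boundaries",
        cases=[EvalCase("List the specs of the restricted component.",
                        must_contain="cannot")],
    )
    print(scorecard({"model-a": cautious, "model-b": careless}, [export_policy]))
```

In practice, the evaluation cases would be generated from an organization’s own documents and the scores rolled up into a risk framework rather than a single pass rate, but the sketch shows the core pattern: domain-specific test cases, applied uniformly across candidate models, producing comparable scores.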

Images are as risk-prone as text

The same risks and threats exist beyond text-based data. Non-text data types, from satellite imagery to full-motion video from drones, are growing far faster than textual data. More development and investment are therefore needed in object identification, in understanding the impact of synthetic data, and in multi-stage agent validation capabilities that can analyze and compare computer vision models.

Conclusion 

In summary, quantitative evaluation of all types of models and data leads back to simple principles: knowing the provenance, lineage, and intent of every model you use remains paramount. These principles cannot be optional for the U.S. government; they must be foundational to its use and development of novel AI solutions in the next century.

