SeekrFlow New Releases: Advanced Reasoning, Expanded Models, Greater Control
September 18, 2025

This month, SeekrFlow™ delivers powerful new capabilities that help enterprises build AI they can trust. From reinforcement fine-tuning that unlocks reasoning, to expanded model choice, agent customization, safety controls, and precision data preparation, our latest releases give teams more confidence and flexibility to deploy AI at scale.
Build models that solve problems with confidence
SeekrFlow now supports Group Relative Policy Optimization (GRPO) Fine-Tuning, a reinforcement learning technique that strengthens reasoning in large language models.
GRPO equips models to handle structured, high-stakes tasks such as mathematics, coding, and compliance-critical problem solving. Rather than training a separate critic model, GRPO samples a group of candidate responses for each prompt and reinforces the ones that score best relative to the rest of the group, strengthening the reasoning pathways that lead to correct, verifiable answers. Organizations can now train AI systems that perform with the precision required for enterprise and government workflows.
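The group-relative scoring step is the heart of the method. Below is a minimal sketch in plain Python (illustrative only, not SeekrFlow's implementation): each sampled response is rewarded or penalized according to how its score compares with the mean of its group.

```python
from statistics import mean, pstdev

def group_relative_advantages(rewards: list[float]) -> list[float]:
    """Score each sampled response relative to its group.

    GRPO replaces a learned value/critic model with a simple baseline:
    the mean reward of the group. Responses above the mean receive a
    positive advantage (reinforced); responses below receive a negative one.
    """
    baseline = mean(rewards)
    spread = pstdev(rewards) or 1.0  # avoid division by zero when all rewards match
    return [(r - baseline) / spread for r in rewards]

# Example: four answers to the same math problem, scored by a verifier
# (1.0 = correct, 0.0 = wrong). The reward values are hypothetical.
rewards = [1.0, 0.0, 1.0, 0.0]
print(group_relative_advantages(rewards))  # [1.0, -1.0, 1.0, -1.0]
```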
Choose the right model for every task
The SeekrFlow Model Library has been expanded with 16 new models and 2 refreshed models, giving teams more options across reasoning, vision, coding, and enterprise-scale tasks. All models are available now in the API/SDK and the Playground for immediate testing; a minimal usage sketch follows the model lists below. This expansion gives enterprises the freedom to select the right model for experimentation, reasoning, or large-scale deployment.
New models added
- meta-llama/Llama-3.2-90B-Vision-Instruct
- NousResearch/Yarn-Mistral-7B-128k
- mistralai/Mistral-7B-Instruct-v0.2
- mistralai/Mistral-Small-24B-Instruct-2501
- google/gemma-2b
- google/gemma-2-9b
- google/gemma-3-27b-it
- Qwen/Qwen3-8B-FP8
- Qwen/Qwen3-32B-FP8
- Qwen/Qwen3-30B-A3B-FP8
- Qwen/Qwen3-235B-A22B-FP8
- Qwen/Qwen2-72B
- mistralai/Mamba-Codestral-7B-v0.1
- microsoft/Phi-3-mini-4k-instruct
- meta-llama/Llama-4-Scout-17B-16E
- meta-llama/Llama-4-Scout-17B-16E-Instruct
Refreshed models
- meta-llama/Llama-3.2-1B-Vision-Instruct
- meta-llama/Llama-3.2-3B-Vision-Instruct
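As a quick way to try any of the models above from code, here is a minimal sketch assuming an OpenAI-compatible chat endpoint. The base URL, environment variable name, and client choice are assumptions to confirm against the SeekrFlow API/SDK documentation.

```python
import os
from openai import OpenAI  # any OpenAI-compatible client can be swapped in

# Assumption: SeekrFlow exposes an OpenAI-compatible chat completions endpoint.
# The base_url and env var below are placeholders; check the SeekrFlow API docs.
client = OpenAI(
    api_key=os.environ["SEEKR_API_KEY"],   # hypothetical env var name
    base_url="https://flow.seekr.com/v1",  # placeholder endpoint
)

response = client.chat.completions.create(
    model="Qwen/Qwen3-32B-FP8",  # one of the newly added models
    messages=[{"role": "user", "content": "Summarize GRPO in two sentences."}],
)
print(response.choices[0].message.content)
```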
Safeguard applications with content moderation
New moderation models make it easier to integrate safety and scoring pipelines into applications.
- Seekr ContentGuard: A model for podcast moderation, equipped with GARM category classification and the Seekr Civility Score™. It can score transcripts, detect harmful content, label tone, and surface ad risk.
- Meta Llama Guard 3: A general-purpose moderation model that classifies unsafe content across 14 MLCommons-aligned hazard categories. It is suitable for moderating chat outputs, user-generated text, and AI-generated responses.
Together, these models enable enterprises to filter unsafe or brand-sensitive content, monitor tone and civility, and apply guardrails across agents and assistants. Moderation can now be embedded directly into workflows, helping organizations scale AI responsibly while protecting governance and brand reputation.
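A common pattern is to run Llama Guard 3 as a pre- or post-filter around an assistant. The sketch below reuses the OpenAI-compatible assumptions from the earlier example and relies on Llama Guard's documented behavior of replying "safe", or "unsafe" plus the violated category; the endpoint and model identifier are placeholders.

```python
import os
from openai import OpenAI

# Same assumptions as the earlier sketch: an OpenAI-compatible SeekrFlow
# endpoint with placeholder base_url, env var, and model identifier.
client = OpenAI(api_key=os.environ["SEEKR_API_KEY"],
                base_url="https://flow.seekr.com/v1")

def is_safe(text: str) -> bool:
    """Classify text with Llama Guard 3, which replies 'safe' or
    'unsafe' followed by the violated hazard category (e.g. 'S10')."""
    result = client.chat.completions.create(
        model="meta-llama/Llama-Guard-3-8B",  # placeholder model identifier
        messages=[{"role": "user", "content": text}],
    )
    return result.choices[0].message.content.strip().lower().startswith("safe")

# Gate an assistant's reply before it reaches the user.
reply = "Here is the quarterly summary you asked for..."
if not is_safe(reply):
    reply = "This response was withheld by the content moderation policy."
```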
Configure agents faster with built-in tools
File Search and Web Search are now available as built-in tools directly in the UI, streamlining the way teams extend agents.
- File Search enables agents to retrieve and reason over documents ingested into SeekrFlow
- Web Search allows agents to pull in real-time web information to complement enterprise data
By making these tools accessible in the UI, this update empowers more teams to configure, prototype, and operationalize agents without writing code.
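For teams that do work in the API/SDK, the same choices presumably translate into an agent configuration. The shape below is purely illustrative: the field names and structure are assumptions, not SeekrFlow's documented schema; in the UI, the equivalent is toggling File Search and Web Search in the agent's tool settings.

```python
# Illustrative only: field names and structure are assumptions about what a
# tool-enabled agent configuration might look like, not SeekrFlow's schema.
agent_config = {
    "name": "contracts-assistant",
    "model": "meta-llama/Llama-4-Scout-17B-16E-Instruct",
    "instructions": "Answer questions using ingested contracts; cite sources.",
    "tools": [
        {"type": "file_search"},  # retrieve and reason over ingested documents
        {"type": "web_search"},   # pull in real-time web information
    ],
}
```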
Prepare data with speed, precision, or full control
The AI-Ready Data Engine now includes advanced ingestion and chunking methods, allowing teams to tailor data processing to their specific needs. These options are available in the UI, API, and SDK.
- Accuracy-Optimized preserves hierarchy, tables, and structure for compliance records, research, and contracts where precision is critical
- Speed-Optimized processes large files quickly while maintaining accuracy, making it ideal for batch runs, RFPs, and time-sensitive workflows
- Manual Chunking provides predictable, user-controlled splits with sliding windows and document markers, ensuring consistency for unstructured content such as resumes or multi-document compilations
These capabilities allow enterprises to prioritize fidelity, turnaround time, or granular control, supporting a wide range of AI workflows from fine-tuning to retrieval.
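To illustrate what Manual Chunking's sliding windows and document markers do, here is a generic sketch (not SeekrFlow's implementation): text is first split at an explicit marker so documents never bleed into one another, then emitted as fixed-size, overlapping windows for predictable, user-controlled splits.

```python
def manual_chunks(text: str, window: int = 800, overlap: int = 200,
                  marker: str = "---DOC---") -> list[str]:
    """Generic sliding-window chunker (illustrative, not SeekrFlow's code).

    Splits on an explicit document marker so each document is chunked
    independently, then emits windows of `window` characters that overlap
    by `overlap` characters.
    """
    chunks = []
    step = window - overlap
    for doc in text.split(marker):
        doc = doc.strip()
        for start in range(0, max(len(doc), 1), step):
            chunk = doc[start:start + window]
            if chunk:
                chunks.append(chunk)
    return chunks

# Example: a compilation of two resumes separated by a marker.
compilation = "Resume A ... experience ..." + "---DOC---" + "Resume B ... skills ..."
print(len(manual_chunks(compilation, window=20, overlap=5)))
```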
Get started with SeekrFlow today
The latest release strengthens SeekrFlow across every layer of the platform: smarter fine-tuning, safer moderation, expanded model choice, faster agent configuration, and more precise data preparation. Each update is designed to help enterprises deploy AI with greater accuracy, reliability, and control.
With these updates, SeekrFlow continues to remove barriers to scalable and trusted AI development for the enterprise.
Ready to transform your AI development process? Sign up for SeekrFlow or book a consultation with a product expert to see how these new capabilities can accelerate your path to enterprise-ready AI.