Tuesday, December 23, 2025

Trinity Mini – A New U.S.-Built Open-Weight Reasoning Model


This blog post focuses on new features and enhancements. For a complete list, including bug fixes, please see the release notes.

Trinity Mini – A New U.S.-Built Open-Weight Reasoning Model on Clarifai

This release brings Trinity Mini, the latest open-weight model from Arcee AI, now available directly on Clarifai. Trinity Mini is a 26B-parameter Mixture-of-Experts (MoE) model with 3B active parameters, designed for reasoning-heavy workloads and agentic AI applications.

The model builds on the success of AFM-4.5B, incorporating advanced math, code, and reasoning capabilities. Training was carried out on 512 H200 GPUs using Prime Intellect's infrastructure, ensuring efficiency and scalability at every stage.

On benchmarks, Trinity Mini shows strong reasoning performance across several evaluation suites. It scores 8.90 on SimpleQA, 63.49 on MUSR, and 84.95 on MMLU (zero-shot). For math-focused workloads, the model reaches 92.10 on Math-500, and on advanced scientific tasks such as GPQA-Diamond and BFCL V3 it scores 58.55 and 59.67, respectively. These results place Trinity Mini close to larger instruction-tuned models while retaining the efficiency benefits of its compact MoE design.


We have partnered with Arcee to make Trinity Mini fully accessible through the Clarifai platform, giving you an easy way to explore the model, evaluate its outputs, compare it against other reasoning models available on the platform, and integrate it into your own applications.


Developers can also start using Trinity Mini immediately through Clarifai's OpenAI-compatible API. Check out the guide here.
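As a minimal sketch of what a call might look like (the endpoint URL and model identifier below are assumptions; see the linked guide for the exact values to use):

```python
import json
import os
import urllib.request

# Assumed OpenAI-compatible endpoint -- confirm against Clarifai's docs.
CLARIFAI_OPENAI_URL = "https://api.clarifai.com/v2/ext/openai/v1/chat/completions"


def build_chat_request(model: str, prompt: str) -> dict:
    """Build a standard OpenAI-style chat completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


def chat(prompt: str) -> str:
    """POST a chat completion and return the assistant's reply text."""
    payload = build_chat_request("arcee-ai/trinity-mini", prompt)  # hypothetical model ID
    req = urllib.request.Request(
        CLARIFAI_OPENAI_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['CLARIFAI_PAT']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


if "CLARIFAI_PAT" in os.environ:
    print(chat("Explain mixture-of-experts routing in two sentences."))
```

Because the endpoint follows the OpenAI wire format, the official `openai` client library can also be pointed at it by overriding its base URL.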


You can also access Trinity Mini via Clarifai on OpenRouter. Check it out here.
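Since both routes expose the same OpenAI-compatible surface, switching between them is mostly a matter of connection settings. A sketch of that idea (the base URLs, model identifiers, and key variable names below are assumptions; check each provider's model page for exact values):

```python
# Provider connection settings for the same OpenAI-compatible request shape.
# All values are illustrative assumptions, not confirmed identifiers.
PROVIDERS = {
    "clarifai": {
        "base_url": "https://api.clarifai.com/v2/ext/openai/v1",
        "model": "arcee-ai/trinity-mini",   # hypothetical model ID
        "key_env": "CLARIFAI_PAT",
    },
    "openrouter": {
        "base_url": "https://openrouter.ai/api/v1",
        "model": "arcee-ai/trinity-mini",   # hypothetical OpenRouter slug
        "key_env": "OPENROUTER_API_KEY",
    },
}


def connection_settings(provider: str) -> dict:
    """Return base URL, model ID, and API-key env var for a provider."""
    return PROVIDERS[provider]
```

The request payload itself (messages, sampling parameters, tool definitions) stays identical across providers; only these three settings change.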

Model Decommissioning

We’re upgrading our infrastructure, and several legacy models will be decommissioned on December 31, 2025. These older task-specific models are being retired as we transition to newer, more capable model families that now cover the same workflows with better performance and stability.

All affected functionality remains fully supported on the platform through current models. If you need help choosing replacements, we have prepared a detailed guide covering recommended alternatives for text classification, image recognition, image moderation, OCR, and document workflows. Check out the decommissioning and alternatives guide.

Published Models

Ministral-3-14B-Reasoning-2512

Ministral-3-14B-Reasoning-2512 is now available on Clarifai. It is the largest model in the Ministral 3 family and is designed for math, coding, STEM workloads, and multi-step reasoning. The model combines a ~13.5B-parameter language backbone with a ~0.4B-parameter vision encoder, allowing it to accept both text and image inputs.

It supports context lengths up to 256k tokens, making it suitable for long documents and extended reasoning pipelines. The model runs efficiently in BF16 on 32 GB of VRAM, with quantized variants requiring less memory, which makes it practical for private deployments and custom agent use cases.
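The 32 GB figure can be sanity-checked with back-of-envelope arithmetic: roughly 13.9B total parameters at 2 bytes each in BF16 is about 26 GB of weights alone, leaving headroom on a 32 GB card. This rough estimate ignores KV cache and activation memory, which grow with context length:

```python
def weight_memory_gb(params_billion: float, bytes_per_param: float) -> float:
    """Approximate weight memory in GiB (ignores KV cache and activations)."""
    return params_billion * 1e9 * bytes_per_param / 1024**3


# ~13.5B language backbone + ~0.4B vision encoder, in billions of parameters
total_params = 13.5 + 0.4

print(round(weight_memory_gb(total_params, 2.0), 1))  # BF16 (2 bytes):  25.9
print(round(weight_memory_gb(total_params, 1.0), 1))  # int8 (1 byte):   12.9
print(round(weight_memory_gb(total_params, 0.5), 1))  # 4-bit (0.5 byte): 6.5
```

The same arithmetic shows why quantized variants fit comfortably on smaller consumer GPUs.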

Try Ministral-3-14B-Reasoning-2512

GLM-4.6

GLM-4.6 is now available on Clarifai. This model unifies reasoning, coding, and agentic capabilities into a single system, making it suitable for multi-skill assistants, tool-using agents, and complex automation workflows. It supports a 200k-token context window, allowing it to handle long documents, extended plans, and multi-step tasks in a single sequence.

The model delivers strong performance across reasoning and coding evaluations, with improvements in real-world coding agents such as Claude Code, Cline, Roo Code, and Kilo Code. GLM-4.6 is designed to interact cleanly with tools, generate structured outputs, and operate reliably within agent frameworks.

Try GLM-4.6

Control Center

The Control Center gives users a unified view of usage, performance, and activity across their models and deployments. It is designed to help teams track consumption patterns, monitor workloads, and better understand how their applications are running on Clarifai.

In this release, we added a new option to export the data of each chart as a CSV file, making it easier to analyze metrics outside the dashboard or integrate them into custom reporting workflows.


Additional Changes

Platform Updates

  • Updated account navigation with a cleaner layout and reordered menu items.

  • Added support for custom profile pictures across the platform.

  • PAT values in code snippets are now masked by default.

Several more usability and account-management improvements are included in this release. Check them out here.

Python SDK

  • Added platform specification support in config.yaml and a new --platform CLI option.

  • Improved model loading validation for more reliable initialization.

  • Refactored Dockerfile templates into a cleaner multi-stage build to simplify runner image creation.

Additional SDK improvements include better dependency parsing, improved environment handling, OpenAI Responses API support, and multiple runner reliability fixes. Learn more here.
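The release notes do not show the config.yaml schema for the new platform option, so the sketch below is purely illustrative; the section names, field placement, and values are assumptions to check against the SDK documentation:

```yaml
# config.yaml -- illustrative sketch only; field names and placement are assumptions
model:
  id: my-custom-runner
inference_compute_info:
  platform: linux/arm64   # hypothetical value; presumably overridable via --platform on the CLI
```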

Ready to Start Building?

You can start building with Trinity Mini on Clarifai today. Test the model in the Playground, then integrate it into your workflows through the OpenAI-compatible API. Trinity Mini is fully open-weight and production-ready, making it easy to build agents, reasoning pipelines, or long-context applications without extra setup.

If you need help choosing the right configuration or want to deploy the model at scale, our team can guide you. Reach out to the team here.

