Tuesday, April 28, 2026

Day-Zero Support at 400 Tokens Per Second


We’re excited to announce day-0 support for NVIDIA Nemotron 3 Nano Omni on Clarifai. Available now on the Clarifai Reasoning Engine, Nano Omni brings fast multimodal reasoning to developers building agentic systems, delivering throughput of 400+ tokens per second.

NVIDIA Nemotron 3 Nano Omni is a 30B A3B multimodal reasoning model built for workloads that span documents, images, video, and audio. With a 256K context window and support for text, image, video, and audio inputs with text output, it gives developers a single model for handling rich multimodal context within agentic workflows.

That makes it a strong fit for sub-agents in workflows where multimodal understanding and speed need to go together.

A Multimodal Model for Specialized Sub-Agents

As agent systems grow more capable, they also become more specialized. Different models and components take on planning, execution, retrieval, and verification, each operating within a broader workflow. In that architecture, the model handling multimodal inputs has to do more than process isolated inputs. It has to interpret multiple modalities together, preserve context across steps, and respond fast enough to stay inside the operational loop.

As a lightweight multimodal model for sub-agents, Nemotron 3 Nano Omni can reason across screens, documents, charts, audio, and video without routing each modality through a separate stack. Rather than splitting vision, speech, and language across multiple models, it gives developers a more unified way to handle multimodal reasoning while keeping the overall system easier to manage.

Built for Computer Use, Documents, and Audio-Video Reasoning

Nano Omni is particularly relevant for the kinds of workloads that are becoming central to enterprise agentic systems.

For computer use, agents need to read interfaces, track UI state over time, and verify whether actions completed as expected. For document intelligence, they need to reason across text, tables, charts, screenshots, scanned pages, and mixed visual structure in the same pass. For audio and video workflows, they need to connect what was said, what was shown, and what changed over time.

These are all cases where multimodal capability has to work reliably in production, with a model that can handle multiple modalities efficiently without splitting the workflow across separate models.

The model represents a significant jump in capability from earlier models in the Nemotron family. Marked improvements on benchmarks like OCRBenchV2, OCR_Reasoning, MathVista_MINI, and OSWorld reflect the model’s stronger performance on the real-world workloads today’s agents are likely to serve.

Figure: Multimodal accuracy benchmarks for Nemotron 3 Nano Omni.

That’s where Nano Omni fits naturally, giving developers a single multimodal reasoning stream for the tasks sub-agents are increasingly expected to handle.

Agent-Friendly Tokenomics

In agent systems, sub-agents take on recurring tasks across documents, screens, audio, and video within a larger workflow. Each invocation adds to the cost, throughput, and infrastructure demands of the overall system. NVIDIA Nemotron 3 Nano Omni consolidates vision, speech, and language into a single multimodal model, reducing inference hops, orchestration logic, and cross-model synchronization compared with separate perception stacks.

Nano Omni delivers roughly 2x higher throughput on average, along with about 2.5x lower compute for video reasoning through temporal-aware perception and efficient video sampling. For multimodal agent workflows, that means higher throughput and lower compute overhead without adding complexity to the stack.

The model uses a hybrid Mixture-of-Experts architecture with a Transformer-Mamba design, along with 3D convolution layers and Efficient Video Sampling for temporal and video inputs. It can run on a single H100, H200, or B200, making it practical to deploy multimodal sub-agents without stretching infrastructure requirements.

High-Throughput Inference on Clarifai

On the Clarifai Reasoning Engine, NVIDIA Nemotron 3 Nano Omni runs at 400+ tokens per second, giving developers the throughput needed for production multimodal agent workflows. That matters in systems where sub-agents are called repeatedly to process documents, interfaces, audio, and video as part of an ongoing workflow.

The Clarifai Reasoning Engine is built for inference acceleration, combining optimized kernels, speculative decoding, and adaptive performance techniques to improve throughput for reasoning models without compromising accuracy.

Getting Started on Clarifai

Developers can try NVIDIA Nemotron 3 Nano Omni in the Clarifai Playground and can also access it through an OpenAI-compatible API, making it easy to integrate into existing applications, tools, and agentic frameworks.

For larger-scale or more managed deployments, Clarifai provides a direct path to production with Compute Orchestration. Developers can run Nano Omni on the Clarifai Reasoning Engine or deploy it across their own cloud, VPC, on-prem, or air-gapped environments while managing deployments through a unified control plane.

NVIDIA Nemotron 3 Nano Omni is available on Clarifai today.

If you have any questions about accessing NVIDIA Nemotron 3 Nano Omni on Clarifai, join our Discord.

