Wednesday, February 4, 2026

Run Your Personal AI Coding Agent Locally with GPT-OSS and OpenHands

Introduction

Whether you are refactoring legacy code, implementing new features, or debugging complex issues, AI coding assistants can accelerate your development workflow and reduce time-to-delivery. OpenHands is an AI-powered coding framework that acts like a real development partner: it understands complex requirements, navigates entire codebases, writes and modifies code across multiple files, debugs errors, and can even interact with external services. Unlike traditional code-completion tools that suggest snippets, OpenHands acts as an autonomous agent capable of carrying out full development tasks from start to finish.

On the model side, GPT-OSS is OpenAI's family of open-source large language models built for advanced reasoning and code generation. These models, released under the Apache 2.0 license, bring capabilities that were previously locked behind proprietary APIs into a fully accessible form. GPT-OSS-20B offers fast responses and modest resource requirements, making it well suited for smaller teams or individual developers running models locally.

GPT-OSS-120B delivers deeper reasoning for complex workflows, large-scale refactoring, and architectural decision-making, and it can be deployed on more powerful hardware for higher throughput. Both models use a mixture-of-experts architecture, activating only the parts of the network needed for a given request, which helps balance efficiency with performance.

This tutorial will guide you through creating a complete local AI coding setup that combines OpenHands' agent capabilities with GPT-OSS models.

Tutorial: Building Your Local AI Coding Agent

Prerequisites

Before we begin, ensure you have the following requirements:

Get a PAT key — To use OpenHands with Clarifai models, you'll need a Personal Access Token (PAT). Log in or sign up for a Clarifai account, then navigate to your Security settings to generate a new PAT.

Get a model — Clarifai's Community offers a wide selection of cutting-edge language models that you can run using OpenHands. Browse the community to find a model that best fits your use case. For this example, we'll use the gpt-oss-120b model.

Install Docker Desktop — OpenHands runs inside a Docker container, so you'll need Docker installed and running on your system. You can download and install Docker Desktop for your operating system from the official Docker website. Be sure to follow the installation steps specific to your OS (Windows, macOS, or Linux).

Step 1: Pull the Runtime Image

OpenHands uses a dedicated Docker image to provide a sandboxed execution environment. You can pull this image from the all-hands-ai Docker registry.
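A minimal sketch of the pull command. The registry path follows the OpenHands documentation; the release tag shown here (0.39-nikolaik) is an assumption, so match it to the OpenHands version you plan to run:

```shell
# Pull the sandboxed runtime image from the all-hands-ai registry.
# The tag (0.39-nikolaik) is illustrative -- check the OpenHands docs
# for the tag that matches your OpenHands release.
docker pull docker.all-hands.dev/all-hands-ai/runtime:0.39-nikolaik
```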

Step 2: Run OpenHands

Start OpenHands using the following docker run command.

This command launches a new Docker container running OpenHands with all the necessary configuration: environment variables for logging, Docker engine access for sandboxing, port mapping so the web interface is reachable at localhost:3000, persistent data storage in the ~/.openhands folder, host communication, and automatic cleanup when the container exits.
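A sketch of that invocation, with each flag mapped to the configuration described above. The image tags and the exact state-volume path are assumptions; consult the OpenHands installation docs for the values matching your release:

```shell
# Launch OpenHands (tags are illustrative -- match them to your release):
#   SANDBOX_RUNTIME_CONTAINER_IMAGE  runtime image used for sandboxed execution
#   LOG_ALL_EVENTS                   verbose event logging
#   /var/run/docker.sock mount       Docker engine access for sandboxing
#   ~/.openhands volume              persistent data storage
#   -p 3000:3000                     web interface on http://localhost:3000
#   --add-host                       lets the container reach host services
#   --rm                             automatic cleanup when the container exits
docker run -it --rm --pull=always \
  -e SANDBOX_RUNTIME_CONTAINER_IMAGE=docker.all-hands.dev/all-hands-ai/runtime:0.39-nikolaik \
  -e LOG_ALL_EVENTS=true \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v ~/.openhands:/.openhands \
  -p 3000:3000 \
  --add-host host.docker.internal:host-gateway \
  --name openhands-app \
  docker.all-hands.dev/all-hands-ai/openhands:0.39
```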

Step 3: Access the Web Interface

After running the docker run command, watch the terminal for log output. Once the application finishes starting up, open your preferred web browser and navigate to: http://localhost:3000

At this point, OpenHands is successfully installed and running on your local machine, ready for configuration.


Step 4: Configure OpenHands with GPT-OSS

To configure OpenHands, open its interface and click the Settings (gear icon) in the bottom-left corner of the sidebar.

The Settings page lets you connect OpenHands to an LLM, which serves as its cognitive engine, and integrate it with GitHub for version control and collaboration.

Connect to GPT-OSS via Clarifai

On the Settings page, go to the LLM tab and toggle the Advanced option.

Fill in the following fields for the model integration:

Custom Model — Enter the Clarifai model URL for GPT-OSS-120B. To ensure OpenAI compatibility, prefix the model path with openai/, followed by the full Clarifai model URL: "openai/https://clarifai.com/openai/chat-completion/models/gpt-oss-120b"

Base URL — Enter Clarifai's OpenAI-compatible API endpoint: "https://api.clarifai.com/v2/ext/openai/v1"

API Key — Enter your Clarifai PAT.

After filling in the fields, click the Save Changes button in the bottom-right corner of the interface.
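If you want to confirm your PAT and the endpoint before relying on them in OpenHands, you can call the OpenAI-compatible endpoint directly from the command line. This is an optional sanity check, not part of the official setup; the JSON body follows the standard OpenAI chat-completions format, and passing the full Clarifai model URL in the "model" field is an assumption based on the Custom Model value above:

```shell
# Optional sanity check: hit Clarifai's OpenAI-compatible endpoint with your PAT.
# Replace the placeholder with your actual Personal Access Token.
export CLARIFAI_PAT="your-pat-here"

curl -s https://api.clarifai.com/v2/ext/openai/v1/chat/completions \
  -H "Authorization: Bearer $CLARIFAI_PAT" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "https://clarifai.com/openai/chat-completion/models/gpt-oss-120b",
        "messages": [{"role": "user", "content": "Say hello"}]
      }'
```

A successful response returns a JSON chat completion; an authentication error means the PAT or endpoint needs rechecking.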


While this tutorial focuses on the GPT-OSS-120B model, Clarifai's Community has over 100 open-source and third-party models that you can easily access through the same OpenAI-compatible API. Simply replace the model URL in the Custom Model field with any other model from Clarifai's catalog to experiment with different AI capabilities and find the one that best fits your development workflow.

Step 5: Integrate with GitHub

On the same Settings page, navigate to the Integrations tab.

Enter your GitHub token in the provided field, then click Save Changes in the bottom-right corner of the interface to apply the integration.


Step 6: Start Building with AI-Powered Development

Next, click the plus (+) Start new conversation button at the top of the sidebar. From there, connect to a repository by selecting your desired repo and its branch.

Once selected, click the Launch button to begin your coding session with full repository access.


In the main interface, use the input field to prompt the agent and begin generating your code. The GPT-OSS-120B model will understand your requirements and provide intelligent, context-aware assistance tailored to your connected repository.

Example prompts to get started:

  • Documentation: "Generate a comprehensive README.md file for this repository that explains the project purpose, installation steps, and usage examples."
  • Testing: "Write detailed unit tests for the user authentication functions in the auth.py file, including edge cases and error-handling scenarios."
  • Code Enhancement: "Analyze the database connection logic and refactor it to use connection pooling for better performance and reliability."

OpenHands forwards your request to the configured GPT-OSS-120B model, which responds by generating code suggestions, explanations, and implementations that fit your project context. Once you're satisfied, you can push your code to GitHub directly from the interface, maintaining full version-control integration.


Conclusion

You've set up a fully functional AI coding agent that runs entirely on your local infrastructure using OpenHands and the GPT-OSS-120B model.

If you want to use a model running locally, you can set it up with local runners. For example, you can run the GPT-OSS-20B model locally, expose it as a public API, and use that URL to power your coding agent. Check out the tutorial on running gpt-oss models locally using local runners here.

If you need more computing power, you can deploy gpt-oss models on your own dedicated machines using compute orchestration, then integrate them with your coding agents, giving you greater control over performance and resource allocation.

