In this tutorial, we build an end-to-end, production-style agentic workflow using GraphBit that demonstrates how graph-structured execution, tool calling, and optional LLM-driven agents can coexist in a single system. We begin by initializing and inspecting the GraphBit runtime, then define a realistic customer-support ticket domain with typed data structures and deterministic, offline-executable tools. We show how these tools can be composed into a reliable, rule-based pipeline for classification, routing, and response drafting, and then lift that same logic into a validated GraphBit workflow in which agent nodes orchestrate tool usage via a directed graph. Throughout the tutorial, we keep the system running in offline mode while enabling seamless promotion to online execution by simply providing an LLM configuration, illustrating how GraphBit supports the gradual adoption of agentic intelligence without sacrificing reproducibility or operational control. Check out the Full Codes here.
!pip -q install graphbit rich pydantic numpy

import os
import time
import json
import random
from dataclasses import dataclass
from typing import Dict, Any, List, Optional
import numpy as np
from rich import print as rprint
from rich.panel import Panel
from rich.table import Table
We begin by installing all required dependencies and importing the core Python, numerical, and visualization libraries needed for the tutorial. We set up the runtime environment so the notebook remains self-contained and reproducible on Google Colab.
from graphbit import init, shutdown, configure_runtime, get_system_info, health_check, version
from graphbit import Workflow, Node, Executor, LlmConfig
from graphbit import tool, ToolExecutor, ExecutorConfig
from graphbit import get_tool_registry, clear_tools

configure_runtime(worker_threads=4, max_blocking_threads=8, thread_stack_size_mb=2)
init(log_level="warn", enable_tracing=False, debug=False)

info = get_system_info()
health = health_check()

sys_table = Table(title="System Info / Health")
sys_table.add_column("Key", style="bold")
sys_table.add_column("Value")
for k in ["version", "python_binding_version", "cpu_count", "runtime_worker_threads", "runtime_initialized", "build_target", "build_profile"]:
    sys_table.add_row(k, str(info.get(k)))
sys_table.add_row("graphbit_version()", str(version()))
sys_table.add_row("overall_healthy", str(health.get("overall_healthy")))
rprint(sys_table)
We initialize the GraphBit runtime and explicitly configure its execution parameters to control threading and resource usage. We then query system metadata and perform a health check to verify that the runtime is correctly initialized.
@dataclass
class Ticket:
    ticket_id: str
    user_id: str
    text: str
    created_at: float

def make_tickets(n: int = 10) -> List[Ticket]:
    seeds = [
        "My card payment failed twice, what should I do?",
        "I want to cancel my subscription immediately.",
        "Your app crashes when I open the dashboard.",
        "Please update my email address on the account.",
        "Refund not received after 7 days.",
        "My delivery is delayed and tracking is stuck.",
        "I suspect fraudulent activity on my account.",
        "How can I change my billing cycle date?",
        "The website is very slow and times out.",
        "I forgot my password and cannot login.",
        "Chargeback process details please.",
        "Need invoice for last month’s payment."
    ]
    random.shuffle(seeds)
    out = []
    for i in range(n):
        out.append(
            Ticket(
                ticket_id=f"T-{1000+i}",
                user_id=f"U-{random.randint(100,999)}",
                text=seeds[i % len(seeds)],
                created_at=time.time() - random.randint(0, 7 * 24 * 3600),
            )
        )
    return out

tickets = make_tickets(10)
rprint(Panel.fit("\n".join([f"- {t.ticket_id}: {t.text}" for t in tickets]), title="Sample Tickets"))
We define a strongly typed data model for support tickets and generate a synthetic dataset that simulates realistic customer issues. We construct tickets with timestamps and identifiers to mirror production inputs. This dataset serves as the shared input across both the offline and agent-driven pipelines.
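Because make_tickets shuffles the seed texts and draws random user IDs, each run produces a different dataset; if a fully reproducible run is needed, seeding Python's RNG before generation pins the output. A minimal, self-contained illustration of the pattern:

```python
import random

def make_order(seed: int) -> list:
    # Seeding before any random call makes shuffle/randint deterministic.
    random.seed(seed)
    items = list(range(10))
    random.shuffle(items)
    return items

# Identical seeds reproduce the exact same shuffle.
assert make_order(42) == make_order(42)
print("reproducible:", make_order(42) == make_order(42))
```

Calling random.seed(...) once at the top of the notebook, before make_tickets, applies the same idea to the ticket generator.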
clear_tools()

@tool(_description="Classify a support ticket into a coarse category.")
def classify_ticket(text: str) -> Dict[str, Any]:
    t = text.lower()
    if "fraud" in t or "fraudulent" in t:
        return {"category": "fraud", "priority": "p0"}
    if "cancel" in t:
        return {"category": "cancellation", "priority": "p1"}
    if "refund" in t or "chargeback" in t:
        return {"category": "refunds", "priority": "p1"}
    if "password" in t or "login" in t:
        return {"category": "account_access", "priority": "p2"}
    if "crash" in t or "slow" in t or "timeout" in t:
        return {"category": "bug", "priority": "p2"}
    if "payment" in t or "billing" in t or "invoice" in t:
        return {"category": "billing", "priority": "p2"}
    if "delivery" in t or "tracking" in t:
        return {"category": "delivery", "priority": "p3"}
    return {"category": "general", "priority": "p3"}

@tool(_description="Route a ticket to a queue (returns queue id and SLA hours).")
def route_ticket(category: str, priority: str) -> Dict[str, Any]:
    queue_map = {
        "fraud": ("risk_ops", 2),
        "cancellation": ("retention", 8),
        "refunds": ("payments_ops", 12),
        "account_access": ("identity", 12),
        "bug": ("engineering_support", 24),
        "billing": ("billing_support", 24),
        "delivery": ("logistics_support", 48),
        "general": ("support_general", 48),
    }
    q, sla = queue_map.get(category, ("support_general", 48))
    if priority == "p0":
        sla = min(sla, 2)
    elif priority == "p1":
        sla = min(sla, 8)
    return {"queue": q, "sla_hours": sla}

@tool(_description="Generate a playbook response based on category + priority.")
def draft_response(category: str, priority: str, ticket_text: str) -> Dict[str, Any]:
    templates = {
        "fraud": "We’ve temporarily secured your account. Please confirm the last 3 transactions and reset credentials.",
        "cancellation": "We can help cancel your subscription. Please confirm your plan and the effective date you want.",
        "refunds": "We’re checking the refund status. Please share the order/payment reference and date.",
        "account_access": "Let’s get you back in. Please use the password reset link; if blocked, we’ll verify identity.",
        "bug": "Thanks for reporting. Please share device/browser + a screenshot; we’ll attempt reproduction.",
        "billing": "We can help with billing. Please confirm the last 4 digits and the invoice period you need.",
        "delivery": "We’re checking shipment status. Please share your tracking ID and delivery address PIN/ZIP.",
        "general": "Thanks for reaching out."
    }
    base = templates.get(category, templates["general"])
    tone = "urgent" if priority == "p0" else ("fast" if priority == "p1" else "standard")
    return {
        "tone": tone,
        "message": f"{base}\n\nContext we received: '{ticket_text}'",
        "next_steps": ["request_missing_info", "log_case", "route_to_queue"]
    }

registry = get_tool_registry()
tools_list = registry.list_tools() if hasattr(registry, "list_tools") else []
rprint(Panel.fit(f"Registered tools: {tools_list}", title="Tool Registry"))
We register deterministic business tools for ticket classification, routing, and response drafting using GraphBit’s tool interface. We encode domain logic directly into these tools so they can be executed without any LLM dependency. This establishes a reliable, testable foundation for later agent orchestration.
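Because the tools are ordinary Python functions under the decorator, their keyword rules can be sanity-checked without any runtime or LLM. A minimal, self-contained sketch that restates two of the rules as standalone functions for illustration (in the notebook itself you can call classify_ticket directly in the same table-driven style):

```python
from typing import Any, Dict

def classify(text: str) -> Dict[str, Any]:
    # Standalone restatement of a subset of classify_ticket's keyword rules.
    t = text.lower()
    if "fraud" in t or "fraudulent" in t:
        return {"category": "fraud", "priority": "p0"}
    if "refund" in t or "chargeback" in t:
        return {"category": "refunds", "priority": "p1"}
    return {"category": "general", "priority": "p3"}

# Table-driven checks: each input must land in the expected category.
cases = [
    ("I suspect fraudulent activity on my account.", "fraud"),
    ("Refund not received after 7 days.", "refunds"),
    ("Please update my email address.", "general"),
]
for text, expected in cases:
    assert classify(text)["category"] == expected
print("all classification checks passed")
```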
tool_exec_cfg = ExecutorConfig(
    max_execution_time_ms=10_000,
    max_tool_calls=50,
    continue_on_error=False,
    store_results=True,
    enable_logging=False
)
tool_executor = ToolExecutor(config=tool_exec_cfg) if "config" in ToolExecutor.__init__.__code__.co_varnames else ToolExecutor()

def offline_triage(ticket: Ticket) -> Dict[str, Any]:
    c = classify_ticket(ticket.text)
    rt = route_ticket(c["category"], c["priority"])
    dr = draft_response(c["category"], c["priority"], ticket.text)
    return {
        "ticket_id": ticket.ticket_id,
        "user_id": ticket.user_id,
        "category": c["category"],
        "priority": c["priority"],
        "queue": rt["queue"],
        "sla_hours": rt["sla_hours"],
        "draft": dr["message"],
        "tone": dr["tone"],
        "steps": [
            ("classify_ticket", c),
            ("route_ticket", rt),
            ("draft_response", dr),
        ]
    }

offline_results = [offline_triage(t) for t in tickets]

res_table = Table(title="Offline Pipeline Results")
res_table.add_column("Ticket", style="bold")
res_table.add_column("Category")
res_table.add_column("Priority")
res_table.add_column("Queue")
res_table.add_column("SLA (h)")
for r in offline_results:
    res_table.add_row(r["ticket_id"], r["category"], r["priority"], r["queue"], str(r["sla_hours"]))
rprint(res_table)

prio_counts: Dict[str, int] = {}
sla_vals: List[int] = []
for r in offline_results:
    prio_counts[r["priority"]] = prio_counts.get(r["priority"], 0) + 1
    sla_vals.append(int(r["sla_hours"]))

metrics = {
    "offline_mode": True,
    "tickets": len(offline_results),
    "priority_distribution": prio_counts,
    "sla_mean": float(np.mean(sla_vals)) if sla_vals else None,
    "sla_p95": float(np.percentile(sla_vals, 95)) if sla_vals else None,
}
rprint(Panel.fit(json.dumps(metrics, indent=2), title="Offline Metrics"))
We compose the registered tools into an offline execution pipeline and apply it across all tickets to produce structured triage results. We aggregate outputs into tables and compute priority and SLA metrics to evaluate system behavior. This demonstrates how GraphBit-based logic can be validated deterministically before introducing agents.
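The same result dicts support other slices; for example, a per-queue breakdown of ticket counts and the tightest SLA falls out of a small fold over offline_results. A self-contained sketch over synthetic rows shaped like the pipeline output:

```python
from typing import Dict, List, Tuple

# Synthetic rows with the same shape as the offline_triage output fields used here.
rows: List[Dict] = [
    {"queue": "risk_ops", "sla_hours": 2},
    {"queue": "payments_ops", "sla_hours": 8},
    {"queue": "payments_ops", "sla_hours": 12},
]

def per_queue(rows: List[Dict]) -> Dict[str, Tuple[int, int]]:
    # Map each queue to (ticket count, tightest SLA in hours).
    acc: Dict[str, Tuple[int, int]] = {}
    for r in rows:
        count, sla = acc.get(r["queue"], (0, 10**9))
        acc[r["queue"]] = (count + 1, min(sla, r["sla_hours"]))
    return acc

print(per_queue(rows))  # {'risk_ops': (1, 2), 'payments_ops': (2, 8)}
```

In the notebook, passing offline_results instead of the synthetic rows gives the live per-queue view.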
SYSTEM_POLICY = "You are a reliable support ops agent. Return STRICT JSON only."

workflow = Workflow("Ticket Triage Workflow (GraphBit)")

summarizer = Node.agent(
    name="Summarizer",
    agent_id="summarizer",
    system_prompt=SYSTEM_POLICY,
    prompt="Summarize this ticket in 1-2 lines. Return JSON: {\"summary\":\"...\"}\nTicket: {input}",
    temperature=0.2,
    max_tokens=200
)
router_agent = Node.agent(
    name="RouterAgent",
    agent_id="router",
    system_prompt=SYSTEM_POLICY,
    prompt=(
        "You MUST use tools.\n"
        "Call classify_ticket(text), route_ticket(category, priority), draft_response(category, priority, ticket_text).\n"
        "Return JSON with fields: category, priority, queue, sla_hours, message.\n"
        "Ticket: {input}"
    ),
    tools=[classify_ticket, route_ticket, draft_response],
    temperature=0.1,
    max_tokens=700
)
formatter = Node.agent(
    name="FinalFormatter",
    agent_id="final_formatter",
    system_prompt=SYSTEM_POLICY,
    prompt=(
        "Validate the JSON and output STRICT JSON only:\n"
        "{\"ticket_id\":\"...\",\"category\":\"...\",\"priority\":\"...\",\"queue\":\"...\",\"sla_hours\":0,\"customer_message\":\"...\"}\n"
        "Input: {input}"
    ),
    temperature=0.0,
    max_tokens=500
)
sid = workflow.add_node(summarizer)
rid = workflow.add_node(router_agent)
fid = workflow.add_node(formatter)
workflow.connect(sid, rid)
workflow.connect(rid, fid)
workflow.validate()
rprint(Panel.fit("Workflow validated: Summarizer -> RouterAgent -> FinalFormatter", title="Workflow Graph"))
We construct a directed GraphBit workflow composed of multiple agent nodes with clearly defined responsibilities and strict JSON contracts. We connect these nodes into a validated execution graph that mirrors the earlier offline logic at the agent level.
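workflow.validate() is what guarantees the connected nodes form a runnable graph before anything executes. Conceptually, the core of such a check is cycle detection over the directed edges; a minimal, pure-Python illustration of that idea (a conceptual sketch, not GraphBit's actual implementation):

```python
from typing import Dict, List

def is_acyclic(edges: Dict[str, List[str]]) -> bool:
    # Kahn-style check: repeatedly remove nodes with no incoming edges;
    # if every node can be removed, the graph has no cycle.
    nodes = set(edges) | {v for vs in edges.values() for v in vs}
    indeg = {n: 0 for n in nodes}
    for vs in edges.values():
        for v in vs:
            indeg[v] += 1
    ready = [n for n in nodes if indeg[n] == 0]
    seen = 0
    while ready:
        n = ready.pop()
        seen += 1
        for v in edges.get(n, []):
            indeg[v] -= 1
            if indeg[v] == 0:
                ready.append(v)
    return seen == len(nodes)

# The tutorial's chain Summarizer -> RouterAgent -> FinalFormatter is acyclic.
chain = {"Summarizer": ["RouterAgent"], "RouterAgent": ["FinalFormatter"]}
print(is_acyclic(chain))  # True
print(is_acyclic({"A": ["B"], "B": ["A"]}))  # False
```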
def pick_llm_config() -> Optional[Any]:
    if os.getenv("OPENAI_API_KEY"):
        return LlmConfig.openai(os.getenv("OPENAI_API_KEY"), "gpt-4o-mini")
    if os.getenv("ANTHROPIC_API_KEY"):
        return LlmConfig.anthropic(os.getenv("ANTHROPIC_API_KEY"), "claude-sonnet-4-20250514")
    if os.getenv("DEEPSEEK_API_KEY"):
        return LlmConfig.deepseek(os.getenv("DEEPSEEK_API_KEY"), "deepseek-chat")
    if os.getenv("MISTRALAI_API_KEY"):
        return LlmConfig.mistralai(os.getenv("MISTRALAI_API_KEY"), "mistral-large-latest")
    return None

def run_agent_flow_once(ticket_text: str) -> Dict[str, Any]:
    llm_cfg = pick_llm_config()
    if llm_cfg is None:
        return {
            "mode": "offline",
            "note": "Set OPENAI_API_KEY / ANTHROPIC_API_KEY / DEEPSEEK_API_KEY / MISTRALAI_API_KEY to enable execution.",
            "input": ticket_text
        }
    executor = Executor(llm_cfg, lightweight_mode=True, timeout_seconds=90, debug=False) if "lightweight_mode" in Executor.__init__.__code__.co_varnames else Executor(llm_cfg)
    if hasattr(executor, "configure"):
        executor.configure(timeout_seconds=90, max_retries=2, enable_metrics=True, debug=False)
    wf = Workflow("Single Ticket Run")
    s = Node.agent(
        name="Summarizer",
        agent_id="summarizer",
        system_prompt=SYSTEM_POLICY,
        prompt=f"Summarize this ticket in 1-2 lines. Return JSON: {{\"summary\":\"...\"}}\nTicket: {ticket_text}",
        temperature=0.2,
        max_tokens=200
    )
    r = Node.agent(
        name="RouterAgent",
        agent_id="router",
        system_prompt=SYSTEM_POLICY,
        prompt=(
            "You MUST use tools.\n"
            "Call classify_ticket(text), route_ticket(category, priority), draft_response(category, priority, ticket_text).\n"
            "Return JSON with fields: category, priority, queue, sla_hours, message.\n"
            f"Ticket: {ticket_text}"
        ),
        tools=[classify_ticket, route_ticket, draft_response],
        temperature=0.1,
        max_tokens=700
    )
    f = Node.agent(
        name="FinalFormatter",
        agent_id="final_formatter",
        system_prompt=SYSTEM_POLICY,
        prompt=(
            "Validate the JSON and output STRICT JSON only:\n"
            "{\"ticket_id\":\"...\",\"category\":\"...\",\"priority\":\"...\",\"queue\":\"...\",\"sla_hours\":0,\"customer_message\":\"...\"}\n"
            "Input: {input}"
        ),
        temperature=0.0,
        max_tokens=500
    )
    sid = wf.add_node(s)
    rid = wf.add_node(r)
    fid = wf.add_node(f)
    wf.connect(sid, rid)
    wf.connect(rid, fid)
    wf.validate()
    t0 = time.time()
    result = executor.execute(wf)
    dt_ms = int((time.time() - t0) * 1000)
    out = {"mode": "online", "execution_time_ms": dt_ms, "success": bool(result.is_success()) if hasattr(result, "is_success") else None}
    if hasattr(result, "get_all_variables"):
        out["variables"] = result.get_all_variables()
    else:
        out["raw"] = str(result)[:3000]
    return out

sample = tickets[0]
agent_run = run_agent_flow_once(sample.text)
rprint(Panel.fit(json.dumps(agent_run, indent=2)[:3000], title="Agent Workflow Run"))
rprint(Panel.fit("Completed", title="Complete"))
We add optional LLM configuration and execution logic that allows the same workflow to run autonomously when a provider key is available. We execute the workflow on a single ticket and capture execution status and outputs. This final step illustrates how the system seamlessly transitions from offline determinism to fully agentic execution.
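The offline-or-online promotion can also be framed as an explicit fallback policy: try the agentic runner when one is configured, and degrade to the deterministic pipeline on any failure. A minimal, self-contained sketch of the pattern (the runner functions here are hypothetical stand-ins for run_agent_flow_once and offline_triage):

```python
from typing import Any, Callable, Dict, Optional

Runner = Callable[[str], Dict[str, Any]]

def triage_with_fallback(text: str, online: Optional[Runner], offline: Runner) -> Dict[str, Any]:
    # Prefer the agentic path when configured; on any failure, or when
    # no LLM is configured at all, fall back to the deterministic pipeline.
    if online is not None:
        try:
            return {"mode": "online", **online(text)}
        except Exception as exc:
            return {"mode": "offline", "fallback_reason": str(exc), **offline(text)}
    return {"mode": "offline", **offline(text)}

# Hypothetical deterministic runner standing in for offline_triage.
def stub_offline(text: str) -> Dict[str, Any]:
    return {"queue": "support_general"}

print(triage_with_fallback("refund please", None, stub_offline)["mode"])  # offline
```

This keeps the offline pipeline as the guaranteed path, so the system always produces a triage result even when the provider is down.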
In conclusion, we implemented a complete GraphBit workflow spanning runtime configuration, tool registration, offline deterministic execution, metric aggregation, and optional agent-based orchestration with external LLM providers. We demonstrated how the same business logic can be executed both manually via tools and automatically via agent nodes connected in a validated graph, highlighting GraphBit’s strength as an execution substrate rather than just an LLM wrapper. We showed that complex agentic systems can be designed to fail gracefully, run without external dependencies, and still scale to fully autonomous workflows when LLMs are enabled.
Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of Artificial Intelligence for social good. His most recent endeavor is the launch of an Artificial Intelligence Media Platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable by a wide audience. The platform boasts over 2 million monthly views, illustrating its popularity among audiences.
