An open-source skill development platform where humans and AI agents collaborate — decomposing tasks, iterating skills across simulation and hardware, and composing them into a reusable ecosystem.
A skill-sharing platform for Code-as-Policy systems like Maestro.
Share and discover robot skills and services across the Tidybot ecosystem.
Get started on GitHub
Contact: yishao@seas.upenn.edu
from robot_sdk import arm, gripper, find_objects

orange = find_objects("orange")[0]
pan = find_objects("pan")[0]
arm.move_to(orange.pose, z_offset=0.10)  # hover 10 cm above the orange
arm.move_delta(z=-0.10)                  # descend onto it
gripper.close()
arm.move_to(pan.pose, z_offset=0.20)     # carry above the pan
gripper.open()
L1 Intelligence (orchestrator + dev/eval agents) writes Python skills — each skill is an artifact, not a service — and submits them to L2 Runtime (Agent Server), which calls L3 Substrate (hardware drivers + sim bridges + ML services). Humans hand a task graph to L1 and step out of the loop.
What the robot does. Verb-shaped behaviors that run on the robot—pick, place, scan, navigate—tested with real trials and composed into dependency trees of increasingly complex tasks.
A task is a DAG of the skills shown above. The orchestrator walks the graph, fans out one Claude agent per (skill × sim target), each agent writes Python that the sandbox runs against the sim or hardware, and an evaluator agent reviews the recording before the skill ships into the library.
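The fan-out loop above can be sketched in a few lines. This is a minimal illustration, not the platform's actual code: `spawn_dev_agent`, `evaluate_recording`, the node names, and the sim targets are all hypothetical stand-ins; only the dependency-ordered walk uses the real standard-library `graphlib`.

```python
from graphlib import TopologicalSorter

# Hypothetical skill graph: node -> set of prerequisite skills.
graph = {
    "pick_orange": set(),
    "place_in_pan": {"pick_orange"},
    "serve_dish": {"place_in_pan"},
}
sim_targets = ["kitchen_a", "kitchen_b"]

def spawn_dev_agent(skill, target):
    # Stand-in for launching one dev agent per (skill x sim target);
    # a real agent would write Python and return the trial recording.
    return {"skill": skill, "target": target, "passed": True}

def evaluate_recording(recording):
    # Stand-in for the evaluator agent reviewing the recording.
    return recording["passed"]

library = []
for skill in TopologicalSorter(graph).static_order():
    # Walk the DAG in dependency order; ship only if every target passes.
    recordings = [spawn_dev_agent(skill, t) for t in sim_targets]
    if all(evaluate_recording(r) for r in recordings):
        library.append(skill)
```

`static_order()` guarantees a skill is attempted only after its prerequisites, which is exactly the "unblocks downstream skills" behavior the pipeline describes.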
graph.json — skill nodes, dependencies, and the sim task_env they target.
done, gated by dev_mode.
The agent submits its code to agent_server's /code/submit endpoint; the sandbox runs it against the per-target sim or hardware.
done, unblocks downstream skills in the graph.
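The steps above can be sketched end to end. The `graph.json` shape, the submit payload, and the `dev_mode` flag here are illustrative guesses at the schema, not documented fields:

```python
import json

# Hypothetical graph.json: skill nodes, dependencies, and sim task_env.
graph_json = json.loads("""
{
  "skills": [
    {"name": "pick_orange", "deps": [], "task_env": "RoboCasaKitchen"},
    {"name": "place_in_pan", "deps": ["pick_orange"], "task_env": "RoboCasaKitchen"}
  ]
}
""")

def submit_code(skill_name, code, dev_mode=True):
    # Stand-in for POSTing to the agent_server /code/submit endpoint;
    # dev_mode gates whether a passing run marks the skill done.
    payload = {"skill": skill_name, "code": code, "dev_mode": dev_mode}
    return {"status": "done", "payload": payload}

# A skill is submitted only once all of its deps are done,
# and finishing it unblocks the skills downstream of it.
done = set()
for node in graph_json["skills"]:
    assert all(d in done for d in node["deps"]), f"{node['name']} blocked"
    result = submit_code(node["name"], "arm.move_to(...)")
    if result["status"] == "done":
        done.add(node["name"])
```

The loop assumes `skills` is listed in dependency order; a real orchestrator would topologically sort the graph first.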
How the robot does it. Noun-shaped resources—a hardware driver, a vision model, a grasp planner—behind a uniform API. Drivers live on-robot; heavier models run off-board. Swap what’s underneath without changing anything above.
The platform we shipped, one card per tier: L1 Intelligence orchestrator that fans out Claude SDK agents, L2 Runtime agent_server that sandboxes their code, and the L3 ops deploy-agent that lets agents bring up GPU services on demand. All MIT, all open source.
No hardware needed — runs entirely in simulation. Requires conda plus Claude Code (default) or OpenClaw for Ollama / Gemini support.
curl -fsSL https://raw.githubusercontent.com/TidyBot-Services/Tidybot-Universe/master/setup.sh | bash -s -- YOUR_ENV_NAME
Follow the instructions printed at the end — start your agent in the skill agent directory and type /xbot-dev.
Progress toward autonomous skill development across 115 RoboCasa kitchen tasks in simulation.