PhyAgentOS
HCP Lab
PhyAgentOS is a self-evolving embodied AI operating system built on agentic workflows.

Physical Agent Operation System (PhyAgentOS)

https://github.com/SYSU-HCP-EAI/PhyAgentOS


Physical Agent Operation System (PhyAgentOS) is a self-evolving embodied AI framework based on agentic workflows. Moving away from the traditional black-box model in which large models directly control hardware, PhyAgentOS pioneers a "Cognitive-Physical Decoupling" architectural paradigm. By constructing a Language-Action Interface, it completely decouples action representation from embodiment morphology, enabling standardized mapping from high-level reasoning in cloud models to the physical execution layer at the edge.

PhyAgentOS utilizes a "State-as-a-File" protocol matrix, natively supporting zero-code migration across hardware platforms, sandbox-driven tool self-generation, and safety correction mechanisms based on Multi-Agent Critic verification.
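As a minimal sketch of the "State-as-a-File" idea (the file names follow the conventions used in this README, but the exact Markdown schema and helper names are assumptions, not the project's actual API), a cognitive-side process could commit an action and read back the environment state purely through the shared workspace:

```python
from pathlib import Path

WORKSPACE = Path("workspace")  # local workspace shared by all daemons

def commit_action(command: str) -> None:
    """Publish a command by writing ACTION.md for the hardware daemon to pick up."""
    (WORKSPACE / "ACTION.md").write_text(f"# Action\n\n{command}\n", encoding="utf-8")

def read_environment() -> str:
    """Read the latest environment state published by the hardware daemon."""
    env = WORKSPACE / "ENVIRONMENT.md"
    return env.read_text(encoding="utf-8") if env.exists() else ""

WORKSPACE.mkdir(exist_ok=True)
commit_action("move_to(kitchen)")
state = read_environment()  # empty until the hardware track writes ENVIRONMENT.md
```

Because every exchange is a plain Markdown file on disk, any component (or a human with a text editor) can inspect, replay, or override the interaction, which is what makes the decoupling transparent.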

Framework


At its core, PhyAgentOS is a local workspace in which software and hardware operate as independent daemons that communicate by reading and writing files:

  • 📝 State-as-a-File: Software and hardware communicate by reading/writing local Markdown files (e.g., `ENVIRONMENT.md`, `ACTION.md`), ensuring complete decoupling and extreme transparency.
  • 🧠 Dual-Track Multi-Agent System:
    • Track A (Cognitive Core): Includes Planner and Critic mechanisms. Large models do not issue commands directly; every command must be verified by the Critic against the current robot's runtime `EMBODIED.md` (copied from profiles) before being committed.
    • Track B (Physical Execution): An independent hardware watchdog (`hal_watchdog.py`) monitors and executes commands. Supports both single-instance mode and Fleet mode for multi-robot coordination.
  • 🔌 Dynamic Plugin Mechanism: External hardware drivers are loaded dynamically from `hal/drivers/`, adding support for new hardware without modifying core code.
  • 🛡️ Safety Correction Mechanism: Strict action verification and a `LESSONS.md` experience library prevent agent workflows from running out of control.
  • 🎮 Simulation Loop: Built-in lightweight simulation support allows verification of the full chain, from natural language instruction to physical state change, without real hardware.
  • 🗺️ Semantic Navigation & Perception: The built-in SemanticNavigationTool and PerceptionService resolve high-level semantic goals into physical coordinates and construct scene graphs by fusing geometric and semantic information.
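The dual-track split above can be sketched as one poll cycle of a file-driven watchdog. Everything here is a hypothetical illustration (the helper names, the one-backticked-name-per-bullet `EMBODIED.md` format, and the rejection log are assumptions; the real `hal_watchdog.py` interface may differ): the loop reads `ACTION.md`, verifies the command against the embodiment's declared capabilities, executes it via a driver callable, and publishes the result to `ENVIRONMENT.md`.

```python
from pathlib import Path

WORKSPACE = Path("workspace")

def load_capabilities() -> set[str]:
    """Parse allowed action names from EMBODIED.md (assumed format: one `name` per bullet)."""
    text = (WORKSPACE / "EMBODIED.md").read_text(encoding="utf-8")
    return {line.split("`")[1] for line in text.splitlines() if "`" in line}

def watchdog_step(execute) -> bool:
    """One poll cycle: read ACTION.md, verify, execute, publish ENVIRONMENT.md."""
    action_file = WORKSPACE / "ACTION.md"
    if not action_file.exists():
        return False
    command = action_file.read_text(encoding="utf-8").strip().splitlines()[-1]
    name = command.split("(")[0]
    if name not in load_capabilities():
        # Safety correction: reject commands this embodiment cannot perform
        # and record the incident for the experience library.
        with (WORKSPACE / "LESSONS.md").open("a", encoding="utf-8") as f:
            f.write(f"- rejected: {command}\n")
        action_file.unlink()
        return False
    result = execute(command)  # in the real system, dispatch to a hal/drivers/ plugin
    (WORKSPACE / "ENVIRONMENT.md").write_text(f"# Environment\n\n{result}\n", encoding="utf-8")
    action_file.unlink()
    return True
```

Note how the safety check lives on the physical track: even if the cognitive track's Critic is bypassed, a command outside the robot's `EMBODIED.md` capabilities never reaches a driver.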