# Getting Started

## Installation

### From PyPI

```shell
pip install dcc-mcp-core
```

### From Source (requires Rust toolchain)
```shell
git clone https://github.com/loonghao/dcc-mcp-core.git
cd dcc-mcp-core
pip install -e .
```

> **TIP**
> Building from source requires the Rust toolchain. Install it from rustup.rs. The build is handled by maturin, which compiles the Rust core and installs the Python package.
## Requirements
- Python: >= 3.7 (CI tests 3.7, 3.8, 3.9, 3.10, 3.11, 3.12, 3.13)
- Rust: >= 1.85 (for building from source)
- License: MIT
- Python Dependencies: Zero — everything is in the compiled Rust extension
## Quick Start

### Skills-First: `create_skill_server` (recommended since v0.12.12)

The fastest way to expose scripts as MCP tools. Create a `SKILL.md` in your script folder, then use `create_skill_server` to wire everything in one call:
```python
import os

from dcc_mcp_core import create_skill_server, McpHttpConfig

# Point to your skill directories (per-app env var)
os.environ["DCC_MCP_MAYA_SKILL_PATHS"] = "/path/to/my-skills"

# One call: discover skills + start MCP HTTP server
server = create_skill_server("maya", McpHttpConfig(port=8765))
handle = server.start()
print(f"Maya MCP server at {handle.mcp_url()}")
# AI clients (Claude Desktop, etc.) connect to http://127.0.0.1:8765/mcp
```

Or use `SkillCatalog` directly for more control:
```python
import os

from dcc_mcp_core import SkillCatalog, ToolRegistry

os.environ["DCC_MCP_SKILL_PATHS"] = "/path/to/my-skills"

registry = ToolRegistry()
catalog = SkillCatalog(registry)
discovered = catalog.discover(dcc_name="maya")
print(f"Discovered {discovered} skills")

# Load a skill and inspect the registered tool names
tool_names = catalog.load_skill("maya-geometry")
print(tool_names)
```

See the Skills System guide for writing SKILL.md files and advanced options.
### Writing a Minimal SKILL.md

Create a skill in three steps:

```shell
# 1. Create the skill directory structure
mkdir -p my-skill/scripts

# 2. Write SKILL.md (follows the agentskills.io specification)
cat > my-skill/SKILL.md << 'EOF'
---
name: my-skill
description: "Does something useful in Maya. Use when user asks to do X."
dcc: maya
version: "1.0.0"
search-hint: "keyword1, keyword2, related task"
---

# My Skill

Instructions for the AI agent on how to use this skill.
EOF

# 3. Add a script
cat > my-skill/scripts/do_thing.py << 'EOF'
import sys, json

def main():
    params = json.loads(sys.stdin.read())
    # ... do work ...
    print(json.dumps({"success": True, "message": "Done"}))

if __name__ == "__main__":
    main()
EOF
```

Then set `DCC_MCP_SKILL_PATHS` to the parent directory and use `create_skill_server` or `SkillCatalog.discover()`.
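Before wiring the skill into a server, you can exercise the script's stdin/stdout JSON contract directly. A minimal sketch, using a throwaway stand-in script written to a temp file (not part of the library):

```python
import json
import subprocess
import sys
import tempfile
import textwrap

# Stand-in skill script following the same contract as do_thing.py above:
# read JSON params from stdin, print a JSON result to stdout.
script = textwrap.dedent("""\
    import sys, json
    params = json.loads(sys.stdin.read())
    print(json.dumps({"success": True, "message": f"made {params['count']} spheres"}))
""")

with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write(script)
    script_path = f.name

# Drive it the way a host process would.
proc = subprocess.run(
    [sys.executable, script_path],
    input=json.dumps({"count": 5}),
    capture_output=True,
    text=True,
)
result = json.loads(proc.stdout)
print(result)  # {'success': True, 'message': 'made 5 spheres'}
```

If this round-trip works from the shell, the same script should behave under skill discovery.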
## Tool Registry

```python
from dcc_mcp_core import ToolRegistry

registry = ToolRegistry()
registry.register(
    name="create_sphere",
    description="Creates a sphere in the scene",
    category="geometry",
    tags=["geometry", "creation"],
    dcc="maya",
)

tool = registry.get_action("create_sphere")
print(tool)  # dict with tool metadata

maya_tools = registry.list_actions(dcc_name="maya")
```

### Action → Tool terminology

In v0.13+, the project renamed "action" → "tool" at the conceptual level. However, some Rust API method names (`get_action`, `list_actions`, `search_actions`) still use "action" for backward compatibility. These are not bugs — they are compatibility aliases.
## Tool Results

```python
from dcc_mcp_core import success_result, error_result

result = success_result("Created 5 spheres", prompt="Use modify next", count=5)
print(result.success)  # True
print(result.message)  # "Created 5 spheres"
print(result.context)  # {"count": 5}

err = error_result("Failed", "File not found", prompt="Check path")
print(err.success)  # False
```

## Event Bus
```python
from dcc_mcp_core import EventBus

bus = EventBus()
sid = bus.subscribe("scene.changed", lambda: print("Scene updated!"))
bus.publish("scene.changed")
bus.unsubscribe("scene.changed", sid)
```

## MCP HTTP Server
Expose your registry to AI clients (Claude Desktop, etc.) over HTTP in one call:
```python
from dcc_mcp_core import ToolRegistry, McpHttpServer, McpHttpConfig

registry = ToolRegistry()
# ... register tools or load skills ...

config = McpHttpConfig(port=8765)
server = McpHttpServer(registry, config)
handle = server.start()
print(f"MCP server running at {handle.mcp_url()}")
# handle.shutdown() to shut down
```

### Job lifecycle notifications
Every `tools/call` emits SSE notifications on completion (issue #326):

- `notifications/progress` — fires when the call included `_meta.progressToken`.
- `notifications/$/dcc.jobUpdated` — fires on every status transition while `McpHttpConfig.enable_job_notifications` is `True` (the default).
- `notifications/$/dcc.workflowUpdated` — emitted by the workflow executor (#348).

Disable the `$/dcc.*` channels with `cfg.enable_job_notifications = False`; the spec-mandated progress channel still fires whenever a token is supplied.
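On the client side these notifications arrive as standard Server-Sent Events. A minimal parsing sketch; the sample payload below is hypothetical (real field names depend on the server), so treat this as an illustration of the SSE framing only:

```python
import json

# Hypothetical sample of an SSE block carrying a $/dcc.jobUpdated
# notification; actual payload fields may differ.
sample_stream = (
    "event: message\n"
    'data: {"method": "notifications/$/dcc.jobUpdated", '
    '"params": {"status": "completed"}}\n'
    "\n"
)

def parse_sse(text):
    # Collect the JSON bodies of "data:" lines, one event per
    # blank-line-separated block.
    events = []
    for block in text.strip().split("\n\n"):
        for line in block.split("\n"):
            if line.startswith("data: "):
                events.append(json.loads(line[len("data: "):]))
    return events

events = parse_sse(sample_stream)
print(events[0]["method"])  # notifications/$/dcc.jobUpdated
```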
## Instance-Bound Diagnostics

When multiple DCC instances run side-by-side (two Maya processes, Maya + Blender, etc.), each adapter server should be bound to its own DCC process so diagnostics (screenshot, audit log, metrics) target the right window and PID.

`DccServerBase` accepts three optional instance-binding kwargs and exposes four `diagnostics__*` MCP tools:
```python
from dcc_mcp_core import DccServerBase

class MayaServer(DccServerBase):
    def __init__(self, pid: int, window_title: str):
        super().__init__(
            dcc_name="maya",
            builtin_skills_dir=None,
            dcc_pid=pid,                    # owner DCC PID
            dcc_window_title=window_title,  # fallback match when PID lookup fails
            # dcc_window_handle=0x00A1B2,   # or pass an HWND directly
        )

server = MayaServer(pid=12345, window_title="Autodesk Maya 2024")
handle = server.start()  # exposes diagnostics__screenshot / audit_log /
                         # tool_metrics / process_status tools bound to
                         # this Maya instance only
```

If the PID can change at runtime (e.g. the user relaunches Maya), pass a lazy resolver callable instead of `dcc_pid`:
```python
def current_maya_pid() -> int | None:
    return _find_maya_pid()  # evaluated on every diagnostics call

server = DccServerBase("maya", resolver=current_maya_pid, ...)
```

For low-level servers built around `McpHttpServer` directly, call `register_diagnostic_mcp_tools(server, dcc_name=..., dcc_pid=...)` before `server.start()` — per the "register all actions before start" rule.
## Development Setup
```shell
git clone https://github.com/loonghao/dcc-mcp-core.git
cd dcc-mcp-core

# Install with vx (recommended)
vx just install

# Or manual setup
pip install maturin
maturin develop
```

### Running Tests
```shell
vx just test
vx just lint
```

## Next Steps
- Learn about Tools & Registry — the tool registration layer
- Explore Events & Telemetry for lifecycle hooks and lightweight execution metrics
- Check out the Skills System for zero-code script registration
- Expose tools with MCP HTTP Server
- See the Transport Layer for DCC communication
- Understand the Architecture of the 15-crate Rust workspace
- Learn Skill Scopes & Policies for trust-based skill management
- Validate tool names with SEP-986 Naming Rules
## Troubleshooting

### Build/Import Errors
```shell
# Symbol in __init__.py but ImportError → rebuild the dev wheel
vx just dev

# Verify import works
python -c "import dcc_mcp_core; print(hasattr(dcc_mcp_core, 'MyNewSymbol'))"

# Verbose cargo build to catch errors
cargo build --workspace --features python-bindings 2>&1 | grep -E "error|warning" | head -30
```

### Common Mistakes
| Problem | Solution |
|---|---|
| `scan_and_load` returns wrong results | Always unpack: `skills, skipped = scan_and_load(...)` — it returns a 2-tuple |
| `success_result` context is empty | Pass kwargs directly: `success_result("msg", count=5)` — NOT `context={"count": 5}` |
| `ToolDispatcher.call()` not found | Use `.dispatch(name, json_str)` — there is no `.call()` method |
| `McpHttpServer` tools not appearing | Register all tools BEFORE `server.start()` — the server reads the registry at startup |
| `SkillScope` / `SkillPolicy` ImportError | These are Rust-only types. Use SKILL.md frontmatter and `SkillMetadata` methods instead |
| `DeferredExecutor` ImportError | Import directly: `from dcc_mcp_core._core import DeferredExecutor` |
| Skill scripts not discovered | Check the `DCC_MCP_SKILL_PATHS` env var and that the `dcc:` field in SKILL.md matches your filter |
| `ActionMeta` AttributeError | Rust-only type. Use `ToolRegistry.set_tool_enabled()` and `list_tools_in_group()` instead |
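The first pitfall in the table is plain Python tuple handling. A toy illustration with a stand-in function (not the real `scan_and_load`, just the same 2-tuple return shape):

```python
def scan_and_load_stub(path):
    # Stand-in mirroring the documented return shape:
    # (loaded skills, skipped entries).
    return (["maya-geometry"], ["broken-skill"])

# Wrong: binds the whole 2-tuple, so "skills" is actually (skills, skipped).
skills = scan_and_load_stub("/skills")
print(type(skills).__name__)  # tuple

# Right: unpack both elements explicitly.
skills, skipped = scan_and_load_stub("/skills")
print(skills)   # ['maya-geometry']
print(skipped)  # ['broken-skill']
```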
## AI Agent Best Practices

When building tools for AI agents to consume:

- Design around user workflows, not raw API calls. A tool called `create_character` is better than three separate calls to `create_joint`, `bind_skin`, `apply_animation`.
- Use `ToolAnnotations` to signal safety properties — `read_only_hint=True`, `destructive_hint=False`, `idempotent_hint=True` — so AI clients make informed choices.
- Return human-readable errors via `error_result("msg", "specific error")` with actionable suggestions in `prompt`.
- Use `next-tools` in SKILL.md to guide AI agents to follow-up tools (e.g. `on-failure: [dcc_diagnostics__screenshot]`).
- Keep `tools/list` small by using tool groups with `default_active=false` for power-user features. Agents activate groups on demand.
- Validate all AI-provided inputs with `ToolValidator.from_schema_json()` before execution — never trust LLM output blindly.
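The validation point can be sketched without the library. The checker below is a hand-rolled stand-in for `ToolValidator.from_schema_json()` (the real validator works from JSON Schema; this toy version only checks key presence and Python types):

```python
import json

# Toy schema: required keys mapped to expected Python types. A deliberate
# simplification standing in for real JSON Schema validation.
SCHEMA = {"radius": float, "name": str}

def validate_params(raw_json, schema):
    # Reject malformed or mistyped LLM-provided params before execution.
    params = json.loads(raw_json)
    for key, expected in schema.items():
        if key not in params:
            raise ValueError(f"missing required param: {key}")
        if not isinstance(params[key], expected):
            raise TypeError(f"{key} must be {expected.__name__}")
    return params

params = validate_params('{"radius": 1.5, "name": "ball"}', SCHEMA)
print(params["radius"])  # 1.5
```

Rejecting bad input up front means the tool body can assume well-typed parameters, which keeps `error_result` messages focused on domain failures rather than parsing noise.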
## Building a DCC Adapter with DccServerBase

`DccServerBase` is the recommended base class for building DCC adapters. It bundles all the boilerplate that every adapter needs:
```python
from pathlib import Path

from dcc_mcp_core import DccServerBase

class BlenderMcpServer(DccServerBase):
    def __init__(self, port: int = 8765, **kwargs):
        super().__init__(
            dcc_name="blender",
            builtin_skills_dir=Path(__file__).parent / "skills",
            port=port,
            **kwargs,
        )

    def _version_string(self) -> str:
        import bpy
        return bpy.app.version_string

# That's it — skill management, hot-reload, gateway election are all inherited.
server = BlenderMcpServer(gateway_port=9765)
server.register_builtin_actions()  # discover and load skills
server.enable_hot_reload()         # optional: auto-reload on file changes
handle = server.start()            # returns McpServerHandle
print(f"Running at {handle.mcp_url()}")
```

For zero-boilerplate adapters, use `make_start_stop`:
```python
from dcc_mcp_core import make_start_stop

start_server, stop_server = make_start_stop(
    BlenderMcpServer,
    hot_reload_env_var="DCC_MCP_BLENDER_HOT_RELOAD",
)
```

## DeferredExecutor — DCC Main-Thread Safety
Many DCCs (Maya, Blender, Houdini) require that API calls execute on the main thread. `DeferredExecutor` provides a task queue that the DCC event loop polls:
```python
from dcc_mcp_core._core import DeferredExecutor  # not yet in public __init__

# Create a queue (capacity = max pending tasks)
executor = DeferredExecutor(capacity=16)

# Submit a callable from any thread (e.g. from an MCP HTTP handler)
executor.execute(lambda: maya.cmds.sphere(radius=1.0))

# In the DCC main-loop callback (e.g. Maya's idleCallback, Blender's app.handlers):
executor.poll_pending()  # runs all queued callables on the main thread
```

> **Note:** `DeferredExecutor` is not yet in the public `__init__.py` — import directly from `dcc_mcp_core._core`. This will be promoted to the public API in a future release.
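For intuition, the queue-and-poll pattern itself fits in a few lines of pure Python. This is an illustrative sketch of the mechanism only, not the library's Rust implementation:

```python
import queue

class MiniDeferredExecutor:
    """Pure-Python sketch of the pattern: any thread enqueues callables,
    and only the main loop (the poller) ever runs them."""

    def __init__(self, capacity=16):
        self._tasks = queue.Queue(maxsize=capacity)

    def execute(self, fn):
        # Thread-safe from any thread: queue.Queue handles the locking.
        self._tasks.put(fn)

    def poll_pending(self):
        # Drain everything queued right now; intended to be called from
        # the DCC's main-loop/idle callback. Returns the number run.
        ran = 0
        while True:
            try:
                fn = self._tasks.get_nowait()
            except queue.Empty:
                return ran
            fn()
            ran += 1

executor = MiniDeferredExecutor()
calls = []
executor.execute(lambda: calls.append("sphere"))
executor.execute(lambda: calls.append("cube"))
print(executor.poll_pending(), calls)  # 2 ['sphere', 'cube']
```

Because the callables run inside `poll_pending()`, they execute on whichever thread drives the event loop, which is exactly the main-thread guarantee the DCCs require.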