Skill details (on-site mirror, no comments)
Author: robodan @danmartinez78
License: MIT-0
MIT-0 · Free to use, modify, and redistribute. No attribution required.
Version: v1.0.4
Stats: ⭐ 1 · 245 · 0 current installs · 0 all-time installs
🛡 Security scan (ClawHub): VirusTotal: benign · OpenClaw: benign
Package:danmartinez78/vectorclaw-mcp
OpenClaw evaluation
The skill's declared requirements and instructions are coherent for controlling an Anki/Digital Dream Labs Vector robot, but it relies on installing a pip package that is not included in the skill bundle, so you should review that package and its repository before installing.
Purpose
Name, description, and runtime instructions align: controlling a Vector robot via MCP reasonably requires python3, a VECTOR_SERIAL, an SDK config, and launching a Python MCP server. The listed tools (speak, drive, camera, sensors) match that purpose.
Instruction scope
SKILL.md only instructs robot-related actions: installing the package, configuring the Vector SDK (~/.anki_vector/sdk_config.ini), setting VECTOR_SERIAL, and adding an MCP server entry. It does not ask to read unrelated system files or exfiltrate data.
Installation mechanism
Installation is via pip (vectorclaw-mcp). Using PyPI is expected for a Python MCP package but carries the normal risk that arbitrary code will be installed; the skill bundle itself contains no code to inspect, so the package should be audited (or installed in an isolated environment) before use.
Credentials
The only environment variable required is VECTOR_SERIAL, which is appropriate for addressing a particular robot. The SDK config path is expected for Vector SDK usage. No unrelated credentials or broad secrets are requested.
Persistence
The skill is not always-enabled and does not request elevated platform privileges. It relies on launching its own MCP server process (normal for this use). Autonomous invocation is allowed by default but not combined with other concerning flags.
Overall conclusion
This skill appears to do what it claims (control a Vector robot) and only requests the robot serial and an SDK config. However, the skill bundle does not include the actual Python package (vectorclaw-mcp); SKILL.md instructs you to pip-install it. Before installing or running the MCP server: 1) review the package source (the linked GitHub repo) or the PyPI project to inspect the code; 2) install in an isolated environment (virtualenv, contain…
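The audit steps above can be sketched as a short shell session. The package name `vectorclaw-mcp` is taken from SKILL.md, and the temp paths are illustrative; the download/inspect commands are left commented so nothing is fetched until you choose to run them.

```shell
# Create a throwaway virtualenv so nothing touches the system Python.
python3 -m venv /tmp/vectorclaw-audit
. /tmp/vectorclaw-audit/bin/activate

# Fetch the source distribution WITHOUT running any of its setup code,
# then list its contents for review (uncomment once ready to download):
#   pip download vectorclaw-mcp --no-deps --no-binary :all: -d /tmp/vectorclaw-src
#   tar -tzf /tmp/vectorclaw-src/vectorclaw-mcp-*.tar.gz

deactivate
```

Only after reviewing the extracted source would you `pip install` inside that same environment.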
Installation (copy to 龙虾 AI)
Copy the entire block below into a 龙虾中文库 conversation, and 龙虾 will complete the installation per SKILL.md.
Please hand this block to 龙虾中文库 (龙虾 AI) to execute: install the OpenClaw skill "VectorClaw MCP" on this machine. Summary: MCP tools for Anki Vector: speech, motion, camera, sensors, and automation work….
Please fetch the following URL to read SKILL.md and complete the installation per the document: https://raw.githubusercontent.com/openclaw/skills/refs/heads/main/skills/danmartinez78/vectorclaw-mcp/SKILL.md
(Source: yingzhi8.cn skill library)
SKILL.md
---
name: vectorclaw-mcp
description: "MCP tools for Anki Vector: speech, motion, camera, sensors, and automation workflows."
openclaw:
  emoji: "🤖"
  requires:
    bins: ["python3"]
    env: ["VECTOR_SERIAL"]
  install:
    - id: pip
      kind: pip
      package: vectorclaw-mcp
      label: "Install VectorClaw MCP (pip)"
  mcp:
    servers:
      vectorclaw:
        command: python3
        args:
          - "-m"
          - "vectorclaw_mcp.server"
        env:
          VECTOR_SERIAL: "${VECTOR_SERIAL}"
---
# VectorClaw MCP
VectorClaw connects OpenClaw to an Anki / Digital Dream Labs Vector robot through MCP.
It provides practical robot control primitives for speech, movement, camera capture, and status/sensor reads.
## What you can do
- Speak text with `vector_say`
- Move and position with `vector_drive`, `vector_head`, `vector_lift`
- Capture camera images with `vector_look` and `vector_capture_image`
- Read robot state with `vector_status`, `vector_pose`, `vector_proximity_status`, `vector_touch_status`
- Build look → reason → act workflows
## Vision requirement for look → reason → act
For see → reason → act workflows, the agent must either be vision-capable itself (e.g., a VLM) or have access to a separate vision model/image-interpretation tool to analyze camera images before choosing actions.
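The loop above can be sketched in Python. The tool names come from this skill's tool list, but `call_tool` stands in for however your agent framework dispatches MCP calls, `analyze_image` stands in for a separate vision model, and the `distance_mm` parameter name is illustrative, not the tool's documented signature.

```python
def look_reason_act(call_tool, analyze_image, max_steps=5):
    """Drive forward until the vision model reports an obstacle.

    call_tool: dispatches an MCP tool by name (hypothetical interface).
    analyze_image: separate vision model returning a dict of observations.
    """
    for _ in range(max_steps):
        image = call_tool("vector_capture_image")      # look
        scene = analyze_image(image)                   # reason
        if scene.get("obstacle"):
            call_tool("vector_emergency_stop")         # act: stop
            return "stopped"
        call_tool("vector_drive", distance_mm=100)     # act: advance
    return "max_steps"
```

The point is the ordering: the agent never chooses a motion action until the captured image has been interpreted by a vision-capable component.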
## Requirements
- Vector robot configured and reachable
- Wire-Pod running
- SDK configured at `~/.anki_vector/sdk_config.ini`
- `VECTOR_SERIAL` environment variable set
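A small sketch that checks the local prerequisites above before launching the server. The config path and env var name are taken from this list; the helper itself is illustrative, and it cannot verify that the robot is reachable or that Wire-Pod is running.

```python
import os
import shutil

def missing_prereqs(env=os.environ, home=None):
    """Return a list of human-readable problems; empty when all checks pass."""
    if home is None:
        home = os.path.expanduser("~")
    problems = []
    if shutil.which("python3") is None:
        problems.append("python3 not on PATH")
    if not env.get("VECTOR_SERIAL"):
        problems.append("VECTOR_SERIAL is not set")
    config = os.path.join(home, ".anki_vector", "sdk_config.ini")
    if not os.path.isfile(config):
        problems.append(f"missing SDK config: {config}")
    return problems
```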
## Quick setup
1. Install package: `pip install vectorclaw-mcp`
2. Configure SDK: `python3 -m anki_vector.configure`
3. Export robot serial: `export VECTOR_SERIAL=your-serial`
4. Add MCP server:
```json
{
"mcpServers": {
"vectorclaw": {
"command": "python3",
"args": ["-m", "vectorclaw_mcp.server"],
"env": { "VECTOR_SERIAL": "${VECTOR_SERIAL}" }
}
}
}
```
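As a sanity check, the server entry above can be parsed and validated before merging it into an existing MCP config; the expected keys mirror the JSON snippet, while the validator function itself is illustrative.

```python
import json

SNIPPET = """
{
  "mcpServers": {
    "vectorclaw": {
      "command": "python3",
      "args": ["-m", "vectorclaw_mcp.server"],
      "env": { "VECTOR_SERIAL": "${VECTOR_SERIAL}" }
    }
  }
}
"""

def validate_server_entry(raw):
    """Parse the config and confirm the vectorclaw entry has the expected shape."""
    cfg = json.loads(raw)
    server = cfg["mcpServers"]["vectorclaw"]
    assert server["command"] == "python3"
    assert server["args"] == ["-m", "vectorclaw_mcp.server"]
    assert "VECTOR_SERIAL" in server["env"]
    return server
```

Note that `${VECTOR_SERIAL}` is substituted by the MCP host at launch time, so the literal placeholder is what should appear in the file.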
## Tool coverage
**Hardware-verified core tools**
`vector_say`, `vector_drive_off_charger`, `vector_drive`, `vector_emergency_stop`, `vector_head`, `vector_lift`, `vector_look`, `vector_capture_image`, `vector_face`, `vector_scan`, `vector_vision_reset`, `vector_pose`, `vector_status`, `vector_charger_status`, `vector_touch_status`, `vector_proximity_status`
**Experimental tools**
`vector_animate`, `vector_drive_on_charger`, `vector_find_faces`, `vector_list_visible_faces`, `vector_face_detection`, `vector_list_visible_objects`, `vector_cube`
## Current limitations
- Charger return (`vector_drive_on_charger`) is currently unreliable
- Face/object detection is currently inconsistent
- Visual interpretation requires the vision capability described above
## Documentation
- MCP API: `docs/MCP_API_REFERENCE.md`
- SDK Reference: `docs/VECTOR_SDK_REFERENCE.md`
- Hardware log: `docs/HARDWARE_SMOKE_LOG.md`
- Repo: https://github.com/danmartinez78/VectorClaw