Decompose Mcp (OpenClaw Skill Library, on-site mirror)

Decompose any text into classified semantic units — authority, risk, attention, entities. No LLM. Deterministic.

Development & DevOps

License: MIT-0 · free to use, modify, and redistribute; no attribution required.

Version: v0.1.2

Stats: ⭐ 0 · 566 · 0 current installs · 0 all-time installs

Package: decompose-mcp

Security scan (ClawHub)

  • VirusTotal: benign
  • OpenClaw: suspicious

OpenClaw Assessment

The skill's declared purpose and requirements mostly line up (local text decomposition + optional URL fetch), but several runtime claims (SSRF protections, fully local behavior) rely on external, install-time code (a PyPI package) that this manifest can't verify — so proceed with caution and audit the package before trusting network-enabled use.

Purpose

Name/description, declared binaries (python3), and the provided tools (decompose_text, decompose_url) are coherent: a Python-based decomposition tool reasonably needs Python and may need network for URL fetching. No unrelated credentials or unusual binaries are requested.

Documentation Scope

SKILL.md stays within the stated purpose: it shows how to install the package, configure an MCP server, and use the text- and URL-based tools. However, it instructs the user to run a local MCP server (python -m decompose --serve), which starts a process on the host and may accept local network connections. The doc claims SSRF protection and wholly local text processing for decompose_text; those are implementation details that the manifest references but cannot validate on its own.

Install Mechanism

Install uses pip (uv) to fetch a PyPI package (decompose-mcp). Pulling from PyPI is common but executes third-party code on the host; this is a moderate-risk install mechanism because the actual package contents are not included here. The SKILL.md links to a GitHub repo (good for audit), but the skill bundle itself provides no code to verify the claims.

Credentials

No environment variables or credentials are requested. The only declared permission is 'network', which matches the documented decompose_url functionality and is appropriately scoped in the manifest. No other secrets or unrelated service tokens are required.

Persistence

always:false (normal). The skill suggests adding an MCP server entry to your OpenClaw config (a user action), but does not request forced inclusion or modification of other skills. Running a local service is normal for MCP patterns but increases runtime footprint.

Install (copy to 龙虾 AI)

Copy the entire block below into a 龙虾中文库 conversation, and 龙虾 will complete the installation according to SKILL.md.

Please hand this block to 龙虾中文库 (龙虾 AI) for execution: install the OpenClaw skill "Decompose Mcp" on this machine. Summary: Decompose any text into classified semantic units — authority, risk, attention, …
Please fetch the following URL, read SKILL.md, and complete the installation as documented: https://raw.githubusercontent.com/openclaw/skills/refs/heads/main/skills/echology-io/decompose-mcp/SKILL.md
(Source: yingzhi8.cn skill library)

SKILL.md

Open the original SKILL.md (GitHub raw)

---
name: decompose-mcp
description: Decompose any text into classified semantic units — authority, risk, attention, entities. No LLM. Deterministic.
homepage: https://echology.io/decompose
metadata: {"clawdbot":{"emoji":"🧩","requires":{"anyBins":["python3","python"]},"install":[{"id":"pip","kind":"uv","pkg":"decompose-mcp","bins":["decompose"],"label":"Install decompose-mcp (pip/uv)"}]}}
---

# Decompose

Decompose any text or URL into classified semantic units. Each unit gets authority level, risk category, attention score, entity extraction, and irreducibility flags. No LLM required. Deterministic. Runs locally.

## Setup

### 1. Install

```bash
pip install decompose-mcp
```

### 2. Configure MCP Server

Add to your OpenClaw MCP config:

```json
{
  "mcpServers": {
    "decompose": {
      "command": "python3",
      "args": ["-m", "decompose", "--serve"]
    }
  }
}
```

### 3. Verify

```bash
python3 -m decompose --text "The contractor shall provide all materials per ASTM C150-20."
```
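
If the CLI prints the documented JSON shape (a top-level `units` array, as described under Available Tools below; this is an assumption, not verified against the package), you can post-process the output directly. A minimal sketch:

```python
# Minimal sketch: run the verify command above and filter its output.
# Assumes the CLI emits the documented JSON shape {"units": [...]} on
# stdout; the flag and field names come from this SKILL.md, not from
# inspecting the package itself.
import json
import subprocess

result = subprocess.run(
    ["python3", "-m", "decompose", "--text",
     "The contractor shall provide all materials per ASTM C150-20."],
    capture_output=True, text=True, check=True,
)
doc = json.loads(result.stdout)

# Keep only units classified as mandatory (RFC 2119 "shall").
for unit in doc["units"]:
    if unit.get("authority") == "mandatory":
        print(unit.get("attention"), unit.get("entities"))
```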

## Available Tools

### `decompose_text`

Decompose any text into classified semantic units.

**Parameters:**
- `text` (required) — The text to decompose
- `compact` (optional, default: false) — Omit zero-value fields for smaller output
- `chunk_size` (optional, default: 2000) — Max characters per unit

**Example prompt:** "Decompose this spec and tell me which sections are mandatory"

**Returns:** JSON with `units` array. Each unit contains:
- `authority` — mandatory, prohibitive, directive, permissive, conditional, informational
- `risk` — safety_critical, security, compliance, financial, contractual, advisory, informational
- `attention` — 0.0 to 10.0 priority score
- `actionable` — whether someone needs to act on this
- `irreducible` — whether content must be preserved verbatim
- `entities` — referenced standards and codes (ASTM, ASCE, IBC, OSHA, etc.)
- `dates` — extracted date references
- `financial` — extracted dollar amounts and percentages
- `heading_path` — document structure hierarchy
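
For illustration only, a unit for the "contractor shall provide ... ASTM C150-20" sentence from the Verify step might look like the following; every field value here is hypothetical, not captured tool output:

```json
{
  "authority": "mandatory",
  "risk": "contractual",
  "attention": 7.5,
  "actionable": true,
  "irreducible": true,
  "entities": ["ASTM C150-20"],
  "dates": [],
  "financial": [],
  "heading_path": ["Division 03", "Concrete"]
}
```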

### `decompose_url`

Fetch a URL and decompose its content. Handles HTML, Markdown, and plain text.

**Parameters:**
- `url` (required) — URL to fetch and decompose
- `compact` (optional, default: false) — Omit zero-value fields

**Example prompt:** "Decompose https://spec.example.com/transport and show me the security requirements"
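
For reference, an MCP client invokes this tool via a standard `tools/call` request. The shape below follows the MCP specification and reuses the example URL above, but has not been verified against this particular server:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "decompose_url",
    "arguments": {
      "url": "https://spec.example.com/transport",
      "compact": true
    }
  }
}
```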

## What It Detects

- **Authority levels** — RFC 2119 keywords: "shall" = mandatory, "should" = directive, "may" = permissive
- **Risk categories** — safety-critical, security, compliance, financial, contractual
- **Attention scoring** — authority weight x risk multiplier, 0-10 scale
- **Standards references** — ASTM, ASCE, IBC, OSHA, ACI, AISC, AWS, ISO, EN
- **Financial values** — dollar amounts, percentages, retainage, liquidated damages
- **Dates** — deadlines, milestones, notice periods
- **Irreducibility** — legal mandates, threshold values, formulas that cannot be paraphrased

## Use Cases

- Pre-process documents before sending to your LLM — save 60-80% of context window
- Classify specs, contracts, policies, regulations by obligation level
- Extract standards references and compliance requirements
- Route high-attention content to specialized analysis chains
- Build structured training data from raw documents

## Performance

- ~14ms average per document on Apple Silicon
- 1,000+ chars/ms throughput
- Zero API calls, zero cost, works offline
- Deterministic — same input always produces same output

## Security & Trust

**Text classification is fully local.** The `decompose_text` tool performs all processing in-process with no network I/O. No data leaves your machine.

**URL fetching performs outbound HTTP requests.** The `decompose_url` tool fetches the target URL, which necessarily involves network I/O to the specified host. This is why the skill declares the `network` permission in `claw.json`. If you do not need URL fetching, you can use `decompose_text` exclusively with no network access required.

**SSRF protection.** URL fetching blocks private/internal IP ranges before connecting: `0.0.0.0/8`, `10.0.0.0/8`, `100.64.0.0/10`, `127.0.0.0/8`, `169.254.0.0/16`, `172.16.0.0/12`, `192.168.0.0/16`, `::1/128`, `fc00::/7`, `fe80::/10`. The implementation resolves the hostname via DNS *before* connecting and checks all returned addresses against the blocklist. See [`src/decompose/mcp_server.py` lines 19-49](https://github.com/echology-io/decompose/blob/main/src/decompose/mcp_server.py#L19-L49).
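
The referenced implementation is not bundled with this skill, so as a point of comparison, here is a minimal standard-library sketch of the resolve-then-check pattern that paragraph describes. It is not the package's code; audit the linked `mcp_server.py` for the real guard:

```python
# Sketch of DNS-resolve-then-check SSRF guarding, as described above.
# NOT the package's actual implementation.
import ipaddress
import socket
from urllib.parse import urlparse

BLOCKED = [ipaddress.ip_network(n) for n in (
    "0.0.0.0/8", "10.0.0.0/8", "100.64.0.0/10", "127.0.0.0/8",
    "169.254.0.0/16", "172.16.0.0/12", "192.168.0.0/16",
    "::1/128", "fc00::/7", "fe80::/10",
)]

def assert_url_allowed(url: str) -> None:
    """Resolve the hostname first, then reject the URL if ANY
    address DNS returns falls inside a blocked range."""
    host = urlparse(url).hostname
    if host is None:
        raise ValueError("URL has no hostname")
    for info in socket.getaddrinfo(host, None):
        addr = ipaddress.ip_address(info[4][0].split("%")[0])
        if any(addr in net for net in BLOCKED):
            raise ValueError(f"blocked address {addr} for host {host!r}")
```

A guard like this still leaves a time-of-check window: unless the fetch pins the vetted address when it actually connects, a DNS-rebinding host can pass the check and resolve differently afterward, which is one more reason to audit the real code before trusting network-enabled use.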

**No API keys or credentials required.** No external services are contacted except when using `decompose_url` to fetch user-specified URLs.

**Source code is fully auditable.** The complete source is published at [github.com/echology-io/decompose](https://github.com/echology-io/decompose). The PyPI package is built from this repo via GitHub Actions ([`publish.yml`](https://github.com/echology-io/decompose/blob/main/.github/workflows/publish.yml)) using PyPI Trusted Publishers (OIDC), so the published artifact is traceable to a specific commit.

## Resources

- [Source Code (GitHub)](https://github.com/echology-io/decompose) — full source, auditable
- [PyPI](https://pypi.org/project/decompose-mcp/) — published via Trusted Publishers
- [Documentation](https://echology.io/decompose)
- [Blog: When Regex Beats an LLM](https://echology.io/blog/regex-beats-llm)
- [Blog: Why Your Agent Needs a Cognitive Primitive](https://echology.io/blog/cognitive-primitive)