OpenClaw

Skill details (site mirror, no comments)


Use when evaluating, testing, and optimizing an agent architecture or multi-agent system. Best for reviewing planning, routing, memory, tool use, reliability...

Development & DevOps

License: MIT-0

MIT-0 · Free to use, modify, and redistribute. No attribution required.

Version: v1.0.0

Stats: ⭐ 0 · 96 · 0 current installs · 0 all-time installs

Package: ada01325150-alt/agent-architecture-evaluator

Security scan (ClawHub)

  • VirusTotal: benign
  • OpenClaw: benign

OpenClaw Evaluation

The skill's files, instructions, and minimal runtime artifacts are consistent with an architecture-review helper and do not request unrelated credentials, installs, or persistent privileges.

Purpose

Name and description match the included assets (templates, references, example input) and a small helper script. There are no unrelated env vars, binaries, or config paths requested.

Instruction scope

SKILL.md stays focused on mapping architectures, failure modes, tests, and measurements. It does not instruct reading arbitrary system secrets, contacting external endpoints, or performing actions outside the stated scope.

Install mechanism

No install spec is provided (instruction-only). The only executable is a small local Python renderer; there are no downloads, external package installs, or extracted archives.

Credentials

The skill requires no environment variables, credentials, or config paths. Nothing requests broad secrets or unrelated service tokens.

Persistence

`always: false` and no persistent install behavior. `agents/openai.yaml` contains `allow_implicit_invocation: false`, which further limits implicit invocation on that interface. The skill does not modify other skills or system-wide settings.

Overall conclusion

This skill appears coherent and low-risk: it ships templates, documentation, and a small Python script that renders a JSON architecture review to Markdown. Before using, review how you supply input to the script: it reads a file path you provide, so avoid pointing it at local files that contain credentials or other sensitive data. If you intend to allow autonomous invocation or run the script in an automated environment, run it in a sandbox or…

Install (copy to the 龙虾 AI)

Copy the entire block below into a 龙虾中文库 conversation, and 龙虾 will complete the installation per SKILL.md.

Please hand this block to 龙虾中文库 (龙虾 AI) to execute: install the OpenClaw skill "agent-architecture-evaluator" on this machine. Summary: Use when evaluating, testing, and optimizing an agent architecture or multi-age….
Please fetch the following URL, read SKILL.md, and complete the installation as documented: https://raw.githubusercontent.com/openclaw/skills/refs/heads/main/skills/ada01325150-alt/agent-architecture-evaluator/SKILL.md
(Source: yingzhi8.cn skill library)

SKILL.md

Open the original SKILL.md (GitHub raw)

---
name: agent-architecture-evaluator
description: Use when evaluating, testing, and optimizing an agent architecture or multi-agent system. Best for reviewing planning, routing, memory, tool use, reliability, observability, cost, and system-level failure modes.
version: "1.0.0"
---

# Agent Architecture Evaluator

Version: `1.0.0`

## Overview

This skill reviews the architecture of an agent system, not just its prompts or its attached skills.

Use it for architectures involving components such as:

- planner / executor splits
- routers and specialists
- tool-use layers
- memory systems
- human approval gates
- multi-agent coordination

## Use this skill when

- A user wants to assess an existing agent architecture.
- Reliability, latency, cost, or coordination problems appear to be architectural.
- A team needs a structured architecture review and optimization roadmap.
- You need system-level test scenarios rather than single-skill evals.

## Do not use this skill when

- The problem is one isolated skill.
- The task is to create a new skill from scratch.
- The main need is portfolio review across many related skills.

Use `agent-test-measure-refine` or `agent-skill-portfolio-evaluator` in those cases.

## Output contract

Always produce these named outputs:

- `architecture_inventory`
- `failure_mode_map`
- `architecture_test_plan`
- `optimization_roadmap`
- `measurement_plan`
- `architecture_recommendation`
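
The contract can be checked mechanically. Below is a minimal Python sketch of a review object carrying the six named outputs; only the top-level keys come from the contract above, and every nested field shape is an illustrative assumption:

```python
import json

# Skeleton review object. Only the six top-level keys are specified by the
# output contract; the nested structures are illustrative assumptions.
review = {
    "architecture_inventory": [
        {"component": "planner", "responsibility": "decompose goals into tasks"},
        {"component": "router", "responsibility": "dispatch tasks to specialists"},
    ],
    "failure_mode_map": [
        {"mode": "router sends work to the wrong specialist", "severity": "high"},
    ],
    "architecture_test_plan": [
        {"scenario": "tool unavailability", "expected": "graceful fallback"},
    ],
    "optimization_roadmap": [
        {"change": "tighten the router interface contract", "leverage": "high"},
    ],
    "measurement_plan": [
        {"metric": "task success rate", "target": ">= 0.95"},
    ],
    "architecture_recommendation": "Clarify planner/router ownership first.",
}

REQUIRED = {
    "architecture_inventory", "failure_mode_map", "architecture_test_plan",
    "optimization_roadmap", "measurement_plan", "architecture_recommendation",
}
assert REQUIRED <= review.keys()  # every contract output is present
print(json.dumps(review, indent=2))
```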

## Review dimensions

Evaluate at least these dimensions:

1. `component clarity`
2. `routing correctness`
3. `memory usefulness`
4. `coordination reliability`
5. `cost and latency efficiency`
6. `observability and debuggability`

## Quick start

1. Map the current architecture.
2. Identify critical paths and failure-prone handoffs.
3. Define architecture-level test scenarios.
4. Identify bottlenecks in routing, memory, tools, or coordination.
5. Recommend the smallest structural changes with the highest leverage.

## Workflow

### 1. Build the architecture inventory

Capture:

- components
- responsibilities
- inputs and outputs
- state or memory boundaries
- human approval points
- observability signals

### 2. Map failure modes

Look for:

- planner produces unusable tasks
- router sends work to the wrong specialist
- memory pollutes current decisions
- tool calls are slow, redundant, or poorly validated
- multi-agent handoffs lose context
- approval gates appear too late
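
A `failure_mode_map` becomes actionable when each mode is paired with a signal that could detect it. A hedged sketch, where the detection signals are hypothetical suggestions rather than metrics the skill itself defines:

```python
# Hypothetical failure-mode -> detection-signal mapping; the signal names
# are illustrative, not part of the skill's specification.
FAILURE_SIGNALS = {
    "planner produces unusable tasks": "executor rejection rate per plan",
    "router sends work to the wrong specialist": "re-route / bounce-back count",
    "memory pollutes current decisions": "delta between memory-on and memory-off runs",
    "slow, redundant, or unvalidated tool calls": "duplicate-call rate and p95 tool latency",
    "multi-agent handoffs lose context": "required-field coverage at each handoff",
    "approval gates appear too late": "irreversible actions taken before first approval",
}

for mode, signal in FAILURE_SIGNALS.items():
    print(f"{mode} -> monitor: {signal}")
```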

### 3. Design system tests

Cover:

- happy path
- degraded upstream input
- partial component failure
- tool unavailability
- stale or noisy memory
- high-latency coordination
- rollback or recovery behavior

See `references/architecture-review-framework-v1.0.0.md`.
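
One way to turn the coverage list above into concrete scenarios is to cross each condition with the system's critical paths. A sketch, where the path names are hypothetical examples of what an inventory might surface:

```python
from itertools import product

# Conditions mirror the coverage list above; the critical paths are
# hypothetical examples, not something the skill prescribes.
CONDITIONS = [
    "happy path", "degraded upstream input", "partial component failure",
    "tool unavailability", "stale or noisy memory", "high-latency coordination",
]
CRITICAL_PATHS = ["plan -> route -> execute", "execute -> memory -> plan"]

scenarios = [
    {"path": path, "condition": cond, "expect": "recover, degrade, or fail loudly"}
    for path, cond in product(CRITICAL_PATHS, CONDITIONS)
]
print(len(scenarios))  # 2 paths x 6 conditions = 12 scenarios
```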

### 4. Prioritize architectural changes

Prefer:

- clarifying responsibilities before adding components
- removing weak indirection
- tightening interface contracts
- adding observability before adding complexity
- isolating state when cross-contamination is likely

### 5. Define measurement

Recommend concrete metrics where available:

- task success rate
- retry rate
- fallback rate
- cost per successful task
- latency by stage
- human intervention rate
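
Most of these metrics fall out of per-task run records. A sketch assuming a record schema with `success`, `retries`, `fallback`, `cost`, and `human` fields (the schema is an assumption for illustration):

```python
# Illustrative run records; the field names are an assumed schema.
runs = [
    {"success": True,  "retries": 0, "fallback": False, "cost": 0.04, "human": False},
    {"success": True,  "retries": 2, "fallback": True,  "cost": 0.09, "human": False},
    {"success": False, "retries": 1, "fallback": True,  "cost": 0.05, "human": True},
]

n = len(runs)
successes = [r for r in runs if r["success"]]
metrics = {
    "task_success_rate": len(successes) / n,
    "retry_rate": sum(r["retries"] > 0 for r in runs) / n,
    "fallback_rate": sum(r["fallback"] for r in runs) / n,
    "cost_per_successful_task": sum(r["cost"] for r in runs) / len(successes),
    "human_intervention_rate": sum(r["human"] for r in runs) / n,
}
for name, value in metrics.items():
    print(f"{name}: {value:.3f}")
```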

## Anti-patterns

- adding new components to hide unclear ownership
- keeping weak memory because it sounds sophisticated
- optimizing one stage without measuring system impact
- blaming prompts for structural routing failures

## Resources

- `references/architecture-review-framework-v1.0.0.md` for system review steps.
- `references/optimization-patterns-v1.0.0.md` for architecture optimization guidance.
- `assets/architecture-review-template.md` for the final report structure.
- `assets/example-architecture-review.md` for a realistic filled review.
- `assets/architecture-input-example.json` for structured input.
- `scripts/render_architecture_review.py` to normalize a structured architecture review into Markdown.
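
The shipped script is not reproduced here, but a minimal renderer in its spirit could look like the following; the input schema (top-level keys mapping to lists or strings) is an assumption, not the script's actual contract:

```python
import json

# Minimal sketch in the spirit of scripts/render_architecture_review.py:
# turn a structured review dict into Markdown sections. The assumed schema
# (top-level keys -> lists or strings) is illustrative.
def render(review: dict) -> str:
    lines = ["# Architecture Review", ""]
    for key, value in review.items():
        lines.append(f"## {key.replace('_', ' ').title()}")
        if isinstance(value, list):
            lines.extend(f"- {json.dumps(item)}" for item in value)
        else:
            lines.append(str(value))
        lines.append("")
    return "\n".join(lines)

sample = {
    "architecture_inventory": [{"component": "planner"}],
    "architecture_recommendation": "Tighten the router contract first.",
}
print(render(sample))
```

Per the evaluation's own caution, pass the real script only file paths you control and have inspected, since it renders whatever the JSON contains.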