OpenClaw

Skill details (site mirror, no comments)

Mixture of Agents: Make 3 frontier models argue, then synthesize their best insights into one superior answer. ~$0.03/query.

Category: AI & Large Models

Author: John Scianna @jscianna

License: MIT-0

MIT-0 · Free to use, modify, and redistribute. No attribution required.

Version: v1.2.0

Stats: ⭐ 0 · 455 views · 1 current install · 1 all-time install

🛡 VirusTotal: benign · OpenClaw: benign

Package: moa

OpenClaw Evaluation

The skill does what it says: it queries OpenRouter models in parallel and synthesizes their outputs; aside from a small metadata inconsistency and a demo script that runs a hard-coded example, nothing in the package is disproportionate or unexpected.

Purpose

The skill's stated purpose (mixing multiple LLMs and synthesizing their outputs) matches the code and instructions. However, registry metadata at the top of the provided package summary lists no required environment variables, while both SKILL.md and manifest.json require OPENROUTER_API_KEY — an inconsistency that should be corrected but is not evidence of malicious behavior.

Instruction Scope

SKILL.md and the code instruct the agent to call OpenRouter's chat completions API with the user's prompt, aggregate model responses, and synthesize them. The instructions do not ask for unrelated files or credentials. One minor operational note: scripts/moa-paid.js contains a hard-coded example prompt and calls runMoA(prompt) immediately (it runs when executed), which is a benign demo but may be surprising if someone runs that file expecting …

Installation Mechanism

There is no install spec (instruction-only skill with Node.js files). No remote downloads or archive extraction are used. The package relies on axios (a normal dependency) and runtime Node >=18 as declared in manifest.json.

Credentials

The only required secret is OPENROUTER_API_KEY, which is appropriate for a skill that calls OpenRouter. The earlier top-level summary incorrectly claimed 'no required env vars', which conflicts with the manifest/SKILL.md — the environment requirement itself is proportionate, but the metadata mismatch should be fixed.

Persistence

The skill does not request always: true and does not modify other skills or global agent configuration. It behaves as a normal user-invocable skill that makes outbound API calls; autonomous invocation is allowed by default but not unusually privileged here.

Overall Conclusion

This skill is coherent and appears to be what it claims: it needs an OpenRouter API key and will send your prompts (and the model responses) to openrouter.ai. Before installing, (1) confirm you are comfortable sending query text and any sensitive context to OpenRouter, (2) set OPENROUTER_API_KEY only with a key scoped/rotatable for this use, (3) inspect scripts/moa-paid.js (it contains a hard-coded demo prompt and runs immediately if executed)…

Installation (copy to 龙虾 AI)

Copy the entire block below into a 龙虾中文库 conversation; 龙虾 will complete the installation per SKILL.md.

Please hand this block to 龙虾中文库 (龙虾 AI) to execute: install the OpenClaw skill "Mixture of Agents" on this machine. Summary: Mixture of Agents: Make 3 frontier models argue, then synthesize their best ins…
Please fetch the following URL, read SKILL.md, and complete the installation as documented: https://raw.githubusercontent.com/openclaw/skills/refs/heads/main/skills/jscianna/moa/SKILL.md
(Source: yingzhi8.cn skill library)

SKILL.md

Open the original SKILL.md (GitHub raw)

---
name: moa
description: "Mixture of Agents: Make 3 frontier models argue, then synthesize their best insights into one superior answer. ~$0.03/query."
author: John Scianna (@Scianna)
version: 1.2.0
requires:
  - OPENROUTER_API_KEY
cost: ~$0.03 per query (paid tier)
---

# Mixture of Agents (MoA)

**TL;DR:** Make 3 AI models argue with each other. Get an answer better than any single model. Cost: ~$0.03.

## Two Usage Modes

### A. Standalone CLI (Node.js)
```bash
export OPENROUTER_API_KEY="your-key"
node scripts/moa.js "Your complex question"
```

### B. OpenClaw Skill (Agent-orchestrated)
```bash
# Install
clawhub install moa

# Or copy to ~/clawd/skills/moa/
```

The agent can then invoke MoA for complex analysis tasks.

---

## Origin Story

The concept of "Mixture of Agents" comes from research showing LLMs can improve each other's outputs through collaboration. I built this for VC deal analysis—when evaluating startups, you want multiple perspectives, not one model's opinion.

**The journey:**
1. Started with 5 free OpenRouter models (Llama, Gemini, Mistral, Qwen, Nemotron)
2. Rate limits killed me at 2am during peak hours
3. Switched to 3 paid frontier specialists
4. Result: ~$0.03/query, answers better than any single model

---

## When to Use

- **Complex analysis** — due diligence, market research, technical evaluation
- **Brainstorming** — get diverse ideas, synthesize the best
- **Fact-checking** — cross-reference across models with different training data
- **High-stakes decisions** — when one model's blind spots could hurt you
- **Contrarian thinking** — different models have different biases

**When NOT to use:**
- Quick Q&A (too slow, 30-90s latency)
- Real-time chat (not designed for streaming)
- Simple lookups (overkill)

---

## Model Configuration

### Paid Tier (Default) — Recommended

| Role | Model | ~Latency | Strength |
|------|-------|----------|----------|
| Proposer 1 | `moonshotai/kimi-k2.5` | 23s | Long context, strong reasoning |
| Proposer 2 | `z-ai/glm-5` | 36s | Technical depth, different training corpus |
| Proposer 3 | `minimax/minimax-m2.5` | 64s | Nuance catching, thorough analysis |
| Aggregator | `moonshotai/kimi-k2.5` | 15s | Fast synthesis |

**Why these models?**
- Frontier-class but less congested than GPT-4/Claude
- Different training data = genuinely different perspectives
- Chinese models excel at certain reasoning tasks
- Combined cost still cheaper than single Opus call

**Cost breakdown:**
```
3 proposers × ~$0.008 = $0.024
1 aggregator × ~$0.005 = $0.005
─────────────────────────────
Total: ~$0.029/query
```

### Free Tier (Fallback)
5 models: Llama 3.3 70B, Gemini 2.0 Flash, Mistral Small, Nemotron 70B, Qwen 2.5 72B

⚠️ **Warning:** Free tier hits rate limits during peak hours. Use `--free` flag only for testing.

---

## How It Works

```
        ┌─────────────┐
        │   PROMPT    │
        └──────┬──────┘
               │
    ┌──────────┼──────────┐
    ▼          ▼          ▼
┌────────┐ ┌────────┐ ┌────────┐
│Kimi 2.5│ │ GLM 5  │ │MiniMax │  ← Parallel (they "argue")
│(reason)│ │(depth) │ │(nuance)│
└───┬────┘ └───┬────┘ └───┬────┘
    │          │          │
    └──────────┼──────────┘
               ▼
       ┌──────────────┐
       │  AGGREGATOR  │
       │  (Kimi 2.5)  │
       │              │
       │ • Best of 3  │
       │ • Resolve    │
       │   conflicts  │
       │ • Synthesize │
       └──────┬───────┘
              ▼
       ┌──────────────┐
       │ FINAL ANSWER │
       │ (Synthesized)│
       └──────────────┘
```
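The fan-out/fan-in flow above can be sketched in a few lines of Node (>= 18, for global `fetch`). This is an illustrative sketch, not the skill's actual `scripts/moa.js`: the model IDs match the table, but the function names and the aggregator prompt wording are assumptions.

```javascript
// Sketch of the MoA fan-out/fan-in flow. Assumes Node >= 18 (global fetch)
// and OPENROUTER_API_KEY in the environment. Names are illustrative.
const PROPOSERS = ['moonshotai/kimi-k2.5', 'z-ai/glm-5', 'minimax/minimax-m2.5'];

// Pure helper: fold the proposers' answers into a single aggregation prompt.
function buildAggregatorPrompt(question, answers) {
  const numbered = answers
    .map((a, i) => `### Answer ${i + 1}\n${a}`)
    .join('\n\n');
  return (
    `Question: ${question}\n\n` +
    `Synthesize the single best answer from these ${answers.length} candidate ` +
    `answers, resolving any conflicts between them:\n\n${numbered}`
  );
}

// One chat-completions call to OpenRouter for a given model.
async function ask(model, prompt) {
  const res = await fetch('https://openrouter.ai/api/v1/chat/completions', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ model, messages: [{ role: 'user', content: prompt }] }),
  });
  if (!res.ok) throw new Error(`${model}: HTTP ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content;
}

async function moa(question) {
  // Fan out: query all proposers in parallel, tolerating individual failures.
  const settled = await Promise.allSettled(PROPOSERS.map((m) => ask(m, question)));
  const answers = settled
    .filter((s) => s.status === 'fulfilled')
    .map((s) => s.value);
  if (answers.length === 0) throw new Error('All proposers failed');
  // Fan in: one aggregator call synthesizes the final answer.
  return ask(PROPOSERS[0], buildAggregatorPrompt(question, answers));
}
```

`Promise.allSettled` (rather than `Promise.all`) is what lets a single slow or rate-limited proposer degrade the result instead of failing the whole query.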

---

## API Reference

### Function Signature

```typescript
interface MoAOptions {
  prompt: string;           // Required: The question to analyze
  tier?: 'paid' | 'free';   // Default: 'paid'
}

interface MoAResult {
  synthesis: string;        // The final aggregated answer; handle() resolves to this string directly
}

// Throws on complete failure (all models down, invalid key)
// Returns partial synthesis if 1-2 models fail
async function handle(options: MoAOptions): Promise<string>
```

### CLI Usage

```bash
# Paid tier (default)
node scripts/moa.js "Your complex question"

# Free tier
node scripts/moa.js "Your question" --free
```

### Programmatic Usage

```javascript
const { handle } = require('./scripts/moa.js');

const synthesis = await handle({ 
  prompt: "Analyze the competitive moats in AI code generation",
  tier: 'paid'
});

console.log(synthesis);
```

---

## Failure Modes

| Scenario | Behavior |
|----------|----------|
| **1 proposer fails** | Synthesis from remaining 2 models |
| **2 proposers fail** | Synthesis from 1 model (degraded) |
| **All proposers fail** | Returns error message |
| **Invalid API key** | Immediate error with setup instructions |
| **Rate limit (free tier)** | Returns rate limit error |

The system is designed to degrade gracefully. A 2/3 response is still valuable.
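The degradation rules in the table reduce to a small pure helper over `Promise.allSettled` results (a sketch only; the function name and return shape are illustrative, not the skill's API):

```javascript
// Classify a batch of proposer outcomes, as produced by Promise.allSettled.
// Mirrors the failure-mode table: 0 answers = error, 1-2 = degraded, 3 = full.
function classifyProposerResults(settled) {
  const answers = settled
    .filter((s) => s.status === 'fulfilled')
    .map((s) => s.value);
  if (answers.length === 0) {
    return { ok: false, answers: [], note: 'All proposers failed' };
  }
  return {
    ok: true,
    answers,
    degraded: answers.length < settled.length, // true when 1-2 proposers failed
  };
}
```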

---

## Example Use Cases

### VC Due Diligence
```bash
node scripts/moa.js "Analyze the competitive landscape for AI code generation. 
Who has defensible moats? Who's likely to be commoditized? Be specific."
```

### Technical Evaluation
```bash
node scripts/moa.js "Compare RLHF vs DPO vs RLAIF for LLM alignment. 
Which scales better? What are the failure modes of each?"
```

### Market Research
```bash
node scripts/moa.js "What are the emerging use cases for embodied AI in 2026? 
Focus on robotics, drones, and autonomous systems. Include specific companies."
```

---

## Performance Expectations

| Metric | Paid Tier | Free Tier |
|--------|-----------|-----------|
| **P50 Latency** | ~45s | ~60s |
| **P95 Latency** | ~90s | ~120s+ |
| **Success Rate** | >99% | ~80% (rate limits) |
| **Cost/Query** | ~$0.03 | $0.00 |

---

## Tips

1. **Be specific** — Vague prompts get vague synthesis
2. **Ask for structure** — "Give me pros/cons" or "List top 5" helps the aggregator
3. **Use for analysis, not chat** — MoA shines for complex reasoning
4. **Batch your queries** — 30-90s per query, so plan accordingly

---

## Installation

### Via ClawHub (Recommended)
```bash
clawhub install moa
```

### Manual
1. Copy `skills/moa/` to your `~/clawd/skills/` directory
2. Set `OPENROUTER_API_KEY` in your environment
3. The agent can now invoke MoA for complex queries

---

## Environment Variables

| Variable | Required | Description |
|----------|----------|-------------|
| `OPENROUTER_API_KEY` | Yes | Your OpenRouter API key |

Get your key at: https://openrouter.ai/keys

---

## Credits

- MoA concept: [Together AI Research](https://www.together.ai/blog/together-moa)
- Implementation: [@Scianna](https://x.com/Scianna)
- Built for: [OpenClaw](https://github.com/openclaw/openclaw)