OpenClaw


LLM Council Router

Route any prompt to the best-performing LLM using peer-reviewed council rankings from LLM Council

Development & DevOps

License: MIT-0

MIT-0 · Free to use, modify, and redistribute. No attribution required.

Version: v1.0.0

Stats: ⭐ 0 · 491 · 0 current installs · 0 all-time installs


🛡 VirusTotal: Suspicious · OpenClaw: Suspicious

Package: ashtiwariasu/llmcouncil-router

Security scan (ClawHub)

  • VirusTotal: Suspicious
  • OpenClaw: Suspicious

OpenClaw Evaluation

This skill mostly does what it claims (routing prompts to a recommended LLM), but its runtime instructions reference an additional API (OpenRouter) and an undeclared environment variable (OPENROUTER_API_KEY), and there are minor metadata mismatches you should verify before installing.

Purpose

The name and description match the instructions: SKILL.md documents an API that selects the top model for a query and demonstrates how to call that model. Requiring LLMCOUNCIL_API_KEY is appropriate for this.

Instruction scope

The instructions correctly describe calling the LLM Council routing endpoint and authenticating with X-API-Key and LLMCOUNCIL_API_KEY. However, the example also chains to OpenRouter and reads os.environ['OPENROUTER_API_KEY'] (and calls an OpenRouter endpoint) even though OPENROUTER_API_KEY is not declared in the skill metadata; the instructions therefore reference an additional credential and an external service not listed in requires.env.

Installation mechanism

This is an instruction-only skill with no install spec or code files: installation writes nothing to disk and fetches no third-party packages.

Credentials

The skill declares a single required environment variable (LLMCOUNCIL_API_KEY), which is proportionate. However, the provided usage example also requires OPENROUTER_API_KEY (and calls openrouter.ai) without declaring it; implicit credential requirements should be made explicit so you can assess their necessity and scope.
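If the OpenRouter chaining example is meant to stay, the cleanest fix would be declaring both variables in the skill's frontmatter so the requirement is explicit. A sketch of the amended metadata line, following the schema of the existing frontmatter (whether requires.env accepts multiple entries this way is an assumption):

```yaml
metadata: {"openclaw":{"requires":{"env":["LLMCOUNCIL_API_KEY","OPENROUTER_API_KEY"]},"emoji":"🧠"}}
```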

Persistence

Always false: the skill is user-invocable and does not request persistent agent permissions or modify the system or other skills' configuration.

Install (copy to Lobster AI)

Copy the entire block below into a Lobster Chinese Library conversation; Lobster will complete the installation per SKILL.md.

Please hand this block to the Lobster Chinese Library (Lobster AI) to execute: install the OpenClaw skill "LLM Council Router" on this machine. Summary: Route any prompt to the best-performing LLM using peer-reviewed council ranking…
Please fetch the following URL, read SKILL.md, and complete the installation per the document: https://raw.githubusercontent.com/openclaw/skills/refs/heads/main/skills/ashtiwariasu/llmcouncil-router/SKILL.md
(Source: yingzhi8.cn skill library)

SKILL.md

Open the original SKILL.md (GitHub raw)

---
name: llmcouncil-router
description: Route any prompt to the best-performing LLM using peer-reviewed council rankings from LLM Council
homepage: https://llmcouncil.ai
user-invocable: true
metadata: {"openclaw":{"requires":{"env":["LLMCOUNCIL_API_KEY"]},"emoji":"🧠"}}
---

# LLM Council Router

Route any prompt to the best-performing LLM. The API finds the top model for a given query based on thousands of peer-reviewed council deliberations — then you call that model directly.

## Base URL

```
https://clawbot.llmcouncil.ai
```

## Authentication

Use the `X-API-Key` header with your LLM Council API key.

```
X-API-Key: clwb_YOUR_KEY_HERE
```

Get a free key at [llmcouncil.ai/developers](https://llmcouncil.ai/developers).
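As a small Python sketch of building these headers from the environment (the helper name is ours, not part of the skill):

```python
import os

def council_headers():
    """Build request headers for the LLM Council API, failing fast if
    LLMCOUNCIL_API_KEY is not set. Helper name is illustrative only."""
    key = os.environ.get("LLMCOUNCIL_API_KEY")
    if not key:
        raise RuntimeError("LLMCOUNCIL_API_KEY is not set")
    return {"X-API-Key": key, "Content-Type": "application/json"}
```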

## Endpoint: POST /v1/route

Find the best-performing model for a query.

### Request

```json
{
  "query": "Explain quantum entanglement simply",
  "k": 20
}
```

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `query` | string | Yes | The prompt or question to route |
| `k` | integer | No | Number of past evaluations to consider (default: 20) |

### Response

```json
{
  "query": "Explain quantum entanglement simply",
  "nearest_councils": 20,
  "model": "anthropic/claude-sonnet-4",
  "relevance": 0.8234,
  "confidence": 0.65,
  "model_rankings": [
    { "rank": 1, "model": "anthropic/claude-sonnet-4", "nearby_wins": 13, "nearby_appearances": 20 },
    { "rank": 2, "model": "openai/gpt-4.1", "nearby_wins": 5, "nearby_appearances": 18 }
  ]
}
```

| Field | Type | Description |
|-------|------|-------------|
| `model` | string | Top recommended model ID (matches OpenRouter catalogue) |
| `relevance` | float (0–1) | How closely matched evaluations relate to your query. Above 0.75 is strong. |
| `confidence` | float (0–1) | How decisively the top model outperforms alternatives. Higher = clearer winner. |
| `nearest_councils` | integer | Number of relevant past evaluations used |
| `model_rankings` | array | All models ranked by performance across matched evaluations |
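A sketch of consuming these fields in Python; the sample payload mirrors the example response above, and the 0.75 relevance threshold comes from the field table (the `summarize` helper is ours, not part of the API):

```python
# Sample /v1/route response, trimmed to the fields used below.
sample = {
    "model": "anthropic/claude-sonnet-4",
    "relevance": 0.8234,
    "confidence": 0.65,
    "model_rankings": [
        {"rank": 1, "model": "anthropic/claude-sonnet-4",
         "nearby_wins": 13, "nearby_appearances": 20},
        {"rank": 2, "model": "openai/gpt-4.1",
         "nearby_wins": 5, "nearby_appearances": 18},
    ],
}

def summarize(route):
    # Win rate of each candidate across the matched evaluations.
    rates = {
        r["model"]: r["nearby_wins"] / r["nearby_appearances"]
        for r in route["model_rankings"]
    }
    strong = route["relevance"] > 0.75  # "above 0.75 is strong"
    return route["model"], strong, rates

top, strong, rates = summarize(sample)
```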

## How to use this skill

When the user asks you to find the best model for a task, or when you need to decide which LLM to use:

1. **Call the routing API** with the user's query:

```bash
curl -X POST https://clawbot.llmcouncil.ai/v1/route \
  -H "Content-Type: application/json" \
  -H "X-API-Key: $LLMCOUNCIL_API_KEY" \
  -d '{"query": "USER_QUERY_HERE"}'
```

2. **Read the response** — the `model` field is the best-performing model for that query type.

3. **Chain with OpenRouter** — model IDs match the OpenRouter catalogue directly, no mapping needed:

```python
import requests, os

# Step 1: Get the best model from LLM Council
route = requests.post(
    "https://clawbot.llmcouncil.ai/v1/route",
    headers={"X-API-Key": os.environ["LLMCOUNCIL_API_KEY"]},
    json={"query": "Write a Python web scraper"},
).json()

best_model = route["model"]       # e.g. "anthropic/claude-sonnet-4"
confidence = route["confidence"]   # e.g. 0.85

# Step 2: Call that model via OpenRouter
answer = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": best_model,
        "messages": [{"role": "user", "content": "Write a Python web scraper"}],
    },
).json()

print(answer["choices"][0]["message"]["content"])
```
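The skill does not say what to do when the match is weak. One defensive pattern (our assumption, not part of the API) is to honor the recommendation only above chosen relevance and confidence thresholds and otherwise fall back to a fixed default:

```python
DEFAULT_MODEL = "anthropic/claude-sonnet-4"  # hypothetical fallback choice

def pick_model(route, min_relevance=0.75, min_confidence=0.5):
    """Use the router's recommendation only when the match is strong;
    otherwise fall back to a caller-chosen default model."""
    if (route.get("relevance", 0) >= min_relevance
            and route.get("confidence", 0) >= min_confidence):
        return route["model"]
    return DEFAULT_MODEL
```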

## Rate Limits

| Tier | Daily Limit | Attribution |
|------|-------------|-------------|
| Free | 100 requests/day | Required |
| Pro | 10,000 requests/day | None |

## When to use this

- User asks "which model is best for X?"
- You need to pick the optimal model for a specific task type
- You want data-driven model selection instead of guessing
- You want to chain model routing with OpenRouter for automatic best-model dispatch