OpenClaw

Skill details (site mirror, no comments)

Skill: AI守门人

LLM API proxy service management tool. Supports multi-provider forwarding (Bailian/OpenRouter/NVIDIA), content safety auditing, and health monitoring. Use cases: (1) start/stop/restart the proxy service, (2) view proxy status and statistics, (3) configure content-filtering rules.

Development & DevOps

License: MIT-0

MIT-0 · Free to use, modify, and redistribute. No attribution required.

Version: v1.0.4

Stats: ⭐ 0 · 68 · 0 current installs · 0 all-time installs



Package: 394286006/llm-proxy

Security scan (ClawHub)

  • VirusTotal: suspicious
  • OpenClaw: suspicious

OpenClaw evaluation

The skill is broadly what it claims (a local LLM proxy with filtering), but it contains behaviors that merit caution—most notably local logging of proxied requests (which may include API keys / sensitive payloads) and an optional third-layer review that could make outbound calls if enabled.

Purpose

Name/description (LLM proxy + content auditing) match the files and runtime instructions: control scripts, a Python proxy, config and filtering rules. The listed provider endpoints align with a multi-provider proxy. There are no surprising external service credentials requested in metadata.

Scope of instructions

SKILL.md and README instruct running the provided scripts (./scripts/llm-proxy-ctl.sh) and editing scripts/llm-proxy-config.json; the code reads that config and writes logs under ~/.openclaw/logs/llm-proxy. That is expected for a proxy. Caveat: the skill appears to log request information (request logs described in README and a LogWriter in code). Logged request entries may include headers or bodies that contain API keys or other secrets unles…

Install mechanism

No install spec; this is instruction-only with bundled scripts. Nothing is downloaded from remote hosts during install. The code is included in the skill bundle, so no external install URLs or archive extraction risks are present.

Credentials

The registry metadata declares no required env vars or credentials, which is consistent with a local proxy. However the code does read optional environment variables (LLM_PROXY_CONFIG, LLM_PROXY_PORT, RULES_FILE) and will forward whichever Authorization header the client supplies to upstream providers. Users may therefore send provider API keys through the proxy; combined with local logging, those keys could be stored. No unrelated credentials…

Persistence

The manifest sets `always: false` and requests no special platform privileges. The skill writes logs and PID files to user-owned locations (`~/.openclaw`, `/tmp`), which is normal for a local service. It does not request to auto-enable itself or modify other skills.

Install (copy to 龙虾 AI)

Copy the entire block below into a 龙虾中文库 conversation; 龙虾 will complete the installation per SKILL.md.

Please hand this block to 龙虾中文库 (龙虾 AI) to execute: install the OpenClaw skill 「AI守门人」 on this machine. Summary: LLM API proxy service management tool. Supports multi-provider forwarding (Bailian/OpenRouter/NVIDIA), content safety auditing, and health monitoring. Use cases: (1) start/sto….
Please fetch the following URL, read SKILL.md, and complete the installation per the document: https://raw.githubusercontent.com/openclaw/skills/refs/heads/main/skills/394286006/llm-proxy/SKILL.md
(Source: yingzhi8.cn skill library)

SKILL.md

Open the original SKILL.md (GitHub raw)

# LLM Proxy Skill

LLM API proxy service: unified management of multiple LLM providers, with content safety auditing.

## Features

- Unified proxy for multiple providers (22+ providers)
- Content safety auditing (malicious-instruction detection, sensitive-content filtering)
- Real-time scanning of streaming responses
- Health monitoring

---

## Usage

### Start the proxy

```
启动llm-proxy
```

### Stop the proxy

```
停止llm-proxy
```

### Check status

```
llm-proxy状态
```

### Restart the proxy

```
重启llm-proxy
```

---

## Manual operation

From the skill directory, run:

### Start

```bash
./scripts/llm-proxy-ctl.sh start
```

### Stop

```bash
./scripts/llm-proxy-ctl.sh stop
```

### Status

```bash
./scripts/llm-proxy-ctl.sh status
```

### Restart

```bash
./scripts/llm-proxy-ctl.sh restart
```

---

## Configuration

Config file: `scripts/llm-proxy-config.json`

### Basic settings

| Field | Default | Description |
|------|--------|------|
| `listen_host` | `127.0.0.1` | Listen address |
| `proxy_port` | `18888` | Proxy port |
| `read_timeout` | `60` | Read timeout (seconds) |
| `max_body_size_mb` | `10` | Max request body (MB) |
| `max_threads` | `50` | Max worker threads |
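Collecting the defaults from the table, a config file with every basic field at its default value would look like the following. This is illustrative only; the actual file in the skill bundle may contain additional fields, such as the security options described below.

```json
{
  "listen_host": "127.0.0.1",
  "proxy_port": 18888,
  "read_timeout": 60,
  "max_body_size_mb": 10,
  "max_threads": 50
}
```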

### Security detection settings

| Option | Description |
|--------|------|
| `rules_file` | Content-filtering rules file |
| `quick_check_keywords` | List of quick-check keywords |

### Changing the port

Edit the `proxy_port` field in `llm-proxy-config.json`, then restart the service for the change to take effect.
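If you prefer to script the change instead of hand-editing, a small sketch in Python (the proxy itself is written in Python) that rewrites `proxy_port` while preserving the rest of the file. The `set_proxy_port` helper is illustrative, not part of the skill:

```python
import json
from pathlib import Path

# Path from this README; adjust if you run from outside the skill directory.
CONFIG = Path("scripts/llm-proxy-config.json")

def set_proxy_port(config_path: Path, port: int) -> None:
    """Rewrite proxy_port in the JSON config, preserving all other fields."""
    cfg = json.loads(config_path.read_text(encoding="utf-8"))
    cfg["proxy_port"] = port
    config_path.write_text(
        json.dumps(cfg, indent=2, ensure_ascii=False), encoding="utf-8"
    )

# Usage: set_proxy_port(CONFIG, 19999)
# Then restart: ./scripts/llm-proxy-ctl.sh restart
```

Remember that, per the note above, the new port only takes effect after a restart.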

---

## Supported providers

### Free / free tier

- `ollama` - 本地 Ollama
- `gemini` - Google Gemini
- `groq` - Groq
- `cloudflare` - Workers AI
- `deepseek` - DeepSeek
- `moonshot` - Moonshot AI
- `zhipu` - Zhipu AI
- `siliconflow` - SiliconFlow

### Paid

- `openai` - OpenAI
- `anthropic` - Anthropic
- `openrouter` - OpenRouter
- `nvd` - NVIDIA NIM
- `bailian` - Alibaba Bailian
- `baidu` - Baidu ERNIE (Wenxin)
- `spark` - iFlytek Spark
- `minimax` - MiniMax
- `yi` - 01.AI (Yi)
- `baichuan` - Baichuan
- `together` - Together AI
- `fireworks` - Fireworks AI
- `replicate` - Replicate

---

## Health check

```bash
curl http://127.0.0.1:18888/health
```

Sample response:

```json
{
  "status": "ok",
  "uptime": 3600,
  "rules_loaded": {
    "layer1": 10,
    "layer2": 7,
    "whitelist": 6
  },
  "stats": {
    "total_requests": 100,
    "total_responses": 98,
    "blocked": 0,
    "errors": 2
  }
}
```
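To consume `/health` from a script rather than eyeballing raw JSON, a minimal sketch using only the standard library. Field names are taken from the sample response above; `summarize_health` and `check_health` are illustrative helpers, not part of the skill:

```python
import json
from urllib.request import urlopen

def summarize_health(payload: dict) -> str:
    """Turn a /health JSON payload into a one-line summary."""
    stats = payload.get("stats", {})
    total = stats.get("total_requests", 0)
    blocked = stats.get("blocked", 0)
    errors = stats.get("errors", 0)
    status = payload.get("status", "unknown")
    return f"{status}: {total} requests, {blocked} blocked, {errors} errors"

def check_health(url: str = "http://127.0.0.1:18888/health") -> str:
    """Fetch the health endpoint and summarize it."""
    with urlopen(url, timeout=5) as resp:
        return summarize_health(json.load(resp))
```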

---

## Security detection

### Three-layer review

1. **L1 - Malicious-instruction detection**: dangerous commands, privilege escalation, SQL injection, backdoors, etc.
2. **L2 - Sensitive-content detection**: personally identifiable information, credentials and keys, illegal content
3. **Quick keyword check**: real-time scanning of streaming responses for risky keywords
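The layering can be pictured as sequential passes over the text with a whitelist short-circuit. The sketch below is illustrative only: the real rules are loaded from `rules_file`, so every pattern here is made up.

```python
import re

# Illustrative patterns only -- the actual rules come from rules_file.
LAYER1_MALICIOUS = [re.compile(p, re.I) for p in [r"rm\s+-rf\s+/", r"drop\s+table"]]
LAYER2_SENSITIVE = [re.compile(p, re.I) for p in [r"api[_-]?key\s*[:=]", r"\b\d{18}\b"]]
WHITELIST = [re.compile(p, re.I) for p in [r"example\.com"]]

def review(text: str) -> str:
    """Return 'allow', 'block-l1', or 'block-l2' for a piece of text."""
    # Whitelisted content skips both detection layers.
    if any(p.search(text) for p in WHITELIST):
        return "allow"
    # L1: malicious instructions take priority.
    if any(p.search(text) for p in LAYER1_MALICIOUS):
        return "block-l1"
    # L2: sensitive content.
    if any(p.search(text) for p in LAYER2_SENSITIVE):
        return "block-l2"
    return "allow"
```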

### Streaming-response detection

- Scans every 100 characters
- Injects a warning when a risky keyword is found
- Blocks the response on severe violations
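The 100-character cadence can be modeled as a buffer that is scanned each time it fills. A simplified sketch (the real proxy also injects warnings and can abort the stream; note the boundary limitation called out in the comments):

```python
KEYWORDS = ["password", "rm -rf"]  # illustrative; the real list is quick_check_keywords

def scan_stream(chunks, window=100):
    """Yield (offset, keyword) hits, checking each `window`-character segment.

    Limitation: a keyword split across a window boundary is missed here;
    a real implementation would keep an overlap between segments.
    """
    buf = ""
    seen = 0
    hits = []
    for chunk in chunks:
        buf += chunk
        while len(buf) >= window:
            segment, buf = buf[:window], buf[window:]
            for kw in KEYWORDS:
                if kw in segment:
                    hits.append((seen, kw))
            seen += window
    # Final partial segment after the stream ends.
    for kw in KEYWORDS:
        if kw in buf:
            hits.append((seen, kw))
    return hits
```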

### Custom keywords

Add new keywords to the `quick_check_keywords` array in `llm-proxy-config.json`.
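A scripted way to do the same append while avoiding duplicates; the `add_keywords` helper is illustrative, not part of the skill:

```python
import json
from pathlib import Path

def add_keywords(config_path: Path, new_words: list) -> None:
    """Append keywords to quick_check_keywords, skipping duplicates."""
    cfg = json.loads(config_path.read_text(encoding="utf-8"))
    current = cfg.setdefault("quick_check_keywords", [])
    for word in new_words:
        if word not in current:
            current.append(word)
    config_path.write_text(
        json.dumps(cfg, indent=2, ensure_ascii=False), encoding="utf-8"
    )
```

As with the port change, restart the service afterwards so the new keywords are loaded.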

---

## Logs

Log directory: `~/.openclaw/logs/llm-proxy/`

- `proxy-YYYY-MM-DD.jsonl` - request logs
- `ctl-service.log` - service log (when started manually)
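Because the request log is JSON Lines, each line parses independently. A small sketch for counting well-formed records in a day's file; the record schema is whatever the proxy's log writer emits, so this deliberately avoids assuming any field names:

```python
import json
from pathlib import Path

def count_entries(log_file: Path) -> int:
    """Count well-formed JSON records in a proxy-YYYY-MM-DD.jsonl file."""
    count = 0
    for line in log_file.read_text(encoding="utf-8").splitlines():
        line = line.strip()
        if not line:
            continue
        try:
            json.loads(line)
            count += 1
        except json.JSONDecodeError:
            pass  # skip partially written or corrupt lines
    return count
```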

---

## Notes

- Default port `18888`
- Listens only on local `127.0.0.1`
- No automatic monitoring; manage the service manually
- Restart the service after changing the configuration