OnDeckLLM

Localhost dashboard for managing LLM providers, model routing, and batting-order fallback chains. Auto-discovers providers from OpenClaw config or works standalone.

Category: AI & Large Models

License: MIT-0

MIT-0 · Free to use, modify, and redistribute. No attribution required.

Version: v1.4.3

Stats: ⭐ 0 · 27 · 0 current installs · 0 all-time installs


Package: canonflip-git/ondeckllm

Security Scan (ClawHub)

  • VirusTotal: Pending
  • OpenClaw: Benign

OpenClaw Evaluation

The skill's requests and runtime instructions are consistent with a local dashboard manager for LLM providers; nothing in the bundle indicates it is trying to do an unrelated task.

Purpose

The name and description (local dashboard, provider discovery, model routing) match the instructions and the included helper (status.js). The SKILL.md explicitly tells the agent to install the external npm package (ondeckllm), which is where the dashboard functionality would live. The requested artifacts (no env vars, references to ~/.openclaw and ~/.ondeckllm) are consistent with a local LLM provider manager.

Instruction Scope

Runtime instructions are narrowly scoped: install the ondeckllm npm package, run the daemon, check status with node scripts/status.js, and open http://localhost:3900. The SKILL.md does reference reading and syncing ~/.openclaw/openclaw.json and writing data to ~/.ondeckllm/, which is expected for a config/dashboard tool. There are no vague instructions to exfiltrate data or read arbitrary system files.
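For reference, the entire runtime surface the SKILL.md authorizes reduces to four commands (restated here from the instructions above; the commands themselves are documented in the SKILL.md further down):

```bash
npm install -g ondeckllm   # install the external package
ondeckllm                  # run the daemon (default port 3900)
node scripts/status.js     # check status via the bundled helper
# then open http://localhost:3900 in a browser
```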

Installation Mechanism

The skill is instruction-only but directs users to run `npm install -g ondeckllm`. Installing a global npm package is a normal way to provide a local dashboard, but npm packages run arbitrary code at install/run time — review the package before installing (audit, verify publisher, inspect package contents). The registry metadata does not embed an install artifact; the SKILL.md's install command is the sole install guidance.
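A minimal pre-install review, assuming only a standard npm client, might look like this (none of these commands are part of the skill; they only read registry metadata and the tarball contents):

```bash
# Check maintainers and release history on the registry
npm view ondeckllm maintainers versions time

# List the files the package would ship, without installing anything
npm pack ondeckllm --dry-run

# Or download the tarball and inspect the source directly
npm pack ondeckllm && tar -tzf ondeckllm-*.tgz
```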

Credentials

No environment variables or external credentials are requested by the skill bundle. The SKILL.md mentions using PORT and configuring a remote Ollama URL via the UI; both are reasonable for this tool. Note: config files (~/.openclaw/openclaw.json and ~/.ondeckllm/) may contain provider API keys or sensitive entries — the dashboard's ability to read/write those files is consistent with its purpose but is a sensitive operation the user should co…
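One way to check those files for credential-like entries without printing their values is a key-name scan; this sketch assumes `jq` is installed and that the config is a JSON object (both assumptions, not part of the skill):

```bash
# Top-level keys only, no values
jq 'keys' ~/.openclaw/openclaw.json

# Print only the paths of entries whose names suggest secrets
jq -r 'paths | map(tostring) | join(".")' ~/.openclaw/openclaw.json \
  | grep -iE 'key|token|secret'
```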

Persistence

The skill is not set to always:true and is user-invocable. It does not request to modify other skills or system-wide agent settings. It stores data under ~/.ondeckllm/ per the SKILL.md, which is appropriate for a local dashboard.

Code Findings

scripts/status.js:15

Shell command execution detected (child_process).
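To verify this finding yourself before running the helper, the flagged line can be read directly from the skill bundle (plain shell, nothing skill-specific):

```bash
# Locate every child_process reference, then read the surrounding code
grep -n "child_process" scripts/status.js
sed -n '1,40p' scripts/status.js
```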

Overall Conclusion

This skill appears coherent for a local LLM dashboard. Before installing/using it: (1) review the ondeckllm npm package (publisher, package contents, recent activity) because global npm packages can execute code; (2) inspect the contents of ~/.openclaw/openclaw.json and be aware the dashboard will read and can write it (it may contain API keys); (3) check ~/.ondeckllm/ after first run to see what is logged (usage.jsonl may contain metadata abo…
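For step (3), a quick look after the first run shows exactly what was written locally (paths are from the SKILL.md; the line count is arbitrary):

```bash
ls -la ~/.ondeckllm/               # config.json, usage.jsonl, profiles/
head -n 5 ~/.ondeckllm/usage.jsonl
```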

Installation (copy to the Lobster AI)

Copy the entire block below into a Lobster Chinese Library conversation; the Lobster will complete the installation per SKILL.md.

Please hand this block to the Lobster Chinese Library (Lobster AI) to execute: install the OpenClaw skill "OnDeckLLM" on this machine. Summary: Localhost dashboard for managing LLM providers, model routing, and batting-orde…
Please fetch the following URL, read SKILL.md, and complete the installation as documented: https://raw.githubusercontent.com/openclaw/skills/refs/heads/main/skills/canonflip-git/ondeckllm/SKILL.md
(Source: yingzhi8.cn skill library)

SKILL.md

Open the original SKILL.md (GitHub raw)

---
name: ondeckllm
description: >
  Localhost dashboard for managing LLM providers, model routing, and batting-order
  fallback chains. Auto-discovers providers from OpenClaw config or works standalone.
  Use when: (1) user wants to manage or view their LLM providers, (2) user wants to
  change model priority/batting order, (3) user wants to check provider health or
  latency, (4) user wants to add/remove/configure LLM providers, (5) user mentions
  OnDeckLLM or model lineup. Requires: Node.js 22+, OnDeckLLM installed
  (`npm install -g ondeckllm`).
install: npm install -g ondeckllm
---

# OnDeckLLM — AI Model Lineup Manager

## Prerequisites

```bash
npm install -g ondeckllm
```

Verify: `ondeckllm --help` or check install with `npm list -g ondeckllm`.

## What It Does

OnDeckLLM is a localhost web dashboard that:

- **Auto-discovers** LLM providers from OpenClaw config (`~/.openclaw/openclaw.json`)
- **Manages** a batting-order priority list for model routing (primary + fallbacks)
- **Tests** provider health and latency
- **Syncs** model lineup back to OpenClaw config with one click
- **Tracks** session costs (JSONL usage log + Chart.js)
- **Supports** Anthropic, OpenAI, Google AI, Groq, xAI/Grok, Ollama (local + remote), Mistral, DeepSeek, Together, OpenRouter

## Starting the Dashboard

```bash
# Default port 3900
ondeckllm

# Custom port
PORT=3901 ondeckllm
```

The dashboard runs at `http://localhost:3900` (or custom port).

### As a Background Service

Use the helper script to check status or start OnDeckLLM:

```bash
node scripts/status.js
```

Output: JSON with `running` (bool), `port`, `url`, and `pid` if active.
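An illustrative invocation, with field names taken from the description above (the values shown are made up):

```bash
node scripts/status.js
# {"running":true,"port":3900,"url":"http://localhost:3900","pid":12345}
```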

## Agent Workflow

### Check if OnDeckLLM is running

```bash
node scripts/status.js
```

### Open the dashboard for the user

Direct them to `http://localhost:3900` (or the configured port/URL).

### Provider management

OnDeckLLM reads provider config from `~/.openclaw/openclaw.json` automatically.
Changes made in the dashboard sync back to OpenClaw config.
No separate API or CLI commands needed — it's a web UI tool.
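Because dashboard edits write back to the OpenClaw config, a cautious pattern (a suggestion here, not part of the skill) is to snapshot the file before syncing and diff it afterwards:

```bash
cp ~/.openclaw/openclaw.json ~/.openclaw/openclaw.json.bak
# ...edit the lineup in the dashboard and click sync...
diff ~/.openclaw/openclaw.json.bak ~/.openclaw/openclaw.json
```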

## Configuration

OnDeckLLM stores its data in `~/.ondeckllm/`:

- `config.json` — provider settings, port, Ollama URL
- `usage.jsonl` — cost tracking log (inspection sketch below)
- `profiles/` — saved batting-order profiles
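The cost log can be inspected directly; this assumes one JSON object per line (which is what the `.jsonl` extension implies) and `jq` installed:

```bash
tail -n 3 ~/.ondeckllm/usage.jsonl | jq .
```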

### Remote Ollama

To connect to a remote Ollama instance, configure in the dashboard UI:
Settings → Ollama → Remote URL (e.g., `http://192.168.55.80:11434`)
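Before saving a remote URL, you can confirm the instance is reachable; `/api/tags` is Ollama's standard model-listing endpoint, and the host below is just the example address from above:

```bash
curl -s http://192.168.55.80:11434/api/tags | jq '.models[].name'
```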

## Links

- 🌐 [ondeckllm.com](https://ondeckllm.com)
- 📦 [npm](https://www.npmjs.com/package/ondeckllm)
- 🐛 [GitHub Issues](https://github.com/canonflip/ondeckllm/issues)