OpenClaw

Skill details (on-site mirror, no comments)


Start using a local or Hugging Face model instantly, directly from chat.

Category: Communication & messaging

License: MIT-0 (free to use, modify, and redistribute; no attribution required)

Version: v1.0.0

Stats: ⭐ 0 · 1.2k · 2 current installs · 2 all-time installs



Package: modelready

Security scan (ClawHub)

  • VirusTotal: suspicious
  • OpenClaw: suspicious

OpenClaw evaluation

The skill largely does what it says (start a vLLM/OpenAI-compatible endpoint locally) but has several coherence and safety issues: undeclared runtime dependencies, no install steps, and a default network bind that can expose the server to the LAN without authentication.

Purpose

The script implements exactly the advertised functionality (starting a vLLM/OpenAI-style server and proxying chat requests). However, the declared requirements in the registry metadata are incomplete: the runtime requires python3 and the vllm Python package, but neither is listed in the required binaries or install specs. The SKILL.md metadata also lists an env var 'URL' that is never used as a required external credential. These mismatches…
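One possible correction for the dependency mismatch is sketched below, in the same frontmatter style SKILL.md already uses. This assumes the registry accepts python3 in "bins"; vllm is a Python package rather than a binary, so it would still need its own documented install step:

```yaml
metadata: {"openclaw":{"requires":{"bins":["bash", "curl", "python3"]}}}
```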

Instruction scope

Instructions and the script read/write files under $HOME/.model2skill (defaults.env, PID/log files), which is reasonable. However, the script binds by default to HOST=0.0.0.0 (DEFAULT_HOST), exposing the OpenAI-compatible endpoint to the network/LAN unless changed, and SKILL.md does not warn about this. The skill starts an unauthenticated HTTP API that, if reachable, could be invoked by other machines on the network. The chat path uses local HTTP…
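A wrapper around the skill could guard against the dangerous default before starting the server. A minimal sketch (check_bind is an illustrative helper, not part of the script):

```shell
# Classify a bind address so a wrapper can refuse the dangerous default.
check_bind() {
  case "$1" in
    127.0.0.1|localhost|::1) echo "loopback" ;;
    0.0.0.0|::)              echo "exposed" ;;
    *)                       echo "unknown" ;;
  esac
}

check_bind 0.0.0.0    # prints "exposed" -- the skill's DEFAULT_HOST
check_bind 127.0.0.1  # prints "loopback" -- safe for single-machine use
```

Refusing to start (or at least warning) when check_bind reports "exposed" would close the LAN-exposure gap described above.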

Installation mechanism

There is no install spec and no code is downloaded; the skill consists of instructions plus a script only. That is low-risk from a supply-chain perspective but problematic operationally: the script expects python3 and the vllm package to be available. The skill provides no installation steps and never checks for vllm, so a user may hit runtime failures, or run an untrusted vllm binary if one happens to be on the PATH.
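A preflight check the skill could ship is sketched below. The binary names come from the evaluation above; "need" is an illustrative helper, not something in the actual script:

```shell
# Report any missing runtime dependency before /modelready start is attempted.
need() { command -v "$1" >/dev/null 2>&1 || { echo "missing: $1"; return 1; }; }

need bash && need curl && need python3 \
  || echo "install the missing binaries before running /modelready start"

# vllm is a Python package, so a binary check is not enough:
python3 -c 'import vllm' 2>/dev/null \
  || echo "missing: vllm (e.g. pip install vllm)"
```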

Credentials

The skill does not request external credentials and only writes a small defaults file under ~/.model2skill. It does use HOME and network information (hostname/IP) to resolve bind targets. The SKILL.md metadata lists a 'URL' env entry that is inconsistent with the rest of the package; otherwise there are no unexplained SECRET/TOKEN env requirements.

Persistence

The skill persists state to $HOME/.model2skill (defaults, logs, PID files), which is expected for a local server manager. It does not request always:true, does not modify other skills, and does not request elevated privileges.
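PID-file handling consistent with the layout described above might look like the sketch below (one PID file per port under ~/.model2skill; the exact filenames are assumptions, not confirmed from the script):

```shell
# Stop a server recorded in a PID file, tolerating stale or missing files.
stop_by_pidfile() {
  pid_file="$1"
  if [ -f "$pid_file" ] && kill -0 "$(cat "$pid_file")" 2>/dev/null; then
    kill "$(cat "$pid_file")" && rm -f "$pid_file"
    echo "stopped"
  else
    echo "not running"
  fi
}

stop_by_pidfile "$HOME/.model2skill/19001.pid"
```

The kill -0 probe distinguishes a live server from a stale PID file left behind by a crash.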

Installation (copy to the Lobster AI)

Copy the entire block below into a Lobster Chinese Library conversation; the Lobster AI will complete the installation per SKILL.md.

Hand this block to the Lobster Chinese Library (Lobster AI) to execute: install the OpenClaw skill "ModelReady" on this machine. Summary: Start using a local or Hugging Face model instantly, directly from chat.
Fetch the following URL to read SKILL.md and complete the installation per the document: https://raw.githubusercontent.com/openclaw/skills/refs/heads/main/skills/carol-gutianle/modelready/SKILL.md
(Source: yingzhi8.cn skill library)

SKILL.md

Open the original SKILL.md (GitHub raw)

---
name: modelready
description: Start using a local or Hugging Face model instantly, directly from chat.
metadata: {"openclaw":{"requires":{"bins":["bash", "curl"]}, "env": ["URL"]}}
---  

# ModelReady

ModelReady lets you **start using a local or Hugging Face model immediately**, without leaving clawdbot.

It turns a model into a running, OpenAI-compatible endpoint and allows you to chat with it directly from a conversation.


## When to use

Use this skill when you want to:
- Quickly start using a local or Hugging Face model
- Chat with a locally running model
- Test or interact with a model directly from chat


## Commands

### Start a model server

```text
/modelready start repo=<path-or-hf-repo> port=<port> [tp=<n>] [dtype=<dtype>]
```

Examples:

```text
/modelready start repo=Qwen/Qwen2.5-7B-Instruct port=19001
/modelready start repo=/home/user/models/Qwen-2.5 port=8010 tp=4 dtype=bfloat16
```


### Chat with a running model

```text
/modelready chat port=<port> text="<message>"
```

Example:

```text
/modelready chat port=8010 text="hello"
```


### Check status or stop the server

```text
/modelready status port=<port>
/modelready stop port=<port>
```

### Set default host or port
```text
/modelready set_ip   ip=<host>
/modelready set_port port=<port>
```


## Notes

* The model is served locally using vLLM.
* The exposed endpoint follows the OpenAI API format.
* The server must be started before sending chat requests.
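Because the endpoint follows the OpenAI API format, any OpenAI-style client can reach it once the server is running. A minimal curl sketch (host, port, and model name are assumptions; match them to your `/modelready start` invocation, and note it only works against a live server):

```shell
# Assumes: /modelready start repo=Qwen/Qwen2.5-7B-Instruct port=19001
# bound to localhost. The "model" field must match the served repo.
curl -s http://127.0.0.1:19001/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "Qwen/Qwen2.5-7B-Instruct",
        "messages": [{"role": "user", "content": "hello"}]
      }'
```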