Skill details (site mirror, no comments)
License: MIT-0
MIT-0 · Free to use, modify, and redistribute. No attribution required.
Version: v1.0.0
Stats: ⭐ 11 · 8.1k · 60 current installs · 62 all-time installs
Package:araa47/local-whisper
Security scan (ClawHub)
- VirusTotal: benign
- OpenClaw: benign
OpenClaw assessment
The skill appears to do what it claims — a local Whisper-based STT tool — with no unexplained credentials or network endpoints in the code; minor documentation mismatches and expected model-download behavior are noted.
Purpose
Name, description, declared binary (ffmpeg), package dependencies (openai-whisper, torch) and the included Python transcription code all align with a local Whisper STT tool. There are no unrelated credentials or config paths requested.
Documentation scope
SKILL.md stays within the STT task, showing venv creation and pip installation. Two small inconsistencies: the README examples call `~/.clawdbot/skills/local-whisper/scripts/local-whisper`, but the repository provides `scripts/transcribe.py` (no wrapper named `local-whisper` is included); and the instructions use the `uv` command (`uv venv`, `uv pip`), while `uv` is not listed in the required binaries. Also note: models are downloaded at runtime by whisper.load_mo…
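If the `scripts/local-whisper` wrapper is genuinely absent, a small launcher can bridge the gap between the README and the shipped `transcribe.py`. The sketch below creates one; the skill-root layout (`scripts/transcribe.py`, `.venv/`) is taken from SKILL.md, and `SKILL_DIR` defaults to a temp directory purely so the sketch runs anywhere:

```bash
# Create a hypothetical scripts/local-whisper wrapper that forwards
# its arguments to the bundled transcribe.py via the skill's venv.
SKILL_DIR="${SKILL_DIR:-$(mktemp -d)}"   # real path: ~/.clawdbot/skills/local-whisper
mkdir -p "$SKILL_DIR/scripts"
cat > "$SKILL_DIR/scripts/local-whisper" <<'EOF'
#!/usr/bin/env bash
# Resolve the skill root from this wrapper's own location,
# then run the bundled transcribe.py with the venv's Python.
HERE="$(cd "$(dirname "$0")/.." && pwd)"
exec "$HERE/.venv/bin/python" "$HERE/scripts/transcribe.py" "$@"
EOF
chmod +x "$SKILL_DIR/scripts/local-whisper"
```

With a launcher like this in place, the README's `scripts/local-whisper audio.wav` invocations would work unchanged.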
Install mechanism
No install spec in the registry (instruction-only), so nothing is forced onto disk by the registry. SKILL.md recommends pip installing openai-whisper and torch (torch download uses the official PyTorch index URL). This is a standard approach; the user will execute these installs locally in a venv.
Credentials
The skill requests no environment variables or credentials. That is proportionate for a local transcription utility.
Persistence
The `always` flag is false, and the skill does not request elevated or persistent platform-wide privileges. At runtime, downloaded model weights are cached on the host (normal for ML models), but the skill does not modify other skills or system-wide agent settings.
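To see what that runtime caching amounts to on a given host: openai-whisper stores weights under `$XDG_CACHE_HOME/whisper` (falling back to `~/.cache/whisper`) unless `whisper.load_model` is given an explicit `download_root`. A quick check of the default location:

```bash
# Report the size of the default whisper model cache, if one exists.
CACHE_DIR="${XDG_CACHE_HOME:-$HOME/.cache}/whisper"
du -sh "$CACHE_DIR" 2>/dev/null || echo "no whisper model cache at $CACHE_DIR"
```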
Overall conclusion
This skill is internally coherent for local Whisper STT, but check a few things before installing: (1) SKILL.md references a `scripts/local-whisper` wrapper, but only `transcribe.py` is included; you may need to run the Python file directly or add a small launcher. (2) The instructions use the `uv` helper tool, but `uv` is not declared as a required binary; make sure you understand those commands or replace them (stock `python -m venv` / `pip` works). (3) …
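For point (2), a uv-free equivalent of the SKILL.md setup needs only stock tooling. The sketch below creates the venv; the final pip line is commented out so the sketch stays lightweight, but it mirrors the packages and CPU wheel index from SKILL.md. `SKILL_DIR` defaulting to a temp directory is an assumption for portability:

```bash
# Replace `uv venv` / `uv pip` with the stdlib venv module and pip.
SKILL_DIR="${SKILL_DIR:-$(mktemp -d)}"   # real path: ~/.clawdbot/skills/local-whisper
cd "$SKILL_DIR"
python3 -m venv .venv
# Then install the same packages SKILL.md lists, from the CPU wheel index:
#   .venv/bin/pip install click openai-whisper torch --index-url https://download.pytorch.org/whl/cpu
.venv/bin/python --version
```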
Install (copy to 龙虾 AI)
Copy the entire block below into a 龙虾中文库 conversation and let 龙虾 complete the installation per SKILL.md.
Hand this block to 龙虾中文库 (龙虾 AI) to execute: install the OpenClaw skill "Local Whisper" on this machine. Summary: Local speech-to-text using OpenAI Whisper. Runs fully offline after model downl…
Fetch the following URL to read SKILL.md and complete the installation per the document: https://raw.githubusercontent.com/openclaw/skills/refs/heads/main/skills/araa47/local-whisper/SKILL.md
(Source: yingzhi8.cn skill library)
SKILL.md
---
name: local-whisper
description: Local speech-to-text using OpenAI Whisper. Runs fully offline after model download. High quality transcription with multiple model sizes.
metadata: {"clawdbot":{"emoji":"🎙️","requires":{"bins":["ffmpeg"]}}}
---
# Local Whisper STT
Local speech-to-text using OpenAI's Whisper. **Fully offline** after initial model download.
## Usage
```bash
# Basic
~/.clawdbot/skills/local-whisper/scripts/local-whisper audio.wav
# Better model
~/.clawdbot/skills/local-whisper/scripts/local-whisper audio.wav --model turbo
# With timestamps
~/.clawdbot/skills/local-whisper/scripts/local-whisper audio.wav --timestamps --json
```
## Models
| Model | Size | Notes |
|-------|------|-------|
| `tiny` | 39M | Fastest |
| `base` | 74M | **Default** |
| `small` | 244M | Good balance |
| `turbo` | 809M | Best speed/quality |
| `large-v3` | 1.5GB | Maximum accuracy |
## Options
- `--model/-m` — Model size (default: base)
- `--language/-l` — Language code (auto-detect if omitted)
- `--timestamps/-t` — Include word timestamps
- `--json/-j` — JSON output
- `--quiet/-q` — Suppress progress
## Setup
Uses uv-managed venv at `.venv/`. To reinstall:
```bash
cd ~/.clawdbot/skills/local-whisper
uv venv .venv --python 3.12
uv pip install --python .venv/bin/python click openai-whisper torch --index-url https://download.pytorch.org/whl/cpu
```