Skill details (on-site mirror, no comments)
License: MIT-0 · Free to use, modify, and redistribute; no attribution required.
Version: v1.0.0
Stats: ⭐ 0 · 23 · 0 current installs · 0 all-time installs
Package: benheee/youtube-digest
Security scan (ClawHub)
- VirusTotal: benign
- OpenClaw: benign
OpenClaw assessment
The skill's code, README, and runtime instructions are coherent with its stated purpose (extracting subtitles/transcripts and summarizing YouTube videos); it asks for no unrelated credentials and contains no hidden network endpoints or surprising install behavior.
Purpose
The name/description match the included fetch_youtube.py script and usage docs. Requested tools (yt-dlp, deno, ffmpeg) are appropriate for subtitle/video extraction and optional frame extraction; nothing extraneous (no cloud credentials or unrelated binaries) is required.
Instruction scope
SKILL.md tells the agent to run the provided script, read summary.json/transcript.txt, and optionally run ffmpeg for frames. It does not instruct the agent to read unrelated system files or hidden config, nor does it send collected content to unexpected external endpoints. The proxy option is explicitly for yt-dlp and is reasonable for bypassing rate limits.
Installation mechanism
There is no formal install spec in the skill bundle (instruction-only), but README recommends installing yt-dlp from GitHub releases and deno via deno.land install.sh. Those are well-known release hosts; however, running curl|sh or installing binaries from the network is always a moderate operational risk—verify sources and consider package-manager installs where possible.
Credentials
The skill requests no environment variables or credentials. The optional --proxy argument is the only network-related parameter; it does not demand tokens or secrets from unrelated services. The files written (summary.json, transcript.txt) are expected outputs for the described functionality.
Persistence
`always` is false and the skill is user-invocable. It writes outputs to the user-specified directory, does not modify other skills or system-wide agent settings, and does not require a permanent presence. Autonomous invocation defaults are unchanged and not a concern by themselves.
Overall conclusion
This skill appears to do what it says: it runs yt-dlp (an external binary) to fetch subtitles/metadata, converts subtitles to a transcript, and relies on the agent's LLM to summarize/translate. Before installing, consider: (1) you will need yt-dlp and deno (README shows curl|sh installs—verify sources or use your distro/package manager), (2) transcripts and summary.json are written to disk and will be read by the agent and sent to the LLM—trea…
Installation (copy to Lobster AI)
Copy the entire block below into a Lobster Chinese Library conversation, and Lobster will complete the installation per SKILL.md.
Hand this block to the Lobster Chinese Library (Lobster AI) to execute: install the OpenClaw skill "YouTube Digest" on this machine. Description: Understand, summarize, translate, and extract key points from YouTube videos. U…
Please fetch the following URL, read SKILL.md, and complete the installation per the document: https://raw.githubusercontent.com/openclaw/skills/refs/heads/main/skills/benheee/youtube-digest/SKILL.md
(Source: yingzhi8.cn skill library)
SKILL.md
---
name: youtube-digest
description: "Understand, summarize, translate, and extract key points from YouTube videos. Use when a user provides a YouTube URL and wants: (1) a Chinese summary, (2) a transcript or subtitle extraction, (3) translation of spoken content, (4) timestamps / chapter notes, (5) visual understanding via key frames, or (6) question answering about a video. Prefer this skill for transcript-first workflows."
---
# YouTube Digest
Use a transcript-first workflow.
## Quick workflow
1. Run `scripts/fetch_youtube.py <url> --out <dir>` to collect metadata and subtitles.
If behind a proxy, add `--proxy <proxy-url>`.
2. If subtitles exist, read `summary.json` and the generated transcript file first.
3. If the user only wants a quick answer, summarize directly from the transcript.
4. If the user needs stronger visual grounding, extract key frames with ffmpeg after downloading the video or by using an existing local video file.
5. If no subtitles are available, report that transcript extraction needs `yt-dlp` + a speech-to-text path (for example Whisper) before promising a result.
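Step 4's ffmpeg invocation can be sketched as below. The helper name, output filename pattern, and one-frame-every-30-seconds sampling rate are illustrative assumptions, not part of this skill:

```python
def ffmpeg_keyframe_cmd(video_path, out_dir, every_seconds=30):
    """Build (but do not run) an ffmpeg command that samples one
    frame every `every_seconds` seconds into numbered JPEGs."""
    return [
        "ffmpeg", "-i", video_path,
        "-vf", f"fps=1/{every_seconds}",  # one frame per interval
        f"{out_dir}/frame_%04d.jpg",
    ]

cmd = ffmpeg_keyframe_cmd("/tmp/youtube-digest/video.mp4", "/tmp/youtube-digest")
```

Once ffmpeg is installed, the resulting list can be passed to `subprocess.run(cmd, check=True)`.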
## Default behavior
- Prefer manual subtitles over auto subtitles.
- Prefer Chinese subtitles when available; otherwise use English auto/manual subtitles.
- Keep downloads minimal: subtitles + metadata first, full video only when visual analysis is necessary.
- For long videos, produce:
- 3-line executive summary
- bullet timeline with timestamps
- key insights / actionable points
- open questions or uncertainties
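The two subtitle preferences above can be combined into a selection rule. One reasonable reading (manual/auto outranking language) is sketched here; the track shape (dicts with `lang` and `kind`) is an assumption for illustration, not yt-dlp's actual format:

```python
def pick_subtitle(tracks):
    """Pick the preferred subtitle track, or None if there are none.

    `tracks`: assumed list of dicts with `lang` (e.g. "zh-Hans", "en")
    and `kind` ("manual" or "auto").
    """
    def score(t):
        manual_rank = 0 if t["kind"] == "manual" else 1  # manual beats auto
        if t["lang"].startswith("zh"):
            lang_rank = 0  # Chinese first
        elif t["lang"].startswith("en"):
            lang_rank = 1  # then English
        else:
            lang_rank = 2
        return (manual_rank, lang_rank)

    return min(tracks, key=score) if tracks else None
```

If the intent is instead that language outranks manual/auto, swapping the tuple order reverses the priority.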
## Outputs
For normal requests, return:
- Video topic
- Summary (in user's language)
- Key timestamps
- Notable quotes / insights
- If confidence is limited, say whether the result came from manual subtitles, auto subtitles, or partial metadata only.
## Files produced by the script
The fetch script writes an output directory containing:
- `summary.json` — chosen subtitle file, title, uploader, duration, and extraction status
- `transcript.txt` — plain text transcript when subtitles are available
- raw subtitle files from `yt-dlp` (VTT/SRT)
Read `summary.json` first to decide what to do next.
## Required runtime tools
- `yt-dlp` for metadata + subtitle extraction
- `deno` as JS runtime (required by yt-dlp 2026+)
- `ffmpeg` for media conversion and key-frame extraction (optional)
## Key commands
Basic extraction:
```bash
python3 scripts/fetch_youtube.py "<youtube-url>" --out /tmp/youtube-digest
```
With proxy:
```bash
python3 scripts/fetch_youtube.py "<youtube-url>" --proxy http://your-proxy:port --out /tmp/youtube-digest
```
Prefer specific subtitle languages:
```bash
python3 scripts/fetch_youtube.py "<youtube-url>" --langs zh.*,en.* --out /tmp/youtube-digest
```
## Failure handling
- If `yt-dlp` is missing, stop and install it instead of improvising.
- If YouTube blocks the request (429 or bot detection), try using a proxy or report the limitation.
- If only metadata is available, do not pretend you understood the full video.
- If subtitles are auto-generated, mention that wording may be noisy.
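The failure branches above can be sketched as a classifier over the fetch script's exit status and stderr. The matched strings are guesses at typical yt-dlp/shell messages, not exact wording:

```python
def classify_failure(returncode, stderr):
    """Map a failed fetch to one of the actions described above."""
    if returncode == 0:
        return "ok"
    text = stderr.lower()
    if "429" in text or "too many requests" in text or "sign in to confirm" in text:
        return "retry-with-proxy"   # rate limit / bot detection
    if "command not found" in text or "no such file" in text:
        return "install-yt-dlp"     # yt-dlp missing: stop and install
    return "report-limitation"      # anything else: report, don't improvise
```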
## References
- Read `references/install-and-deploy.md` for deployment instructions.
- Read `references/usage-patterns.md` for output templates for summaries, translations, or Q&A.