Skill details (on-site mirror, no comments)
Author: Tony Li @astonysh
License: MIT-0. Free to use, modify, and redistribute; no attribution required.
Version: v0.1.0
Stats: ⭐ 2 · 675 · 5 current installs · 5 all-time installs
Package: astonysh/deepreader-skill
Security scan (ClawHub)
- VirusTotal: benign
- OpenClaw: suspicious
OpenClaw assessment
The skill largely does what it says (fetches and saves web content), but there are inconsistencies and a meaningful security surface: it fetches any URL it sees (no allowlist), auto-triggers on any message containing a URL, expects dependencies it never declares (no install spec), and carries typos in SKILL.md and the manifest. Together these warrant caution.
Install (copy to the Lobster AI)
Copy the entire block below into a Lobster Chinese Library conversation; Lobster will complete the installation per SKILL.md.
Hand this block to the Lobster Chinese Library (Lobster AI) to execute: install the OpenClaw skill "DeepReader" on this machine. Summary: The default web content reader for OpenClaw. Reads X (Twitter), Reddit, YouTube….
Fetch the following address to read SKILL.md and complete the installation as documented: https://raw.githubusercontent.com/openclaw/skills/refs/heads/main/skills/astonysh/deepreader-skill/SKILL.md
(Source: yingzhi8.cn skill library)
SKILL.md
---
name: DeepReader
description: The default web content reader for OpenClaw. Reads X (Twitter), Reddit, YouTube, and any webpage into clean Markdown — zero API keys required. Use when you need to ingest social media posts, articles, or video transcripts into agent memory.
---
# DeepReader
The default web content reader for OpenClaw agents. Automatically detects URLs in messages, fetches content using specialized parsers, and saves clean Markdown with YAML frontmatter to agent memory.
## Use when
1. A user shares a **tweet, thread, or X article** and you need to read its content
2. A user shares a **Reddit post** and you need the discussion + top comments
3. A user shares a **YouTube video** and you need the transcript
4. A user shares **any blog, article, or documentation URL** and you need the text
5. You need to **batch-read multiple URLs** from a single message
## Supported sources
| Source | Method | API Key? |
|--------|--------|----------|
| Twitter / X | FxTwitter API + Nitter fallback | None |
| Reddit | .json suffix API | None |
| YouTube | youtube-transcript-api | None |
| Any URL | Trafilatura + BeautifulSoup | None |
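The Reddit row relies on a public behavior of reddit.com: appending `.json` to a post path returns the post and comment tree as JSON with no API key. As a sketch of that rewrite (the helper name `reddit_json_url` is hypothetical, not part of this skill), using only the standard library:

```python
from urllib.parse import urlparse, urlunparse

def reddit_json_url(url: str) -> str:
    """Rewrite a Reddit post URL into its public .json API form.

    Reddit serves the post plus comment tree as JSON when ".json"
    is appended to the path -- no API key or OAuth required.
    """
    parts = urlparse(url)
    path = parts.path.rstrip("/")        # drop any trailing slash
    if not path.endswith(".json"):
        path += ".json"
    return urlunparse(parts._replace(path=path))

print(reddit_json_url("https://www.reddit.com/r/python/comments/abc123/my_post/"))
# https://www.reddit.com/r/python/comments/abc123/my_post.json
```

A real fetcher would then GET that URL with a descriptive `User-Agent`, since Reddit throttles anonymous default clients.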
## Usage
```python
from deepreader_skill import run
# Automatic — triggered when message contains URLs
result = run("Check this out: https://x.com/user/status/123456")
# Reddit post with comments
result = run("https://www.reddit.com/r/python/comments/abc123/my_post/")
# YouTube transcript
result = run("https://youtube.com/watch?v=dQw4w9WgXcQ")
# Any webpage
result = run("https://example.com/blog/interesting-article")
# Multiple URLs at once
result = run("""
https://x.com/user/status/123456
https://www.reddit.com/r/MachineLearning/comments/xyz789/
https://example.com/article
""")
```
## Output
Content is saved as `.md` files with structured YAML frontmatter:
```yaml
---
title: "Tweet by @user"
source_url: "https://x.com/user/status/123456"
domain: "x.com"
parser: "twitter"
ingested_at: "2026-02-16T12:00:00Z"
content_hash: "sha256:..."
word_count: 350
---
```
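A downstream consumer can recover the metadata by splitting the frontmatter from the Markdown body. The sketch below is a minimal stdlib-only reader (a real consumer would likely use PyYAML); `split_frontmatter` is a hypothetical helper, and the field names are taken from the example above:

```python
def split_frontmatter(text: str) -> tuple[dict, str]:
    """Split a saved .md file into (metadata dict, Markdown body).

    Minimal parser: handles the flat `key: "value"` lines shown
    above, not arbitrary YAML.
    """
    lines = text.splitlines()
    assert lines[0] == "---", "expected opening frontmatter delimiter"
    end = lines.index("---", 1)          # closing delimiter
    meta = {}
    for line in lines[1:end]:
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip().strip('"')
    body = "\n".join(lines[end + 1:])
    return meta, body

doc = '''---
title: "Tweet by @user"
word_count: 350
---
Tweet text here.'''
meta, body = split_frontmatter(doc)
print(meta["title"])       # Tweet by @user
print(meta["word_count"])  # 350 (as a string; this parser does no type coercion)
```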
## Configuration
| Variable | Default | Description |
|----------|---------|-------------|
| `DEEPREEDER_MEMORY_PATH` | `../../memory/inbox/` | Where to save ingested content |
| `DEEPREEDER_LOG_LEVEL` | `INFO` | Logging verbosity |
## How it works
```
URL detected → is Twitter/X? → FxTwitter API → Nitter fallback
→ is Reddit? → .json suffix API
→ is YouTube? → youtube-transcript-api
→ otherwise → Trafilatura (generic)
```
Triggers automatically when any message contains `https://` or `http://`.
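The routing above can be sketched as a simple hostname check. This is a hypothetical reconstruction of the dispatch logic, not the skill's actual code:

```python
from urllib.parse import urlparse

def pick_parser(url: str) -> str:
    """Map a URL's host to the parser named in the flow above."""
    host = urlparse(url).hostname or ""
    host = host.removeprefix("www.")
    if host in ("twitter.com", "x.com"):
        return "twitter"     # FxTwitter API, Nitter fallback
    if host.endswith("reddit.com"):
        return "reddit"      # .json suffix API
    if host in ("youtube.com", "youtu.be"):
        return "youtube"     # youtube-transcript-api
    return "generic"         # Trafilatura + BeautifulSoup

print(pick_parser("https://x.com/user/status/123456"))                  # twitter
print(pick_parser("https://www.reddit.com/r/python/comments/abc123/"))  # reddit
print(pick_parser("https://example.com/blog/post"))                     # generic
```

The unconditional trigger on `http(s)://` is exactly the security surface the OpenClaw assessment flags: any URL in any message is fetched, with no allowlist in front of this dispatch.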