Skill details (site mirror, no reviews)
License: MIT-0
MIT-0 · Free to use, modify, and redistribute. No attribution required.
Version: v1.1.0
Stats: ⭐ 0 · 110 · 0 current installs · 0 all-time installs
Package: aurthes/aurthes-web-fetcher-v2
Security scan (ClawHub)
- VirusTotal: benign
- OpenClaw: benign
OpenClaw Assessment
The skill's code, instructions, and requirements are coherent with a web-page fetching/markdown-extraction purpose and do not request unrelated credentials or installs.
Purpose
Name and description match what is implemented: lightweight remote conversion services + browser fallback. The included Python script and SKILL.md only perform HTTP fetches and classification; no unrelated credentials, binaries, or system access are requested.
Instruction scope
Instructions stay within the stated purpose (try conversion services, then fall back to browser or search). They advise using OpenClaw browser tools or a Chrome relay/extension, and asking users to attach tabs when necessary. This is expected for accessing authenticated or JS-heavy pages, but it means the user may share a live tab with the agent, so users should be aware of what context they attach.
Install mechanism
No install spec — instruction-only with a small included Python script. Nothing is downloaded from third-party URLs during install, so there is no on-install code-fetch risk.
Credentials
Skill requests no environment variables, credentials, or config paths. The behavior (fetching public URLs and recommending browser attach for protected pages) is proportionate to the stated purpose.
Persistence
The `always` flag is false, and the skill does not request persistent system-wide privileges or modify other skills' configs. Autonomous invocation is allowed by default and is not in itself a concern here.
Overall conclusion
This skill appears to only fetch and return page text; it does not ask for credentials or install software. However, note two practical privacy points before installing: (1) The preferred fetch methods send the target URL (and the fetched page content as processed) to third-party conversion services (r.jina.ai, markdown.new, defuddle). If the page is sensitive or behind a private network, do not use those services — prefer a browser attach or …
Install (copy to 龙虾 AI)
Copy the entire block below into a 龙虾中文库 conversation; 龙虾 will complete the installation per SKILL.md.
Hand this block to 龙虾中文库 (龙虾 AI) to execute: install the OpenClaw skill "Web Fetcher" on this machine. Summary: Fetch web pages and extract readable content for AI use. Use when reading, summ…
Fetch the following URL, read SKILL.md, and complete the installation as documented: https://raw.githubusercontent.com/openclaw/skills/refs/heads/main/skills/aurthes/aurthes-web-fetcher-v2/SKILL.md
(Source: yingzhi8.cn skill library)
SKILL.md
---
name: web-fetcher
version: 1.2.0
description: Fetch web pages and extract readable content for AI use. Use when reading, summarizing, or crawling a specific URL or small set of URLs. Prefer low-friction URL-to-Markdown services first, then fall back to browser-based retrieval, search snippets, or cached/indexed copies when sites are protected by Cloudflare or similar bot checks.
---
# Web Fetcher
Fetch readable web content with a reliability-first fallback chain.
## Core rule
Do **not** promise direct access to every site. Some sites use Cloudflare, login walls, bot detection, or legal restrictions. In those cases, switch to the next fallback instead of insisting the first method should work.
## Preferred fetch order
### 1) Direct readable fetch
Try lightweight conversion services first:
1. **r.jina.ai**
```
https://r.jina.ai/http://example.com
```
2. **markdown.new**
```
https://markdown.new/https://example.com
```
3. **defuddle**
```
https://defuddle.md/https://example.com
```
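All three services share the same URL-prefix convention: the full target URL is appended to the service root. A minimal sketch of building the candidate list in the preferred order:

```python
# The three conversion services all use a URL-prefix scheme:
# the full target URL is appended to the service root.
CONVERSION_SERVICES = [
    "https://r.jina.ai/",
    "https://markdown.new/",
    "https://defuddle.md/",
]

def conversion_urls(target: str) -> list[str]:
    """Build candidate readable-fetch URLs in the preferred order."""
    return [prefix + target for prefix in CONVERSION_SERVICES]
```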
For deterministic retries, use the bundled script:
```bash
python {baseDir}/scripts/fetch_url.py "https://example.com/article"
```
The script returns JSON with:
- chosen method
- attempt history
- blocked/thin-content detection
- final content when successful
Use these when the user wants article text, page summaries, or structured extraction from normal public pages.
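The exact JSON schema of `fetch_url.py` is not documented here, so the key names below are assumptions based on the field list above. A sketch of interpreting such a report:

```python
import json

# Hypothetical report shape; the real fetch_url.py keys may differ.
SAMPLE_REPORT = """{
  "method": "r.jina.ai",
  "attempts": ["r.jina.ai"],
  "blocked": false,
  "content": "# Example Article\\n\\nBody text..."
}"""

def summarize_report(raw: str) -> str:
    """Turn the script's JSON output into a one-line status string."""
    report = json.loads(raw)
    if report.get("blocked") or not report.get("content"):
        return "fetch failed: " + ", ".join(report.get("attempts", []))
    return f"fetched via {report['method']} ({len(report['content'])} chars)"
```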
### 2) Detect failure modes early
Treat the fetch as failed or unreliable if you see signs like:
- `Just a moment...`
- `Performing security verification`
- `Enable JavaScript and cookies`
- CAPTCHA / challenge pages
- login wall instead of target content
- obvious truncation / missing article body
When this happens, **stop treating the result as the page content**.
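The markers above can be turned into a simple heuristic. This is a sketch, not the skill's own detection logic; the length threshold is an assumption:

```python
# Challenge-page phrases taken from the failure-mode list above.
BLOCK_MARKERS = (
    "just a moment",
    "performing security verification",
    "enable javascript and cookies",
    "captcha",
)

def looks_blocked(text: str, min_length: int = 200) -> bool:
    """Return True for challenge pages or suspiciously thin content."""
    lowered = text.lower()
    if any(marker in lowered for marker in BLOCK_MARKERS):
        return True
    # Thin content often means truncation or a missing article body.
    return len(text.strip()) < min_length
```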
### 3) Browser fallback for protected sites
For sites blocked behind Cloudflare or requiring real browser execution:
- Prefer a real browser session via OpenClaw browser tools when available.
- If the user is using the Chrome relay/extension, ask them to attach the tab and then inspect the live rendered page.
- Snapshot the page and extract only the needed fields.
Use browser fallback for:
- JS-heavy pages
- Cloudflare-protected pages
- sites that render key content after load
- pages where the direct markdown services return verification screens
### 4) Search / indexed fallback
If direct fetch and browser fetch are not available or still fail:
- search for the exact page / journal / article title
- use search snippets, publisher mirror pages, cached summaries, or secondary sources
- prefer official publisher pages when search can surface the needed field
- clearly label data as secondary-source derived if it was not read directly from the target page
This is often enough for metadata tasks like:
- editor-in-chief names
- journal impact factors
- publication frequency
- ISSN
- institutional affiliations
### 5) Partial-completion mode
If a site is inconsistent, return a mixed result instead of stalling:
- fill the rows that can be verified directly
- mark blocked / unresolved rows clearly
- explain what failed and which fallback was used
## Practical extraction strategy
### For one page
1. Try `r.jina.ai`
2. If blocked, try `markdown.new`
3. If blocked, try `defuddle`
4. If still blocked, use browser tools
5. If browser unavailable, use search/indexed fallback
6. Report confidence level
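The six steps above amount to a fallback chain. A minimal sketch, with the fetcher callables supplied by the caller and an illustrative confidence rule:

```python
from typing import Callable, Optional

Fetcher = Callable[[str], Optional[str]]

def fetch_with_fallbacks(url: str, fetchers: list[tuple[str, Fetcher]]) -> dict:
    """Try each named fetcher in order; stop at the first usable result.

    Each fetcher returns page text, or None when blocked or unavailable.
    """
    attempts = []
    for name, fetch in fetchers:
        attempts.append(name)
        content = fetch(url)
        if content:
            confidence = "high" if name == "direct" else "medium"
            return {"method": name, "content": content,
                    "attempts": attempts, "confidence": confidence}
    return {"method": None, "content": None,
            "attempts": attempts, "confidence": "low"}
```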
### For many similar pages
1. Fetch the index/list page first
2. Extract all target URLs or codes
3. Process pages in batches
4. Record success/failure per row
5. Retry only failures with stronger fallback methods
6. Deliver the best complete table possible
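Steps 3-5 of the batch flow — process in batches, record per-row outcomes, retry only failures — can be sketched as a first pass that hands its failures back for a stronger retry:

```python
def crawl_batch(urls, fetch):
    """One pass over all URLs; failures are returned for a stronger retry."""
    results, failures = {}, []
    for url in urls:
        content = fetch(url)
        if content:
            results[url] = content
        else:
            failures.append(url)
    return {"results": results, "failures": failures}
```

A second pass would call `crawl_batch` on the returned failures with a stronger fetcher (browser or search fallback) and merge the results.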
## Output guidance
When extracting structured data, prefer columns like:
- source URL
- extraction method (`direct`, `browser`, `search`, `secondary`)
- confidence (`high`, `medium`, `low`)
- note for blocked/unverified rows
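A sketch of building one such row, with the enum columns validated; the function and field names are illustrative, not part of the skill:

```python
VALID_METHODS = {"direct", "browser", "search", "secondary"}
VALID_CONFIDENCE = {"high", "medium", "low"}

def make_row(source_url: str, method: str, confidence: str, note: str = "") -> dict:
    """Build one structured-output row, validating the enum columns."""
    if method not in VALID_METHODS:
        raise ValueError(f"unknown extraction method: {method}")
    if confidence not in VALID_CONFIDENCE:
        raise ValueError(f"unknown confidence level: {confidence}")
    return {"source_url": source_url, "method": method,
            "confidence": confidence, "note": note}
```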
## Examples
- User: "Read this article" → direct fetch first
- User: "What does this page say?" → direct fetch, then browser fallback if blocked
- User: "Crawl this journal site" → index page first, then batched extraction with fallback chain
- User: "Cloudflare blocked it" → switch to browser or search fallback, do not keep retrying the same failed method