Skill details (on-site mirror, no comments)
Author: azzar budiyanto @1999AZZAR
License: MIT-0 · free to use, modify, and redistribute; no attribution required.
Version: v3.5.1
Stats: ⭐ 0 · 974 · 4 current installs · 4 all-time installs
Package: 1999azzar/search-cluster
Security scan (ClawHub)
- VirusTotal: suspicious
- OpenClaw: benign
OpenClaw assessment
The skill's code, environment variables, and instructions are consistent with an aggregated search tool; no disproportionate credentials, hidden endpoints, or install-time downloads are present, though there are minor documentation/path mismatches and the scrapling fetcher should be run in an isolated venv.
Purpose
Name/description match the implemented behavior: the code queries Google CSE (optional), Wikipedia, Reddit, GNews RSS, and a local scrapling-based scraper. Optional env vars (GOOGLE_*, SCRAPLING_PYTHON_PATH, REDIS_*, SEARCH_USER_AGENT) are appropriate for these providers. Minor inconsistency: registry metadata listed no homepage while skill.json contains a GitHub homepage; SKILL.md refers to scripts/ subpaths (scripts/search-cluster.py, script…
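The optional credential gating described above can be sketched as follows. This is an illustrative reconstruction, not the skill's actual code: the review only says "GOOGLE_*", so the exact variable names (GOOGLE_API_KEY, GOOGLE_CSE_ID) and the provider labels are assumptions.

```python
import os

def enabled_providers() -> list:
    """Return the providers that would run, given current env vars.

    Hypothetical sketch: Wikipedia, Reddit, GNews RSS, and the scrapling
    scraper always run; Google CSE is enabled only when both optional
    credentials are present.
    """
    providers = ["wikipedia", "reddit", "gnews_rss", "scrapling"]
    # Google CSE is optional: gated on both assumed credential variables.
    if os.environ.get("GOOGLE_API_KEY") and os.environ.get("GOOGLE_CSE_ID"):
        providers.insert(0, "google_cse")
    return providers
```

With no Google credentials set, the aggregator would simply skip that provider rather than fail.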
Instruction scope
SKILL.md instructs creating a dedicated venv for scrapling and setting SCRAPLING_PYTHON_PATH; the runtime instructions and code keep network activity limited to provider endpoints (Google APIs, Wikipedia, Reddit, Google News RSS, DuckDuckGo via scrapling). The code uses subprocess.run to execute stealth_fetch.py with the query as an argument (explicit, not reading arbitrary files). There are no instructions to read unrelated system files or ex…
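The invocation pattern the review describes can be sketched like this: the scrapling venv's interpreter (from SCRAPLING_PYTHON_PATH) runs the packaged helper with the query as an explicit argv entry, with no shell interpolation. The function and helper-script names here are illustrative, taken from the review's description rather than the skill's source.

```python
import os
import subprocess

def build_fetch_command(query: str) -> list:
    """Build the argv for the scrapling helper.

    The query is passed as a plain list element, so there is no shell
    expansion and no reading of arbitrary files.
    """
    python = os.environ.get("SCRAPLING_PYTHON_PATH", "python3")
    return [python, "stealth_fetch.py", query]

def run_scrapling_fetch(query: str, timeout: int = 30) -> str:
    """Run the helper in the isolated venv and return its stdout (sketch)."""
    result = subprocess.run(
        build_fetch_command(query), capture_output=True, text=True, timeout=timeout
    )
    return result.stdout
```

Because the interpreter path comes from SCRAPLING_PYTHON_PATH, the scraper's dependencies never touch the system python.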
Installation mechanism
There is no install spec (instruction-only for the platform), which is low risk. SKILL.md requires creating a venv and pip-installing 'scrapling' there; skill.json lists python dependencies ('redis', 'scrapling') and binary 'python3' — this is consistent with the code (redis is optional and only imported when REDIS_HOST is set). No remote arbitrary downloads or extract steps are present.
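The "redis is optional and only imported when REDIS_HOST is set" behavior is a deferred-import pattern; a minimal sketch, assuming the declared REDIS_HOST/REDIS_PORT variables (the wrapper function itself is hypothetical):

```python
import os

def get_cache():
    """Return a Redis client if caching is configured, else None.

    The redis import is deferred, so the package is only required when
    REDIS_HOST is actually set.
    """
    host = os.environ.get("REDIS_HOST")
    if not host:
        return None  # caching disabled; no redis import occurs
    import redis  # deferred import: needed only when caching is enabled
    return redis.Redis(host=host, port=int(os.environ.get("REDIS_PORT", "6379")))
```

This matches the declared dependency list: 'redis' is listed but a bare install without REDIS_HOST never loads it.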
Credentials
All requested/declared environment variables are proportional and directly tied to functionality: optional Google API credentials for CSE, SCRAPLING_PYTHON_PATH for the scraper venv, REDIS_HOST/PORT for caching, and SEARCH_USER_AGENT for HTTP requests. No unrelated secrets or broad credential requests are present.
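The SEARCH_USER_AGENT wiring amounts to a configurable header on every outbound request; a sketch using the standard library (the default fallback string here is an assumption, not taken from the skill):

```python
import os
import urllib.request

def build_request(url: str) -> urllib.request.Request:
    """Build an HTTP request carrying the configurable User-Agent.

    SEARCH_USER_AGENT overrides the (assumed) default identifier.
    """
    ua = os.environ.get("SEARCH_USER_AGENT", "search-cluster/1.0")
    return urllib.request.Request(url, headers={"User-Agent": ua})
```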
Persistence
The skill does not request always:true, does not modify other skills, and asks for no system-wide configuration or persistent privileges. It executes a local helper script via subprocess but that helper is packaged with the skill; this is expected behavior for the scrapling provider and is limited to the skill's scope.
Overall conclusion
This skill appears to do what it claims: aggregate searches across Google CSE, Wikipedia, Reddit, Google News RSS, and a scrapling-based DuckDuckGo scraper. Before installing, consider the following: (1) Run the scrapling provider in a dedicated, isolated virtual environment as instructed and set SCRAPLING_PYTHON_PATH to that venv's python to avoid executing unreviewed code with your system python. (2) The SKILL.md references a scripts/ path w…
Installation (copy to 龙虾 AI)
Copy the entire block below into a 龙虾中文库 conversation and have 龙虾 complete the installation per SKILL.md.
Hand this block to 龙虾中文库 (龙虾 AI) to execute: install the OpenClaw skill "Search Cluster" on this machine. Summary: Aggregated search aggregator using Google CSE, GNews RSS, Wikipedia, Reddit, an…
Please fetch the following URL to read SKILL.md and complete the installation per the document: https://raw.githubusercontent.com/openclaw/skills/refs/heads/main/skills/1999azzar/search-cluster/SKILL.md
(Source: yingzhi8.cn skill library)
SKILL.md
No locally cached content yet; a details sync can be run in the background.