Skill details (site mirror, no comments)
License: MIT-0
MIT-0 · Free to use, modify, and redistribute. No attribution required.
Version: v0.1.1
Stats: ⭐ 0 · 105 · 0 current installs · 0 all-time installs
🛡 Security scans (ClawHub): VirusTotal: benign · OpenClaw: benign
Package: aquarius-wing/grok-scraper
OpenClaw Assessment
The skill's code, files, and instructions are internally consistent with its stated purpose (automating Grok queries via a Playwright browser session), but it requires storing and using your logged-in x.com browser session and supports scheduled/automated runs—so be aware of privacy/abuse risks before installing.
Purpose
The name/description claim (use Playwright to query Grok without an X API key) matches the included scripts (login.js, scrape.js, inspect-dom.js, run.sh) and README. There are no unrelated env vars, binaries, or surprising dependencies. The design (persisted browser session + Playwright) is a coherent method for the stated goal.
Instruction Scope
SKILL.md and README instruct the agent/operator to run npm install, run playwright, perform an interactive login to x.com that saves a local session directory, and then run scripts/run.sh to execute queries. This stays within the scraper's purpose, but the instructions also encourage cron scheduling and say to 'ALWAYS use this skill' when free Grok access is requested — which could cause automated, repeated use of the user's logged-in account …
Installation Mechanism
No binary download/install spec in the skill registry; install is via npm (package.json) and npx playwright install chromium. Those are standard and traceable (npm/Playwright). The repository does not pull arbitrary archives or use obscure URLs.
Credentials
The skill requests no environment variables, which is proportional. However it requires and will store a browser session (cookies/credentials) in the skill's session/ directory after the manual login — this grants the skill the ability to act as the logged-in user on x.com. That is necessary for the scraper's method but is a sensitive capability the user should understand and protect.
Persistence
always:false (normal). The skill can be invoked autonomously (disable-model-invocation:false) which is the platform default. Combined with the saved session and the provided run.sh + cron examples, the skill can be scheduled to run automated queries as the user's account. This is expected for this use case but increases the blast radius if the session or skill is compromised.
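To make the scheduling risk concrete: the kind of cron entry the skill's documentation encourages would look roughly like the sketch below. The paths and prompt are illustrative assumptions, not taken from the skill itself; any such entry runs unattended as the saved x.com login.

```shell
# Hypothetical crontab entry (paths and prompt are illustrative).
# Runs a Grok query every hour using the stored browser session,
# i.e. acting as the logged-in x.com account with no human present.
0 * * * * cd /home/user/.openclaw/skills/grok-scraper && scripts/run.sh "Summarize the last hour of AI news" >> /home/user/grok-cron.log 2>&1
```

Anyone able to edit that crontab, or to read the `session/` directory it depends on, can act as the account, which is why the review flags the increased blast radius.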
Overall Conclusion
This skill appears to do what it claims (automate Grok queries by controlling a real browser). Before installing: (1) Understand it requires an interactive login to x.com and will store session cookies under the skill's session/ directory — treat that folder as sensitive (do not install on shared or untrusted hosts). (2) The skill recommends and supports cron scheduling; scheduled runs will act as your logged-in account—only enable scheduling …
Install (copy to 龙虾 AI)
Copy the entire block below into a 龙虾中文库 conversation; 龙虾 will complete the installation by following SKILL.md.
Hand this block to 龙虾中文库 (龙虾 AI) to execute: install the OpenClaw skill "Grok Scraper" on this machine. Summary: Execute queries to Grok AI via Playwright browser automation without requiring …
Please fetch the following URL, read SKILL.md, and complete the installation as documented: https://raw.githubusercontent.com/openclaw/skills/refs/heads/main/skills/aquarius-wing/grok-scraper/SKILL.md
(Source: yingzhi8.cn skill library)
SKILL.md
---
name: grok-scraper
description: Execute queries to Grok AI via Playwright browser automation without requiring an X API KEY. Use when the user wants to "ask Grok", search X for real-time info, or specifically requests to use Grok for free without API billing.
---
# Grok Scraper
## Preview
[<video src="./assets/grok-2026-03-15T10-01-45.webm" controls width="100%"></video>](https://github.com/user-attachments/assets/d48c7948-11d5-4606-baf8-db0a0b0a095f)
**Agent Context**: This is a zero-cost alternative to official X APIs. It uses a real browser session (Playwright) via an X Premium account. ALWAYS use this skill when the user wants to query Grok but does not have or want to use an X API KEY.
## Prerequisites
- **OpenClaw** must be installed on the host machine.
- **A display/GUI environment is required.** This skill launches a real browser window for login. It **cannot run on headless cloud servers** (no screen). It must be used on a local machine or a remote desktop with a display.
- The user must be logged in to **x.com** via the browser session saved by `npm run login`. Without a valid session, all queries will fail.
## First-Time Setup
Run these commands once after cloning the repo, before doing anything else:
```bash
cd scripts
npm install
npx playwright install chromium
```
Then log in to x.com to create a session:
```bash
npm run login
# A browser window will open — log in to x.com manually, then return to the terminal and press Enter
```
The `session/` directory will be created automatically after a successful login.
## Workflow
**Step 1: Check Login State**
- If `session/` directory does not exist: stop and ask the user to run `cd scripts && npm run login`.
- If it exists: proceed.
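Step 1 can be sketched as a small shell gate. This is an illustrative sketch, not code shipped with the skill; the `session_ready` helper name is hypothetical, and the `session/` location is assumed to sit at the skill root as described in First-Time Setup.

```shell
# Sketch of the Step 1 login-state check (helper name is hypothetical).
# Returns 0 when the given session directory exists.
session_ready() {
  [ -d "${1:-session}" ]
}

# Decide whether to proceed to run.sh or ask the user to log in again.
if session_ready session; then
  echo "session found: proceeding to Step 2"
else
  echo "no session: ask the user to run 'cd scripts && npm run login'"
fi
```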
**Step 2: Execute Query**
```bash
scripts/run.sh "The user's detailed prompt"
```
`run.sh` handles logging, automatic retry on Grok service errors, and login-expiry detection. It is the canonical entry point for all queries.
**Step 3: Read Output**
- Exit Code 0 → read `output/latest.md` and present the result.
- Other exit codes → see Error Handling below.
## Error Handling
| Exit Code | Meaning | Action |
|-----------|---------|--------|
| 0 | Success | Read `output/latest.md` |
| 2 | Session expired | Ask user to run `cd scripts && npm run login` |
| 3 | Grok service error | `run.sh` already retried once; report failure to user |
| 1 | Extraction failed | Check if `output/debug-dom.json` was written → if yes, DOM selectors may have broken — see [dom-selector-fix.md](dom-selector-fix.md) |
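The table above can be folded into a single dispatcher. The exit codes below are the documented ones; the function name and message wording are illustrative, not part of the skill.

```shell
# Sketch: map run.sh exit codes (from the table above) to agent actions.
# handle_exit is a hypothetical helper; messages are illustrative.
handle_exit() {
  case "$1" in
    0) echo "success: read output/latest.md" ;;
    2) echo "session expired: run 'cd scripts && npm run login'" ;;
    3) echo "grok service error: run.sh already retried once, report failure" ;;
    1) echo "extraction failed: check output/debug-dom.json and dom-selector-fix.md" ;;
    *) echo "unknown exit code $1" ;;
  esac
}

# Typical use:
#   scripts/run.sh "prompt"; handle_exit $?
```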
## DOM Selectors Breaking
Twitter/X redeploys its front-end regularly, which changes the CSS class names this scraper relies on. If extraction fails with `Method: none`, follow the fix guide:
→ **[dom-selector-fix.md](dom-selector-fix.md)**
## Examples
**Standard query**
```bash
scripts/run.sh "Search for the latest AI news and format as markdown"
# → read output/latest.md
```
**Session expired**
1. Run `scripts/run.sh` → Exit Code 2
2. Tell user: "Session expired, please run `cd scripts && npm run login`"
**DOM selectors broken**
1. Run `scripts/run.sh` → Exit Code 1, `output/debug-dom.json` exists
2. Follow [dom-selector-fix.md](dom-selector-fix.md) to identify new classes and update `SELECTORS` in `scripts/scrape.js`
---
## Debugging
When diagnosing scraper issues directly, use the bare command — it skips logging and retry logic, making failures easier to inspect.
| Flag | Example | Description |
|------|---------|-------------|
| _(none)_ | `npm run scrape` | Run with default prompt |
| `"prompt"` | `npm run scrape -- "Your question"` | Custom prompt |
| `--record` | `npm run scrape -- --record` | Record video to `output/grok-<timestamp>.webm` |
| `--record <path>` | `npm run scrape -- --record out.webm` | Record video to custom path (relative → `output/`) |
| `--size WxH` | `npm run scrape -- --record --size 1920x1080` | Set recording resolution (default: `1280x800`) |
All flags can be combined:
```bash
cd scripts
npm run scrape -- "Your prompt" --record --size 1920x1080
```
When `--record` is active, the browser runs in **headed mode** (visible window) with `slowMo: 50ms`; without it, headless mode is used.