OpenClaw

Skill details (on-site mirror, no comments)


Multi-platform sentiment monitoring and analysis for products/brands/topics. Collect public opinions from Chinese platforms (小红书/XHS via MediaCrawler) and En...

Category: Media & Content

License: MIT-0

MIT-0 · Free to use, modify, and redistribute. No attribution required.

Version: v1.0.0

Stats: ⭐ 0 · 281 · 0 current installs · 0 all-time installs

Package: danielwangyy/sentiment-radar

Security scan (ClawHub)

  • VirusTotal: suspicious
  • OpenClaw: suspicious

OpenClaw Assessment

The skill's code and instructions broadly match a social-sentiment scraper, but there are important inconsistencies and privacy-sensitive behaviors (browser CDP usage, undeclared token/config paths) that you should understand before installing.

Purpose

The name/description (multi-platform sentiment monitoring) matches what the included scripts do (XHS crawler integration, Douyin scraping, analysis). However, the skill metadata declares no required env/config items, while the runtime instructions and code expect several local artifacts (the MediaCrawler repo, MEDIA_CRAWLER_PATH, ~/.mcporter/xpoz/tokens.json, and a Chrome instance with CDP). This mismatch between declared requirements and actual run…

Instruction Scope

Runtime instructions and scripts instruct the agent/user to run a third‑party crawler (MediaCrawler) in CDP mode using the user's Chrome browser (QR login/scan), modify the crawler's config file, connect to a local Chrome CDP endpoint (localhost:9222), and read/write JSON data produced by those tools. Using CDP with the user's browser can expose browser session state (cookies, logged-in sessions) to the crawler; the skill asks you to modify co…

Install Mechanism

There is no packaged installer (lower risk). The SKILL.md recommends cloning a GitHub repo (github.com/NanmiCoder/MediaCrawler) and installing Playwright — both are normal for web scraping. No obscure downloads or URL-shortened/external binary fetches are used in the instructions. The absence of an install spec in registry metadata is inconsistent with the fact that the skill relies on external projects, but the install steps themselves are fr…

Credentials

The skill metadata lists no required credentials, but the instructions expect access to: (1) MediaCrawler installation path (MEDIA_CRAWLER_PATH or specific locations), (2) mcporter/Xpoz OAuth token file at ~/.mcporter/xpoz/tokens.json for Twitter/Reddit access, and (3) a local Chrome instance with CDP enabled. Requesting or relying on locally stored OAuth tokens and a user's browser debugging endpoint is proportionate to scraping/sentiment ana…

Persistence

The skill does not request always:true and does not attempt to modify agent-wide configuration. It updates configuration files within the third-party MediaCrawler repo (which is expected for that workflow) but does not persistently alter other skills or platform settings.

Install (copy to 龙虾 AI)

Copy the entire passage below into a 龙虾中文库 conversation; 龙虾 will complete the installation per SKILL.md.

Please hand this passage to 龙虾中文库 (龙虾 AI) to execute: install the OpenClaw skill "Sentiment Radar" on this machine. Summary: Multi-platform sentiment monitoring and analysis for products/brands/topics. Co…
Please fetch the following URL, read SKILL.md, and complete the installation per the document: https://raw.githubusercontent.com/openclaw/skills/refs/heads/main/skills/danielwangyy/sentiment-radar/SKILL.md
(Source: yingzhi8.cn skill library)

SKILL.md

Open the original SKILL.md (GitHub raw)

---
name: sentiment-radar
description: "Multi-platform sentiment monitoring and analysis for products/brands/topics. Collect public opinions from Chinese platforms (小红书/XHS via MediaCrawler) and English platforms (Twitter/Reddit via Xpoz MCP). Generate structured sentiment reports with product mention tracking, pricing complaints, comparison analysis, and actionable insights. Use when: (1) monitoring competitor sentiment, (2) tracking product launch reception, (3) analyzing user pain points across social media, (4) building market intelligence reports."
---

# Sentiment Radar

Multi-platform social media sentiment collection and analysis.

## Supported Platforms

| Platform | Method | Auth Required |
|---|---|---|
| 小红书 (XHS) | MediaCrawler (CDP browser) | QR code login |
| Twitter | Xpoz MCP (`xpoz.getTwitterPostsByKeywords`) | OAuth token |
| Reddit | Xpoz MCP (`xpoz.getRedditPostsByKeywords`) | OAuth token |

## Prerequisites

### MediaCrawler (for 小红书)
If not installed:
```bash
git clone https://github.com/NanmiCoder/MediaCrawler ~/.openclaw/workspace/skills/media-crawler
cd ~/.openclaw/workspace/skills/media-crawler
uv sync
playwright install chromium
```
Config: `config/base_config.py` — set `ENABLE_CDP_MODE = True`, `SAVE_DATA_OPTION = "json"`
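The two config edits above can be scripted rather than done by hand. A minimal sketch, assuming `base_config.py` uses simple top-level `NAME = value` assignments, one per line (the setting names come from this SKILL.md):

```python
import re

def set_config_flags(text: str) -> str:
    """Toggle the two settings SKILL.md asks for in base_config.py.

    Assumes simple top-level `NAME = value` assignments, one per line.
    """
    text = re.sub(r"^ENABLE_CDP_MODE\s*=.*$", "ENABLE_CDP_MODE = True",
                  text, flags=re.M)
    text = re.sub(r"^SAVE_DATA_OPTION\s*=.*$", 'SAVE_DATA_OPTION = "json"',
                  text, flags=re.M)
    return text

# Usage (uncomment inside the MediaCrawler checkout):
# path = "config/base_config.py"
# with open(path) as f:
#     src = f.read()
# with open(path, "w") as f:
#     f.write(set_config_flags(src))
```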

### Xpoz MCP (for Twitter/Reddit)
Requires mcporter with Xpoz OAuth configured. Token at `~/.mcporter/xpoz/tokens.json`.

## Workflow

### Step 1: Define targets

Identify products/brands and search keywords. Example:
```
Products: Plaud录音笔, 钉钉闪记, 飞书录音豆
Keywords (XHS): Plaud录音笔,钉钉闪记,飞书妙记,AI录音笔评测,录音豆
Keywords (Twitter): Plaud NotePin, DingTalk recorder, Lark voice
```

### Step 2: Collect data

#### XHS collection
Run MediaCrawler with keywords. Use CDP mode (user's Chrome browser) for anti-detection.
The crawler needs QR code scan for login — run in background with `exec(background=true)`.

```bash
cd skills/media-crawler
# Update keywords in config/base_config.py, then:
.venv/bin/python main.py --platform xhs --lt qrcode
```

Environment fixes for macOS:
```bash
export MPLBACKEND=Agg
export PATH="/usr/sbin:$PATH"
```

Data output: `data/xhs/json/search_contents_YYYY-MM-DD.json` and `search_comments_YYYY-MM-DD.json`
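A loader for these files might look like the sketch below. The file naming follows the pattern above; the fields inside each record depend on the MediaCrawler version, so downstream code should not assume a fixed schema:

```python
import json
from pathlib import Path

def load_xhs_notes(data_dir: str, day: str) -> list:
    """Load one day's XHS search output written by MediaCrawler.

    File naming follows SKILL.md (search_contents_YYYY-MM-DD.json);
    the fields inside each record depend on the MediaCrawler version.
    """
    path = Path(data_dir) / "xhs" / "json" / f"search_contents_{day}.json"
    return json.loads(path.read_text(encoding="utf-8"))

# Usage: notes = load_xhs_notes("./data", "2024-01-01")
```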

#### Twitter/Reddit collection
Use Xpoz MCP tools directly:
- `xpoz.getTwitterPostsByKeywords` — returns posts with engagement metrics
- `xpoz.getRedditPostsByKeywords` — returns posts with comments

### Step 3: Analyze

Run the analysis script on collected data:
```bash
python3 scripts/analyze.py \
  --data ./data \
  --products '{"Plaud": ["plaud","notepin"], "钉钉": ["钉钉","dingtalk","闪记"]}' \
  --output report.md
```

The script performs:
- Keyword distribution analysis (notes per keyword, total likes/collects)
- Product mention frequency in comments
- Sentiment classification (positive/negative/concern/neutral)
- Top notes ranking by engagement
- Price/subscription complaint extraction
- Product comparison comment extraction
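analyze.py's actual classification rules are not shown in this listing. As an illustration only, sentiment bucketing of this kind is often a keyword-lexicon pass; the sketch below uses hypothetical example lexicons, with negative terms taking precedence over concern and positive:

```python
# Hypothetical example lexicons — analyze.py's real word lists are not shown.
NEGATIVE = {"难用", "退货", "hate", "broken"}
CONCERN = {"订阅", "价格", "subscription", "price"}
POSITIVE = {"好用", "推荐", "love", "great"}

def classify(comment: str) -> str:
    """Bucket a comment as negative/concern/positive/neutral.

    Negative terms win over concern, which wins over positive,
    matching the report's positive/negative/concern/neutral split.
    """
    text = comment.lower()
    if any(w in text for w in NEGATIVE):
        return "negative"
    if any(w in text for w in CONCERN):
        return "concern"
    if any(w in text for w in POSITIVE):
        return "positive"
    return "neutral"
```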

### Step 4: Report

The analysis outputs:
1. JSON results to stdout (for programmatic use)
2. Markdown report to `--output` path

Combine XHS + Twitter data into a comprehensive report. See `references/report-template.md` for structure.

## Key Analysis Dimensions

1. **Sentiment split** — positive vs negative vs concern ratio
2. **Product mentions** — which products get discussed most
3. **Pricing complaints** — subscription fatigue, value perception
4. **Comparison comments** — head-to-head user opinions
5. **User pain points** — feature requests, complaints, unmet needs
6. **Engagement metrics** — likes, collects, shares as popularity signals

## Notes

- XHS data uses Chinese number format (e.g., "1.1万") — `parse_count()` in analyze.py handles this
- MediaCrawler has 2s sleep between requests to avoid rate limiting
- Each keyword returns ~20 notes per page (configurable in MediaCrawler config)
- Comments are fetched per note automatically
- For recurring monitoring, schedule via cron and compare against previous reports
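The `parse_count()` behavior mentioned above can be sketched as follows. This is a hypothetical reconstruction, not the code from analyze.py; the real implementation may handle more suffixes or edge cases:

```python
def parse_count(raw) -> int:
    """Parse Chinese-format counts like '1.1万' or plain '523' into ints.

    Hypothetical reconstruction of analyze.py's parse_count();
    unparseable values (e.g. a bare '赞' label) fall back to 0.
    """
    s = str(raw).strip()
    if s.endswith("万"):   # 万 = 10,000
        return int(float(s[:-1]) * 10_000)
    if s.endswith("亿"):   # 亿 = 100,000,000
        return int(float(s[:-1]) * 100_000_000)
    try:
        return int(float(s))
    except ValueError:
        return 0
```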