Skill details (site mirror, no comments)
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Version: v1.0.0
Stats: ⭐ 0 · 501 · 0 current installs · 0 all-time installs
Package: alinxus/usewhisper-autohook
Security scan (ClawHub)
- VirusTotal: benign
- OpenClaw: benign
OpenClaw assessment
The skill is internally consistent with its stated purpose (injecting and ingesting Whisper Context memory and optionally acting as a local OpenAI/Anthropic proxy); there are no unexplained credentials, installs, or actions — but it will transmit full user and assistant messages to an external Whisper Context service and persist a small state file locally, so review privacy implications before use.
Purpose
The skill's name and SKILL.md describe automatic pre-query context retrieval and post-response ingestion for a Whisper Context service; the code implements those exact actions, plus an optional local proxy to reduce tokens. Required env vars (WHISPER_CONTEXT_API_KEY, WHISPER_CONTEXT_PROJECT, optional WHISPER_CONTEXT_API_URL) match the described external service. No unrelated credentials or binaries are requested.
Instruction scope
Instructions ask the agent to call get_whisper_context before responding and ingest_whisper_turn after responses (and provide a system-prompt snippet to enforce this). This is consistent with the skill's goal but is prescriptive ('Always do this. Never skip.') — functionally normal for a memory helper, but it means the agent will routinely send user messages and assistant replies to an external service (privacy/PD concerns). The SKILL.md docum…
Installation mechanism
This is an instruction-only skill with an included Node script; there is no install spec that downloads arbitrary code. The repository ships a single .mjs file which is run via node; no external install URLs or archive extracts are used.
Credentials
Declared env vars (WHISPER_CONTEXT_API_KEY, WHISPER_CONTEXT_PROJECT, optional WHISPER_CONTEXT_API_URL) are proportionate to the purpose. The script optionally uses OPENAI_API_KEY or ANTHROPIC_API_KEY when run as a proxy; SKILL.md documents this. Users should be aware that running the proxy requires providing an upstream API key (the script will use it to call the upstream provider) and that the Whisper Context API key will be used to send full…
Persistence
The skill persists a per-user/session context_hash to the local filesystem (in the user's home directory) to enable delta compression — this is consistent with its stated behavior but creates local files. The skill does not request always:true and does not modify other skills or system-wide agent settings. If you run the HTTP proxy, it will accept requests and forward them to an upstream provider using your upstream API key — run it only on tr…
Overall conclusion
This skill appears to do what it says: it queries a Whisper Context service before responses and ingests turns afterwards, and it can run a local proxy to reduce token usage. Before installing or running it, consider: 1) Privacy: the script will transmit full user messages and assistant replies to the external Whisper Context API (and may auto-create the project); do not use for sensitive or regulated data unless you trust the provider and und…
Install (copy to 龙虾 AI)
Copy the entire block below into a 龙虾中文库 conversation; 龙虾 will complete the installation per SKILL.md.
Hand this block to 龙虾中文库 (龙虾 AI) to execute: install the OpenClaw skill "usewhisper-autohook" on this machine. Summary: Automatically fetches and injects Whisper memory context before responses and i…
Please fetch the following URL, read SKILL.md, and complete the installation per the document: https://raw.githubusercontent.com/openclaw/skills/refs/heads/main/skills/alinxus/usewhisper-autohook/SKILL.md
(Source: yingzhi8.cn skill library)
SKILL.md
---
name: usewhisper-autohook
version: 1.0.0
description: "Auto-hook tools for OpenClaw: query Whisper Context before every generation, ingest after every turn. Built for Telegram agents (stable user_id/session_id)."
author: "usewhisper"
metadata:
  openclaw:
    requires:
      bins: ["node"]
      env: ["WHISPER_CONTEXT_API_KEY", "WHISPER_CONTEXT_PROJECT"]
      optional_env: ["WHISPER_CONTEXT_API_URL"]
security:
  notes:
    - Makes outbound HTTPS requests to the Whisper Context API using a user-provided API key.
    - Does not require additional npm dependencies.
    - Review the script before use.
---
# usewhisper-autohook (OpenClaw Skill)
This skill is a thin wrapper designed to make "automatic memory" easy:
- `get_whisper_context(user_id, session_id, current_query)` for pre-response context injection
- `ingest_whisper_turn(user_id, session_id, user_msg, assistant_msg)` for post-response ingestion
It defaults to the token-saving settings you almost always want:
- `compress: true`
- `compression_strategy: "delta"`
- `use_cache: true`
- `include_memories: true`
It also persists the last `context_hash` locally (per `api_url + project + user_id + session_id`) so delta compression works by default without you needing to pass `previous_context_hash`.
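For reference, a minimal sketch of how that per-tuple state could be keyed and stored. The file name, key derivation, and layout below are hypothetical; the shipped script defines its own:

```js
// Illustrative sketch only; the actual script keeps its own state layout.
// Assumed here: one JSON file in the home directory, keyed by a SHA-256
// hash of the (api_url, project, user_id, session_id) tuple.
import { createHash } from "node:crypto";
import { readFileSync, writeFileSync } from "node:fs";
import { homedir } from "node:os";
import { join } from "node:path";

const STATE_FILE = join(homedir(), ".usewhisper-autohook-state.json"); // hypothetical path

function stateKey(apiUrl, project, userId, sessionId) {
  // One entry per (api_url, project, user_id, session_id) tuple.
  return createHash("sha256")
    .update([apiUrl, project, userId, sessionId].join("\n"))
    .digest("hex")
    .slice(0, 16);
}

function rememberContextHash(key, contextHash) {
  let state = {};
  try { state = JSON.parse(readFileSync(STATE_FILE, "utf8")); } catch {}
  state[key] = contextHash; // reused as previous_context_hash on the next call
  writeFileSync(STATE_FILE, JSON.stringify(state));
}
```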
## Install (ClawHub)
```bash
npx clawhub@latest install usewhisper-autohook
```
## Setup
Set env vars wherever OpenClaw runs your agent:
```bash
WHISPER_CONTEXT_API_URL=https://context.usewhisper.dev
WHISPER_CONTEXT_API_KEY=YOUR_KEY
WHISPER_CONTEXT_PROJECT=openclaw-yourname
```
Notes:
- `WHISPER_CONTEXT_API_URL` is optional (defaults to `https://context.usewhisper.dev`).
- The helper will auto-create the project on first use if it does not exist yet.
## The "Auto Loop" Prompt (Copy/Paste)
Add this to your agent's **system instruction** (or equivalent):
```text
Before you think or respond to any message:
1) Call get_whisper_context with:
user_id = "telegram:{from_id}"
session_id = "telegram:{chat_id}"
current_query = the user's message text
2) If the returned context is not empty, prepend it to your prompt as:
"Relevant long-term memory:n{context}nnNow respond to:n{user_message}"
After you generate your final response:
1) Call ingest_whisper_turn with the same user_id and session_id and:
user_msg = the full user message
assistant_msg = your full final reply
Always do this. Never skip.
```
If you are not on Telegram, keep the same structure: the important part is that `user_id` and `session_id` are stable.
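For example, a hypothetical helper for other channels might look like this (the platform prefixes are illustrative; any scheme works as long as the same person and conversation always map to the same strings):

```js
// Hypothetical helper: build stable IDs for any channel.
function whisperIds(platform, fromId, chatId) {
  return {
    user_id: `${platform}:${fromId}`,    // e.g. "telegram:123", "discord:42"
    session_id: `${platform}:${chatId}`, // e.g. "telegram:456", "discord:99"
  };
}
```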
## If Your Agent Still Replays Full Chat History (Proxy Mode)
If you cannot control how your agent/framework constructs prompts (it always sends the full conversation history), a system prompt cannot reduce token spend: the tokens are already sent to the model.
In that case, run the built-in OpenAI-compatible proxy so the **network payload is actually reduced**. The proxy (sketched below):
- receives `POST /v1/chat/completions`
- queries Whisper memory
- strips chat history down to system + last user message
- injects `Relevant long-term memory: ...`
- calls your upstream OpenAI-compatible provider
- ingests the turn back into Whisper
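In outline, the history-stripping step amounts to something like the following. This is a sketch of the idea, not the script's actual code, and it assumes an OpenAI-style `messages` array:

```js
// Sketch: drop full chat history, keep system + last user message,
// and inject the retrieved memory as an extra system message.
function rewriteMessages(messages, memoryContext) {
  const system = messages.filter((m) => m.role === "system");
  const lastUser = [...messages].reverse().find((m) => m.role === "user");
  const rewritten = [...system];
  if (memoryContext) {
    rewritten.push({
      role: "system",
      content: `Relevant long-term memory: ${memoryContext}`,
    });
  }
  if (lastUser) rewritten.push(lastUser);
  return rewritten; // everything else in the history is dropped
}
```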
Start the proxy:
```bash
export OPENAI_API_KEY="YOUR_UPSTREAM_KEY"
node usewhisper-autohook.mjs serve_openai_proxy --port 8787
```
Then point your agent’s OpenAI base URL to `http://127.0.0.1:8787` (exact env/config depends on your agent).
If your agent supports overriding the upstream base URL, you can set:
- `OPENAI_BASE_URL` (for OpenAI-compatible upstreams)
- `ANTHROPIC_BASE_URL` (for Anthropic upstreams)
Or pass `--upstream_base_url` when starting the proxy.
For correct per-user/session memory, pass headers on each request (see the example after this list):
- `x-whisper-user-id: telegram:{from_id}`
- `x-whisper-session-id: telegram:{chat_id}`
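As a quick wiring check, a request against the local proxy might look like this (Node 18+ global `fetch`; the model name and message are placeholders, and upstream auth is assumed to come from the `OPENAI_API_KEY` you exported when starting the proxy):

```js
// Example request to the local OpenAI-compatible proxy.
// The x-whisper-* headers carry the stable IDs per user/session.
const res = await fetch("http://127.0.0.1:8787/v1/chat/completions", {
  method: "POST",
  headers: {
    "content-type": "application/json",
    "x-whisper-user-id": "telegram:123",
    "x-whisper-session-id": "telegram:456",
  },
  body: JSON.stringify({
    model: "gpt-4o-mini", // placeholder model name
    messages: [{ role: "user", content: "What did we decide last time?" }],
  }),
});
console.log(await res.json());
```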
### Anthropic Native Proxy (`/v1/messages`)
If your agent uses **Anthropic's native API** (not OpenAI-compatible), run the Anthropic proxy instead:
```bash
export ANTHROPIC_API_KEY="YOUR_ANTHROPIC_KEY"
node usewhisper-autohook.mjs serve_anthropic_proxy --port 8788
```
Then point your agent’s Anthropic base URL to `http://127.0.0.1:8788`.
Pass IDs via headers (recommended):
- `x-whisper-user-id: telegram:{from_id}`
- `x-whisper-session-id: telegram:{chat_id}`
If you do not pass headers, the proxies will attempt to infer stable IDs from OpenClaw's system prompt / session key if present. This is best-effort; headers are still the most reliable.
## CLI Usage (what the tools call)
All commands print JSON to stdout.
### Get packed context
```bash
node usewhisper-autohook.mjs get_whisper_context \
  --current_query "What did we decide last time?" \
  --user_id "telegram:123" \
  --session_id "telegram:456"
```
### Ingest a completed turn
```bash
node usewhisper-autohook.mjs ingest_whisper_turn \
  --user_id "telegram:123" \
  --session_id "telegram:456" \
  --user_msg "..." \
  --assistant_msg "..."
```
For large content, pass JSON via stdin:
```bash
echo '{ "user_msg": "....", "assistant_msg": "...." }' | node usewhisper-autohook.mjs ingest_whisper_turn --session_id "telegram:456" --user_id "telegram:123" --turn_json -
```
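If you are driving the CLI from code rather than a shell, the same stdin pattern might look like this (a sketch; the argument names follow the commands above):

```js
// Sketch: invoke the CLI from Node and pipe the turn JSON via stdin.
import { execFileSync } from "node:child_process";

const turn = { user_msg: "....", assistant_msg: "...." };
const out = execFileSync(
  "node",
  [
    "usewhisper-autohook.mjs",
    "ingest_whisper_turn",
    "--user_id", "telegram:123",
    "--session_id", "telegram:456",
    "--turn_json", "-",
  ],
  { input: JSON.stringify(turn), encoding: "utf8" }
);
console.log(JSON.parse(out)); // commands print JSON to stdout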
## Output Format
`get_whisper_context` returns:
- `context`: the packed context string to prepend
- `context_hash`: a short hash you can store and pass back as `previous_context_hash` next time (optional)
- `meta`: cache hit and compression info (useful for debugging)
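Illustratively, a response could be shaped like this. Only the three top-level keys are documented above; the values and the field names under `meta` are assumptions:

```js
// Hypothetical example of a get_whisper_context result.
const example = {
  context: "User prefers concise answers; last session covered deployment.",
  context_hash: "a1b2c3d4", // pass back as previous_context_hash next time
  meta: { cache_hit: true, compression: "delta" }, // field names assumed
};
```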