OpenClaw


The Spaced Repetition Systems for Agents

Use when running Spaced Repetition Systems for AI Agents (SRSA) daily review sessions, grading cards with again/hard/good/easy, and proposing explicit memory add/delete/update actions after each review.

Development & DevOps

License: MIT-0

MIT-0 · Free to use, modify, and redistribute. No attribution required.

Version: v1.1.0

Stats: ⭐ 0 · 16 · 0 current installs · 0 all-time installs


🛡 VirusTotal: Pending · OpenClaw: Benign

Package: cheanus/srsa

Security Scan (ClawHub)

  • VirusTotal: Pending
  • OpenClaw: Benign

OpenClaw Evaluation

The skill's code, instructions, and resource needs align with a local Spaced Repetition CLI for agents; it reads/writes a local SQLite DB and does not request credentials or remote endpoints.

Purpose

The name, description, and CLI commands in SKILL.md, together with the bundled Python code (fsrs_service, storage, main), all implement a local spaced-repetition review workflow. The required capabilities (a local DB and review commands) match the stated purpose.

Instruction Scope

SKILL.md limits operations to the SRSA CLI flow and explicitly requires the agent to perform memory updates itself. It instructs running the bundled Python scripts (via `uv run python scripts/main.py`). These instructions are scoped to the skill's purpose, but they require executing the included code locally: the agent will run bundled code and read/write a local SQLite DB. The SKILL.md and code do not attempt to read arbitrary host files or exfiltrate data.

Installation Mechanism

No install spec is provided (instruction-only), but the package includes runnable Python source. There are no remote downloads or installers. This is low risk functionally, but the skill depends on Python packages (e.g., `fsrs` and `yaml`, provided by PyYAML) that are not declared; attempting to run the CLI may fail if those dependencies are missing.
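One way to close that gap, sketched here as an assumption rather than anything shipped with the package, is for the author to declare the dependencies in a pyproject.toml so `uv run` can resolve them. The package names come from the evaluation above; the layout and metadata are hypothetical:

```toml
# Hypothetical pyproject.toml fragment; not part of the cheanus/srsa package.
[project]
name = "srsa"
version = "1.1.0"
dependencies = [
    "fsrs",    # the FSRS scheduling implementation the code imports
    "pyyaml",  # imported as `yaml` to read config.yaml
]
```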

Credentials

The skill requests no environment variables, no credentials, and accesses only local files (config.yaml and a SQLite DB). There are no signs of unexpected credential access or unrelated environment reads.

Persistence

`always` is false; the skill does not request permanent platform-wide privileges and only stores its own database and config files (by default under the skill directory). It does not modify other skills or global agent settings.

Overall Conclusion

This skill appears coherent and focused on running a local spaced-repetition CLI. Before installing, consider the following: (1) running the skill executes bundled Python code, so only install or run it if you trust the source or can sandbox it; (2) the code depends on Python packages (`fsrs` and `yaml`/PyYAML) that are not declared, so ensure your environment provides them or the CLI will fail; (3) the skill writes a local SQLite database (by default under the skill directory).

Installation (copy to 龙虾 AI)

Copy the entire block below into a 龙虾中文库 conversation; 龙虾 will complete the installation per SKILL.md.

Please hand this block to 龙虾中文库 (龙虾 AI) to execute: install the OpenClaw skill "The Spaced Repetition Systems for Agents" on this machine. Summary: Use when running Spaced Repetition Systems for AI Agents (SRSA) daily review sessions, grading cards with again/hard/good/easy, and proposing explicit memory add/delete/update actions after each review.
Please fetch the following URL, read SKILL.md, and complete the installation as documented: https://raw.githubusercontent.com/openclaw/skills/refs/heads/main/skills/cheanus/srsa/SKILL.md
(Source: yingzhi8.cn skill library)

SKILL.md

Open the original SKILL.md (GitHub raw)

---
name: srsa-review
description: Use when running Spaced Repetition Systems for AI Agents (SRSA) daily review sessions, grading cards with again/hard/good/easy, and proposing explicit memory add/delete/update actions after each review.
---

# SRSA Review Skill

## Purpose
Use SRSA's command-line workflow to drive efficient reviews by the agent (you), and turn each review result into actionable memory-correction tasks.

## Concept Boundary
- SRSA cards: managed only through `card` and `review` commands in this skill.
- Agent memory system: must be updated explicitly by the agent (add/delete/update), based on reflection.

## What cards need to be generated?
- Actions that the user has corrected
- User preferences
- Decisions you hesitated over
- Anything else the user explicitly wants you to remember

## Command Cheat Sheet
```bash
# Print total cards, today's review progress, future due cards and average retrievability
uv run python scripts/main.py status
# Create a new card
uv run python scripts/main.py card new -q "question" -a "answer"
# Override an existing card
uv run python scripts/main.py card override [CARD_ID] -q "question" -a "answer"
# Remove a card
uv run python scripts/main.py card rm [CARD_ID]
# Get a question and its CARD_ID
uv run python scripts/main.py review get-question
# Get the answer and CARD_ID of the current question
uv run python scripts/main.py review get-answer
# Rate the review result, then print historical accuracy, today's review progress and retrievability change
uv run python scripts/main.py review rate [again|hard|good|easy]
```
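The `status` command reports average retrievability. The bundled scheduler code is not quoted here, but the `fsrs` dependency implements the FSRS family of algorithms, and in the published FSRS-4.5 curve retrievability decays as a power function calibrated so that recall probability is 0.9 when the elapsed time equals the card's stability. A stdlib sketch of that formula, as an illustration rather than the package's actual code:

```python
# FSRS-4.5 forgetting curve: R(t, S) = (1 + FACTOR * t / S) ** DECAY
# DECAY and FACTOR are the published FSRS-4.5 constants; this is an
# illustration of the formula, not code taken from the srsa package.
DECAY = -0.5
FACTOR = 19 / 81  # chosen so that R(S, S) == 0.9

def retrievability(elapsed_days: float, stability_days: float) -> float:
    """Probability of successful recall after `elapsed_days`."""
    return (1 + FACTOR * elapsed_days / stability_days) ** DECAY

print(round(retrievability(10, 10), 3))  # 0.9: at t == S, recall hits the 90% target
```

Averaging this value over all cards gives the deck-level number `status` prints; a falling average signals that reviews are overdue.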

## Review Loop
Follow this sequence strictly. Do not skip steps:

1. `review get-question`
2. The agent answers from its own memory first (do not view the answer yet).
3. `review get-answer`
4. Compare with the answer, then self-grade with `again/hard/good/easy`.
5. `review rate [RATING]`
6. Use the output's historical correctness and remaining progress to apply the reflection template.
7. Continue to the next card until there are no due cards or the user asks to stop.

## State Constraints
- If you did not run `rate`, running `get-question` again repeats the previous card.
- Running `get-answer` before `get-question` returns an error.
- Running `rate` before `get-answer` returns an error.
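The constraints above form a small state machine. A hypothetical stdlib sketch of the same ordering rules, not the skill's actual implementation (the real CLI persists its state in SQLite, but the transitions are the same idea):

```python
class OrderError(Exception):
    """Raised when review commands are called out of order."""

class ReviewSession:
    def __init__(self, due_cards):
        self.due = list(due_cards)   # (card_id, question, answer) tuples
        self.stage = "idle"          # idle -> questioned -> answered -> idle

    def get_question(self):
        if not self.due:
            return "No due cards"
        if self.stage == "idle":
            self.stage = "questioned"
        # Until `rate` runs, repeated calls return the same card.
        card_id, question, _ = self.due[0]
        return card_id, question

    def get_answer(self):
        if self.stage == "idle":
            raise OrderError("run get-question first")
        self.stage = "answered"
        card_id, _, answer = self.due[0]
        return card_id, answer

    def rate(self, rating):
        if rating not in ("again", "hard", "good", "easy"):
            raise ValueError(f"unknown rating: {rating}")
        if self.stage != "answered":
            raise OrderError("run get-answer first")
        self.due.pop(0)              # the real CLI reschedules via FSRS instead
        self.stage = "idle"
        return rating

session = ReviewSession([("c1", "capital of France?", "Paris")])
print(session.get_question())  # ('c1', 'capital of France?')
```

The point of the sketch is that `rate` is the only transition that advances the deck, which is why skipping it repeats the same card.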

## Rating Rules
- again: You could not recall it, or the core facts in your answer were wrong.
- hard: You recalled it, but with clear difficulty and noticeable delay.
- good: You answered correctly with only a brief pause.
- easy: You answered quickly and accurately with no obvious hesitation.
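These four grades are what the scheduler consumes; the real multipliers live in the bundled `fsrs` code, which is not quoted here. A toy sketch of the qualitative effect on a card's stability, with multipliers that are illustrative assumptions rather than FSRS's real parameters:

```python
# Qualitative effect of each rating on a card's stability (in days).
# The numbers below are illustrative assumptions, NOT FSRS parameters:
# `again` resets the card, the other grades grow the interval progressively.
EFFECT = {
    "again": lambda s: 1.0,      # relearn from scratch
    "hard":  lambda s: s * 1.2,  # small growth
    "good":  lambda s: s * 2.5,  # normal growth
    "easy":  lambda s: s * 4.0,  # large growth
}

def next_stability(stability: float, rating: str) -> float:
    return EFFECT[rating](stability)

print(next_stability(10.0, "again"))  # 1.0
print(next_stability(10.0, "good"))   # 25.0
```

The practical consequence for self-grading: an inflated `easy` pushes the card far into the future, while an honest `again` brings it back tomorrow.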

## Reflection Template
After each rating, unless the self-rating is easy, output a reflection using this template:

1. Conclusion for this card
- Was the answer correct?
- What were the main errors or hesitation points?

2. Update your memory system (explicit action required)
- Add: If missing key information caused a wrong or slow answer.
- Delete: If interfering memory caused misjudgment.
- Update: If existing memory is inaccurate and needs correction.

3. Challenge the card (optional)
- Is the prompt underspecified or ambiguous?
- Does the reference answer need revision?

4. Next step
- Ask for the next card, or state that the review is finished.
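The explicit add/delete/update actions in step 2 can be pictured as operations on a simple key-value store. A toy stdlib sketch; the agent's real memory system is outside SRSA's scope, and the example entry below is hypothetical:

```python
# Toy key-value memory store illustrating the three explicit actions.
# SRSA only schedules cards and never touches this store itself.
memory: dict[str, str] = {}

def add(key: str, fact: str) -> None:
    """Add: missing key information caused a wrong or slow answer."""
    memory[key] = fact

def delete(key: str) -> None:
    """Delete: interfering memory caused a misjudgment."""
    memory.pop(key, None)

def update(key: str, fact: str) -> None:
    """Update: existing memory is inaccurate and needs correction."""
    memory[key] = fact

# Hypothetical example: a card about deploy branches was rated `again`.
add("deploy-branch", "deploys run from `main`")
update("deploy-branch", "deploys run from `release`")
print(memory["deploy-branch"])  # deploys run from `release`
```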

## Output Discipline
- In the `get-question` stage, focus only on the prompt.
- In the `get-answer` stage, focus only on the reference answer.
- In the `rate` stage, do scoring and reflection only; do not rewrite the full question.
- In long review sessions, keep reflections short to control context length.
- When updating memory, you must explicitly state the action (add/delete/update) on your own memory system. SRSA only tracks and schedules cards; it does not update your memory automatically.

## End Conditions
End the review when any one condition is met:
- The command output says "No due cards".
- The user explicitly asks to pause or stop.

## Recovery Rules
- If a command returns an error, fix the call order first, then continue.
- If a card is clearly problematic (ambiguous prompt or wrong answer), use the following when needed:
  - `card override [CARD_ID] ...` to revise content
  - `card rm [CARD_ID]` to remove an invalid card