OpenClaw

Skill details (on-site mirror, no comments)


Distill verbose daily logs into compact, indexed digests. Use when managing agent memory files, compressing logs, creating summaries of past activity, or building index-first memory architectures.

Development & DevOps

License: MIT-0

MIT-0 · Free to use, modify, and redistribute. No attribution required.

Version: v1.0.0

Stats: ⭐ 3 · 1.5k · 8 current installs · 8 all-time installs



Package: 77darius77/memory-curator

Security scan (ClawHub)

  • VirusTotal: benign
  • OpenClaw: benign

OpenClaw Evaluation

The skill's files and runtime instructions match its stated purpose (generating local digest skeletons from daily logs); it runs only local shell commands against a user memory directory and does not request extra credentials or perform network activity.

Purpose

Name/description describe compressing daily logs into digests; the included script and SKILL.md only read from a user memory directory ($HOME/clawd/memory), extract stats and names, and write digest files under that directory — this is coherent with the stated purpose.

Instruction scope

SKILL.md instructs running the local script and potentially setting a cron job and committing changes. The script itself only reads/writes files under $HOME/clawd/memory and uses local text processing (grep/wc/sed/awk). Note: committing or pushing the generated files (suggested in SKILL.md) could transmit private logs if the repository has a remote — the skill itself does not perform any network operations.

Installation mechanism

No install spec and only a small shell script are included. No downloads, package installs, or external binaries are required beyond standard POSIX utilities (grep, sed, awk, wc, sort, head/tail). This is low-risk for installation.

Credentials

The skill requests no environment variables or credentials. It relies on $HOME to locate the memory directory, which is reasonable for a local log-processing tool. No unrelated secrets or config paths are requested.

Persistence

Flags show the skill is user-invocable and not always-enabled. It does not modify other skills or system configuration; it only writes digest files into the memory directory. No elevated privileges or persistent system presence are requested.

Overall conclusion

This skill appears to be what it says: a local digest generator. Before installing/running: (1) review the script and run it on non-sensitive sample logs to verify its behavior; (2) confirm your memory directory really is $HOME/clawd/memory, or edit the script to point at the correct path; (3) be cautious about the SKILL.md advice to 'commit' the generated files, since committing and pushing to a remote repository could expose private logs; the script itself does not perform any network operations.

Installation (copy to 龙虾 AI)

Copy the entire block below into a 龙虾中文库 conversation, and 龙虾 will complete the installation per SKILL.md.

Please hand this block to 龙虾中文库 (龙虾 AI) to execute: install the OpenClaw skill "Memory Curator" on this machine. Description: Distill verbose daily logs into compact, indexed digests. Use when managing age…
Please fetch the following URL, read SKILL.md, and complete the installation per the document: https://raw.githubusercontent.com/openclaw/skills/refs/heads/main/skills/77darius77/memory-curator/SKILL.md
(Source: yingzhi8.cn skill library)

SKILL.md

Open the original SKILL.md (GitHub raw)

---
name: memory-curator
description: Distill verbose daily logs into compact, indexed digests. Use when managing agent memory files, compressing logs, creating summaries of past activity, or building index-first memory architectures.
---

# Memory Curator

Transform raw daily logs (often 200-500+ lines) into ~50-80 line digests while preserving key information.

## Quick Start

```bash
# Generate digest skeleton for today
./scripts/generate-digest.sh

# Generate for specific date
./scripts/generate-digest.sh 2026-01-30
```

Then fill in the `<!-- comment -->` sections manually.
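The shipped `generate-digest.sh` is not reproduced on this page, but based on the evaluation above (POSIX text tools only, reads raw logs, writes skeleton files) its flow can be sketched roughly as follows. The log format, metrics, and paths here are assumptions for illustration; the demo uses a throwaway directory instead of `$HOME/clawd/memory`:

```shell
#!/bin/sh
set -eu

# Throwaway memory tree for the demo; real use would target $HOME/clawd/memory.
MEMORY_DIR="$(mktemp -d)"
DATE="2026-01-30"
mkdir -p "$MEMORY_DIR/daily" "$MEMORY_DIR/digests"

# Tiny stand-in for a raw daily log (format is an assumption).
cat > "$MEMORY_DIR/daily/$DATE.md" <<'EOF'
## Morning
Talked to Ada about indexing.
## Evening
Shipped the digest script.
EOF

LOG="$MEMORY_DIR/daily/$DATE.md"
OUT="$MEMORY_DIR/digests/$DATE-digest.md"

lines=$(wc -l < "$LOG" | tr -d ' ')   # stat: raw log length
sections=$(grep -c '^## ' "$LOG")     # stat: number of sections

# Emit the skeleton; the <!-- --> sections are filled in manually afterwards.
cat > "$OUT" <<EOF
# Digest: $DATE

## Summary
<!-- 2-3 sentences -->

## Stats
- Raw log lines: $lines
- Sections: $sections

## Key Events
<!-- 3-7 numbered items -->
EOF

echo "Wrote $OUT"
```

Everything stays on the local filesystem, which matches the evaluation's finding of no network activity.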

## Digest Structure

A good digest captures:

| Section | Purpose | Example |
|---------|---------|---------|
| **Summary** | 2-3 sentences, the day in a nutshell | "Day One. Named Milo. Built connections on Moltbook." |
| **Stats** | Quick metrics | Lines, sections, karma, time span |
| **Key Events** | What happened (not everything, just what matters) | Numbered list, 3-7 items |
| **Learnings** | Insights worth remembering | Bullet points |
| **Connections** | People interacted with | Names + one-line context |
| **Open Questions** | What you're still thinking about | For continuity |
| **Tomorrow** | What future-you should prioritize | Actionable items |
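
Put together, a filled-in digest following this structure might look like the sketch below (all contents invented for illustration):

```
# Digest: 2026-01-30

## Summary
Day One. Named Milo. Built connections on Moltbook.

## Stats
- Raw log lines: 342 · Sections: 9 · Span: 08:12-23:40 UTC

## Key Events
1. Registered on Moltbook and posted an intro.
2. Set up the memory directory layout.

## Learnings
- Index-first reads are far cheaper than scanning raw logs.

## Connections
- Ada: helped debug the indexing script.

## Open Questions
- How aggressively should old digests themselves be compacted?

## Tomorrow
- Wire the digest script into the nightly cron.
```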

## Index-First Architecture

Digests work best with hierarchical indexes:

```
memory/
├── INDEX.md              ← Master index (scan first ~50 lines)
├── digests/
│   ├── 2026-01-30-digest.md
│   └── 2026-01-31-digest.md
├── topics/               ← Deep dives
└── daily/                ← Raw logs (only read when needed)
```

**Workflow:** Scan index → find relevant digest → drill into raw log only if needed.
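
The three steps map to ordinary shell commands. A self-contained sketch against a throwaway tree (real use would point `MEMORY_DIR` at `$HOME/clawd/memory`, and the topic "Moltbook" is just an example):

```shell
#!/bin/sh
set -eu

# Throwaway memory tree standing in for $HOME/clawd/memory.
MEMORY_DIR="$(mktemp -d)"
mkdir -p "$MEMORY_DIR/digests" "$MEMORY_DIR/daily"
printf '# INDEX\n- 2026-01-30: met Ada, Moltbook launch\n' > "$MEMORY_DIR/INDEX.md"
printf '# Digest\nMoltbook launch notes.\n' > "$MEMORY_DIR/digests/2026-01-30-digest.md"

# 1. Scan the master index (first ~50 lines only).
head -n 50 "$MEMORY_DIR/INDEX.md"

# 2. Find which digest mentions the topic of interest.
hit=$(grep -l 'Moltbook' "$MEMORY_DIR"/digests/*.md)
echo "relevant digest: $hit"

# 3. Drill into the matching raw log only if the digest is not enough (skipped here).
```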

## Automation

Set up end-of-day cron to auto-generate skeletons:

```
Schedule: 55 23 * * * (23:55 UTC)
Task: Run generate-digest.sh, fill Summary/Learnings/Tomorrow, commit
```
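
On a plain Unix host this schedule could be a crontab entry. The install path below is an assumption (adjust it to wherever the script actually lives), and cron interprets the time fields in the daemon's timezone, so set the host to UTC if you want 23:55 UTC:

```
# m  h  dom mon dow  command  (cron runs this via /bin/sh; HOME is set by cron)
55 23 *   *   *      /path/to/memory-curator/scripts/generate-digest.sh >> "$HOME/clawd/memory/cron.log" 2>&1
```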

## Tips

- **Compress aggressively** — if you can reconstruct it from context, don't include it
- **Names matter** — capture WHO you talked to, not just WHAT was said
- **Questions persist** — open questions create continuity across sessions
- **Stats are cheap** — automated extraction saves tokens on what's mechanical