OpenClaw

Skill details (on-site mirror, no comments)


Agent memory with ALMA meta-learning, LLM fact extraction, and full-text search. Observer calls remote LLM APIs (OpenAI/Anthropic/Gemini). ALMA and Indexer work offline.

Development & DevOps

Author: Artale @arosstale

License: MIT-0

MIT-0 · Free to use, modify, and redistribute. No attribution required.

Version: v2.0.1

Stats: ⭐ 0 · 306 · 0 current installs · 0 all-time installs


Package: arosstale/openclaw-memory-2

Security scan (ClawHub)

  • VirusTotal: suspicious
  • OpenClaw: suspicious

OpenClaw Evaluation

The skill appears to implement the described memory components and only needs LLM API keys for the Observer. However, the registry metadata does not declare those environment variables, and there are a few small mismatches you should understand before installing.

Purpose

The code implements ALMA (local), Indexer (local file indexing), and Observer (remote LLM calls), which matches the skill name and description. However, the registry metadata lists no required environment variables or credentials, while the SKILL.md and the Observer code clearly require an LLM API key (an OpenAI, Anthropic, or Google Gemini key passed as apiKey). This metadata mismatch is unexpected and should be corrected by the publisher.

Declared scope

Runtime instructions and the SKILL.md confine network calls to LLM provider APIs (OpenAI, Anthropic, Gemini) and file reads to workspace Markdown files. The Observer sends conversation text to third-party LLM endpoints (expected behavior). The SKILL.md documents limitations (in-memory DB, simplified ranking) that align with the code.

Install mechanism

There is no install spec in the registry (it is instruction-only); the README suggests installing/publishing via npm or cloning the GitHub repo. No unusual download URLs, extraction steps, or native binaries are present, and package.json lists no runtime dependencies. Install risk from the package itself is low.

Credentials

The Observer requires an LLM API key: the code checks process.env.OPENAI_API_KEY or process.env.ANTHROPIC_API_KEY, or accepts apiKey in its config. The registry metadata nevertheless lists no required environment variables, so the skill will operate only if keys are provided, yet a user or system might never be warned. Also, SKILL.md mentions Gemini but does not name a specific environment variable for the Google API key; the code expects the caller to pass apiKey or…

Persistence

The skill is not force-included (always: false), does not request system-level privileges, and does not modify other skills or global configuration. It reads files from the workspace only when the indexer is invoked with a workspace path supplied by the caller.

Install (copy to 龙虾 AI)

Copy the entire block below into a 龙虾中文库 (龙虾 AI) conversation; the assistant will complete the installation per SKILL.md.

Please hand this block to 龙虾中文库 (龙虾 AI) for execution: install the OpenClaw skill "Openclaw Memories" on this machine. Summary: Agent memory with ALMA meta-learning, LLM fact extraction, and full-text search….
Please fetch the following URL, read SKILL.md, and complete the installation as documented: https://raw.githubusercontent.com/openclaw/skills/refs/heads/main/skills/arosstale/openclaw-memory-2/SKILL.md
(Source: yingzhi8.cn skill library)

SKILL.md

Open the original SKILL.md (GitHub raw)

---
name: openclaw-memories
description: Agent memory with ALMA meta-learning, LLM fact extraction, and full-text search. Observer calls remote LLM APIs (OpenAI/Anthropic/Gemini). ALMA and Indexer work offline.
---

# OpenClaw Memory System

Three components for agent memory:

1. **ALMA** — Evolves memory designs through mutation + evaluation (offline)
2. **Observer** — Extracts structured facts from conversations via LLM API (requires API key)
3. **Indexer** — Full-text search over workspace Markdown files (offline)

## Environment Variables

Observer requires one of:
- `OPENAI_API_KEY`
- `ANTHROPIC_API_KEY`
- Or pass `apiKey` in config

ALMA and Indexer require no keys or network access.
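The key lookup described above can be sketched as a small resolver. Note this is an assumption about precedence (an explicit `apiKey` in config winning over environment variables), since the SKILL.md only says "one of"; the function name `resolveApiKey` is hypothetical, not the package's API.

```javascript
// Hypothetical sketch of the Observer's key resolution: explicit config
// wins, then OPENAI_API_KEY, then ANTHROPIC_API_KEY, else null (which
// the Observer treats as "LLM unavailable").
function resolveApiKey(config = {}, env = process.env) {
  return config.apiKey || env.OPENAI_API_KEY || env.ANTHROPIC_API_KEY || null;
}
```

A null result is what triggers the graceful-failure path documented for the Observer below.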

## How It Works

### ALMA (Algorithm Learning via Meta-learning Agents)
Proposes memory system designs, evaluates them, and keeps the best. Uses Gaussian mutation and simulated annealing to explore the design space.

```
alma.propose() → design
alma.evaluate(design.id, metrics) → score  
alma.best() → top design
alma.top(5) → leaderboard
```
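The propose/evaluate/best loop above can be illustrated with a toy in-memory implementation. Everything here is an assumption for illustration: the `AlmaSketch` class, the single `chunkSize` design parameter, and the scoring rule are not the package's real internals, only a minimal stand-in showing Gaussian mutation plus score-ranked selection.

```javascript
// Toy sketch of the ALMA loop: mutate one design parameter with Gaussian
// noise, score designs via caller-supplied metrics, and rank the results.
class AlmaSketch {
  constructor() {
    this.designs = new Map();
    this.nextId = 1;
    this.base = { chunkSize: 512 }; // hypothetical design parameter
  }
  // Gaussian noise via the Box-Muller transform.
  gaussian(mean, stdDev) {
    const u = 1 - Math.random(); // in (0, 1], so log() is safe
    const v = Math.random();
    return mean + stdDev * Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
  }
  propose() {
    const design = {
      id: this.nextId++,
      chunkSize: Math.max(64, Math.round(this.gaussian(this.base.chunkSize, 64))),
      score: null,
    };
    this.designs.set(design.id, design);
    return design;
  }
  evaluate(id, metrics) {
    const d = this.designs.get(id);
    d.score = metrics.recall - 0.1 * metrics.latencyMs; // toy scoring rule
    return d.score;
  }
  best() {
    return this.top(1)[0];
  }
  top(n) {
    return [...this.designs.values()]
      .filter((d) => d.score !== null)
      .sort((a, b) => b.score - a.score)
      .slice(0, n);
  }
}
```

A full simulated-annealing variant would additionally shrink the mutation scale and occasionally accept worse designs; that part is omitted here for brevity.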

### Observer
Sends conversation history to an LLM, gets back structured facts:
- Kind: world fact / biographical / opinion / observation
- Priority: high / medium / low
- Entities: mentioned people/places
- Confidence: 0.0–1.0 for opinions

Fails gracefully — returns empty array if LLM is unavailable.
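The graceful-failure contract can be sketched as a wrapper around the LLM call. The `extractFacts` function, the `callLlm` callback, and the JSON output format are assumptions based on this SKILL.md's description, not the package's actual code.

```javascript
// Hypothetical sketch of the Observer contract: any error (missing key,
// network failure, unparsable output) yields an empty fact list.
async function extractFacts(conversation, callLlm) {
  try {
    const raw = await callLlm(conversation); // assumed to return a JSON array string
    const facts = JSON.parse(raw);
    // Keep only facts matching the documented kind/priority vocabulary.
    return facts.filter(
      (f) =>
        ['world fact', 'biographical', 'opinion', 'observation'].includes(f.kind) &&
        ['high', 'medium', 'low'].includes(f.priority)
    );
  } catch {
    return []; // degrade gracefully when the LLM is unavailable
  }
}
```

Callers can therefore treat the Observer's output as always-iterable and never need their own try/catch around it.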

### Indexer
Chunks workspace Markdown files and indexes them for search:
- `MEMORY.md` — core facts
- `memory/YYYY-MM-DD.md` — daily logs
- `bank/entities/*.md` — entity summaries
- `bank/opinions.md` — beliefs with confidence

```
indexer.index() → count of chunks indexed
indexer.search('query') → ranked results
indexer.rebuild() → re-index from scratch
```
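The chunk-and-search flow above can be sketched in a few lines. This is a simplified stand-in consistent with the documented limitation (in-memory storage, simplified ranking): the `IndexerSketch` class, its `{ path: text }` input map, and the term-overlap scoring are illustrative assumptions, not the package's real implementation.

```javascript
// Toy sketch of the Indexer: split Markdown into paragraph chunks and
// rank search results by naive term overlap (no real FTS5 under the hood).
class IndexerSketch {
  constructor() {
    this.chunks = [];
  }
  index(files) {
    // `files` maps a path to its Markdown text, standing in for workspace reads.
    for (const [path, text] of Object.entries(files)) {
      for (const para of text.split(/\n\s*\n/)) {
        if (para.trim()) this.chunks.push({ path, text: para.trim() });
      }
    }
    return this.chunks.length; // count of chunks indexed
  }
  search(query) {
    const terms = query.toLowerCase().split(/\s+/);
    return this.chunks
      .map((c) => ({
        ...c,
        score: terms.filter((t) => c.text.toLowerCase().includes(t)).length,
      }))
      .filter((c) => c.score > 0)
      .sort((a, b) => b.score - a.score); // ranked results
  }
  rebuild(files) {
    this.chunks = []; // re-index from scratch
    return this.index(files);
  }
}
```

Substring matching like this over-matches (e.g. "tea" hits "team"); a real FTS5 index would tokenize and rank with BM25 instead.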

## Install

```bash
npm install @artale/openclaw-memory
```

## Limitations

- Indexer uses an in-memory mock database, not real SQLite FTS5. Search works but ranking is simplified.
- Observer calls remote APIs — not offline. Only ALMA and Indexer work without network.
- No dashboard — removed in v2 for simplicity.

## Source

5 files, 578 lines, 0 runtime dependencies.

https://github.com/arosstale/openclaw-memory