Skill details (on-site mirror, no comments)
License: MIT-0
MIT-0 · Free to use, modify, and redistribute. No attribution required.
Version: v1.0.4
Stats: ⭐ 4 · 2k · 8 current installs · 9 all-time installs
Package: alexunitario-sketch/prompt-assemble
Security scan (ClawHub)
- VirusTotal: benign
- OpenClaw: suspicious
OpenClaw assessment
The skill's purpose (token-safe prompt assembly) matches the included code and instructions, but there are a few inconsistencies and red flags (a flagged prompt-injection pattern, a truncated/broken code path, and contradictory token-margin guidance) that warrant manual review before you use it in production or give it access to sensitive memories.
Purpose
Name, description, SKILL.md and the included Python implementation all describe the same functionality (two-phase prompt assembly, memory retrieval, token safety). The skill does not request unrelated binaries, environment variables, or config paths — the declared requirements are proportionate to the stated purpose.
Instruction scope
Instructions are narrowly focused on assembling prompts and memory handling. They do instruct you to copy the provided script into your agent and call its build() API, which is expected. Two points to review: (1) a pre-scan flag indicates 'system-prompt-override' patterns in SKILL.md — while the doc mostly says 'Never downgrade system prompt', the scanner flagged content that could be used for prompt injection strategies and should be manually…
Install mechanism
There is no install spec and no downloads; the skill is instruction-only plus a Python file. That is low-risk from an install perspective because nothing external is pulled in at install time. The code would be copied into the agent's codebase when used, so standard code-audit precautions apply.
Credentials
The skill requests no environment variables or credentials. Its memory guidelines permit storing personal data (name, timezone, preferences), which is functionally reasonable for a memory system but requires you to ensure appropriate access controls and retention policies; nothing in the skill asks for unrelated secrets or cloud credentials.
Persistence
always is false and the skill does not demand persistent platform privileges. It suggests copying code into your agent (normal). It does not attempt to modify other skills or system-wide settings in the provided materials.
Install (copy to 龙虾 AI)
Copy the entire block below into a 龙虾中文库 conversation; 龙虾 will complete the installation per SKILL.md.
Please hand this block to 龙虾中文库 (龙虾 AI) for execution: install the OpenClaw skill "Prompt Safe" on this machine. Summary: Token-safe prompt assembly with memory orchestration. Use for any agent that ne….
Please fetch the following URL, read SKILL.md, and complete the installation per the document: https://raw.githubusercontent.com/openclaw/skills/refs/heads/main/skills/alexunitario-sketch/prompt-assemble/SKILL.md
(Source: yingzhi8.cn skill library)
SKILL.md
---
name: prompt-assemble
description: Token-safe prompt assembly with memory orchestration. Use for any agent that needs to construct LLM prompts with memory retrieval. Guarantees no API failure due to token overflow. Implements two-phase context construction, memory safety valve, and hard limits on memory injection.
---
# Prompt Assemble
## Overview
A standardized, token-safe prompt assembly framework that guarantees API stability. Implements **Two-Phase Context Construction** and **Memory Safety Valve** to prevent token overflow while maximizing relevant context.
**Design Goals:**
- ✅ Never fail due to memory-related token overflow
- ✅ Memory is always discardable enhancement, never rigid dependency
- ✅ Token budget decisions centralized at prompt assemble layer
## When to Use
Use this skill when:
1. Building or modifying any agent that constructs prompts
2. Implementing memory retrieval systems
3. Adding new prompt-related logic to existing agents
4. Any scenario where token budget safety is required
## Core Workflow
```
User Input
↓
Need-Memory Decision
↓
Minimal Context Build
↓
Memory Retrieval (Optional)
↓
Memory Summarization
↓
Token Estimation
↓
Safety Valve Decision
↓
Final Prompt → LLM Call
```
## Phase Details
### Phase 0: Base Configuration
```python
# Model Context Windows (2026-02-04)
# - MiniMax-M2.1: 204,000 tokens (default)
# - Claude 3.5 Sonnet: 200,000 tokens
# - GPT-4o: 128,000 tokens
MAX_TOKENS = 204000 # Set to your model's context limit
SAFETY_MARGIN = 0.75 * MAX_TOKENS # Conservative: 75% threshold = 153,000 tokens
MEMORY_TOP_K = 3 # Max 3 memories
MEMORY_SUMMARY_MAX = 3  # Max 3 lines per memory summary
```
**Design Philosophy**:
- Leave 25% buffer for safety (model overhead, estimation errors, spikes)
- Better to underutilize capacity than to overflow
### Phase 1: Minimal Context
- System prompt
- Recent N messages (N=3, trimmed)
- Current user input
- **No memory by default**
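The minimal context above can be sketched as a small helper. This is an illustrative sketch, not the skill's shipped code; the function name and list-of-strings message shape are assumptions:

```python
def build_minimal_context(system_prompt, recent_messages, user_input, n=3):
    """Phase 1 sketch: system prompt + last N trimmed messages + current input.

    No memory is attached at this stage; memory is added (or skipped) later
    by the safety valve.
    """
    # Keep only the last N turns, stripped of surrounding whitespace.
    trimmed = [m.strip() for m in recent_messages[-n:]]
    return [system_prompt, *trimmed, user_input]

ctx = build_minimal_context(
    "You are a helpful assistant.",
    ["hi", "hello", "how are you?", "fine, thanks"],
    "what should we do next?",
)
```

With four prior messages and `n=3`, the oldest message is dropped and the result is system prompt, three recent turns, then the current input.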
### Phase 2: Memory Need Decision
```python
def need_memory(user_input):
    triggers = [
        "previously",
        "earlier we discussed",
        "do you remember",
        "as I mentioned before",
        "continuing from",
        "before we",
        "last time",
        "previously mentioned",
    ]
    for trigger in triggers:
        if trigger.lower() in user_input.lower():
            return True
    return False
```
### Phase 3: Memory Retrieval (Optional)
```python
memories = memory_search(query=user_input, top_k=MEMORY_TOP_K)
for mem in memories:
summarized_memories.append(summarize(mem, max_lines=MEMORY_SUMMARY_MAX))
```
### Phase 4: Token Estimation
Calculate estimated tokens for base_context + summarized_memories.
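The skill does not specify an estimator (see `references/token_estimation.md`). One common heuristic is roughly 4 characters per token for English text; a minimal sketch under that assumption:

```python
def estimate_tokens(parts, chars_per_token=4):
    """Rough token estimate: total characters / average chars-per-token.

    The 4-chars-per-token ratio is an English-text heuristic, not a value
    taken from this skill; swap in a real tokenizer for accuracy.
    """
    return sum(len(p) for p in parts) // chars_per_token

base_context = ["You are a helpful assistant.", "What did we decide last time?"]
summarized_memories = ["User prefers concise answers."]
estimated_tokens = estimate_tokens(base_context + summarized_memories)
```

Since the estimate feeds the safety valve, a conservative (over-counting) estimator is preferable to an optimistic one.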
### Phase 5: Safety Valve (Critical)
```python
if estimated_tokens > SAFETY_MARGIN:
base_context.append("[System Notice] Relevant memory skipped due to token budget.")
return assemble(base_context)
```
**Hard Rules:**
- ❌ Never downgrade system prompt
- ❌ Never truncate user input
- ❌ No "lucky splicing"
- ✅ Only memory layer is expendable
### Phase 6: Final Assembly
```python
final_prompt = assemble(base_context + summarized_memories)
return final_prompt
```
## Memory Data Standards
### Allowed in Long-Term Memory
- ✅ User preferences / identity / long-term goals
- ✅ Confirmed important conclusions
- ✅ System-level settings and rules
### Forbidden in Long-Term Memory
- ❌ Raw conversation logs
- ❌ Reasoning traces
- ❌ Temporary discussions
- ❌ Information recoverable from chat history
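The allowed/forbidden lists above can be enforced as a write gate on long-term memory. The category names below are illustrative labels, not identifiers from the skill:

```python
# Hypothetical category labels mapping to the allowed/forbidden lists above.
ALLOWED = {"preference", "identity", "long_term_goal",
           "confirmed_conclusion", "system_rule"}
FORBIDDEN = {"raw_log", "reasoning_trace",
             "temp_discussion", "chat_recoverable"}

def may_store(category):
    """Gate long-term memory writes: unknown categories are rejected by default."""
    return category in ALLOWED and category not in FORBIDDEN
```

Defaulting unknown categories to "rejected" keeps the memory store conservative, matching the skill's discardable-enhancement philosophy.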
## Quick Start
Copy `scripts/prompt_assemble.py` to your agent and use:
```python
from prompt_assemble import build_prompt
# In your agent's prompt construction:
final_prompt = build_prompt(user_input, memory_search_fn, get_recent_dialog_fn)
```
## Resources
### scripts/
- `prompt_assemble.py` - Complete implementation with all phases (PromptAssembler class)
### references/
- `memory_standards.md` - Detailed memory content guidelines
- `token_estimation.md` - Token counting strategies