OpenClaw

Skill details (on-site mirror, no comments)


Plans and documents detailed data migrations, including schema mapping, ETL pipeline design, validation tests, rollback strategies, and runbook creation.

Development & DevOps

License: MIT-0

MIT-0 · Free to use, modify, and redistribute. No attribution required.

Version: v1.0.0

Stats: ⭐ 0 · 508 · 1 current install · 1 all-time install


Package: 1kalin/afrexai-data-migration

Security scan (ClawHub)

  • VirusTotal: benign
  • OpenClaw: benign

OpenClaw Assessment

This instruction-only skill is internally consistent with its name and purpose (planning data migrations) and does not request extra credentials, binaries, or install any code.

Purpose

The SKILL.md describes planning, schema mapping, ETL design, validation, and cutover runbooks, which matches the 'Data Migration Planner' name. The skill requires no environment variables, binaries, or installs, which is proportionate for a planning/consulting tool.

Instruction Scope

Instructions are limited to producing migration plans, templates, and checklists. They do instruct the agent to 'inventory' source objects and record counts given source/target details — which implies the agent (or user) will provide or use data/system access. The SKILL.md does not instruct any explicit data collection, file reads, or transmission to external endpoints, but it also lacks explicit guidance about handling credentials, redacting …

Install Mechanism

No install spec and no code files — instruction-only. Nothing will be written to disk or downloaded during install.

Credentials

The skill declares no required environment variables, credentials, or config paths. There are no unexplained or disproportionate credential requests.

Persistence

Default privileges are used (always: false, model invocation enabled). The skill does not request permanent presence or system-level changes.

Overall Conclusion

This skill appears coherent and low-risk because it is instruction-only and does not request credentials or install code. Before using it in production:

1. Avoid pasting live production credentials into any agent UI unless you trust the agent runtime and have least-privilege credentials (use read-only, scoped accounts).
2. Prefer running discovery on anonymized or copy datasets when possible; redact PII/HIPAA-sensitive fields.
3. Confirm any d…

Install (copy to 龙虾 AI)

Copy the entire block below into a 龙虾中文库 conversation; 龙虾 will complete the installation per SKILL.md.

Hand this block to 龙虾中文库 (龙虾 AI) to execute: install the OpenClaw skill "Data Migration Planner" on this machine. Summary: Plans and documents detailed data migrations, including schema mapping, ETL pip…
Please fetch the following URL to read SKILL.md and complete the installation as documented: https://raw.githubusercontent.com/openclaw/skills/refs/heads/main/skills/1kalin/afrexai-data-migration/SKILL.md
(Source: yingzhi8.cn skill library)

SKILL.md

Open the original SKILL.md (GitHub raw)

# Data Migration Planner

Plan, execute, and validate data migrations between systems. Covers schema mapping, ETL pipeline design, rollback strategies, and post-migration validation.

## What It Does

Given source and target system details, this skill:
1. Maps source → target schemas with field-level transformation rules
2. Generates an ETL pipeline plan with staging, transform, and load phases
3. Creates validation queries (row counts, checksums, referential integrity)
4. Builds a rollback plan with point-of-no-return criteria
5. Produces a migration runbook with go/no-go gates

## Usage

Tell your agent:
- "Plan a migration from Salesforce to HubSpot CRM"
- "Create a data migration runbook for moving from MySQL to PostgreSQL"
- "Map our legacy ERP data to the new system schema"

## Migration Framework

### Phase 1: Discovery
- Inventory all source tables/objects and record counts
- Document data types, constraints, and relationships
- Identify data quality issues (nulls, duplicates, orphans)
- Map business rules that affect data interpretation
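The discovery steps above can be sketched in a few lines of Python. This is a hypothetical example against an in-memory SQLite database standing in for whatever the real source system is; the `inventory` helper and the sample tables are illustrative, not part of the skill.

```python
import sqlite3

def inventory(conn: sqlite3.Connection) -> dict:
    """Return {table_name: row_count} for every user table in the source."""
    tables = [row[0] for row in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")]
    return {t: conn.execute(f"SELECT COUNT(*) FROM {t}").fetchone()[0]
            for t in tables}

# Stand-in source: two tables with sample rows
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE contacts (id INTEGER PRIMARY KEY, account_id INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [(1, "Acme"), (2, "Globex")])

counts = inventory(conn)
# counts == {"accounts": 2, "contacts": 0}
```

Against a real source you would point the same idea at that system's catalog (e.g. `information_schema.tables` on MySQL/PostgreSQL) and persist the counts as the baseline for Phase 4 reconciliation.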

### Phase 2: Schema Mapping
For each source entity, document:
| Source Field | Type | Target Field | Type | Transform | Notes |
|---|---|---|---|---|---|
| (field) | (type) | (field) | (type) | (rule) | (edge cases) |
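A mapping table like the one above can be encoded directly as data so the same spec drives both documentation and the transform code. A minimal sketch, with hypothetical field names and rules:

```python
# Each entry mirrors one row of the mapping table:
# target field -> (source field, transform rule)
MAPPING = {
    "full_name": ("name",   str.strip),
    "email_lc":  ("email",  str.lower),
    "is_active": ("status", lambda s: s == "active"),
}

def transform_record(src: dict) -> dict:
    """Apply the field-level mapping to one source record."""
    return {tgt: fn(src[src_field])
            for tgt, (src_field, fn) in MAPPING.items()}

row = {"name": " Ada Lovelace ", "email": "ADA@Example.COM", "status": "active"}
result = transform_record(row)
# result == {"full_name": "Ada Lovelace", "email_lc": "ada@example.com",
#            "is_active": True}
```

Keeping the mapping declarative makes the "Notes / edge cases" column testable: each rule is a plain function you can unit-test against known awkward values.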

### Phase 3: ETL Pipeline
```
Extract → Stage (raw) → Clean → Transform → Validate → Load → Verify
```
- **Extract**: Full vs incremental, API vs direct DB, rate limits
- **Stage**: Raw landing zone, no transforms, audit trail
- **Clean**: Dedup, null handling, encoding fixes
- **Transform**: Type conversions, lookups, calculated fields
- **Validate**: Pre-load checks (counts, checksums, business rules)
- **Load**: Batch size, parallelism, error handling
- **Verify**: Post-load reconciliation
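The phased flow above can be sketched as a chain of pure functions over batches of records. This is an illustrative skeleton (the phase bodies are placeholders, not a real pipeline):

```python
def stage(raw):
    # Land untouched in a raw zone; here we just take a copy for the audit trail.
    return list(raw)

def clean(rows):
    # Dedup on the primary key and drop rows with null ids.
    seen, out = set(), []
    for r in rows:
        if r.get("id") is not None and r["id"] not in seen:
            seen.add(r["id"])
            out.append(r)
    return out

def validate(rows, expected_min):
    # Pre-load gate: refuse to load if counts look wrong.
    assert len(rows) >= expected_min, "row count below threshold"
    return rows

raw = [{"id": 1}, {"id": 1}, {"id": None}, {"id": 2}]
loaded = validate(clean(stage(raw)), expected_min=2)
# loaded == [{"id": 1}, {"id": 2}]
```

Because each phase takes and returns plain batches, you can insert checkpoints between phases and re-run any single phase during debugging without re-extracting.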

### Phase 4: Validation
- Row count match (source vs target, per table)
- Checksum validation on key columns
- Referential integrity checks
- Business rule validation (e.g., all active accounts migrated)
- User acceptance sampling (random 5% manual review)
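The count and checksum checks above can be sketched as follows. This is a hedged example: the XOR-of-row-hashes digest is one possible order-independent scheme, not something the skill prescribes.

```python
import hashlib

def table_digest(rows, key_cols):
    """XOR of per-row MD5 digests, so the result ignores row order."""
    acc = 0
    for r in rows:
        h = hashlib.md5("|".join(str(r[c]) for c in key_cols).encode()).hexdigest()
        acc ^= int(h, 16)
    return acc

source = [{"id": 1, "email": "a@x"}, {"id": 2, "email": "b@x"}]
target = [{"id": 2, "email": "b@x"}, {"id": 1, "email": "a@x"}]  # same rows, reordered

ok = (len(source) == len(target) and
      table_digest(source, ["id", "email"]) == table_digest(target, ["id", "email"]))
# ok is True: counts match and the digests agree despite the ordering
```

In practice you would compute the digest inside each database (e.g. as an aggregate over a hash expression) rather than pulling all rows to the client.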

### Phase 5: Cutover
- Go/no-go criteria checklist
- Point-of-no-return definition
- Rollback procedure and time estimate
- Communication plan (users, stakeholders)
- Parallel run period (if applicable)
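A go/no-go checklist like the one above is easy to make mechanical: cutover proceeds only when every criterion passes. The criteria names below are examples, not a fixed list.

```python
CRITERIA = {
    "row_counts_match":        True,
    "checksums_match":         True,
    "rollback_tested":         True,
    "stakeholders_signed_off": False,  # one open item blocks the gate
}

def go_no_go(criteria: dict):
    """Return (go?, list of blocking criteria)."""
    blockers = [name for name, passed in criteria.items() if not passed]
    return (not blockers, blockers)

decision, blockers = go_no_go(CRITERIA)
# decision == False; blockers == ["stakeholders_signed_off"]
```

Recording the failed criteria (not just a boolean) gives the runbook an audit trail for why a cutover was postponed.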

## Risk Factors
- **Data volume**: >10M rows = batch strategy required
- **Downtime window**: Zero-downtime needs CDC/dual-write
- **Data quality**: Garbage in = garbage out. Clean BEFORE migrating
- **Dependencies**: Other systems reading from source during migration
- **Compliance**: GDPR/HIPAA data handling during transit

## Output Format
Deliver a migration runbook as structured markdown with:
1. Executive summary (what, why, when, risk level)
2. Schema mapping tables
3. ETL pipeline specification
4. Validation test suite
5. Cutover runbook with rollback
6. Timeline with milestones

## Cost Estimation
Typical migration costs by complexity:
- Simple (1-5 tables, <1M rows): $5K-$15K or 1-2 weeks internal
- Medium (10-50 tables, 1-10M rows): $25K-$75K or 1-2 months
- Complex (50+ tables, 10M+ rows, multiple systems): $100K-$500K or 3-6 months

---

**Built by [AfrexAI](https://afrexai-cto.github.io/context-packs/)** — AI Context Packs for business automation.

Calculate your AI automation ROI: [Revenue Calculator](https://afrexai-cto.github.io/ai-revenue-calculator/)