OpenClaw

Strategy Workflow

Comprehensive strategy development workflow from ideation to validation. Use when creating trading strategies, running backtests, parameter optimization, or...

Development & DevOps

Author: Dan Repaci @ahuserious

License: MIT-0

MIT-0 · Free to use, modify, and redistribute. No attribution required.

Version: v0.1.0

Stats: ⭐ 1 · 287 · 0 current installs · 0 all-time installs



Package:ahuserious/strategy-workflow

Security scan (ClawHub)

  • VirusTotal: suspicious
  • OpenClaw: suspicious

OpenClaw Assessment

The skill's prose describes a large, always-on, system-level orchestration workflow (ssh/scp, tmux watchdogs, PID management, persistent state files, cloud GPU use), yet it declares no credentials, binaries, or install steps: the declared surface does not match what the instructions require.

Purpose

The stated purpose (strategy development, backtests, distributed optimization) plausibly requires SSH, GPUs, tmux, Python, and access to remote storage and databases; those tools and credentials are referenced repeatedly in the instructions but never declared in the registry metadata. The purpose is plausible, but the skill should list its required env vars (SSH key/host, Vast.ai or vendor API credentials, DB URL) and required binaries.

Instruction Scope

SKILL.md instructs the agent to run system-level scripts (start_swarm_watchdogs.sh, tmux launch loops), apply pgrep/kill/relaunch logic, read and write many persistent state and log files under workspace/docs, scp/ssh to remote hosts, and ingest private docs, all of which grants broad read/write and network access. The instructions prescribe autonomous, always-on remediation and continuous communication; that scope is large and unconstrained.

Install Mechanism

This is an instruction-only skill with no install spec or code files to execute. That reduces direct installation risk, but the runtime commands it tells the agent to run will invoke external scripts (which are not bundled) and system tools on the host.

Credentials

The skill declares no required env vars or primary credential, yet the workflow expects SSH/scp access (HOST/PORT), potential cloud provider credentials (Vast.ai or vendor APIs), and optional DB connection strings (Postgres RDBStorage). The absence of declared credentials is a mismatch and could lead an agent to request or use secrets ad-hoc.

Persistence

Although registry flags do not force always-on inclusion, the skill's instructions push for always-on, autonomous watchdogs that persist state, auto-heal, and relaunch processes. That behavior would write persistent logs/state, modify system process state, and perform network operations — a high privilege footprint that should be explicitly declared and limited.

Install (copy to 龙虾 AI)

Copy the entire block below into a 龙虾中文库 conversation; 龙虾 will complete the install per SKILL.md.

Please hand this block to 龙虾中文库 (龙虾 AI) to execute: install the OpenClaw skill "Strategy Workflow" on this machine. Summary: Comprehensive strategy development workflow from ideation to validation. Use wh…
Please fetch the following URL, read SKILL.md, and complete the install per the document: https://raw.githubusercontent.com/openclaw/skills/refs/heads/main/skills/ahuserious/strategy-workflow/SKILL.md
(Source: yingzhi8.cn skill library)

SKILL.md

Open the original SKILL.md (GitHub raw)

---
name: strategy-workflow
description: >
  Comprehensive strategy development workflow from ideation to validation.
  Use when creating trading strategies, running backtests, parameter optimization, or walk-forward validation.
version: "2.0.0"
allowed-tools: Read, Write, Edit, Bash, Glob, Grep
---

# Strategy Workflow

Comprehensive strategy development workflow for quantitative trading, from hypothesis to validated production deployment.

## Overview

This skill provides a complete framework for developing, testing, and validating trading strategies. It supports:

- Hypothesis-driven strategy development
- Multi-GPU backtesting on Vast.ai
- Bayesian hyperparameter optimization with Optuna
- Walk-forward validation and out-of-sample testing
- Automated tearsheet generation

## Entry Points

### Control Plane (Swarm Orchestration)

Always-on watchdog loops that manage hardware utilization and perform self-healing:

```bash
bash scripts/start_swarm_watchdogs.sh
```

For local environments, set explicit paths:

```bash
VENV_PATH=/path/to/.venv/bin/activate \
RESULTS_ROOT=/path/to/backtests \
STATE_ROOT=/path/to/backtests/state \
LOGS_ROOT=/path/to/backtests/logs \
bash scripts/start_swarm_watchdogs.sh
```

### Work Plane (Parallel Execution)

Unified wrapper that starts the control plane and launches parallel work:

```bash
scripts/backtest-optimize --parallel
```

Multi-GPU, multi-symbol execution:

```bash
cd WORKFLOW && ./launch_parallel.sh
```

### Single-Symbol Pipeline

For focused optimization on a single asset:

```bash
scripts/backtest-optimize --single --symbol SYMBOL --engine native --prescreen 50000 --paths 1000 --by-regime
```

## Strategy Development

### 1. Hypothesis Formulation

Define your strategy hypothesis in measurable terms:

- What market inefficiency are you exploiting?
- What is the expected holding period?
- What are the entry/exit conditions?
- What is the target risk-adjusted return?
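
The checklist above can be captured as a structured record so each hypothesis is stated in testable terms. This schema is a hypothetical sketch; the skill does not prescribe one:

```python
from dataclasses import dataclass

@dataclass
class StrategyHypothesis:
    # Hypothetical schema: field names are illustrative, not part of the skill.
    inefficiency: str          # market inefficiency being exploited
    holding_period_bars: int   # expected holding period, in bars
    entry_rule: str            # entry condition, stated measurably
    exit_rule: str             # exit condition, stated measurably
    target_sharpe: float       # target risk-adjusted return

h = StrategyHypothesis(
    inefficiency="mean reversion after 2-sigma intraday moves",
    holding_period_bars=24,
    entry_rule="z-score of 1h close < -2",
    exit_rule="z-score > 0, or 48-bar timeout",
    target_sharpe=1.5,
)
```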

### 2. Feature Selection

Identify relevant features for signal generation:

- Price-based (OHLCV, returns, volatility)
- Technical indicators (EMA, RSI, Bollinger Bands)
- Multi-timeframe features (MTF resampling)
- Volume analysis (PVSRA, VWAP)
- Market microstructure (order flow, spread)
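
As a minimal sketch of the price-based bucket, log returns and an EMA can be computed with plain NumPy (the recurrence matches the usual alpha = 2/(span+1) definition; the function names are mine):

```python
import numpy as np

def ema(prices, span):
    # Exponential moving average with alpha = 2 / (span + 1).
    alpha = 2.0 / (span + 1.0)
    out = np.empty(len(prices), dtype=float)
    out[0] = prices[0]
    for i in range(1, len(prices)):
        out[i] = alpha * prices[i] + (1.0 - alpha) * out[i - 1]
    return out

def log_returns(prices):
    prices = np.asarray(prices, dtype=float)
    return np.diff(np.log(prices))

close = np.array([100.0, 101.0, 99.5, 102.0, 103.5])
fast, slow = ema(close, span=2), ema(close, span=4)
rets = log_returns(close)
realized_vol = rets.std(ddof=1)  # simple volatility feature
```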

### 3. Signal Generation

Convert features into actionable signals:

- Directional bias (trend following, mean reversion)
- Entry conditions (threshold crossings, pattern recognition)
- Exit conditions (take-profit, stop-loss, trailing stops)
- Position sizing rules
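
A minimal example of turning a feature into a directional bias plus entry events, here an EMA crossover (chosen only for illustration; the skill does not fix a signal family):

```python
import numpy as np

def crossover_signals(fast, slow):
    # +1 bias when the fast average is above the slow one, -1 otherwise;
    # an entry fires on the bar where the bias flips sign.
    bias = np.where(fast > slow, 1, -1)
    entries = np.diff(bias, prepend=bias[0]) != 0
    return bias, entries

fast = np.array([1.0, 1.2, 0.9, 0.8, 1.1])
slow = np.array([1.0, 1.0, 1.0, 1.0, 1.0])
bias, entries = crossover_signals(fast, slow)
```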

### 4. Position Sizing

Implement risk-aware position sizing:

- Fixed fractional
- Kelly criterion
- Volatility-adjusted
- Regime-dependent scaling
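
Two of the listed schemes as short functions. These are the textbook formulas, not necessarily the skill's exact implementations:

```python
def fixed_fractional(equity, risk_frac, stop_distance):
    # Risk a fixed fraction of equity per trade; units = risk capital / stop.
    return (equity * risk_frac) / stop_distance

def vol_adjusted(equity, target_vol, realized_vol, base_frac=1.0):
    # Scale exposure down when realized volatility exceeds the target.
    return equity * base_frac * min(1.0, target_vol / realized_vol)

units = fixed_fractional(equity=10_000, risk_frac=0.01, stop_distance=2.5)
exposure = vol_adjusted(equity=10_000, target_vol=0.10, realized_vol=0.20)
```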

## Backtesting

### Pre-Flight Validation

**MANDATORY** before every optimization run:

```bash
python validation.py --check-all --data-path DATA_PATH --symbol SYMBOL
```

Validation checks:
- Data >= 90 days with no gaps/NaN
- Min trades >= 30 for statistical significance
- MTF resampling implemented correctly
- No look-ahead bias
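
The gap/NaN and coverage checks can be sketched as follows. `validation.py` is not bundled with the skill, so this is an illustrative reimplementation of the listed checks for hourly bars:

```python
import numpy as np

def preflight_ok(timestamps, closes, min_days=90, bar_seconds=3600):
    # Mirrors the checklist: no NaN, no gaps, at least min_days of coverage.
    ts = np.asarray(timestamps, dtype="int64")
    px = np.asarray(closes, dtype=float)
    if np.isnan(px).any():
        return False, "NaN in closes"
    if np.any(np.diff(ts) != bar_seconds):
        return False, "gap in timestamps"
    if (ts[-1] - ts[0]) < min_days * 86400:
        return False, "less than min_days of coverage"
    return True, "ok"

ts = np.arange(0, 91 * 86400, 3600)   # 91 days of hourly bars
ok, msg = preflight_ok(ts, np.ones(len(ts)))
```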

### Multi-GPU Execution on Vast.ai

Deploy to cloud GPU instances for large-scale parameter sweeps:

```bash
# Copy workflow files
scp -P PORT workflow_files root@HOST:/root/WORKFLOW/

# Run optimization

ssh -p PORT root@HOST "cd /root/WORKFLOW && python optimize_strategy.py \
  --data-path /root/data --symbol SYMBOL --mode aggressive \
  --prescreen 5000 --paths 200 --engine gpu"
```

### Prescreening with Vectorized Backtests

Phase 0: GPU-accelerated parameter screening:

- Generate N random parameter combinations
- Batch evaluate on GPU
- Filter by minimum trades (30+)
- Return top K by Sharpe ratio

Performance baseline (RTX 5090, 730d lookback, 250k combos): ~4s per mode.
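
The filter-and-rank step at the end of Phase 0 reduces to a few array operations once the batch evaluation has produced per-combo metrics (random numbers stand in for GPU results here):

```python
import numpy as np

rng = np.random.default_rng(0)
n_combos = 1_000

# Stand-ins for the batch GPU evaluation outputs.
trades = rng.integers(0, 200, size=n_combos)   # trades per combo
sharpe = rng.normal(0.0, 1.0, size=n_combos)   # Sharpe per combo

idx = np.nonzero(trades >= 30)[0]                # filter by minimum trades
top_k = idx[np.argsort(sharpe[idx])[::-1][:50]]  # top 50 survivors by Sharpe
```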

### Full Backtests with NautilusTrader

Phase 1: Event-driven backtesting for top candidates:

- High-fidelity simulation with realistic execution
- Slippage and commission modeling
- Multi-asset portfolio backtests

## Parameter Optimization

### Optuna for Hyperparameter Search

Phase 2: Bayesian optimization with warm-start from prescreening:

```python
import optuna

# objective(trial) -> float is defined by the strategy being tuned.
# TPE sampler is seeded for reproducibility; MedianPruner stops trials
# that fall below the running median of completed trials.
study = optuna.create_study(
    direction="maximize",
    sampler=optuna.samplers.TPESampler(seed=42),
    pruner=optuna.pruners.MedianPruner()
)

study.optimize(objective, n_trials=1000)
```

### Grid Search vs Bayesian Optimization

| Method | Use Case |
|--------|----------|
| Grid Search | Small parameter space, exhaustive coverage needed |
| Random Search | Large space, quick exploration |
| Bayesian (TPE) | Efficient optimization, exploitation/exploration balance |
| CMA-ES | Continuous parameters, smooth objective |

### Pruning Strategies

- **MedianPruner**: Prune if worse than median of completed trials
- **PercentilePruner**: Prune bottom X% of trials
- **HyperbandPruner**: Multi-fidelity optimization
- **SuccessiveHalvingPruner**: Aggressive early stopping

### Distributed Optimization

For large-scale runs, use persistent storage:

```python
# JournalStorage for multi-process
storage = optuna.storages.JournalStorage(
    optuna.storages.JournalFileStorage("journal.log")
)

# RDBStorage for distributed clusters
storage = optuna.storages.RDBStorage("postgresql://...")
```

## Walk-Forward Validation

### Rolling Window Validation

Slide the training/test window through time:

```
[Train 1][Test 1]
    [Train 2][Test 2]
        [Train 3][Test 3]
```

Parameters:
- `train_window`: Training period length
- `test_window`: Out-of-sample test length
- `step_size`: Window advancement increment
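
With those three parameters, split generation is a short generator. This is an index-based sketch; the skill's own splitter is not shown:

```python
def rolling_windows(n, train_window, test_window, step_size):
    # Yield (train, test) index slices that slide through n bars.
    start = 0
    while start + train_window + test_window <= n:
        t0 = start + train_window
        yield slice(start, t0), slice(t0, t0 + test_window)
        start += step_size

splits = list(rolling_windows(n=100, train_window=50, test_window=20, step_size=10))
```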

### Anchored Walk-Forward

Expand training window while sliding test window:

```
[Train 1      ][Test 1]
[Train 1 + 2      ][Test 2]
[Train 1 + 2 + 3      ][Test 3]
```

Use when historical regime diversity improves model robustness.
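
The anchored scheme can be sketched as a generator in which the training window always starts at bar 0 and grows each fold (an illustrative index-based sketch, not the skill's code):

```python
def anchored_windows(n, initial_train, test_window, step_size):
    # Training slice is anchored at 0 and expands; the test slice slides.
    train_end = initial_train
    while train_end + test_window <= n:
        yield slice(0, train_end), slice(train_end, train_end + test_window)
        train_end += step_size

splits = list(anchored_windows(n=100, initial_train=50, test_window=20, step_size=10))
```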

### Epoch Selection Criteria

Intelligent selection of training periods:

- **Regime-aware**: Match training regimes to expected deployment conditions
- **Volatility-adjusted**: Include both high and low volatility periods
- **Event-inclusive**: Ensure major market events are represented
- **Recency-weighted**: Emphasize recent data while maintaining diversity

### Out-of-Sample Testing

Final validation phase:
- Hold out 20-30% of data for final OOS test
- No parameter tuning on OOS data
- Monte Carlo stress testing
- Regime-conditional performance analysis
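
One common form of the Monte Carlo stress test is to bootstrap the realized returns and inspect the tail of the resulting Sharpe distribution. This resampling scheme is illustrative; the skill does not specify one:

```python
import numpy as np

def mc_sharpe_dist(returns, n_paths=1000, periods_per_year=252, seed=42):
    # Resample returns with replacement and collect annualized Sharpes.
    rng = np.random.default_rng(seed)
    returns = np.asarray(returns, dtype=float)
    out = np.empty(n_paths)
    for i in range(n_paths):
        path = rng.choice(returns, size=len(returns), replace=True)
        out[i] = path.mean() / path.std(ddof=1) * np.sqrt(periods_per_year)
    return out

daily = np.random.default_rng(0).normal(0.001, 0.01, size=252)
dist = mc_sharpe_dist(daily, n_paths=200)
p05 = np.percentile(dist, 5)   # pessimistic (5th-percentile) Sharpe
```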

## SLOs and Guardrails

### Utilization Targets

- CPU utilization target: >= 70%
- GPU utilization target: >= 70%
- No silent GPU fallback for GPU sweeps

### Hardware Watchdog Hooks

Enforced by:
- `hooks/hardware_capacity_watchdog.py`
- `scripts/process_auditor.py`

### Capacity Monitoring

Control plane loops monitor:
- Worker health and liveness
- Progress artifact freshness
- Resource utilization
- Job queue depth

Self-healing actions:
- Automatic worker restart on crash
- Fill lanes for underutilized resources
- Cooldown guardrails to prevent thrashing
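
The cooldown guardrail reduces to rate-limiting remediation actions. A minimal sketch; the actual logic lives in the external watchdog scripts, which are not bundled:

```python
import time

class CooldownGuard:
    # Allow an action (e.g. a worker restart) at most once per window,
    # so a crash-looping worker cannot trigger restart thrashing.
    def __init__(self, cooldown_s, clock=time.monotonic):
        self.cooldown_s = cooldown_s
        self.clock = clock
        self._last = None

    def allow(self):
        now = self.clock()
        if self._last is None or now - self._last >= self.cooldown_s:
            self._last = now
            return True
        return False
```

A watchdog loop would call `allow()` before each restart and skip the action when it returns `False`.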

## Tearsheet Generation

Generate QuantStats-style performance reports:

```bash
scripts/generate-tearsheet STRATEGY_NAME \
  --trades /path/to/trades.csv \
  --capital 10000 \
  --output ./tearsheets
```

See `tearsheet-generator` skill for detailed visualization options.

## Multi-Provider Orchestration

### PAL MCP Integration

Attach PAL as an MCP server for research/consensus across multiple model providers:

- Config template: `config/mcp/pal.mcp.json.example`
- Docs: `docs/reference/PAL_MCP_INTEGRATION.md`
- Providers: OpenRouter, OpenAI, Anthropic, xAI, local models

## Resources

### Documentation

- [VectorBT Documentation](https://vectorbt.dev/)
- [NautilusTrader Docs](https://nautilustrader.io/)
- [Optuna Documentation](https://optuna.readthedocs.io/)
- [QuantStats](https://github.com/ranaroussi/quantstats)

### Project References

- `config/workflow_defaults.yaml` - Default configuration
- `config/model_policy.yaml` - Model policy (advisory)
- `docs/guides/SWARM_OPTIMIZATION_RUNBOOK.md` - Detailed runbook
- `hooks/pipeline-hooks.md` - Hook contracts
- `docs/reference/VECTORBT_GRAPH_INGEST.md` - VectorBT PRO integration

### Results Structure

```
Backtests/optimizations/{SYMBOL}/{MODE}/
  best_sharpe/
    config.json      # Best Sharpe configuration
    metrics.json     # Performance metrics
  best_returns/
  lowest_drawdown/
  best_winrate/
  all_trials.json    # All Optuna trials
  phase0_top500.json # Prescreening results
```
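
Given that layout, loading the winning configuration for a symbol/mode pair is a couple of path joins (a sketch assuming the directory tree above; `load_best` is an illustrative name, not the skill's):

```python
import json
from pathlib import Path

def load_best(root, symbol, mode, criterion="best_sharpe"):
    # Read the winning config/metrics pair from the results tree.
    base = Path(root) / "optimizations" / symbol / mode / criterion
    config = json.loads((base / "config.json").read_text())
    metrics = json.loads((base / "metrics.json").read_text())
    return config, metrics
```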