OpenClaw

Skill details (on-site mirror, no comments)


Manage local Ollama models autonomously with health monitoring, automatic fallback, self-healing, and offline operation without internet dependency.

AI & Large Models

License: MIT-0

MIT-0 · Free to use, modify, and redistribute. No attribution required.

Version: v1.0.0

Stats: ⭐ 0 · 548 · 4 current installs · 6 all-time installs

🛡 VirusTotal: suspicious · OpenClaw: suspicious

Package: and-ray-m/offline-llama


OpenClaw Assessment

The skill's high-level purpose (local Ollama model management) is plausible, but the instructions imply system-level actions (restarting services, reinstalling models, reading logs, network downloads) while declaring no required binaries, paths, or permissions — that mismatch and the vague, unconstrained instructions are concerning.

Purpose

The skill claims to manage Ollama models (health checks, restarts, reinstallations, cache clearing). Performing these tasks normally requires specific binaries/CLIs (e.g., ollama CLI, systemctl or init scripts), filesystem paths, and potentially network access. The skill declares no required binaries, config paths, or credentials, which is inconsistent with the stated capabilities.

Instruction Scope

SKILL.md gives broad runtime instructions (continuous monitoring, restarting services, clearing caches, reinstalling models, log analysis) but does not specify exact commands, files, or limits. The instructions are open-ended about which logs/files to read and allow autonomous decisions (e.g., when to reinstall), granting the agent wide discretion to access system state and perform potentially destructive actions.

Install Mechanism

No install spec and no code files are present (instruction-only). That minimizes risk from arbitrary downloads or written artifacts, but it also means the SKILL.md is the sole runtime authority — increasing importance of clear, constrained instructions which are currently lacking.

Credentials

The skill requests no environment variables or credentials, which superficially limits exfiltration risk. However, the described behaviors (reinstalling models, switching to remote models when internet is present) imply network access and possibly access to model registries; the lack of declared requirements or credential needs is an omission and reduces transparency about what privileges the agent will need.

Persistence

The skill is not forced-always and allows normal autonomous invocation. Autonomous invocation combined with system-management actions increases blast radius if misused, but 'always: false' and default invocation settings are reasonable. There's no evidence it attempts to modify other skills or system-wide configs from the provided text.

Install (copy to Lobster AI)

Copy the entire block below into a Lobster Chinese Library conversation; Lobster will complete the installation according to SKILL.md.

Hand this block to Lobster Chinese Library (Lobster AI) to execute: install the OpenClaw skill "Offline Llama" on this machine. Description: Manage local Ollama models autonomously with health monitoring, automatic fallb…
Fetch the following URL, read SKILL.md, and complete the installation per the document: https://raw.githubusercontent.com/openclaw/skills/refs/heads/main/skills/and-ray-m/offline-llama/SKILL.md
(Source: yingzhi8.cn skill library)

SKILL.md

Open the original SKILL.md (GitHub raw)

# offline-llama

Autonomously manage and use local Ollama models for continuous operation without internet dependency. Includes model health monitoring, automatic fallback, and self-healing capabilities.

## Overview

This skill enables autonomous operation with local Ollama models. It monitors model health, automatically switches between models when issues occur, and maintains functionality even without internet connectivity. The skill includes self-healing capabilities to restart services and clear resources when needed.

## Core Features

### Model Management
- **Health Monitoring**: Continuously check model availability and performance
- **Automatic Fallback**: Switch to alternative models when primary fails
- **Model Switching**: Dynamically select best available model for task

### Self-Healing
- **Service Restart**: Automatically restart Ollama when models become unavailable
- **Resource Management**: Clear cache and temporary files to free resources
- **Model Reinstallation**: Reinstall problematic models automatically

### Connectivity Awareness
- **Internet Detection**: Monitor internet connectivity status
- **Smart Fallback**: Switch to remote models when local models unavailable and internet is present
- **Offline Mode**: Maintain full functionality without internet
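The SKILL.md names no mechanism for internet detection. One common approach is a short TCP probe against a public DNS server; the sketch below (including the `model_source` helper) is an assumption, not part of the skill:

```python
import socket

def internet_available(host: str = "8.8.8.8", port: int = 53,
                       timeout: float = 2.0) -> bool:
    """Probe a public DNS server over TCP; treat any socket error as offline."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def model_source() -> str:
    """Hypothetical routing decision based on the probe."""
    return "remote-ok" if internet_available() else "local-only"
```

Probing a well-known IP rather than resolving a hostname avoids a false "offline" verdict when only DNS is broken.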

## Configuration

### Models
- **Primary**: llama-3.1-8b-instruct (general tasks)
- **Secondary**: mistral-7b-instruct (faster responses)
- **Specialized**: code-llama-7b (coding tasks)
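The model list above could be captured as a small config map. The names below are copied verbatim from this SKILL.md; actual Ollama registry tags may differ (e.g. `llama3.1:8b`), so verify against `ollama list`. The fallback ordering is an assumption:

```python
# Role-to-model map, names taken verbatim from the SKILL.md above.
MODELS = {
    "primary": "llama-3.1-8b-instruct",
    "secondary": "mistral-7b-instruct",
    "code": "code-llama-7b",
}

# Assumed order in which roles are tried when the primary fails.
FALLBACK_ORDER = ["primary", "secondary", "code"]
```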

### Health Checks
- **Model Status**: Monitor availability every 30 seconds
- **Latency Tracking**: Monitor response times every minute
- **Resource Usage**: Monitor GPU/CPU and memory every 5 minutes
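The three intervals above can be driven by a minimal scheduler that reports which checks are due. This is a sketch of one way to implement the cadence; the check names are hypothetical:

```python
# Check intervals in seconds, matching the list above.
INTERVALS = {"model_status": 30, "latency": 60, "resources": 300}

def due_checks(last_run: dict, now: float) -> list:
    """Return the checks whose interval has elapsed since their last run.

    Checks that have never run (missing from last_run) are due immediately.
    """
    return [name for name, period in INTERVALS.items()
            if now - last_run.get(name, float("-inf")) >= period]
```

A monitoring loop would call `due_checks`, run each returned check, and record `now` back into `last_run`.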

### Fallback Strategies
1. **Model Switching**: Automatically switch to alternative local models
2. **Response Retry**: Retry failed requests with exponential backoff
3. **Degraded Mode**: Continue with limited functionality if all models unavailable
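Strategy 2, retry with exponential backoff, can be sketched as a small wrapper (retry count and base delay are assumptions; the SKILL.md specifies neither):

```python
import time

def retry_with_backoff(fn, retries: int = 4, base_delay: float = 0.5,
                       sleep=time.sleep):
    """Call fn(); on failure wait base_delay * 2**attempt, then try again.

    Re-raises the last exception once all retries are exhausted.
    """
    for attempt in range(retries):
        try:
            return fn()
        except Exception:
            if attempt == retries - 1:
                raise
            sleep(base_delay * (2 ** attempt))
```

Injecting `sleep` as a parameter keeps the helper testable without real delays.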

## Usage

### When Internet is Available
- Use local models primarily
- Fallback to remote models if local models unavailable
- Maintain optimal performance

### When Internet is Unavailable
- Use local models exclusively
- Continue all operations without interruption
- Provide degraded functionality if needed

## Commands

### Model Management
- `model_status` - Check current model health
- `switch_model` - Manually switch between models
- `restart_ollama` - Restart Ollama service

### Health Monitoring
- `check_health` - Run comprehensive health check
- `monitor_resources` - Monitor system resources
- `clear_cache` - Clear model cache and temporary files
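A `model_status`-style check could query Ollama's local HTTP API, which lists installed models at `GET /api/tags` on its default port 11434. The sketch below assumes a default local install; the helper names are hypothetical:

```python
import json
import urllib.request
import urllib.error

OLLAMA_URL = "http://localhost:11434"  # Ollama's default listen address

def parse_tags(payload: str) -> list:
    """Extract model names from an /api/tags JSON response."""
    return [m["name"] for m in json.loads(payload).get("models", [])]

def installed_models(base_url: str = OLLAMA_URL) -> list:
    """Return installed model names, or [] if the daemon is unreachable."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=2) as resp:
            return parse_tags(resp.read().decode())
    except (urllib.error.URLError, OSError):
        return []

def model_status(name: str) -> str:
    return "available" if name in installed_models() else "missing"
```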

## Self-Healing

### Automatic Actions
- **Service Restart**: Triggered when model becomes unavailable
- **Resource Cleanup**: Triggered when high memory usage detected
- **Model Reinstallation**: Triggered when persistent failures occur
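The three automatic triggers imply an escalation policy. One way to encode it is a pure decision function; the thresholds below are assumptions, since the SKILL.md defines none:

```python
def healing_action(consecutive_failures: int, memory_pct: float) -> str:
    """Map observed state to a self-healing action.

    Assumed escalation: reinstall only after persistent failures,
    restart on any failure, clear cache under memory pressure.
    """
    if consecutive_failures >= 3:
        return "reinstall_model"
    if consecutive_failures >= 1:
        return "restart_service"
    if memory_pct >= 90.0:
        return "clear_cache"
    return "none"
```

Keeping the decision separate from the (destructive) actions makes the policy auditable and easy to test, which matters given the wide discretion the assessment above flags.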

### Manual Intervention
- **Manual Restart**: User can manually restart services
- **Cache Clearing**: User can manually clear resources
- **Model Updates**: User can update models as needed

## Security Considerations

- All operations performed locally
- No external dependencies required
- Secure model management
- Privacy-preserving by default

## Performance Optimization

- **Resource Monitoring**: Track GPU/CPU usage and memory
- **Latency Tracking**: Monitor response times and performance
- **Model Selection**: Choose optimal model based on task requirements
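Task-based model selection could be as simple as keyword routing over the configured models. The keywords and the fast/quality split below are illustrative assumptions:

```python
def pick_model(task: str, need_fast: bool = False) -> str:
    """Route a task description to one of the configured models (sketch)."""
    if any(k in task.lower() for k in ("code", "program", "debug")):
        return "code-llama-7b"          # specialized coding model
    if need_fast:
        return "mistral-7b-instruct"    # faster responses
    return "llama-3.1-8b-instruct"      # primary, general tasks
```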

## Maintenance

### Regular Tasks
- **Health Checks**: Run periodic health checks
- **Cache Management**: Clear unused cache regularly
- **Model Updates**: Keep models updated when possible

### Troubleshooting
- **Log Analysis**: Monitor logs for issues
- **Performance Metrics**: Track performance over time
- **Error Handling**: Graceful error handling and recovery

## Integration

This skill integrates with:
- **Ollama**: Local model management
- **System Resources**: Monitor and manage system resources
- **Network**: Detect internet connectivity
- **OpenClaw**: Seamless integration with existing tools

## Future Enhancements

- **Model Training**: Support for custom model training
- **Advanced Routing**: Intelligent model selection based on task
- **Multi-GPU Support**: Scale across multiple GPUs
- **Cloud Sync**: Optional cloud backup and synchronization

## License

This skill is part of the OpenClaw ecosystem and follows the same licensing terms as OpenClaw itself.