OpenClaw

Skill Details: Prometheus

Query Prometheus monitoring data to check server metrics, resource usage, and system health. Use when the user asks about server status, disk space, CPU/memory usage, network stats, or any metrics collected by Prometheus.

Data & Tables

License: MIT-0

MIT-0 · Free to use, modify, and redistribute. No attribution required.

Version: v1.1.0

Stats: ⭐ 2 · 1.5k · 12 current installs · 12 all-time installs

🛡 VirusTotal: benign · OpenClaw: benign

Package: akellacom/prometheus

Security Scan (ClawHub)

  • VirusTotal: benign
  • OpenClaw: benign

OpenClaw Assessment

The skill's code, runtime instructions, and requested resources are consistent with a Prometheus query CLI — nothing indicates covert or unrelated behavior.

Purpose

Name/description (query Prometheus metrics) align with the files and code: the CLI reads a config or environment, performs Prometheus HTTP API calls, supports multi-instance queries and basic auth — all expected for this purpose.
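
As an illustration of what such an instant query looks like against the Prometheus HTTP API, here is a minimal sketch (an assumption about the internals, not the skill's actual code; `buildQueryUrl` and `instantQuery` are illustrative names; the global `fetch` requires Node 18+):

```javascript
// Build the /api/v1/query URL for an instant PromQL query.
function buildQueryUrl(baseUrl, promql) {
  const url = new URL('/api/v1/query', baseUrl);
  url.searchParams.set('query', promql);
  return url.toString();
}

// Run the query and return the { resultType, result } payload.
async function instantQuery(baseUrl, promql, headers = {}) {
  const res = await fetch(buildQueryUrl(baseUrl, promql), { headers });
  if (!res.ok) throw new Error(`Prometheus returned HTTP ${res.status}`);
  const body = await res.json();
  if (body.status !== 'success') throw new Error('query failed');
  return body.data;
}
```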

Instruction Scope

SKILL.md directs running the provided node scripts and storing a config in the OpenClaw workspace; the code also loads optional .env files from the workspace and CWD which could populate process.env. Reading local .env files and allowing PROMETHEUS_* env fallbacks is plausible for convenience, but it means the skill will read local environment files beyond only a single dedicated config file.
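
The "set only keys not already present" behavior described here might look roughly like this (a hypothetical sketch, not the skill's real loader; `applyEnvFile` is an illustrative name):

```javascript
// Parse dotenv-style text and copy each KEY=value into `env`,
// but only when the key is not already set — existing values win.
function applyEnvFile(text, env = process.env) {
  for (const line of text.split('\n')) {
    const m = line.match(/^\s*([A-Za-z_][A-Za-z0-9_]*)\s*=\s*(.*?)\s*$/);
    if (!m) continue; // skip comments and blank lines
    const [, key, value] = m;
    if (!(key in env)) env[key] = value;
  }
  return env;
}
```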

Install Mechanism

No install spec; the skill is instruction + Node.js scripts. That is low-risk and consistent with a CLI-style skill.

Credentials

The skill declares no required env vars, and the code only uses PROMETHEUS_URL/USER/PASSWORD and any entries from a local prometheus.json. However, the loader will parse .env files in the workspace and CWD and set any keys not already present in process.env — this can pull unrelated secrets (if present in .env) into process.env for the process, though they are used only as fallbacks and not exfiltrated by the code.
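
The fallback order described above could be sketched as follows (an assumed shape, not the actual code; `resolveInstance` is an illustrative name — explicit config values win, and `PROMETHEUS_*` variables only fill the gaps):

```javascript
// Merge a config entry with PROMETHEUS_* env fallbacks.
function resolveInstance(config = {}, env = process.env) {
  return {
    url: config.url || env.PROMETHEUS_URL,
    user: config.user || env.PROMETHEUS_USER,
    password: config.password || env.PROMETHEUS_PASSWORD,
  };
}
```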

Persistence

always is false and the skill writes only its own config (prometheus.json) under the OpenClaw workspace or a path provided by PROMETHEUS_CONFIG. It does not modify other skills or system-wide agent settings.

Overall Conclusion

This skill appears to do exactly what it says: query Prometheus HTTP APIs for metrics. Before installing, note that the CLI will read and set environment variables from local .env files (workspace and current directory) and will write a config file (prometheus.json) into your OpenClaw workspace (or the path you provide). Don't point it at untrusted Prometheus endpoints, and review the generated prometheus.json to ensure no sensitive credentials are exposed.

Installation (copy to the 龙虾 AI)

Copy the entire block below into a 龙虾中文库 conversation; 龙虾 will complete the installation per SKILL.md.

Hand this block to 龙虾中文库 (the 龙虾 AI) to execute: install the OpenClaw skill "Prometheus" on this machine. Summary: Query Prometheus monitoring data to check server metrics, resource usage, and system health.
Fetch the following URL, read SKILL.md, and complete the installation as documented: https://raw.githubusercontent.com/openclaw/skills/refs/heads/main/skills/akellacom/prometheus/SKILL.md
(Source: yingzhi8.cn skill library)

SKILL.md

Open the original SKILL.md (GitHub raw)

---
name: prometheus
description: Query Prometheus monitoring data to check server metrics, resource usage, and system health. Use when the user asks about server status, disk space, CPU/memory usage, network stats, or any metrics collected by Prometheus. Supports multiple Prometheus instances with aggregated queries, config file or environment variables, and HTTP Basic Auth.
---

# Prometheus Skill

Query Prometheus monitoring data from one or multiple instances. Supports federation across multiple Prometheus servers with a single command.

## Quick Start

### 1. Initial Setup

Run the interactive configuration wizard:

```bash
cd ~/.openclaw/workspace/skills/prometheus
node scripts/cli.js init
```

This will create a `prometheus.json` config file in your OpenClaw workspace (`~/.openclaw/workspace/prometheus.json`).

### 2. Start Querying

```bash
# Query default instance
node scripts/cli.js query 'up'

# Query all instances at once
node scripts/cli.js query 'up' --all

# List configured instances
node scripts/cli.js instances
```

## Configuration

### Config File Location

By default, the skill looks for config in your OpenClaw workspace:

```
~/.openclaw/workspace/prometheus.json
```

**Priority order:**
1. Path from `PROMETHEUS_CONFIG` environment variable
2. `~/.openclaw/workspace/prometheus.json`
3. `~/.openclaw/workspace/config/prometheus.json`
4. `./prometheus.json` (current directory)
5. `~/.config/prometheus/config.json`

### Config Format

Create `prometheus.json` in your workspace (or use `node scripts/cli.js init`):

```json
{
  "instances": [
    {
      "name": "production",
      "url": "https://prometheus.example.com",
      "user": "admin",
      "password": "secret"
    },
    {
      "name": "staging",
      "url": "http://prometheus-staging:9090"
    }
  ],
  "default": "production"
}
```

**Fields:**
- `name` — unique identifier for the instance
- `url` — Prometheus server URL
- `user` / `password` — optional HTTP Basic Auth credentials
- `default` — which instance to use when none specified
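
For reference, the optional `user`/`password` pair presumably becomes an HTTP Basic Auth header (base64 of `user:password` per RFC 7617); a minimal sketch, not the skill's actual code:

```javascript
// Build an Authorization header from optional instance credentials.
function basicAuthHeader({ user, password }) {
  if (!user) return {}; // no credentials configured → no header
  const token = Buffer.from(`${user}:${password ?? ''}`).toString('base64');
  return { Authorization: `Basic ${token}` };
}
```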

### Environment Variables (Legacy)

For single-instance setups, you can use environment variables:

```bash
export PROMETHEUS_URL=https://prometheus.example.com
export PROMETHEUS_USER=admin        # optional
export PROMETHEUS_PASSWORD=secret   # optional
```

## Usage

### Global Flags

| Flag | Description |
|------|-------------|
| `-c, --config <path>` | Path to config file |
| `-i, --instance <name>` | Target specific instance |
| `-a, --all` | Query all configured instances |

### Commands

#### Setup

```bash
# Interactive configuration wizard
node scripts/cli.js init
```

#### Query Metrics

```bash
cd ~/.openclaw/workspace/skills/prometheus

# Query default instance
node scripts/cli.js query 'up'

# Query specific instance
node scripts/cli.js query 'up' -i staging

# Query ALL instances at once
node scripts/cli.js query 'up' --all

# Custom config file
node scripts/cli.js query 'up' -c /path/to/config.json
```

#### Common Queries

**Disk space usage:**
```bash
node scripts/cli.js query '100 - (node_filesystem_avail_bytes / node_filesystem_size_bytes * 100)' --all
```

**CPU usage:**
```bash
node scripts/cli.js query '100 - (avg by (instance) (irate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)' --all
```

**Memory usage:**
```bash
node scripts/cli.js query '(node_memory_MemTotal_bytes - node_memory_MemAvailable_bytes) / node_memory_MemTotal_bytes * 100' --all
```

**Load average:**
```bash
node scripts/cli.js query 'node_load1' --all
```

### List Configured Instances

```bash
node scripts/cli.js instances
```

Output:
```json
{
  "default": "production",
  "instances": [
    { "name": "production", "url": "https://prometheus.example.com", "hasAuth": true },
    { "name": "staging", "url": "http://prometheus-staging:9090", "hasAuth": false }
  ]
}
```

### Other Commands

```bash
# List all metrics matching pattern
node scripts/cli.js metrics 'node_memory_*'

# Get label names
node scripts/cli.js labels --all

# Get values for a label
node scripts/cli.js label-values instance --all

# Find time series
node scripts/cli.js series '{__name__=~"node_cpu_.*", instance=~".*:9100"}' --all

# Get active alerts
node scripts/cli.js alerts --all

# Get scrape targets
node scripts/cli.js targets --all
```

## Multi-Instance Output Format

When using `--all`, results include data from all instances:

```json
{
  "resultType": "vector",
  "results": [
    {
      "instance": "production",
      "status": "success",
      "resultType": "vector",
      "result": [...]
    },
    {
      "instance": "staging",
      "status": "success",
      "resultType": "vector",
      "result": [...]
    }
  ]
}
```

Errors on individual instances don't fail the entire query — they appear with `"status": "error"` in the results array.
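
That error isolation maps naturally onto `Promise.allSettled`; a sketch of how it might work (assuming a per-instance query function, here the stand-in `queryOne`):

```javascript
// Query every instance in parallel; a failing instance becomes a
// status:"error" entry instead of rejecting the whole call.
async function queryAll(instances, promql, queryOne) {
  const settled = await Promise.allSettled(
    instances.map((inst) => queryOne(inst, promql)),
  );
  return settled.map((outcome, i) => ({
    instance: instances[i].name,
    ...(outcome.status === 'fulfilled'
      ? { status: 'success', ...outcome.value } // { resultType, result }
      : { status: 'error', error: String(outcome.reason) }),
  }));
}
```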

## Common Queries Reference

| Metric | PromQL Query |
|--------|--------------|
| Disk free % | `node_filesystem_avail_bytes / node_filesystem_size_bytes * 100` |
| Disk used % | `100 - (node_filesystem_avail_bytes / node_filesystem_size_bytes * 100)` |
| CPU idle % | `avg by (instance) (irate(node_cpu_seconds_total{mode="idle"}[5m])) * 100` |
| Memory used % | `(node_memory_MemTotal_bytes - node_memory_MemAvailable_bytes) / node_memory_MemTotal_bytes * 100` |
| Network RX | `rate(node_network_receive_bytes_total[5m])` |
| Network TX | `rate(node_network_transmit_bytes_total[5m])` |
| Uptime | `node_time_seconds - node_boot_time_seconds` |
| Service up | `up` |

## Notes

- Time range defaults to last 1 hour for instant queries
- Use range queries `[5m]` for rate calculations
- All queries return JSON with `data.result` containing the results
- Instance labels typically show `host:port` format
- When using `--all`, queries run in parallel for faster results
- Config is stored outside the skill directory so it persists across skill updates