OpenClaw

Skill details (on-site mirror, no comments)


Build transformer fine-tuning run plans with task settings, hyperparameters, and model-card outputs. Use for repeatable Hugging Face or PyTorch finetuning workflows.

Development & DevOps

Author: Muhammad Mazhar Saeed @0x-professor

License: MIT-0

MIT-0 · Free to use, modify, and redistribute. No attribution required.

Version: v0.1.0

Stats: ⭐ 0 · 324 · 2 current installs · 2 all-time installs


🛡 VirusTotal: suspicious · OpenClaw: benign

Package: 0x-professor/dl-transformer-finetune

Security scan (ClawHub)

  • VirusTotal: suspicious
  • OpenClaw: benign

OpenClaw assessment

The skill is internally consistent: it provides an instruction and a small bundled script to generate reproducible fine-tuning run plans and does not request credentials, external downloads, or network access.

Purpose

The name/description match the included artifacts: SKILL.md, a finetune guidance doc, and a Python script that builds run plans and model-card skeletons. No unrelated binaries, env vars, or services are requested.

Scope of instructions

SKILL.md instructs the agent to run the bundled scripts and consult the reference guide; the script only reads an optional JSON input and writes an output file (json/md/csv). This stays within the stated purpose, but note the script will create/overwrite files at the provided output path and can load a user-specified input file (size-limited).
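The read/write behavior described above can be sketched as follows. This is an illustrative reconstruction, not the bundled script's actual code: the size cap, function names, and error handling are assumptions.

```python
import json
import os
from pathlib import Path

# Hypothetical size cap; the real limit lives in scripts/build_finetune_plan.py.
MAX_INPUT_BYTES = 1_000_000

def load_input(path: str) -> dict:
    """Read the optional JSON input, rejecting oversized files before parsing."""
    if os.path.getsize(path) > MAX_INPUT_BYTES:
        raise ValueError(f"input file exceeds {MAX_INPUT_BYTES} bytes: {path}")
    with open(path, encoding="utf-8") as f:
        return json.load(f)

def write_plan(plan: dict, out_path: str) -> None:
    """Write the plan to out_path. Note: silently overwrites an existing file,
    which is exactly the behavior the assessment warns about."""
    Path(out_path).write_text(json.dumps(plan, indent=2), encoding="utf-8")
```

The overwrite-on-write behavior is why the assessment recommends avoiding privileged or system paths for the output file.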

Installation mechanism

No install spec; this is instruction-only with a small included script. Nothing is downloaded or extracted from external URLs.

Credentials

No environment variables, credentials, or config paths are requested. The script does not access other system credentials or external services.

Persistence

The manifest sets always: false and makes no modifications to other skills or system-wide settings. The skill can be invoked autonomously per platform defaults, but it does not request elevated or persistent privileges.
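For context, a manifest that opts out of always-on invocation might look like the following. The `always` field shown here is an assumption about OpenClaw's metadata schema; the actual SKILL.md frontmatter reproduced later on this page sets only `name` and `description`.

```yaml
---
name: dl-transformer-finetune
description: Build transformer fine-tuning run plans with task settings, hyperparameters, and model-card outputs.
always: false   # do not auto-invoke; load the skill only when the task calls for it
---
```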

Overall conclusion

This skill appears to do what it says: generate finetuning run plans. Before installing or running it, consider: (1) review the bundled script yourself — it writes files to whatever output path you provide and can overwrite existing files, so avoid privileged/system paths; (2) prefer running with --dry-run first to inspect output without side effects; (3) do not pass secrets or credentials in the optional input JSON; (4) validate any datasets …
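The dry-run recommendation above comes down to a guard between building the plan and writing it to disk. A minimal sketch of that pattern, where the flag and function names are illustrative and not the bundled script's actual CLI:

```python
import argparse
import json

def build_plan(task: str, seed: int) -> dict:
    # Stand-in for the real plan builder in scripts/build_finetune_plan.py.
    return {"task": task, "seed": seed,
            "hyperparameters": {"learning_rate": 2e-5, "epochs": 3}}

def main(argv=None):
    parser = argparse.ArgumentParser()
    parser.add_argument("--task", default="classification")
    parser.add_argument("--seed", type=int, default=42)
    parser.add_argument("--output", default="run_plan.json")
    parser.add_argument("--dry-run", action="store_true",
                        help="print the plan instead of writing --output")
    args = parser.parse_args(argv)
    plan = build_plan(args.task, args.seed)
    if args.dry_run:
        print(json.dumps(plan, indent=2))  # inspect output; no files touched
        return plan
    with open(args.output, "w", encoding="utf-8") as f:  # overwrites existing file
        json.dump(plan, f, indent=2)
    return plan
```

Running with `--dry-run` first lets you verify the generated plan before any file is created or overwritten.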

Install (copy to 龙虾 AI)

Copy the entire block below into a 龙虾中文库 conversation, and 龙虾 will complete the installation per SKILL.md.

Hand this block to 龙虾中文库 (龙虾 AI) to execute: install the OpenClaw skill "Dl Transformer Finetune" on this machine. Summary: Build transformer fine-tuning run plans with task settings, hyperparameters, an…
Please fetch the following URL to read SKILL.md and complete the installation per the document: https://raw.githubusercontent.com/openclaw/skills/refs/heads/main/skills/0x-professor/dl-transformer-finetune/SKILL.md
(Source: yingzhi8.cn skill library)

SKILL.md

Open the original SKILL.md (GitHub raw)

---
name: dl-transformer-finetune
description: Build transformer fine-tuning run plans with task settings, hyperparameters, and model-card outputs. Use for repeatable Hugging Face or PyTorch finetuning workflows.
---

# DL Transformer Finetune

## Overview

Generate reproducible fine-tuning run plans for transformer models and downstream tasks.

## Workflow

1. Define base model, task type, and dataset.
2. Set training hyperparameters and evaluation cadence.
3. Produce run plan plus model card skeleton.
4. Export configuration-ready artifacts for training pipelines.
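The four workflow steps can be sketched in a few lines of Python. This is a hedged illustration of the data flow, not the bundled script's implementation; all function and field names are assumptions.

```python
from datetime import date

def build_run_plan(base_model: str, task: str, dataset: str,
                   hyperparams: dict, eval_every_steps: int) -> dict:
    """Steps 1-2: capture model/task/dataset settings, hyperparameters,
    and evaluation cadence in one reproducible structure."""
    return {
        "base_model": base_model,
        "task": task,
        "dataset": dataset,
        "hyperparameters": hyperparams,
        "evaluation": {"every_steps": eval_every_steps},
    }

def model_card_skeleton(plan: dict) -> str:
    """Step 3: derive a model-card skeleton from the plan, ready to export."""
    return "\n".join([
        f"# Model card: {plan['base_model']} ({plan['task']})",
        f"- Dataset: {plan['dataset']}",
        f"- Hyperparameters: {plan['hyperparameters']}",
        f"- Generated: {date.today().isoformat()}",
        "- Evaluation results: TBD",
    ])
```

Step 4 would then serialize the plan dict (e.g. to JSON) for consumption by a training pipeline.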

## Use Bundled Resources

- Run `scripts/build_finetune_plan.py` for deterministic plan output.
- Read `references/finetune-guide.md` for hyperparameter baseline guidance.

## Guardrails

- Keep run plans reproducible with explicit seeds and output directories.
- Include evaluation and rollback criteria.
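A run plan that honors both guardrails might look like the following fragment. The field names are illustrative; the actual schema is whatever `scripts/build_finetune_plan.py` emits.

```json
{
  "base_model": "bert-base-uncased",
  "task": "text-classification",
  "dataset": "imdb",
  "seed": 42,
  "output_dir": "runs/bert-imdb-seed42",
  "hyperparameters": {"learning_rate": 2e-5, "epochs": 3, "batch_size": 16},
  "evaluation": {"every_steps": 500, "metric": "accuracy"},
  "rollback": {
    "criterion": "eval accuracy below pre-finetune baseline",
    "action": "restore previous checkpoint"
  }
}
```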