OpenClaw

Skill details (site mirror, no comments)

Create and manage a sorted directory structure in AWS S3

Upload many files to S3 with automatic organization by first-character prefixes.

Data & Tables

License: MIT-0

MIT-0 · Free to use, modify, and redistribute. No attribution required.

Version: v1.0.1

Stats: ⭐ 0 · 449 · 0 current installs · 0 all-time installs

Package: 6mile-puppet/s3-sort

Security scan (ClawHub)

  • VirusTotal: benign
  • OpenClaw: benign

OpenClaw evaluation

The skill is coherent with its stated purpose: a bash-based bulk S3 uploader that requires the AWS CLI and AWS credentials, and does nothing outside that scope.

Purpose

Name/description, required binaries (aws), declared env vars (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY), brew install (awscli), SKILL.md, and the included bash script all match the stated goal of uploading files to S3 organized by first-character prefixes.

Instruction scope

The runtime instructions and script operate only on a user-supplied source directory and a target S3 bucket, invoking `aws s3 cp` / `aws s3 sync`. They do not contact other external endpoints. One important behavioral note: the sync/staging flow creates symlinks to the realpath of files, which will cause files referenced by symlinks (including files outside the provided directory) to be staged and uploaded. Ensure SOURCE_DIR contains only the intended files.
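
As a quick pre-flight check, you can list any symlinks inside the source directory before uploading. This is a sketch using a throwaway directory and made-up file names to illustrate the behavior:

```bash
# Demo: detect symlinks inside a source directory before uploading.
# Directory and file names here are illustrative only.
SOURCE_DIR="$(mktemp -d)"
touch "$SOURCE_DIR/real.txt"
ln -s /etc/hosts "$SOURCE_DIR/outside-link.txt"  # points outside SOURCE_DIR

# Any output here means the staging flow would upload content
# from outside the directory you passed in.
LINKS=$(find "$SOURCE_DIR" -type l)
echo "$LINKS"

rm -rf "$SOURCE_DIR"
```

In real use, run `find "$SOURCE_DIR" -type l` against your actual directory and review anything it prints.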

Install mechanism

Install uses Homebrew to install the official awscli formula (which provides the `aws` binary). This is a standard, low-risk install method for macOS/Homebrew environments.

Credentials

Only AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are required, which is appropriate for a tool that calls the AWS CLI. The primaryEnv is set to AWS_ACCESS_KEY_ID. No unrelated credentials or config paths are requested.

Persistence

The skill does not request always:true, does not modify other skills or system-wide configs, and contains no mechanism to persist beyond normal installation. Autonomous invocation is allowed (platform default) but not combined with other red flags.

Overall conclusion

This skill appears to do exactly what it claims, but review these practical safety steps before installing or running it:

1) Use least-privilege AWS credentials: grant only the S3 permissions needed (PutObject, List, optionally Delete) and scope them to the target bucket(s).
2) Test with --dry-run and/or a disposable test bucket first to verify behavior.
3) Be aware the staging flow follows symlinks (it creates symlinks to real file paths), so ensure the source directory contains only the files you intend to upload.
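
For step 1, a least-privilege IAM policy might look like the following sketch. The bucket name is a placeholder; add `s3:DeleteObject` only if you need sync deletions:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "S3BulkUploadWrite",
      "Effect": "Allow",
      "Action": ["s3:PutObject"],
      "Resource": "arn:aws:s3:::my-bucket/*"
    },
    {
      "Sid": "S3BulkUploadList",
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::my-bucket"
    }
  ]
}
```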

Install (copy to 龙虾 AI)

Copy the entire block below into a 龙虾中文库 conversation; 龙虾 will complete the installation per SKILL.md.

Hand this block to 龙虾中文库 (龙虾 AI) to execute: install the OpenClaw skill "Create and manage a sorted directory structure in AWS S3" on this machine. Summary: Upload many files to S3 with automatic organization by first-character prefixes.
Fetch the following URL to read SKILL.md and complete the installation as documented: https://raw.githubusercontent.com/openclaw/skills/refs/heads/main/skills/6mile-puppet/s3-sort/SKILL.md
(Source: yingzhi8.cn skill library)

SKILL.md

Open the original SKILL.md (GitHub raw)

---
name: s3-bulk-upload
description: Upload many files to S3 with automatic organization by first-character prefixes.
version: 1.0.0
metadata:
  openclaw:
    requires:
      env:
        - AWS_ACCESS_KEY_ID
        - AWS_SECRET_ACCESS_KEY
      bins:
        - aws
    primaryEnv: AWS_ACCESS_KEY_ID
    emoji: "\U0001F4E6"
    install:
      - kind: brew
        formula: awscli
        bins: [aws]
---

# S3 Bulk Upload

Upload files to S3 with automatic organization using first-character prefixes (e.g., `a/apple.txt`, `b/banana.txt`, `0-9/123.txt`).

## Quick Start

Use the included script for bulk uploads:

```bash
# Basic upload
./s3-bulk-upload.sh ./files my-bucket

# Dry run to preview
./s3-bulk-upload.sh ./files my-bucket --dry-run

# Use sync mode (faster for many files)
./s3-bulk-upload.sh ./files my-bucket --sync

# With storage class
./s3-bulk-upload.sh ./files my-bucket --storage-class STANDARD_IA
```

## Prerequisites

Verify AWS credentials are configured:

```bash
aws sts get-caller-identity
```

If this fails, ensure `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` are set, or configure via `aws configure`.

## Organization Logic

Files are organized by the first character of their filename:

| First Character | Prefix |
|-----------------|--------|
| `a-z` | Lowercase letter (e.g., `a/`, `b/`) |
| `A-Z` | Lowercased letter (e.g., `A` → `a/`) |
| `0-9` | `0-9/` |
| Other | `_other/` |
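
The mapping above can be sketched as a small shell function (the function name is illustrative, not part of the skill):

```bash
# Compute the S3 prefix for a filename per the table above.
compute_prefix() {
  local first
  first=$(printf '%s' "$1" | cut -c1 | tr '[:upper:]' '[:lower:]')
  case "$first" in
    [a-z]) printf '%s\n' "$first" ;;   # lowercase letters map to themselves
    [0-9]) printf '0-9\n' ;;           # digits share one prefix
    *)     printf '_other\n' ;;        # everything else (dots, dashes, etc.)
  esac
}

compute_prefix "apple.txt"   # a
compute_prefix "Banana.txt"  # b
compute_prefix "123.txt"     # 0-9
compute_prefix ".hidden"     # _other
```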

## Single File Upload

Upload a single file with automatic prefix:

```bash
FILE="example.txt"
BUCKET="my-bucket"

# Compute prefix from first character
FIRST_CHAR=$(echo "${FILE}" | cut -c1 | tr '[:upper:]' '[:lower:]')
if [[ "$FIRST_CHAR" =~ [a-z] ]]; then
  PREFIX="$FIRST_CHAR"
elif [[ "$FIRST_CHAR" =~ [0-9] ]]; then
  PREFIX="0-9"
else
  PREFIX="_other"
fi

aws s3 cp "$FILE" "s3://${BUCKET}/${PREFIX}/${FILE}"
```

## Bulk Upload

Upload all files from a directory:

```bash
SOURCE_DIR="./files"
BUCKET="my-bucket"

for FILE in "$SOURCE_DIR"/*; do
  [ -f "$FILE" ] || continue
  BASENAME=$(basename "$FILE")
  FIRST_CHAR=$(echo "$BASENAME" | cut -c1 | tr '[:upper:]' '[:lower:]')

  if [[ "$FIRST_CHAR" =~ [a-z] ]]; then
    PREFIX="$FIRST_CHAR"
  elif [[ "$FIRST_CHAR" =~ [0-9] ]]; then
    PREFIX="0-9"
  else
    PREFIX="_other"
  fi

  aws s3 cp "$FILE" "s3://${BUCKET}/${PREFIX}/${BASENAME}"
done
```

## Efficient Bulk Sync

For large uploads, stage files with symlinks then use `aws s3 sync`:

```bash
SOURCE_DIR="./files"
STAGING_DIR="./staging"
BUCKET="my-bucket"

# Create staging directory with prefix structure
rm -rf "$STAGING_DIR"
mkdir -p "$STAGING_DIR"

for FILE in "$SOURCE_DIR"/*; do
  [ -f "$FILE" ] || continue
  BASENAME=$(basename "$FILE")
  FIRST_CHAR=$(echo "$BASENAME" | cut -c1 | tr '[:upper:]' '[:lower:]')

  if [[ "$FIRST_CHAR" =~ [a-z] ]]; then
    PREFIX="$FIRST_CHAR"
  elif [[ "$FIRST_CHAR" =~ [0-9] ]]; then
    PREFIX="0-9"
  else
    PREFIX="_other"
  fi

  mkdir -p "$STAGING_DIR/$PREFIX"
  ln -s "$(realpath "$FILE")" "$STAGING_DIR/$PREFIX/$BASENAME"
done

# Sync entire staging directory to S3
aws s3 sync "$STAGING_DIR" "s3://${BUCKET}/"

# Clean up
rm -rf "$STAGING_DIR"
```
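
Note that `aws s3 sync` follows the staged symlinks by default (there is a `--no-follow-symlinks` flag to disable this). If you prefer to avoid symlinks entirely, hard links work too, provided the staging directory is on the same filesystem as the source. A sketch with throwaway paths:

```bash
# Demo: stage with hard links instead of symlinks.
# Paths here are temporary and for illustration only.
SOURCE_DIR="$(mktemp -d)"
STAGING_DIR="$(mktemp -d)"
echo "data" > "$SOURCE_DIR/apple.txt"

mkdir -p "$STAGING_DIR/a"
# A hard link is the file itself under another name, so nothing
# outside SOURCE_DIR can be pulled in via link targets.
ln "$SOURCE_DIR/apple.txt" "$STAGING_DIR/a/apple.txt"

STAGED_CONTENT=$(cat "$STAGING_DIR/a/apple.txt")
ISLINK=0; [ -L "$STAGING_DIR/a/apple.txt" ] && ISLINK=1
echo "$STAGED_CONTENT"   # data

rm -rf "$SOURCE_DIR" "$STAGING_DIR"
```

The trade-off: hard links require source and staging on one filesystem, while symlinks work across filesystems but can reach outside the source tree.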

## Verification

List files by prefix:

```bash
BUCKET="my-bucket"
PREFIX="a"

aws s3 ls "s3://${BUCKET}/${PREFIX}/" --recursive
```

Generate a manifest of all uploaded files:

```bash
BUCKET="my-bucket"

aws s3 ls "s3://${BUCKET}/" --recursive | awk '{print $4}'
```

Count files per prefix:

```bash
BUCKET="my-bucket"

for PREFIX in {a..z} 0-9 _other; do
  COUNT=$(aws s3 ls "s3://${BUCKET}/${PREFIX}/" --recursive 2>/dev/null | wc -l | tr -d ' ')
  [ "$COUNT" -gt 0 ] && echo "$PREFIX: $COUNT files"
done
```

## Error Handling

Common issues and solutions:

| Error | Cause | Solution |
|-------|-------|----------|
| `AccessDenied` | Insufficient permissions | Check IAM policy has `s3:PutObject` on bucket |
| `NoSuchBucket` | Bucket doesn't exist | Create bucket or check bucket name spelling |
| `InvalidAccessKeyId` | Bad credentials | Verify `AWS_ACCESS_KEY_ID` is correct |
| `ExpiredToken` | Session token expired | Refresh credentials or re-authenticate |

Test bucket access before bulk upload:

```bash
BUCKET="my-bucket"
echo "test" | aws s3 cp - "s3://${BUCKET}/_test_access.txt" && 
  aws s3 rm "s3://${BUCKET}/_test_access.txt" && 
  echo "Bucket access OK"
```

## Storage Classes

Optimize costs with storage classes:

```bash
# Standard (default)
aws s3 cp file.txt s3://bucket/prefix/file.txt

# Infrequent Access (cheaper storage, retrieval fee)
aws s3 cp file.txt s3://bucket/prefix/file.txt --storage-class STANDARD_IA

# Glacier Instant Retrieval (archive with fast access)
aws s3 cp file.txt s3://bucket/prefix/file.txt --storage-class GLACIER_IR

# Intelligent Tiering (auto-optimize based on access patterns)
aws s3 cp file.txt s3://bucket/prefix/file.txt --storage-class INTELLIGENT_TIERING
```

Add `--storage-class` to bulk upload loops for cost optimization on infrequently accessed files.