OpenClaw

Skill details (on-site mirror, no comments)


Configure LLM providers, use fallback models, handle streaming, and manage model settings in PydanticAI. Use when selecting models, implementing resilience,...

Development & DevOps

Author: Kevin Anderson @anderskev

License: MIT-0

MIT-0 · Free to use, modify, and redistribute. No attribution required.

Version: v1.0.0

Stats: ⭐ 0 · 16 · 1 current install · 1 all-time install

🛡 VirusTotal: Benign · OpenClaw: Suspicious

Package: anderskev/pydantic-ai-model-integration

Security scan (ClawHub)

  • VirusTotal: Benign
  • OpenClaw: Suspicious

OpenClaw Evaluation

The SKILL.md is a coherent, instruction-only integration guide for PydanticAI, but it references provider API keys and environment-based model validation without declaring any required env vars or a source — this mismatch and the unknown origin warrant caution.

Purpose

The name/description (PydanticAI model integration, fallback models, streaming, settings) matches the SKILL.md content: examples show configuring models, streaming, fallbacks, and usage limits. There is no unrelated functionality in the instructions.

Instruction scope

Instructions are focused on using the pydantic_ai library (Agent, ModelSettings, FallbackModel, streaming). They reference reading environment variables (os.getenv) and model-validation behavior that 'checks env vars' but do not instruct the agent to read arbitrary files or exfiltrate data. The guidance to supply api_key parameters or rely on provider env vars means runtime code will access secrets if present.

Install mechanism

No install spec and no code files are included (instruction-only). This is low risk from an installation perspective: nothing is downloaded or written to disk by the skill itself.

Credentials

The SKILL.md explicitly references provider API keys and env vars (OPENAI_API_KEY, ANTHROPIC_API_KEY, os.getenv('PYDANTIC_AI_MODEL')), and says model validation 'checks env vars', but the skill metadata declares no required environment variables or primary credential. That mismatch means the skill may expect sensitive keys at runtime even though none are declared — users could unintentionally expose API keys to code running under the Agent whe…

Persistence

`always` is false, and there is no install-time behavior or request for permanent presence. The skill does not request elevated agent-wide privileges or modify other skills or configurations.

Install (copy to 龙虾 AI)

Copy the entire block below into a 龙虾中文库 chat; 龙虾 will complete the installation per SKILL.md.

Hand this block to 龙虾中文库 (龙虾 AI) to execute: install the OpenClaw skill "Pydantic Ai Model Integration" on this machine. Summary: Configure LLM providers, use fallback models, handle streaming, and manage mode….
Fetch the following URL, read SKILL.md, and complete the installation as documented: https://raw.githubusercontent.com/openclaw/skills/refs/heads/main/skills/anderskev/pydantic-ai-model-integration/SKILL.md
(Source: yingzhi8.cn skill library)

SKILL.md

Open the original SKILL.md (GitHub raw)

---
name: pydantic-ai-model-integration
description: Configure LLM providers, use fallback models, handle streaming, and manage model settings in PydanticAI. Use when selecting models, implementing resilience, or optimizing API calls.
---

# PydanticAI Model Integration

## Provider Model Strings

Format: `provider:model-name`

```python
from pydantic_ai import Agent

# OpenAI
Agent('openai:gpt-4o')
Agent('openai:gpt-4o-mini')
Agent('openai:o1-preview')

# Anthropic
Agent('anthropic:claude-sonnet-4-5')
Agent('anthropic:claude-haiku-4-5')

# Google (API Key)
Agent('google-gla:gemini-2.0-flash')
Agent('google-gla:gemini-2.0-pro')

# Google (Vertex AI)
Agent('google-vertex:gemini-2.0-flash')

# Groq
Agent('groq:llama-3.3-70b-versatile')
Agent('groq:mixtral-8x7b-32768')

# Mistral
Agent('mistral:mistral-large-latest')

# Other providers
Agent('cohere:command-r-plus')
Agent('bedrock:anthropic.claude-3-sonnet')
```
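The `provider:model-name` convention is plain string syntax, so it can be decomposed without the library. A minimal sketch (the `split_model_string` helper is hypothetical, not a PydanticAI API):

```python
def split_model_string(model: str) -> tuple[str, str]:
    """Split a 'provider:model-name' string into (provider, model name)."""
    # partition splits at the FIRST colon, so names like
    # 'bedrock:anthropic.claude-3-sonnet' keep their dots intact
    provider, sep, name = model.partition(':')
    if not sep or not name:
        raise ValueError(f"expected 'provider:model-name', got {model!r}")
    return provider, name

# e.g. split_model_string('openai:gpt-4o') returns ('openai', 'gpt-4o')
```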

## Model Settings

```python
from pydantic_ai import Agent
from pydantic_ai.settings import ModelSettings

agent = Agent(
    'openai:gpt-4o',
    model_settings=ModelSettings(
        temperature=0.7,
        max_tokens=1000,
        top_p=0.9,
        timeout=30.0,  # Request timeout
    )
)

# Override per-run
result = await agent.run(
    'Generate creative text',
    model_settings=ModelSettings(temperature=1.0)
)
```

## Fallback Models

Chain models for resilience:

```python
from pydantic_ai.models.fallback import FallbackModel

# Try models in order until one succeeds
fallback = FallbackModel(
    'openai:gpt-4o',
    'anthropic:claude-sonnet-4-5',
    'google-gla:gemini-2.0-flash'
)

agent = Agent(fallback)
result = await agent.run('Hello')

# Custom fallback conditions
from pydantic_ai.exceptions import ModelHTTPError

def should_fallback(error: Exception) -> bool:
    """Only fall back on rate limits or server errors."""
    if isinstance(error, ModelHTTPError):
        return error.status_code in (429, 500, 502, 503)
    return False

fallback = FallbackModel(
    'openai:gpt-4o',
    'anthropic:claude-sonnet-4-5',
    fallback_on=should_fallback
)
```

## Streaming Responses

```python
async def stream_response():
    async with agent.run_stream('Tell me a story') as response:
        # Stream text output
        async for chunk in response.stream_output():
            print(chunk, end='', flush=True)

    # Access final result after streaming
    print(f"\nTokens used: {response.usage().total_tokens}")
```

### Streaming with Structured Output

```python
from pydantic import BaseModel

class Story(BaseModel):
    title: str
    content: str
    moral: str

agent = Agent('openai:gpt-4o', output_type=Story)

async with agent.run_stream('Write a fable') as response:
    # For structured output, stream_output yields partial JSON
    async for partial in response.stream_output():
        print(partial)  # Partial Story object as parsed

    # Final validated result
    story = response.output
```

## Dynamic Model Selection

```python
import os

# Environment-based selection
model = os.getenv('PYDANTIC_AI_MODEL', 'openai:gpt-4o')
agent = Agent(model)

# Runtime model override
result = await agent.run(
    'Hello',
    model='anthropic:claude-sonnet-4-5'  # Override default
)

# Context manager override
with agent.override(model='google-gla:gemini-2.0-flash'):
    result = agent.run_sync('Hello')
```

## Deferred Model Checking

Delay model validation for testing:

```python
# Default: Validates model immediately (checks env vars)
agent = Agent('openai:gpt-4o')

# Deferred: Validates only on first run
agent = Agent('openai:gpt-4o', defer_model_check=True)

# Useful for testing with override
from pydantic_ai.models.test import TestModel

with agent.override(model=TestModel()):
    result = agent.run_sync('Test')  # No OpenAI key needed
```

## Usage Tracking

```python
result = await agent.run('Hello')

# Request usage (last request)
usage = result.usage()
print(f"Input tokens: {usage.input_tokens}")
print(f"Output tokens: {usage.output_tokens}")
print(f"Total tokens: {usage.total_tokens}")

# Full run usage (all requests in run)
run_usage = result.run_usage()
print(f"Total requests: {run_usage.requests}")
```

## Usage Limits

```python
from pydantic_ai.usage import UsageLimits

# Limit token usage
result = await agent.run(
    'Generate content',
    usage_limits=UsageLimits(
        total_tokens_limit=1000,
        request_tokens_limit=500,
        response_tokens_limit=500,
    )
)
```
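To make concrete what a total-token limit enforces, here is a plain-Python sketch of the bookkeeping (hypothetical helper names; this is not the library's internal implementation):

```python
class TokenBudgetExceeded(Exception):
    """Raised when a simulated run exceeds its token budget."""

def check_total_tokens(input_tokens: int, output_tokens: int, limit: int) -> int:
    """Return the combined token count, or raise once it exceeds the limit."""
    total = input_tokens + output_tokens
    if total > limit:
        raise TokenBudgetExceeded(f'{total} tokens exceeds limit of {limit}')
    return total

# check_total_tokens(400, 450, 1000) returns 850; 600 + 600 would raise
```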

## Provider-Specific Features

### OpenAI

```python
from pydantic_ai.models.openai import OpenAIModel

model = OpenAIModel(
    'gpt-4o',
    api_key='your-key',  # Or use OPENAI_API_KEY env var
    base_url='https://custom-endpoint.com'  # For Azure, proxies
)
```

### Anthropic

```python
from pydantic_ai.models.anthropic import AnthropicModel

model = AnthropicModel(
    'claude-sonnet-4-5',
    api_key='your-key'  # Or ANTHROPIC_API_KEY
)
```

## Common Model Patterns

| Use Case | Recommendation |
|----------|---------------|
| General purpose | `openai:gpt-4o` or `anthropic:claude-sonnet-4-5` |
| Fast/cheap | `openai:gpt-4o-mini` or `anthropic:claude-haiku-4-5` |
| Long context | `anthropic:claude-sonnet-4-5` (200k) or `google-gla:gemini-2.0-flash` |
| Reasoning | `openai:o1-preview` |
| Cost-sensitive prod | `FallbackModel` with fast model first |
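The last row's "fast model first" idea can be illustrated without the library: try the cheap candidate and escalate only when it fails. A sketch using a hypothetical `first_success` helper (not PydanticAI's `FallbackModel`, which also filters by error type):

```python
def first_success(candidates):
    """Call each zero-argument candidate in order; return the first result that doesn't raise."""
    last_error = None
    for candidate in candidates:
        try:
            return candidate()
        except Exception as error:  # FallbackModel would only catch fallback-eligible errors
            last_error = error
    raise last_error

def cheap_model():
    raise RuntimeError('429: rate limited')  # simulate the cheap model failing

def expensive_model():
    return 'answer from the larger model'

# Cheap model first; the expensive model is only called on failure
result = first_success([cheap_model, expensive_model])
```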