AI Integration

The openviper.ai package provides a unified, async-native AI provider registry that abstracts multiple inference backends (OpenAI, Anthropic, Gemini, Ollama, Grok, and custom providers) behind a single interface.

Installation

The AI providers are an optional dependency. Install them with:

pip install openviper[ai]

This pulls in the openai, anthropic, and google-genai SDKs. Providers that only need httpx (Ollama, Grok) work without the extra.

If you are developing locally from a clone of the repository:

pip install -e '.[ai]'

Overview

The package is organized around three concepts:

  1. AIProvider — the abstract base class every provider implements.

  2. ProviderRegistry — a thread-safe, model-centric registry that maps model IDs to provider instances and is auto-populated from settings.AI_PROVIDERS.

  3. ModelRouter — a high-level runtime-swappable client that resolves the active provider from the registry on each call.

A stable extension API (openviper.ai.extension) is provided for third-party provider authors.

Key Classes & Functions

openviper.ai.base

class AIProvider(config)

Abstract base class for all AI providers.

name

Provider identifier string (e.g. "openai").

generate(prompt, **kwargs) → Awaitable[str]

Generate a text response for prompt. Must be implemented by every concrete provider.

stream(prompt, **kwargs) → AsyncIterator[str]

Stream response chunks. Optional — default implementation calls generate() and yields the full string as a single chunk.
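
The contract above can be sketched in plain Python. This is an illustrative stand-in, not the library's actual source: the `ShoutProvider` class and its behavior are invented here purely to show the required `generate()` override and the default single-chunk `stream()` fallback.

```python
import abc
import asyncio
from typing import Any, AsyncIterator


class AIProvider(abc.ABC):
    """Sketch of the documented contract (not the real implementation)."""

    name: str = ""

    def __init__(self, config: dict[str, Any]) -> None:
        self.config = config

    @abc.abstractmethod
    async def generate(self, prompt: str, **kwargs: Any) -> str:
        """Every concrete provider must implement this."""

    async def stream(self, prompt: str, **kwargs: Any) -> AsyncIterator[str]:
        # Default behavior per the docs: call generate() and yield
        # the full string as a single chunk.
        yield await self.generate(prompt, **kwargs)


class ShoutProvider(AIProvider):
    """Hypothetical provider used only for this sketch."""

    name = "shout"

    async def generate(self, prompt: str, **kwargs: Any) -> str:
        return prompt.upper()


async def demo() -> list[str]:
    provider = ShoutProvider({})
    return [chunk async for chunk in provider.stream("hi there")]


print(asyncio.run(demo()))  # ['HI THERE']
```

Because `stream()` has a default, a minimal provider only needs `generate()`; providers with true token streaming override `stream()` as well.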

openviper.ai.registry

class ProviderRegistry

Thread-safe registry mapping model IDs to AIProvider instances.

register(provider, model_id, allow_override=False)

Register provider under model_id. Raises ModelCollisionError if model_id is already taken and allow_override=False.

get_by_model(model_id) → AIProvider

Return the provider for model_id. Raises ModelNotFoundError if not found.

list_models() → list[str]

Return all registered model IDs.

register_from_module(dotted_path)

Import a module and call its get_providers() function to register providers from third-party packages.

load_plugins(directory)

Scan a directory for provider modules and register them.

discover_entrypoints()

Discover and register providers from installed package entry-points under the openviper.ai.providers group.

openviper.ai.registry.provider_registry

The global ProviderRegistry singleton.
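
The collision and lookup semantics described above can be illustrated with a minimal thread-safe mapping. The class and exception definitions below are a sketch that mirrors the documented behavior, not the package's real code:

```python
import threading


class ModelCollisionError(KeyError):
    """Raised when a model_id is already registered (sketch)."""


class ModelNotFoundError(KeyError):
    """Raised when a model_id has no registered provider (sketch)."""


class MiniRegistry:
    """Toy model-ID -> provider mapping mirroring the documented semantics."""

    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._providers: dict[str, object] = {}

    def register(self, provider: object, model_id: str,
                 allow_override: bool = False) -> None:
        with self._lock:
            if model_id in self._providers and not allow_override:
                raise ModelCollisionError(model_id)
            self._providers[model_id] = provider

    def get_by_model(self, model_id: str) -> object:
        with self._lock:
            try:
                return self._providers[model_id]
            except KeyError:
                raise ModelNotFoundError(model_id) from None

    def list_models(self) -> list[str]:
        with self._lock:
            return sorted(self._providers)


registry = MiniRegistry()
registry.register("provider-a", "echo-v1")
try:
    registry.register("provider-b", "echo-v1")  # same model_id: collides
except ModelCollisionError:
    print("collision detected")
registry.register("provider-b", "echo-v1", allow_override=True)
print(registry.list_models())  # ['echo-v1']
```

The lock makes register/lookup safe from multiple threads; `allow_override=True` is the documented escape hatch for intentionally replacing a provider.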

openviper.ai.router

class ModelRouter(registry=None, default_model=None)

Runtime-swappable AI inference client. All method calls are delegated to the provider registered for the current model.

set_model(model_id) → None

Switch the active model (thread-safe).

get_model() → str | None

Return the currently active model ID.

generate(prompt, **kwargs) → Awaitable[str]

Generate text using the active model’s provider.

stream(prompt, **kwargs) → AsyncIterator[str]

Stream response chunks from the active model’s provider.

openviper.ai.router.model_router

The global ModelRouter singleton.

openviper.ai.extension

Stable public API for third-party provider authors. Import from this module to avoid depending on internal symbols:

from openviper.ai.extension import (
    AIProvider,
    provider_registry,
    ModelCollisionError,
    EXTENSION_API_VERSION,
)

openviper.ai.devkit

Helpers for provider authors:

class SimpleProvider(AIProvider)

Abstract base with sensible defaults. Accepts name as a constructor keyword argument.

normalize_response(text) → str

Strip leading/trailing whitespace and normalize line endings.

map_http_error(status_code) → AIException

Convert an HTTP status code to the appropriate AI exception subclass.
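
The behavior of normalize_response() can be sketched directly from its description; this is an illustrative reimplementation, not the devkit's source. (map_http_error() is not sketched here because the AIException hierarchy is not documented in this section.)

```python
def normalize_response(text: str) -> str:
    """Sketch of the documented behavior: trim surrounding whitespace
    and normalize CRLF / CR line endings to LF."""
    return text.replace("\r\n", "\n").replace("\r", "\n").strip()


print(repr(normalize_response("  hello\r\nworld \r")))  # 'hello\nworld'
```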

Example Usage

Registering & Using a Provider

from openviper.ai.extension import AIProvider, provider_registry
from typing import Any

class EchoProvider(AIProvider):
    name = "echo"

    async def generate(self, prompt: str, **kwargs: Any) -> str:
        return f"[Echo] {prompt}"

# Register
provider_registry.register(
    EchoProvider({"models": {"Echo Model": "echo-v1"}}),
    model_id="echo-v1",
)

# Use via the model router (await requires an async context)
import asyncio
from openviper.ai.router import model_router

async def main() -> None:
    model_router.set_model("echo-v1")
    result = await model_router.generate("Hello, world!")
    print(result)   # "[Echo] Hello, world!"

asyncio.run(main())

Configuration via Settings

import dataclasses
import os
from typing import Any

from openviper.conf import Settings

@dataclasses.dataclass(frozen=True)
class MySettings(Settings):
    AI_PROVIDERS: dict[str, Any] = dataclasses.field(
        default_factory=lambda: {
            "ollama": {
                "base_url": os.environ.get("OLLAMA_BASE_URL", "http://localhost:11434"),
                "models": {
                    "Granite Code 3B": "granite-code:3b",
                    "Qwen3 4B": "qwen3:4b",
                },
            },
            "gemini": {
                "api_key": os.environ.get("GEMINI_API_KEY"),
                "models": {
                    "GEMINI 2.5 FLASH": "gemini-2.5-flash",
                    "GEMINI 3 PRO PREVIEW": "gemini-3-pro-preview",
                    "GEMINI 3 FLASH PREVIEW": "gemini-3-flash-preview",
                    "GEMINI 3.1 PRO PREVIEW": "gemini-3.1-pro-preview",
                },
                "embed_model": "models/text-embedding-004",
                "temperature": 1.0,
                "max_output_tokens": 2048,
                "candidate_count": 1,
                "top_p": 0.95,
                "top_k": 40,
            },
        }
    )

Streaming Response

from openviper.http.response import StreamingResponse
from openviper.ai.router import model_router

@router.post("/ai/stream")   # `router` is your application's HTTP router instance
async def stream_ai(request) -> StreamingResponse:
    body = await request.json()
    prompt = body.get("prompt", "")

    async def generate():
        async for chunk in model_router.stream(prompt):
            yield chunk.encode()

    return StreamingResponse(generate(), media_type="text/plain")