AI Providers

ServerAssistantAI supports a wide range of AI providers for both language models (LLMs) and embeddings.

What Are Providers?

Providers are the configurable components behind ServerAssistantAI's flexibility. They determine how the plugin integrates with external AI services and let server owners tailor the assistant's capabilities to their specific needs.

ServerAssistantAI offers several ways to configure and extend its functionality through a flexible provider system for embedding models, chat models (LLMs), and question detection:

- Built-in Providers: ready to use out of the box.
- Addon Providers: additional providers installed via addons.
- Pre-configured OpenAI-compatible Providers: built-in providers with predefined endpoint URLs.
- Custom Providers with Custom Base URLs: integration with any OpenAI API-compatible service using a custom endpoint URL.

To configure a provider, specify its name and any required options in config.yml using the following format:

section_name:
    provider: 'openai' # Example provider
    option1: value

Each provider has its own set of options. Some options are shared across providers, while others share a name but behave differently depending on the provider. The selected provider determines which options apply and how they are interpreted.
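As an illustration, a chat-model section pointing at an OpenAI-compatible service through a custom base URL might look like the sketch below. The section name and the base_url, api_key, and model option names are placeholders for this example and are not guaranteed to match the plugin's actual config keys; consult the provider's own documentation for the options it accepts.

```yaml
chat_model:                                 # hypothetical section name
    provider: 'custom'                      # hypothetical custom provider
    base_url: 'https://api.example.com/v1'  # assumed: OpenAI-compatible endpoint URL
    api_key: 'YOUR_API_KEY'                 # assumed: credential for the service
    model: 'example-model'                  # assumed: model identifier at that endpoint
```

Because the endpoint follows the OpenAI API shape, the same option pattern would apply to any OpenAI-compatible service; only the URL, key, and model name change.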

Supported AI Providers

Providers that are neither built-in nor OpenAI-compatible variants require installing their respective addons, which are available for free.

With support for a diverse range of AI providers, ServerAssistantAI enables users to choose the models and services that best fit their needs and budget.
