AI Providers


ServerAssistantAI supports a wide range of AI providers for both language models (LLMs) and embeddings.

What Are Providers?

Providers are configurable components that enable ServerAssistantAI's customizability and flexibility. They serve as the backbone of the plugin's ability to integrate with various AI services, giving server owners the power to tailor the AI assistant's capabilities to their specific needs.

ServerAssistantAI offers several ways to configure and extend its functionality through a flexible provider system for embedding models, chat models (LLMs), and question detection, including Built-in Providers (ready to use out of the box), Addon Providers (additional providers installed via addons), Pre-configured OpenAI-compatible Providers (built-in providers with predefined endpoint URLs), and Custom Providers with Custom Base URLs (integration with any OpenAI API-compatible service using a custom endpoint URL).

To configure a provider, simply specify its name and any required options in the config.yml file using the following format:

section_name:
    provider: 'openai' # Example provider
    option1: value

Each provider has its own set of options. Some options are shared across providers, while others may have the same name but behave differently depending on the provider. The specific functionality and configuration are determined by the selected provider.
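As an illustration, a config.yml might define different providers for each of the three roles. The section names (embedding, chat, question_detection) and option names (model, api_key) below are assumptions for illustration only; check the generated config.yml for the actual keys your version uses:

```yaml
# Hypothetical config.yml excerpt -- section and option names are illustrative.
embedding:
    provider: 'openai'              # built-in provider
    model: 'text-embedding-3-small' # provider-specific model option
chat:
    provider: 'anthropic'           # requires the Anthropic addon
    model: 'claude-3-5-sonnet-latest'
question_detection:
    provider: 'groq'                # fast inference suits lightweight detection
    model: 'llama-3.1-8b-instant'
```

Mixing providers like this lets you pair an inexpensive embedding model with a stronger chat model, since each section is configured independently.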

Supported AI Providers

| Provider | Functionality | Pricing | Description |
| --- | --- | --- | --- |
| Cohere | LLM & Embedding | Free & Paid | Provides access to Cohere's language and embedding models, with RAG capabilities to improve performance. |
| OpenAI | LLM & Embedding | Paid | Offers premium paid models like GPT-3.5-turbo and GPT-4. |
| Anthropic | LLM | Paid | Enables the use of Anthropic's Claude models for LLM functionality. |
| Azure-OpenAI | LLM & Embedding | Paid | Allows integration with Azure OpenAI Service for both LLM and embedding capabilities. |
| DeepInfra | LLM & Embedding | Paid | Run the latest ML models with ease using DeepInfra's simple REST API. |
| FireworksAI | LLM & Embedding | Paid | Fireworks.ai is a fast inference platform for serving generative AI models efficiently. |
| Github-Models | LLM | Free | Access industry-leading AI models directly on GitHub for free. |
| Google-AIStudio | LLM & Embedding | Free & Paid | Provides access to Google's most advanced Gemini generative AI models. |
| Groq | LLM | Temporarily Free | Utilizes Groq's LPU (Language Processing Unit) Inference Engine for fast LLM inference. |
| HuggingFace | LLM & Embedding | Free | Provides access to thousands of open-source models for free through the HuggingFace Inference API. |
| Kolank | LLM | Paid | Kolank is an AI routing platform that connects to various models, ensuring high-quality responses. |
| LM-Studio | LLM & Embedding | Self-hosted | A desktop app for running local models on your computer, supporting models from HuggingFace. |
| LocalAI | LLM & Embedding | Self-hosted | Open-source, OpenAI drop-in alternative REST API for local inferencing without a GPU. |
| Mistral-AI | LLM & Embedding | Paid | Integrates Mistral AI models for both LLM and embedding capabilities. |
| NagaAI | LLM & Embedding | Free & Paid | Offers reliable API access to GPT-4, Gemini 1.5, Llama 3B, and various other language and embedding models. |
| Nvidia-Models | LLM | Free & Paid | Integrates NVIDIA's optimized AI models for efficient LLM functionality. |
| OctoAI | LLM | Paid | Harness the latest AI innovations with OctoAI's efficient, reliable, and customizable AI systems for your apps. |
| Ollama | LLM | Self-hosted | Allows self-hosting of Ollama, a lightweight framework for running language models locally. |
| OpenLLM | LLM | Self-hosted | Allows developers to run any open-source LLMs (Llama 3.1, Qwen2, Phi3, and more) or custom models. |
| Openrouter | LLM | Free & Paid | Standardized API for switching between models and providers, prioritizing price or performance. |
| Perplexity | LLM | Paid | Perplexity AI's API enables users to use Perplexity models and open-source LLMs. |
| TogetherAI | LLM & Embedding | Paid | Fast, cost-efficient, and scalable inference for open-source models like Llama-3. |
| 01.AI | LLM | Paid | Offers language models like Yi-1.5, delivering strong performance in instruction-following. |
| xAI | LLM & Embedding | Free & Paid | Brings xAI's powerful Grok models, enabling advanced text capabilities. |
| OpenAI-Variant | LLM &/or Embedding | Free or Paid | Allows integration with any OpenAI API-compatible service. Users can set up custom endpoints by specifying the base URL in the config.yml file. |
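For the OpenAI-variant case, a custom endpoint is configured by pointing the provider at any OpenAI API-compatible base URL. The provider value and the base_url option name below are assumptions for illustration; verify the exact keys against your generated config.yml:

```yaml
# Hypothetical config.yml excerpt for an OpenAI API-compatible service.
chat:
    provider: 'openai-variant'            # illustrative provider name
    base_url: 'http://localhost:8080/v1'  # e.g. a LocalAI or other compatible server
    model: 'my-local-model'               # model name as exposed by that service
```

This is what makes self-hosted servers such as LocalAI usable as drop-in replacements: they expose the same REST routes as the OpenAI API, so only the base URL changes.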

With support for a diverse range of AI providers, ServerAssistantAI enables users to choose the models and services that best fit their needs and budget.

All providers that are not built-in or OpenAI variants require the installation of their respective addons, which are available for free.
