
Configuring Providers

ServerAssistantAI uses a flexible provider system for chat models (LLMs), embedding models, and question detection. Users can choose from built-in providers, addon providers, pre-configured OpenAI-compatible providers, or custom providers with custom base URLs.

Built-in Providers

Built-in providers, such as Cohere and OpenAI, are included with ServerAssistantAI and can be configured directly in the config.yml file.

To use a built-in provider:

  1. Set the provider field to the provider name (e.g., cohere or openai) in the config.yml file for the LLM and/or the embedding model (see the sketch after this list).

  2. Add the appropriate model name for the chosen provider.

  3. Add the API key to the corresponding section in the credentials.yml file, if not already present.

  4. Reload the plugin to apply the changes.
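
For example, a config.yml entry that points the Minecraft chat model at the built-in OpenAI provider might look like the sketch below, which mirrors the structure of the custom-provider example later on this page (the model name is illustrative; use any model your provider offers):

    chat_model:
      minecraft:
        provider: 'openai'
        model: 'gpt-4o-mini'    # illustrative model name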

Addon Providers

Addon providers, such as Anthropic and Google-AIStudio, require the installation of their respective addons, which are available for free in the #saai-addons Discord channel.

To use an addon provider:

  1. Download the addon .jar file from the #saai-addons Discord channel.

  2. Place the addon file in the plugins/ directory.

  3. Restart the server to load the addon.

  4. Set the provider field to the addon provider name (e.g., anthropic, google-aistudio) in the config.yml file for the LLM and/or the embedding model (see the sketch after this list).

  5. Add the appropriate model name for the chosen provider.

  6. Add the API key in the newly generated field in the credentials.yml file.

  7. Reload the plugin to apply the changes.
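
For instance, once the Anthropic addon is installed, a config.yml entry could look like the sketch below (the model name is illustrative; pick whichever model the addon provider supports):

    chat_model:
      minecraft:
        provider: 'anthropic'
        model: 'claude-3-5-sonnet-20240620'    # illustrative model name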

Pre-configured OpenAI-compatible Providers

Pre-configured OpenAI-compatible providers, such as Groq and Perplexity, are already built into ServerAssistantAI and can be easily configured using their respective endpoint URLs.

To use a pre-configured OpenAI-compatible provider:

  1. Set the provider field to the pre-configured provider name (e.g., groq, perplexity) in the config.yml file for the LLM and/or the embedding model (see the sketch after this list).

  2. Add the appropriate model name for the chosen provider.

  3. Reload the plugin to generate the new field in the credentials.yml file.

  4. Fill in the API key in the newly generated field in the credentials.yml file.

  5. Reload the plugin again to apply the changes.
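
As a sketch, selecting the pre-configured Groq provider for the Minecraft chat model might look like this (the model name is illustrative):

    chat_model:
      minecraft:
        provider: 'groq'
        model: 'llama-3.1-8b-instant'    # illustrative model name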

Custom Providers with Custom Base URLs

Custom providers with custom base URLs, such as LM-Studio and Ollama, allow users to integrate with any OpenAI API-compatible service by specifying the custom endpoint URL in the config.yml file.

To use a custom provider with a custom base URL:

  1. Set the provider field to openai-variant in the config.yml file for the LLM and/or the embedding model.

  2. Add the appropriate model name for the chosen provider.

  3. Add the base_url field under the provider field and set it to your custom URL, omitting the /chat/completions suffix for an LLM or the /embeddings suffix for an embedding model. For example, if an LLM's endpoint URL is http://localhost:1234/v1/chat/completions, the base_url should be http://localhost:1234/v1:

    chat_model:
      minecraft:
        provider: 'openai-variant'              # OpenAI-compatible custom provider
        base_url: 'http://localhost:1234/v1'    # endpoint URL without /chat/completions
        model: 'lmstudio-community/Meta-Llama-3-8B-Instruct-GGUF'
  4. Add the API key to the OpenAI section in the credentials.yml file. If your service does not require an API key, enter any placeholder value so the field is not left empty (see the sketch after this list).

  5. Reload the plugin to apply the changes.
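
For step 4, a credentials.yml entry might look like the sketch below. The section and field names here are assumptions for illustration; match the fields that actually appear in your generated credentials.yml:

    openai:
      api_key: 'placeholder'    # any value works when the local service requires no key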

Remember to reload ServerAssistantAI after making changes to the config.yml file for the changes to take effect.
