FAQs

Here are some frequently asked questions about ServerAssistantAI:

Verification

I purchased ServerAssistantAI, how can I get verified on Discord?

After buying ServerAssistantAI from SpigotMC, BuiltByBit, or Polymart, follow these steps to get verified on our Discord server:

  1. Join our Discord server.

  2. Head to the #verify-purchase channel.

  3. Click on the button corresponding to the platform where you purchased the plugin (SpigotMC, BuiltByBit, or Polymart).

  4. Fill in the required information.

  5. Our team will review the information provided and confirm the purchase within 24 hours.

Please note that the current manual verification process is temporary, and we are working on implementing an automated system.

General Information

What are the costs associated with using ServerAssistantAI?

ServerAssistantAI offers both completely free and paid AI models. Free models, like the default Command R+ model, use Cohere's API and can be used at no cost. Paid models, like OpenAI's GPT-3.5 Turbo or Claude Haiku, have usage costs. We support many different AI providers with both free and paid options.

What are Embedding and Large Language Models, and how do they work in ServerAssistantAI?

An embedding model is a type of AI model that converts text data, such as the content in the document.txt file, into numerical representations called embeddings. These embeddings capture the semantic meaning and relationships between different pieces of text. When a user asks a question, the embedding model is used to find the most relevant context from the document.txt file by comparing the embeddings of the question with the embeddings of the text in the file.

Once the relevant context is found, it is combined with the user's question and sent to the Large Language Model (LLM). LLMs are powerful AI models that can understand and generate human-like text based on the input they receive. The LLM processes the question and the context provided by the embedding model to generate an accurate and context-aware response.

ServerAssistantAI uses AI service providers through an API for both embedding models and LLMs. By default, the plugin comes with Cohere (for free models) and OpenAI (for premium paid models). We also offer many other providers through free, downloadable addons, such as Anthropic's Claude, Google Gemini, Groq, etc. The Recommended Models wiki page shows the current recommended models for both free and paid options, depending on what users choose.

In summary, the embedding model helps find the most relevant information from the document.txt file, while the LLM uses that information along with the user's question to generate a helpful response.
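
To make the retrieval step more concrete, here is a minimal, illustrative sketch of the idea (not ServerAssistantAI's actual code): each chunk of document.txt and the user's question are turned into embedding vectors, and the chunks whose vectors are most similar to the question's vector are selected as context.

import java.util.*;

public class RetrievalSketch {

    // Cosine similarity between two embedding vectors.
    static double cosine(double[] a, double[] b) {
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }

    // Pick the chunks of document.txt whose embeddings are most similar to the question's embedding.
    static List<String> topChunks(double[] questionEmbedding,
                                  Map<String, double[]> chunkEmbeddings,
                                  int maxChunks, double minScore) {
        return chunkEmbeddings.entrySet().stream()
                .map(e -> Map.entry(e.getKey(), cosine(questionEmbedding, e.getValue())))
                .filter(e -> e.getValue() >= minScore)                        // relevance threshold (configurable)
                .sorted(Map.Entry.<String, Double>comparingByValue().reversed())
                .limit(maxChunks)                                             // number of chunks (configurable)
                .map(Map.Entry::getKey)
                .toList();
    }
}

The selected chunks are then appended to the prompt sent to the LLM, which is what allows the model to answer with server-specific information it was never trained on.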

What versions of Minecraft are compatible with ServerAssistantAI?

ServerAssistantAI supports Spigot, Paper, Purpur, Folia, and similar server types on Minecraft versions 1.16 and above, with full support for 1.20.6. Velocity and BungeeCord support is planned, but there is no ETA.

Can I use ServerAssistantAI without a Discord server?

While ServerAssistantAI is designed to work with both Minecraft and Discord, you can use it without a Discord server. Simply set discord.enabled to false in the config.yml file and the plugin will function only within Minecraft.
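
For example, the relevant part of config.yml would look like this (other keys omitted):

discord:
  enabled: false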

Usage and Features

How does the AI learn and generate responses?

All the text related to your server can be put into the document.txt file. There is no specific format you have to follow; however, we have found that Markdown (.md) formatting works well for helping the AI understand the structure and context of the information, leading to more accurate and relevant responses. The AI doesn't "learn" from this file; instead, it uses the embedding model's results to find the chunks of the file most relevant to the question being asked and sends that context along with the prompt request. The number of chunks and the relevance score required for a chunk to be considered related can be adjusted in the config.

This is not a keyword-based search: thanks to AI embedding models, context from the document.txt file is retrieved accurately even when the question uses completely different words. For example, say the document states: "/clan create" - Create a new clan. "/clan name (new clan name)" - Sets a new name for your clan. If someone asks "How can I create a faction?", the AI will know the user is talking about the clan command and send the information for creating a clan.

How can I customize the AI's responses?
  • Modify the document.txt file to include server-specific information, rules, and guidelines that the AI can use to generate accurate and relevant responses.

  • Adjust the AI's persona and behavior by editing the prompt-header.txt files in the discord/ and minecraft/ directories.

  • Customize the response templates in the question-message.txt and information-message.txt files to match your server's style and tone.

Can I set a limit to the number of questions players can ask the AI?

Yes, you can set a daily question limit for both Minecraft and Discord in the config.yml file. Look for the discord.limits and minecraft.limits settings and adjust them according to your preferences. ServerAssistantAI allows you to define multiple user groups with different daily limits (see the sketch after this list):

  • For Minecraft, in-game users can be assigned to a specific group using the permission serverassistantai.group.<group>.

  • For Discord, users can be assigned to a group using either the role ID or role name in the config.
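
A hypothetical illustration of the idea follows; the exact keys and nesting in your config.yml may differ, so treat this purely as a sketch:

minecraft:
  limits:
    default: 10      # daily questions for players without a group permission
    vip: 25          # players with the serverassistantai.group.vip permission
discord:
  limits:
    default: 10
    Moderator: 50    # matched by role name or role ID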

Can the AI's responses be sent privately to the player instead of globally in-game?

Yes, ServerAssistantAI comes with both public and private response options. Players can either ask a question publicly in chat or privately using /serverassistantai ask (question). There is also a config option called send_replies_only_to_sender, which sends the reply to a publicly asked question only to the player who asked it instead of to everyone in the in-game chat.
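
For example (the option name is taken from the description above; its exact location within config.yml may differ):

send_replies_only_to_sender: true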

Can I disable question detection and only allow the AI to respond to the /serverassistantai ask command?

If you want ServerAssistantAI to only respond to the /serverassistantai ask command and not to questions in the chat, you can disable question detection in the config.yml file. To do this, set the regex option under the question_detection section to a pattern that never matches anything, like this:

question_detection:
  regex: 'a^'   # alternatively: '(?!)'

Will ServerAssistantAI be able to handle dozens of players asking questions concurrently?

Yes, ServerAssistantAI is fully asynchronous and can process multiple requests concurrently from a large number of players without impacting the server's performance. The plugin uses asynchronous programming techniques to handle AI interactions in the background, allowing the server to continue running smoothly even when handling a high volume of requests.
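
As a rough illustration of the pattern (a minimal sketch using the standard Bukkit/Paper scheduler API, not ServerAssistantAI's actual implementation), the expensive AI call runs off the main thread and only the finished reply is delivered back on it:

import org.bukkit.Bukkit;
import org.bukkit.entity.Player;
import org.bukkit.plugin.java.JavaPlugin;
import java.util.concurrent.CompletableFuture;

public class AsyncAskSketch {
    private final JavaPlugin plugin;

    public AsyncAskSketch(JavaPlugin plugin) {
        this.plugin = plugin;
    }

    public void ask(Player player, String question) {
        CompletableFuture
                .supplyAsync(() -> callAiApi(question))                 // network call off the main thread
                .thenAccept(answer -> Bukkit.getScheduler()
                        .runTask(plugin, () -> player.sendMessage(answer))); // deliver reply on the main thread
    }

    // Placeholder for the HTTP request to the configured AI provider.
    private String callAiApi(String question) {
        return "...";
    }
}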

Can I use ServerAssistantAI with multiple languages?

Yes, if the model you choose is multilingual, players can converse with it in the languages it is trained on. You can explore different language models that support multiple languages and adapt the configuration files accordingly.

Can I use ServerAssistantAI on a proxy server (e.g., Velocity, BungeeCord)?

ServerAssistantAI is not currently compatible with proxy servers like Velocity or BungeeCord, but support is planned. There is currently no ETA.

Can I run multiple instances of ServerAssistantAI on the same server?

Running multiple instances of ServerAssistantAI on the same server is not supported and may lead to conflicts or unexpected behavior. If you need to run ServerAssistantAI on multiple servers, please install and configure it separately for each server instance.

Will ServerAssistantAI respond to gradient player chat messages?

Yes, ServerAssistantAI will work with gradient chat messages. The plugin uses the modern Paper chat event if it detects that it is available. This event provides a component instead of just a text string, allowing ServerAssistantAI to process and respond to gradient chat messages without any issues.
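
For illustration, here is a hedged sketch of how a listener can read plain text out of Paper's component-based chat event (assuming the Paper API and Adventure's plain-text serializer; this is not ServerAssistantAI's exact code):

import io.papermc.paper.event.player.AsyncChatEvent;
import net.kyori.adventure.text.serializer.plain.PlainTextComponentSerializer;
import org.bukkit.event.EventHandler;
import org.bukkit.event.Listener;

public class ChatListenerSketch implements Listener {
    @EventHandler
    public void onChat(AsyncChatEvent event) {
        // event.message() is an Adventure Component, so gradients and other formatting
        // are preserved; serializing it yields the raw text to analyze for questions.
        String plain = PlainTextComponentSerializer.plainText().serialize(event.message());
        // ... pass `plain` to question detection / the AI provider ...
    }
}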

Technical and Setup

Do I have to reset my configuration after a new update is released?

No, you don't need to reset your configuration when a new update for ServerAssistantAI is released. The plugin is designed to automatically update its configuration file (config.yml) with any new options or settings introduced in the latest version. Your existing configuration will be preserved, and you can simply adjust the new settings as needed.

How do I enable the Discord interaction webhook?

To enable the Discord interaction webhook feature in ServerAssistantAI, make sure you have the required prerequisite plugins installed.

Once you have the required plugins, follow these steps to enable the interaction webhook:

  1. Open the config.yml file located in the plugins/ServerAssistantAI directory.

  2. Locate the minecraft.channel setting under the Minecraft Configuration section.

  3. Replace the default value with the ID of a Discord channel (see the example after these steps).

  4. Save the config.yml file.

  5. Reload ServerAssistantAI using /serverassistantai reload.
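
For example, the relevant section of config.yml might look like this (the channel ID below is only a placeholder):

minecraft:
  channel: '123456789012345678'  # replace with your Discord channel's ID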

How can I customize the AI's name and avatar in Discord?

To customize the AI's name and avatar in Discord, you'll need to create a new Discord bot account and provide its token in the discord.bot_token setting in the config.yml file. You can then modify the bot's name and avatar through the Discord Developer Portal.
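
For example (the token value below is only a placeholder):

discord:
  bot_token: 'YOUR_BOT_TOKEN_HERE'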

Why isn't the AI responding to questions typed in the Minecraft chat?

If the AI is not replying to questions typed in the Minecraft chat, it could be due to compatibility issues with another plugin. To resolve this, you can use the chat_listener section in the config.yml file:

chat_listener:
  # Use the Paper modern AsyncChatEvent if available. Has no effect on Spigot servers.
  use_paper_event: true
  # Choose the priority level for the chat event listener. Options: LOWEST, LOW, NORMAL, HIGH, HIGHEST, MONITOR (default).
  priority: 'MONITOR'
  # Ignore the chat event if it is cancelled by another plugin.
  ignore_cancelled: true

Why aren't the Discord interaction logs being sent?

To ensure that ServerAssistantAI can send interaction messages to the designated Discord channel, make sure that your bot has the "View Channel," "Send Messages," and "Embed Links" permissions in that channel.

Can I use ServerAssistantAI on a modded Minecraft server?

ServerAssistantAI can be used with Minecraft servers running Spigot, Paper, Folia, or similar server software. While it may work with other server software, compatibility cannot be guaranteed. We plan on adding compatibility for proxy software like Velocity and BungeeCord in the future.

How can I optimize ServerAssistantAI's performance on my server?

To optimize ServerAssistantAI's performance, consider the following tips:

  • Ensure your server meets the recommended hardware requirements for running ServerAssistantAI.

  • Keep your ServerAssistantAI version up to date to benefit from performance improvements and bug fixes.

  • Adjust the configuration settings in config.yml to fine-tune performance.

  • Monitor your server's resource usage.

Can I use ServerAssistantAI with different language models or AI providers?

ServerAssistantAI comes with Cohere and OpenAI language models by default; however, we offer addons that include many other AI providers. If you have a specific language model or AI provider in mind that is not listed, please reach out to us on our Discord server.

How can I create custom addons for ServerAssistantAI?

To create custom addons for ServerAssistantAI, you can use the plugin's API. You can find more information about creating addons in the Addons section of the wiki and the API reference documentation. Examples of addons created for ServerAssistantAI can be found in the SAAI-Addons GitHub repository.

Security, Privacy, and Hallucinations

Does ServerAssistantAI have a built-in censor or content filter?

Yes, the default configuration of ServerAssistantAI comes with a well-designed system prompt that keeps the AI's responses focused on Minecraft-related topics. This helps to prevent the AI from generating inappropriate or off-topic content. However, server owners can easily adjust the system prompt as needed to further customize the AI's behavior and ensure it aligns with their server's rules and guidelines.

Are the questions or context messages sent to CodeSolutions?

No, none of the questions or context messages are sent to CodeSolutions. They are directly sent to the AI provider chosen by the user.

Are there any AI hallucination issues?

There is a higher chance of hallucinations with open-source free models compared to paid models. However, it takes effort to make them hallucinate. If you choose a paid model like ChatGPT or Claude Haiku, the risk of hallucinations is much lower.

How do I report a security vulnerability in ServerAssistantAI?

If you discover a security vulnerability in ServerAssistantAI, please report it responsibly by emailing us at support@code-solutions.dev or creating a ticket on Discord. We take security issues seriously and appreciate your help in keeping ServerAssistantAI safe for everyone. Please do not disclose the vulnerability publicly until it has been addressed by our team.

How does ServerAssistantAI handle player privacy?

We take player privacy seriously. ServerAssistantAI does not collect or store any personal player data other than anonymous usage data for bStats. The plugin only processes player messages and generates responses based on the provided configuration and server information.

For bStats, ServerAssistantAI collects anonymous usage data, including the AI model provider (e.g., OpenAI, Cohere) and whether Discord features are enabled. This data is aggregated and does not contain any personally identifiable information. You can view the collected data on the ServerAssistantAI bStats page.

If you would like to read our Terms of Service, they are listed on our website. You are also free to contact us for more information.

These FAQs cover a range of common questions and concerns that users may have about ServerAssistantAI. If you can't find an answer to your question here, feel free to reach out on our support channels for further assistance.
