How ServerAssistantAI Works
This section explains how ServerAssistantAI functions in-game and on Discord, providing insight into the plugin's question detection, response generation, and interaction with players.
Core Components
Embedding Models: These specialized AI models turn text into high-dimensional vectors, numerical representations called embeddings, which capture the semantic meaning of and relationships between different pieces of text. The plugin compares these embeddings to select the most relevant document chunks based on similarity scores; the min_score and max_results settings in the config control this process.
Large Language Models (LLMs): These advanced AI models can understand and generate human-like text. They take the player's question and the relevant cached context to generate accurate, context-aware responses.
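As a concrete illustration, the two retrieval settings mentioned above might look like this in config.yml. Their exact placement in the file and the values shown are assumptions, not the plugin's actual defaults.

```yaml
# Hypothetical excerpt of config.yml; placement and values are illustrative only.
min_score: 0.75   # chunks scoring below this similarity are not used as context
max_results: 5    # at most this many chunks are added to the prompt
```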
Document Processing
Server owners add their server information to the documents/ directory. This directory supports various file formats (.txt, .md, .pdf, .docx, .pptx, .xlsx), allowing for flexible information storage.
When the documents/ directory is updated, the content is sent to the embedding API. The resulting embeddings are saved to the cache/ directory, allowing the AI to find relevant context efficiently without reprocessing or making new API requests for each query.
The document content is split into chunks based on the settings in the splitter section of the config.yml file. These chunks are then processed by the embedding model to generate the cached embeddings.
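For illustration, a splitter section could look roughly like the following. The key names chunk_size and chunk_overlap and their values are hypothetical and may not match the real config.yml.

```yaml
# Hypothetical sketch of the splitter section referred to above.
# Key names and values are assumptions for illustration, not actual defaults.
splitter:
  chunk_size: 512      # assumed: approximate size of each document chunk
  chunk_overlap: 64    # assumed: overlap between consecutive chunks to preserve context
```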
Provider System
ServerAssistantAI uses a flexible provider system for chat models (LLMs), embedding models, and question detection.
Chat models and question detection can be configured separately for Discord and Minecraft.
Embedding models are configured globally for both platforms.
Custom providers can be created by developing addons. For more information, refer to our API documentation and Creating Addons guide.
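To make the per-platform split concrete, a provider layout might look something like the sketch below. The section nesting and the provider names are assumptions and may differ from the shipped config.yml.

```yaml
# Hypothetical sketch of the provider layout described above.
# Section nesting and provider names are assumptions, not the plugin's actual structure.
minecraft:
  chat_model:
    provider: openai      # chat model used for in-game responses
  question_detection:
    provider: simple      # "simple", "advanced", or "none"
discord:
  chat_model:
    provider: openai      # chat model used for Discord responses
  question_detection:
    provider: advanced
embedding:
  provider: openai        # embeddings are configured once for both platforms
```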
Question Detection System
ServerAssistantAI uses a question detection system that can be set to "simple", "advanced", or "none" mode in the config.yml file under question_detection.provider for both Minecraft and Discord. Additionally, custom question detection providers can be created using addons.
Simple Mode
In "simple" mode, the plugin uses a basic combination of question keywords and regex patterns to detect questions:
If only regex is set, the plugin matches the regex against the message. If it matches, the message is considered a question.
If only interpersonal_regex is set, the plugin first runs a quick keyword check (what, where, when, why, and how) on the message. If a keyword is included, it runs the interpersonal regex. If that matches, the message is not considered a question.
If both regex and interpersonal_regex are set, the plugin first runs the regex. If it matches, it then runs the interpersonal regex. If the interpersonal regex doesn't match, the message is considered a question; otherwise, it's not a question.
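A minimal sketch of how these two fields might be filled in is shown below, under the assumption that they live in the question_detection section; the patterns are only examples.

```yaml
# Hypothetical simple-mode detection settings; patterns and placement are illustrative.
question_detection:
  provider: simple
  regex: '.*\?$'                          # example: treat messages ending in "?" as questions
  interpersonal_regex: '\bhow are you\b'  # example: exclude small talk such as "how are you?"
```

With both fields set as above, "how do I claim land?" matches the regex but not the interpersonal regex, so it is treated as a question, while "how are you?" matches both and is skipped.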
Advanced Mode
In "advanced" mode, the plugin uses a custom-trained model to detect questions more accurately, reducing false positives and false negatives. This mode requires the advanced question detection addon to be installed. The model can be tested on our online interface here.
When advanced mode is enabled, the interpersonal_regex field is ignored, but the regex field still works in conjunction with the advanced model.
For advanced mode, additional settings can be manually added to the question_detection section in the config:
debug (default: false): Shows the detection result and processing time for each message when enabled.
minimum_probability (default: '70'): Sets the minimum probability threshold for a message to be classified as a question. Messages with a question probability below this value are not considered questions.
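Putting the advanced-mode options together, the section could look roughly like this; the nesting and the example regex are assumptions, while debug and minimum_probability restate the defaults listed above.

```yaml
# Hypothetical advanced-mode settings; nesting is assumed, defaults are as documented above.
question_detection:
  provider: advanced
  regex: '.*\?$'              # optional: still evaluated alongside the model
  debug: false                # log detection result and processing time per message
  minimum_probability: '70'   # messages below this question probability are ignored
```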
In-game Functionality
In Minecraft, players can interact with ServerAssistantAI in four ways:
1. Ask a question directly in the public chat, where the AI will respond if the message is detected as a question
2. Force the bot to reply by including the AI's full name (specified in the bot_name field) within the sentence
3. Ask privately using the /serverassistantai ask (question) command
4. Engage in a continuous private conversation with the AI using the /serverassistantai chat command
When a player sends a message in the Minecraft public chat without forcing the AI to respond, ServerAssistantAI follows these steps:
1. Checks if the sentence is over the minimum_words threshold
2. Verifies that the user has not exceeded the daily_limit set in the config
3. If both conditions are met and question detection is enabled, the question detection system analyzes the message
4. If the message is classified as a question, the plugin checks the cached embeddings generated from the content in the documents/ directory to find relevant information
The min_score setting defines the minimum similarity score required for a chunk to be considered related to the question. The max_results setting determines the maximum number of chunks (relevant pieces of information) to include in the prompt sent to the AI model. These chunks are selected based on their similarity to the player's question.
If relevant information is found and meets the min_score requirement, or if the question is related to Minecraft, or if the bot is forced to answer, the plugin sends the question and relevant chunks to the chosen AI provider.
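The thresholds involved in these steps might be configured along the following lines; the key placement and the example values are assumptions for illustration.

```yaml
# Hypothetical excerpt showing the in-game settings mentioned above.
# Key placement and values are illustrative assumptions.
bot_name: "Assistant"   # including this full name in a message forces a reply
minimum_words: 4        # shorter messages are never treated as questions
daily_limit: 20         # maximum AI responses per player per day
```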
Discord Functionality
In Discord, users can interact with ServerAssistantAI in four ways:
1. Sending a message in a channel specified in the channel_id field, where the AI will respond to every message sent
2. Asking a question in channels specified within question_detection_channels, where the AI will only respond if a message is detected to be a question (when question detection is enabled)
3. Mentioning the bot in any channel where it has the necessary permissions. The max_history setting determines the number of previous messages in the channel that will be sent along with the question to provide further context for the conversation, if needed.
4. Right-clicking on any message, selecting 'Apps', and choosing the "Ask AI" app. This allows users to get AI responses to specific messages, including those sent by other users.
For each interaction method, ServerAssistantAI follows these steps:
1. Checks if the sentence is over the minimum_words threshold
2. Verifies that the user has not exceeded the daily_limit
3. If both conditions are met and question detection is enabled, the question detection system analyzes the message (for methods 2, 3, and 4)
4. If the message is classified as a question, the plugin checks the cached embeddings generated from the content in the documents/ directory to find relevant information
The min_score setting defines the minimum similarity score required for a chunk to be considered related to the question. The max_results setting determines the maximum number of relevant chunks to include in the prompt sent to the AI model.
If relevant information is found and meets the min_score requirement, or if the bot is forced to answer, the plugin sends the question and relevant chunks to the chosen AI provider.
If the user has a role listed in skip_keyword_roles in config.yml and includes the bypass keyword set in skip_keyword within the config, the bot will not respond to that specific message. This is useful when replying to someone else's message in the question channel.
Prompt Construction
Template Merging: The system combines content from prompt-header.txt (AI persona), question-message.txt (question format), and information-message.txt (context format). This creates a structured base for the AI's response.
Context Integration: The plugin adds the selected relevant cached document chunks to the prompt. These chunks are chosen based on their similarity to the user's question.
History Inclusion: The system may include recent message history in the prompt. The amount of history included is based on the max_history setting, providing additional context for the AI's response.
Relevance Check: When the bot is not forced to answer and a question is detected with related content, a specific statement is added to the end of the prompt. By default, it states that if the question is not directly related to the server or the provided context, the AI should reply with "[IGNORE]". This statement is customizable in the ignore_question field.
Structured Output: If JSON Mode is enabled, the system instructs the AI to respond in JSON format. This feature helps the AI stay on topic and provide more concise, focused responses by structuring the output. It can be useful for reducing unnecessary additional text.
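As an example, the relevance statement could be customized roughly as follows; the exact wording shown and the json_mode key name are assumptions for illustration only.

```yaml
# Hypothetical prompt-related settings; the ignore_question wording and the
# json_mode key name are illustrative assumptions.
ignore_question: "If the question is not directly related to the server or the provided context, reply with [IGNORE]."
json_mode: false   # assumed toggle for the Structured Output behavior described above
```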
Response Delivery
Ignore Keyword Check: Before sending, the system checks if the response contains the ignore_keyword. If present, the response is not sent, preventing irrelevant or unwanted messages from being delivered.
Filtering: Also before sending, the system applies the filters defined in response_filtering.exclude and response_filtering.stop. This ensures that unwanted content or phrases are removed from the AI's response.
Question Received Sound: In Minecraft, a configurable sound (set by question_received_sound) can be played when the AI starts processing a question, providing audio feedback.
Replying Status Animation: In Minecraft, the replying_status section allows you to configure the animation displayed while the AI is replying to a question. Supported options include TITLE, SUBTITLE, and BOSSBAR.
Formatting: The AI's response is formatted according to the platform. For Minecraft, it uses MiniMessage for rich text formatting, allowing for customized colors, styles, and effects. For Discord, responses can be formatted as plain text or rich embeds.
Delivery Method: Responses can be sent either publicly or privately, depending on the interaction method and configuration. In Minecraft, this is controlled by the send_replies_only_to_sender setting. In Discord, it depends on whether the interaction was in a public or private channel.
Response Sound Notification: In Minecraft, a configurable sound (set by response_sound) can be played when a player receives a response, providing audio feedback.
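A hypothetical sketch pulling these delivery settings together is shown below; the nesting, sound names, and filter phrases are illustrative assumptions.

```yaml
# Hypothetical Minecraft delivery settings; nesting, sound names, and filter values
# are assumptions for illustration.
ignore_keyword: "[IGNORE]"              # responses containing this are never delivered
response_filtering:
  exclude:
    - "As an AI language model"         # example phrase stripped from responses
  stop:
    - "Best regards"                    # example phrase handled by the stop filter
question_received_sound: ENTITY_EXPERIENCE_ORB_PICKUP   # played when processing starts
replying_status:
  type: BOSSBAR                         # TITLE, SUBTITLE, or BOSSBAR
response_sound: ENTITY_PLAYER_LEVELUP   # played when the reply arrives
send_replies_only_to_sender: true       # assumed: send answers only to the asking player
```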
Optimization and Integration
Caching Mechanism: Embeddings and other frequently used data are cached to reduce API calls and improve response times. This optimization allows the AI to find relevant context efficiently without reprocessing documents for each query.
Asynchronous Processing: Tasks run asynchronously to minimize impact on server performance, ensuring smooth operation even during high-demand periods.
Interaction Logging: If configured, the system logs interactions to a specified Discord channel, allowing for easy monitoring and review of AI interactions.
DiscordSRV Compatibility: ServerAssistantAI can integrate with DiscordSRV if installed, using the Discord bot already configured for DiscordSRV. Alternatively, it can operate as a standalone bot using a separate token specified in the bot_token field.
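For a standalone setup, the token might be supplied roughly as follows; the section nesting is an assumption.

```yaml
# Hypothetical standalone-bot setting; section nesting is an assumption.
discord:
  bot_token: "YOUR_BOT_TOKEN_HERE"   # used when running as a standalone bot instead of through DiscordSRV
```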
By understanding how ServerAssistantAI works, server owners and administrators can better configure and customize the plugin to suit their specific needs and provide an enhanced experience for their players.