# AI Providers
ServerAssistantAI supports a wide range of AI providers for both language models (LLMs) and embeddings.
| Provider | Type | Functionality | Pricing | Description |
|---|---|---|---|---|
| Cohere | Built-in | LLM & Embedding | Free & Paid | Provides access to Cohere's language and embedding models, with RAG capabilities to improve performance. |
| OpenAI | Built-in | LLM & Embedding | Paid | Offers premium paid models such as GPT-3.5-turbo and GPT-4. |
| Anthropic | Addon | LLM | Paid | Enables the use of Anthropic's Claude models for LLM functionality. |
| Azure OpenAI | Addon | LLM & Embedding | Paid | Allows integration with Azure OpenAI Service for both LLM and embedding capabilities. |
| DeepInfra | Built-in | LLM & Embedding | Paid | Run the latest ML models with ease using DeepInfra's simple REST API. |
| Fireworks.ai | Built-in | LLM & Embedding | Paid | A fast inference platform for serving generative AI models efficiently. |
| Google Gemini | Addon | LLM & Embedding | Free & Paid | Provides access to Google's most advanced Gemini generative AI models. |
| Groq | Built-in | LLM | Temporarily Free | Uses Groq's LPU (Language Processing Unit) Inference Engine for fast LLM inference. |
| HuggingFace | Addon | LLM & Embedding | Free | Provides free access to thousands of open-source models through the HuggingFace Inference API. |
| Kolank | Built-in | LLM | Paid | An AI routing platform that connects to various models, ensuring high-quality responses. |
| LM Studio | Built-in | LLM & Embedding | Self-hosted | A desktop app for running local models on your computer, supporting models from HuggingFace. |
| LocalAI | Built-in | LLM & Embedding | Self-hosted | An open-source, OpenAI drop-in alternative REST API for local inferencing without a GPU. |
| Mistral AI | Addon | LLM & Embedding | Paid | Integrates Mistral AI models for both LLM and embedding capabilities. |
| | Built-in | LLM & Embedding | Free & Paid | Offers reliable API access to GPT-4, Gemini 1.5, Llama 3B, and various other language and embedding models. |
| OctoAI | Built-in | LLM | Paid | Harness the latest AI innovations with OctoAI's efficient, reliable, and customizable AI systems for your apps. |
| Ollama | Built-in | LLM & Embedding | Self-hosted | Allows self-hosting of Ollama, a lightweight framework for running language models locally. |
| OpenRouter | Built-in | LLM | Free & Paid | A standardized API for switching between models and providers, prioritizing price or performance. |
| Perplexity | Built-in | LLM | Paid | Perplexity AI's API enables the use of Perplexity models and open-source LLMs. |
| Together AI | Built-in | LLM & Embedding | Paid | Fast, cost-efficient, and scalable inference for open-source models like Llama-3. |
| 01.AI | Built-in | LLM | Paid | Offers language models like Yi-1.5, delivering strong performance in instruction-following. |
All providers marked as Addon require installing their respective addon, and all addons are available for free.
With support for a diverse range of AI providers, ServerAssistantAI enables users to choose the models and services that best fit their needs and budget.
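As an aside, several of the self-hosted and routing providers above (for example LocalAI, Ollama, OpenRouter, and Together AI) expose OpenAI-compatible REST endpoints, which is why they can be swapped for one another by changing only the base URL and model name. The sketch below is illustrative only and is not ServerAssistantAI's configuration: the base URL, model name, and `build_chat_request` helper are all hypothetical placeholders for whichever OpenAI-compatible service you run.

```python
import json
import urllib.request

# Hypothetical values -- substitute the base URL and model of your own
# OpenAI-compatible provider (e.g. a local LocalAI/Ollama instance or a
# hosted router such as OpenRouter).
BASE_URL = "http://localhost:8080/v1"
MODEL = "example-model"


def build_chat_request(base_url, model, prompt, api_key=None):
    """Builds an OpenAI-style chat-completion request: (url, headers, body)."""
    headers = {"Content-Type": "application/json"}
    if api_key:
        # Most hosted providers authenticate with a bearer token.
        headers["Authorization"] = f"Bearer {api_key}"
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return f"{base_url}/chat/completions", headers, json.dumps(body).encode()


if __name__ == "__main__":
    url, headers, payload = build_chat_request(BASE_URL, MODEL, "Hello!")
    req = urllib.request.Request(url, data=payload, headers=headers)
    # Requires a running OpenAI-compatible server at BASE_URL:
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the request shape is shared, switching providers usually means changing only `BASE_URL`, `MODEL`, and the API key, which mirrors how a plugin can support many backends through one code path.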