
Self-hosting AnythingLLM the easy way
Yulei Chen

AnythingLLM is an all-in-one AI application that lets you chat with your documents, build AI agents, and use RAG (Retrieval-Augmented Generation) with any LLM provider. It supports OpenAI, Anthropic, Ollama, and many more. The best part: it's fully open-source and designed for self-hosting.
Sliplane is a managed container platform that makes self-hosting painless. With one-click deployment, you can get AnythingLLM up and running in minutes - no server setup, no reverse proxy config, no infrastructure to maintain.
Prerequisites
Before deploying, ensure you have a Sliplane account (free trial available).
Quick start
Sliplane provides one-click deployment with presets.
- Click the deploy button above
- Select a project
- Select a server. If you just signed up, you get a 48-hour free trial server
- Click Deploy!
About the preset
The one-click deploy above uses Sliplane's AnythingLLM preset. Here's what it includes:
- Official `mintplexlabs/anythingllm` Docker image with a pinned version tag
- LanceDB as the built-in vector database (no external database needed)
- Persistent storage mounted to `/app/server/storage` so your documents, workspaces, and settings survive restarts
- Pre-configured with Ollama as the default LLM and embedding provider (pointing to `ollama.internal` on Sliplane's internal network)
- Listens on port 3001
- Telemetry disabled by default
If you want to use a cloud LLM provider (like OpenAI or Anthropic) instead of Ollama, you can change the LLM_PROVIDER and related environment variables after deployment.
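As a minimal sketch, switching the preset to OpenAI amounts to changing two environment variables in the service settings (the key value below is a placeholder, not a real key):

```shell
# Switch the preset from Ollama to a cloud provider (OpenAI shown here).
# The key value is a placeholder - substitute your real API key.
LLM_PROVIDER=openai
OPEN_AI_KEY=sk-your-key-here
```

After saving the variables, redeploy the service so AnythingLLM picks up the new provider.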
Next steps
Once AnythingLLM is running on Sliplane, access it using the domain Sliplane provided (e.g. anythingllm-xxxx.sliplane.app).
First-time setup
AnythingLLM will walk you through a setup wizard on your first visit. You'll configure:
- Your LLM provider and model
- Your embedding provider
- Your vector database (LanceDB is already set up)
- Whether to enable multi-user mode
Connecting an LLM provider
The preset defaults to Ollama. If you deploy an Ollama instance on Sliplane, AnythingLLM can connect to it over the internal network. Just make sure the OLLAMA_BASE_PATH environment variable points to your Ollama service's internal hostname (e.g. http://ollama.internal:11434).
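You can sanity-check that connection from a shell on the same server. Ollama's `/api/tags` endpoint lists the models it has pulled, so a quick sketch looks like this:

```shell
# Ollama endpoint on Sliplane's internal network (the preset's default)
OLLAMA_BASE_PATH="http://ollama.internal:11434"

# List the models Ollama has pulled; fails harmlessly if Ollama is unreachable
curl -s --max-time 5 "$OLLAMA_BASE_PATH/api/tags" || true
```

An empty or error response here means AnythingLLM won't be able to reach Ollama either.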
To use a cloud provider instead, update these environment variables:
| Variable | Example |
|---|---|
| LLM_PROVIDER | `openai`, `anthropic`, `azure`, etc. |
| OPEN_AI_KEY | Your OpenAI API key |
| ANTHROPIC_API_KEY | Your Anthropic API key |
Check the AnythingLLM docs for the full list of supported providers.
Environment variables
Here are the key environment variables you might want to customize:
| Variable | Default | Description |
|---|---|---|
| LLM_PROVIDER | `ollama` | Which LLM provider to use |
| OLLAMA_BASE_PATH | `http://ollama.internal:11434` | Ollama API endpoint |
| VECTOR_DB | `lancedb` | Vector database engine |
| AUTH_TOKEN | (random) | API authentication token |
| JWT_SECRET | (random) | Secret for JWT token signing |
| PASSWORDMINCHAR | `8` | Minimum password length for users |
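Put together, a hypothetical env file mirroring the preset defaults might look like the following (the `AUTH_TOKEN` and `JWT_SECRET` values are placeholders; generate your own random secrets):

```shell
# Write a sample env file mirroring the preset defaults.
# AUTH_TOKEN and JWT_SECRET below are placeholders - use your own random secrets.
cat > anythingllm.env <<'EOF'
LLM_PROVIDER=ollama
OLLAMA_BASE_PATH=http://ollama.internal:11434
VECTOR_DB=lancedb
AUTH_TOKEN=replace-with-a-random-token
JWT_SECRET=replace-with-a-random-secret
PASSWORDMINCHAR=8
EOF

cat anythingllm.env
```

On Sliplane you would set these through the service's environment settings rather than a file, but the names and values are the same.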
Logging
AnythingLLM logs to STDOUT by default, which works perfectly with Sliplane's built-in log viewer. For general Docker log tips, check out our post on how to use Docker logs.
Troubleshooting
If AnythingLLM isn't responding after deployment, check these common issues:
- Port mismatch: Make sure the `PORT` env var is set to `3001`
- Storage permissions: The container runs as user `1000:1000` by default
- LLM connection: If using Ollama, verify the Ollama service is running and reachable on the internal network
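A quick reachability check can narrow things down. This is a sketch assuming shell access on the same host and the default port 3001:

```shell
# Report whether anything answers on AnythingLLM's port.
check_app() {
  if curl -fsS --max-time 5 "$1" > /dev/null 2>&1; then
    echo "up"
  else
    echo "down"
  fi
}

STATUS=$(check_app "http://localhost:3001")
echo "AnythingLLM status: $STATUS"
```

If the status is "down", check the `PORT` env var and the container logs before anything else.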
Cost comparison
Of course you can also self-host AnythingLLM with other cloud providers. Here is a pricing comparison for the most common ones:
| Provider | vCPU Cores | RAM | Disk | Estimated Monthly Cost | Notes |
|---|---|---|---|---|---|
| Sliplane | 2 | 2 GB | 40 GB | €9 | Charged per server |
| Render | 1 | 2 GB | 40 GB | ~$35-$45 | VM Small |
| Fly.io | 2 | 2 GB | 40 GB | ~$20-$25 | VM + volume |
| Railway | 2 | 2 GB | 40 GB | ~$15-$66 | Usage-based |
FAQ
What can I do with AnythingLLM?
AnythingLLM lets you chat with your documents (PDFs, Word files, web pages, and more) using RAG. You can build custom AI agents, create multiple workspaces for different topics, and even expose an API for your own applications. It works with virtually any LLM provider, so you're not locked into a single vendor.
How do I connect AnythingLLM to Ollama on Sliplane?
Deploy an Ollama service on the same Sliplane server. AnythingLLM can then reach Ollama over the internal network at http://ollama.internal:11434. The preset already has this configured. You just need to pull a model in Ollama (like llama2) and you're ready to go. If you need help with Ollama, check out our post on self-hosting Open WebUI with Ollama.
How do I update AnythingLLM?
Change the image tag in your service settings and redeploy. Check Docker Hub for the latest stable version.
What are some alternatives to AnythingLLM?
Popular alternatives include Flowise (drag-and-drop LLM flow builder), Langflow (low-code AI builder), and Open WebUI (chat interface for Ollama and other LLMs). Each has a different focus, so pick the one that matches your use case best.
Can I use AnythingLLM with multiple users?
Yes. AnythingLLM supports multi-user mode with role-based access control. You can create admin and regular user accounts, each with their own workspaces and permissions. Enable it during the initial setup wizard or later in the settings.