Self-hosting AnythingLLM the easy way

Yulei Chen - Content Engineer at sliplane.io
4 min

AnythingLLM is an all-in-one AI application that lets you chat with your documents, build AI agents, and use RAG (Retrieval-Augmented Generation) with any LLM provider. It supports OpenAI, Anthropic, Ollama, and many more. The best part: it's fully open-source and designed for self-hosting.

Sliplane is a managed container platform that makes self-hosting painless. With one-click deployment, you can get AnythingLLM up and running in minutes - no server setup, no reverse proxy config, no infrastructure to maintain.

Prerequisites

Before deploying, ensure you have a Sliplane account (free trial available).

Quick start

Sliplane provides one-click deployment with presets.

Deploy AnythingLLM on Sliplane →
  1. Click the deploy button above
  2. Select a project
  3. Select a server. If you just signed up, you get a 48-hour free trial server
  4. Click Deploy!

About the preset

The one-click deploy above uses Sliplane's AnythingLLM preset. Here's what it includes:

  • Official mintplexlabs/anythingllm Docker image with a pinned version tag
  • LanceDB as the built-in vector database (no external database needed)
  • Persistent storage mounted to /app/server/storage so your documents, workspaces, and settings survive restarts
  • Pre-configured with Ollama as the default LLM and embedding provider (pointing to ollama.internal on Sliplane's internal network)
  • Listens on port 3001
  • Telemetry disabled by default
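If you'd rather reproduce the preset yourself (locally or on another host), it roughly corresponds to a Docker Compose file like the one below. This is a sketch, not the preset's exact configuration — the `latest` tag, the volume name, and the `ollama.internal` hostname are illustrative, and the preset pins a specific version tag instead:

```yaml
# Rough equivalent of the Sliplane AnythingLLM preset (illustrative values)
services:
  anythingllm:
    image: mintplexlabs/anythingllm:latest   # the preset pins a specific tag
    ports:
      - "3001:3001"
    environment:
      STORAGE_DIR: /app/server/storage
      LLM_PROVIDER: ollama
      EMBEDDING_ENGINE: ollama
      OLLAMA_BASE_PATH: http://ollama.internal:11434
      VECTOR_DB: lancedb
      DISABLE_TELEMETRY: "true"
    volumes:
      - anythingllm-storage:/app/server/storage

volumes:
  anythingllm-storage:
```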

If you want to use a cloud LLM provider (like OpenAI or Anthropic) instead of Ollama, you can change the LLM_PROVIDER and related environment variables after deployment.

Next steps

Once AnythingLLM is running on Sliplane, access it using the domain Sliplane provided (e.g. anythingllm-xxxx.sliplane.app).

First-time setup

AnythingLLM will walk you through a setup wizard on your first visit. You'll configure:

  • Your LLM provider and model
  • Your embedding provider
  • Your vector database (LanceDB is already set up)
  • Whether to enable multi-user mode

Connecting an LLM provider

The preset defaults to Ollama. If you deploy an Ollama instance on Sliplane, AnythingLLM can connect to it over the internal network. Just make sure the OLLAMA_BASE_PATH environment variable points to your Ollama service's internal hostname (e.g. http://ollama.internal:11434).

To use a cloud provider instead, update these environment variables:

Variable            Example
LLM_PROVIDER        openai, anthropic, azure, etc.
OPEN_AI_KEY         Your OpenAI API key
ANTHROPIC_API_KEY   Your Anthropic API key
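For example, switching the service from Ollama to OpenAI comes down to a couple of environment variables in the Sliplane service settings. The values below are placeholders, and `OPEN_AI_MODEL_PREF` is an assumption based on AnythingLLM's environment template — pick whichever model your key has access to:

```shell
# Switch AnythingLLM from Ollama to OpenAI (placeholder values)
export LLM_PROVIDER=openai
export OPEN_AI_KEY="sk-your-key-here"    # your real OpenAI API key
export OPEN_AI_MODEL_PREF="gpt-4o"       # assumed variable name for the model preference
```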

Check the AnythingLLM docs for the full list of supported providers.

Environment variables

Here are the key environment variables you might want to customize:

Variable           Default                        Description
LLM_PROVIDER       ollama                         Which LLM provider to use
OLLAMA_BASE_PATH   http://ollama.internal:11434   Ollama API endpoint
VECTOR_DB          lancedb                        Vector database engine
AUTH_TOKEN         (random)                       API authentication token
JWT_SECRET         (random)                       Secret for JWT token signing
PASSWORDMINCHAR    8                              Minimum password length for users
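AUTH_TOKEN and JWT_SECRET should be long random strings. If you want to set them explicitly rather than rely on the generated defaults, one common way to produce a value (assuming openssl is available on your machine) is:

```shell
# 32 random bytes, hex-encoded (64 characters) - suitable for JWT_SECRET or AUTH_TOKEN
openssl rand -hex 32
```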

Logging

AnythingLLM logs to STDOUT by default, which works perfectly with Sliplane's built-in log viewer. For general Docker log tips, check out our post on how to use Docker logs.

Troubleshooting

If AnythingLLM isn't responding after deployment, check these common issues:

  • Port mismatch: Make sure the PORT env var is set to 3001
  • Storage permissions: The container runs as user 1000:1000 by default
  • LLM connection: If using Ollama, verify the Ollama service is running and reachable on the internal network
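A quick way to narrow things down is to probe the service over HTTP. The `/api/ping` path is my assumption based on AnythingLLM's server routes — any 200 response on port 3001 is a good sign either way:

```shell
# Probe the service; replace the URL with your Sliplane domain or internal hostname
APP_URL="${APP_URL:-http://localhost:3001}"
if curl -fsS --max-time 5 "$APP_URL/api/ping" > /dev/null; then
  echo "AnythingLLM is up at $APP_URL"
else
  echo "AnythingLLM is not reachable at $APP_URL"
fi
```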

Cost comparison

Of course you can also self-host AnythingLLM with other cloud providers. Here is a pricing comparison for the most common ones:

Provider   vCPU Cores   RAM    Disk    Estimated Monthly Cost   Notes
Sliplane   2            2 GB   40 GB   €9                       Charge per server
Render     1            2 GB   40 GB   ~$35-$45                 VM Small
Fly.io     2            2 GB   40 GB   ~$20-$25                 VM + volume
Railway    2            2 GB   40 GB   ~$15-$66                 Usage-based

FAQ

What can I do with AnythingLLM?

AnythingLLM lets you chat with your documents (PDFs, Word files, web pages, and more) using RAG. You can build custom AI agents, create multiple workspaces for different topics, and even expose an API for your own applications. It works with virtually any LLM provider, so you're not locked into a single vendor.
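As a sketch of the API angle: AnythingLLM exposes a developer API under `/api/v1` that your own applications can call with an API key generated in the settings. The domain, workspace slug, and key below are placeholders, and the exact endpoint shape should be checked against the AnythingLLM API docs:

```shell
# Hypothetical example: ask a question against a workspace via the developer API
curl -s "https://anythingllm-xxxx.sliplane.app/api/v1/workspace/my-docs/chat" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"message": "Summarize the onboarding guide", "mode": "chat"}' \
  || echo "request failed"
```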

How do I connect AnythingLLM to Ollama on Sliplane?

Deploy an Ollama service on the same Sliplane server. AnythingLLM can then reach Ollama over the internal network at http://ollama.internal:11434. The preset already has this configured. You just need to pull a model in Ollama (like llama2) and you're ready to go. If you need help with Ollama, check out our post on self-hosting Open WebUI with Ollama.
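Pulling a model can also be done through Ollama's HTTP API from any machine that can reach the service — `ollama.internal` here is the internal hostname from the preset, so adjust it if your service is named differently:

```shell
# Ask the Ollama service to download a model via its /api/pull endpoint
curl -s http://ollama.internal:11434/api/pull \
  -d '{"name": "llama2"}' \
  || echo "could not reach Ollama at ollama.internal:11434"
```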

How do I update AnythingLLM?

Change the image tag in your service settings and redeploy. Check Docker Hub for the latest stable version.

What are some alternatives to AnythingLLM?

Popular alternatives include Flowise (drag-and-drop LLM flow builder), Langflow (low-code AI builder), and Open WebUI (chat interface for Ollama and other LLMs). Each has a different focus, so pick the one that matches your use case best.

Can I use AnythingLLM with multiple users?

Yes. AnythingLLM supports multi-user mode with role-based access control. You can create admin and regular user accounts, each with their own workspaces and permissions. Enable it during the initial setup wizard or later in the settings.

Self-host AnythingLLM now - It's easy!

Sliplane gives you all the tools you need to run AnythingLLM without server hassle.