If you want full ownership of your website analytics without the bloat of enterprise platforms, Umami is one of the best options available. It is an open-source, privacy-focused analytics tool that runs beautifully in Docker containers. This guide walks you through a complete umami analytics docker setup — from the first directory to production-ready deployment with SSL, backups, and event tracking.

This article is part of our complete guide to self-hosted analytics, where we cover every major platform you can run on your own server.

Why Umami?

Umami stands out in the crowded analytics space for several concrete reasons:

  • Lightweight and fast. The entire application uses minimal resources. A single container idles at roughly 40–60 MB of RAM, which means it runs comfortably on the smallest VPS instances available.
  • Minimal footprint. Unlike Matomo, which requires PHP, a web server module, and a cron job for archiving, Umami is a single Node.js application with a PostgreSQL (or MySQL) database. Two containers, no extras.
  • Privacy by default. Umami does not use cookies, does not collect personal data, and does not track users across websites. It is compliant with GDPR, CCPA, and PECR without any configuration. If you are interested in how cookie-free analytics work under the hood, read our article on cookie-free analytics and why it matters.
  • MIT license. The code is fully open source under the MIT license. There are no premium tiers that lock features behind a paywall. You get the complete product.
  • Clean interface. The dashboard is modern, fast, and focused. There are no 47 menu items to navigate — you see visitors, page views, referrers, browsers, operating systems, and events on a single screen.

If you are comparing alternatives, our Matomo vs Plausible vs Fathom comparison covers the broader landscape. Umami fits right between Plausible and Matomo in terms of features and complexity.

Prerequisites

Before you begin, make sure you have the following ready:

  • A VPS with at least 1 GB of RAM. Umami is lightweight, but PostgreSQL needs breathing room. A 1 GB instance from any provider (Hetzner, DigitalOcean, Linode, Vultr) works well for sites with up to 100k monthly page views.
  • Docker and Docker Compose installed. This guide uses Docker Compose V2 (the docker compose command, not the older docker-compose binary). On Ubuntu/Debian, install both with apt install docker.io docker-compose-v2.
  • A domain or subdomain. Something like analytics.yourdomain.com pointed to your server’s IP address via an A record.
  • Basic terminal access. You should be comfortable running commands over SSH.

Step 1: Set Up the Project Directory

Start by creating a dedicated directory for your Umami installation. Keeping everything in one place makes backups and maintenance straightforward.

mkdir -p /opt/umami
cd /opt/umami

Next, create an environment file to store sensitive values outside of your Compose file. Generate a strong random string for the application secret:

openssl rand -base64 32

Now create the .env file:

cat > /opt/umami/.env <<EOF
POSTGRES_DB=umami
POSTGRES_USER=umami
POSTGRES_PASSWORD=CHANGE_ME_strong_password_here
APP_SECRET=paste_your_generated_secret_here
EOF

Replace CHANGE_ME_strong_password_here with a real password and paste the output of the openssl command as the APP_SECRET value. Restrict the file's permissions with chmod 600 /opt/umami/.env, since it holds the database password, and never commit it to version control.

Step 2: Docker Compose Configuration

Create the docker-compose.yml file in /opt/umami. This defines two services: the Umami application and a PostgreSQL 15 database.

services:
  umami:
    image: ghcr.io/umami-software/umami:postgresql-latest
    container_name: umami
    ports:
      - "3000:3000"
    environment:
      DATABASE_URL: postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@db:5432/${POSTGRES_DB}
      APP_SECRET: ${APP_SECRET}
    depends_on:
      db:
        condition: service_healthy
    restart: always
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:3000/api/heartbeat || exit 1"]
      interval: 30s
      timeout: 5s
      retries: 3

  db:
    image: postgres:15-alpine
    container_name: umami-db
    environment:
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - umami-db-data:/var/lib/postgresql/data
    restart: always
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U $${POSTGRES_USER} -d $${POSTGRES_DB}"]
      interval: 10s
      timeout: 5s
      retries: 5

volumes:
  umami-db-data:

Key details about this configuration:

  • Named volume (umami-db-data) ensures your database persists across container restarts and updates.
  • Health checks on both services report container status in docker ps and, through the service_healthy condition, make Umami wait for PostgreSQL to be ready before starting. Note that Docker does not restart a container just because its health check fails; the restart policy applies when the container process exits.
  • restart: always brings both containers back after a server reboot.
  • The Umami container exposes port 3000, which you will proxy through Nginx in the next step.
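Since Nginx will proxy to Umami from the same machine in Step 4, you can optionally bind the published port to loopback so that port 3000 is never reachable directly from the internet. This is a one-line change to the ports entry shown above:

```yaml
    ports:
      - "127.0.0.1:3000:3000"   # reachable only from the host itself
```

If you make this change, skip the direct http://your-server-ip:3000 check in Step 3 and test through the reverse proxy instead.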

Step 3: Start the Stack

Launch both containers in detached mode:

cd /opt/umami
docker compose up -d

Watch the logs to confirm everything starts cleanly:

docker compose logs -f

You should see PostgreSQL initialize the database on first run, followed by Umami applying its schema automatically. When you see a line like Listening on http://0.0.0.0:3000, the application is ready.

Press Ctrl+C to exit the log view. Verify both containers are running:

docker compose ps

At this point, Umami is accessible on port 3000. If your firewall allows it, you can visit http://your-server-ip:3000 to see the login screen.

Default credentials: The initial admin username is admin and the password is umami. Change these immediately after your first login by going to Settings > Profile.

Step 4: Reverse Proxy with Nginx

Running Umami directly on port 3000 works for testing but is not suitable for production. Exposing application ports directly means no SSL, no HTTP/2, and no protection from malformed requests. A reverse proxy with Nginx sits in front of Umami and gives you proper domain handling, SSL termination, request buffering, and the ability to add security headers. This is standard practice for any containerized web application.

Create a new Nginx server block configuration:

server {
    listen 80;
    server_name analytics.yourdomain.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}

Save this file to /etc/nginx/conf.d/umami.conf (or the appropriate path for your distribution). Then test and reload Nginx:

nginx -t
systemctl reload nginx

Your Umami instance should now be accessible at http://analytics.yourdomain.com. The Upgrade and Connection headers let Nginx forward WebSocket upgrade requests; Umami's realtime view currently polls over regular HTTP, so they are not strictly required, but they are harmless and keep the proxy block reusable for applications that do need WebSockets.

Step 5: SSL with Certbot

Every production site needs HTTPS. Certbot with the Nginx plugin makes this a one-command process:

apt install certbot python3-certbot-nginx -y
certbot --nginx -d analytics.yourdomain.com

Certbot will automatically modify your Nginx configuration to listen on port 443, add the SSL certificate paths, and redirect HTTP traffic to HTTPS.

Verify automatic renewal is working:

certbot renew --dry-run

Certbot installs a systemd timer (or cron job, depending on your OS) that renews certificates automatically before they expire. You do not need to set this up manually.

After Certbot finishes, visit https://analytics.yourdomain.com — you should see a valid SSL certificate and the Umami login page.

Step 6: Add Your First Website

With the stack running and SSL in place, it is time to start tracking a website.

  1. Log in to Umami at https://analytics.yourdomain.com using your admin credentials.
  2. Navigate to Settings > Websites and click Add website.
  3. Enter a display name (e.g., “My Blog”) and the full domain (e.g., myblog.com). Click Save.
  4. Click the newly created website entry, then click Tracking code. You will see a script tag like this:
<script defer src="https://analytics.yourdomain.com/script.js"
  data-website-id="a1b2c3d4-e5f6-7890-abcd-ef1234567890">
</script>
  5. Paste this script tag into the <head> section of your website. If you use WordPress, add it via your theme’s header or a plugin like Insert Headers and Footers. For static sites, add it directly to your HTML template.
  6. Visit your website in a browser, then go back to the Umami dashboard. You should see your visit appear within seconds on the Realtime panel.

The dashboard shows page views, unique visitors, bounce rate, average visit duration, referrer sources, browser and OS breakdown, and geographic data — all without cookies or personal data collection.

If you need to track multiple websites, repeat this process for each one. Umami handles multiple sites from a single installation with no performance impact. Each website gets its own tracking ID and separate dashboard view, and you can switch between them from the top navigation bar.

Step 7: Custom Event Tracking

Page views are only part of the picture. Umami supports custom event tracking so you can measure specific user actions like button clicks, form submissions, and downloads.

Using the data-umami-event Attribute

The simplest method is adding a data-umami-event attribute to any HTML element:

<button data-umami-event="signup-button-click">Sign Up</button>

<a href="/pricing" data-umami-event="pricing-link-click">View Pricing</a>

<form data-umami-event="contact-form-submit">
  <!-- form fields -->
</form>

You can also pass additional properties with data-umami-event-* attributes:

<button
  data-umami-event="purchase"
  data-umami-event-product="pro-plan"
  data-umami-event-price="29">
  Buy Pro Plan
</button>

Using the JavaScript API

For more complex tracking scenarios, use the JavaScript API directly:

// Track a simple event
umami.track('newsletter-subscribe');

// Track an event with custom data
umami.track('file-download', {
  filename: 'whitepaper.pdf',
  category: 'resources'
});

// Track dynamic events based on user actions
document.getElementById('video-player').addEventListener('ended', function() {
  umami.track('video-completed', { title: this.dataset.title });
});

All custom events appear in the Events section of your Umami dashboard, where you can filter and analyze them alongside your page view data.
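Note that the umami global only exists once script.js has loaded. If the script is blocked or fails to load, direct calls to umami.track will throw. A small defensive wrapper avoids that (the trackEvent name here is just an example, not part of Umami's API):

```javascript
// Only forward to Umami when the tracking script actually loaded;
// otherwise the call is silently dropped instead of throwing.
function trackEvent(name, data) {
  if (typeof window !== 'undefined' && window.umami && typeof window.umami.track === 'function') {
    window.umami.track(name, data);
  }
}

// Safe to call even when the script was blocked
trackEvent('newsletter-subscribe');
```

This matters most on pages where analytics failures must never break user-facing functionality, such as checkout flows.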

Bypassing Ad Blockers

One advantage of self-hosting analytics is that you can avoid being blocked by ad blockers and privacy extensions. Most blockers maintain lists of known tracking domains and script names. Since your Umami instance runs on your own domain, it is already harder to block than third-party services.

To further reduce blocking, you can proxy the tracking script through your main website’s domain. Add a location block to your website’s Nginx configuration:

location /stats/script.js {
    proxy_pass https://analytics.yourdomain.com/script.js;
    proxy_set_header Host analytics.yourdomain.com;
    proxy_ssl_server_name on;  # send SNI so the upstream TLS handshake gets the right certificate
}

location /api/send {
    proxy_pass https://analytics.yourdomain.com/api/send;
    proxy_set_header Host analytics.yourdomain.com;
    proxy_ssl_server_name on;
}

Then update the tracking script on your website to use the proxied path:

<script defer src="/stats/script.js"
  data-website-id="a1b2c3d4-e5f6-7890-abcd-ef1234567890">
</script>

With this setup, both the script and the data collection endpoint appear to be part of your main website. No third-party domain is visible in network requests, which makes blocking significantly harder. You can also rename script.js to anything you want (like app.js or site.js) in the proxy path for additional obfuscation.
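For example, one possible variant of the script location block that serves the tracker as app.js (the path name is arbitrary; pick anything that does not look like a tracker):

```nginx
# Serve the Umami tracker under an innocuous name on the main site
location /app.js {
    proxy_pass https://analytics.yourdomain.com/script.js;
    proxy_set_header Host analytics.yourdomain.com;
    proxy_ssl_server_name on;
}
```

Remember to update the script tag on your pages to match, e.g. src="/app.js".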

Keep in mind that this approach is only possible because you self-host the analytics tool. Cloud-hosted solutions like Google Analytics or even Plausible Cloud cannot offer this level of control, since their scripts always load from a third-party domain that blockers can easily target.

Backup and Updates

Self-hosting means you are responsible for your own data. Set up a simple backup routine for the PostgreSQL database:

# Create the backup directory once
mkdir -p /opt/umami/backups

# Manual backup
docker exec umami-db pg_dump -U umami umami > /opt/umami/backups/umami-$(date +%Y%m%d).sql

# Add a daily cron job (crontab -l keeps any existing entries intact)
(crontab -l 2>/dev/null; echo "0 3 * * * docker exec umami-db pg_dump -U umami umami > /opt/umami/backups/umami-\$(date +\%Y\%m\%d).sql") | crontab -
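Dumps accumulate indefinitely with a daily cron job, so it is worth pairing it with a retention rule. A minimal sketch, assuming the same /opt/umami/backups directory and a 14-day window:

```shell
# Delete database dumps older than 14 days (adjust -mtime for a different retention window)
BACKUP_DIR="${BACKUP_DIR:-/opt/umami/backups}"
find "$BACKUP_DIR" -name 'umami-*.sql' -mtime +14 -delete 2>/dev/null || true
```

You can append this find command to the same cron entry or schedule it as a separate daily job.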

To restore from a backup:

cat /opt/umami/backups/umami-20260318.sql | docker exec -i umami-db psql -U umami umami

Updating Umami to the latest version is straightforward with Docker:

cd /opt/umami
docker compose pull
docker compose up -d

Docker pulls the latest image, stops the old container, and starts the new one. The database volume persists, so your data is preserved. Umami handles database migrations automatically on startup when updating between versions.

It is good practice to take a database backup before updating, in case a migration causes issues and you need to roll back:

docker exec umami-db pg_dump -U umami umami > /opt/umami/backups/pre-update-$(date +%Y%m%d).sql
docker compose pull
docker compose up -d
docker compose logs -f umami

Watch the logs to confirm the update completed without errors.

Bottom Line

A complete umami analytics docker setup takes about 15 minutes from start to finish. You get a fast, modern analytics dashboard that respects user privacy, requires no cookie banners, and gives you full control over your data. The resource requirements are minimal — a single 1 GB VPS handles the entire stack comfortably.

The maintenance overhead is equally low. Updates are a two-command process, backups are a single pg_dump call, and the entire stack can be migrated to a new server by copying the Compose file, the .env file, and a database dump. There is no plugin ecosystem to manage, no PHP version compatibility to worry about, and no scheduled tasks to configure.

Compared to Google Analytics, you trade machine learning-powered audience segments for something arguably more valuable: complete data ownership, zero third-party dependencies, and a clean interface that shows you exactly what you need without drowning you in reports you will never open. For most website owners, bloggers, and small SaaS products, Umami provides more than enough insight to make informed decisions about content and user experience.

If you found this guide useful, explore our self-hosted analytics hub for more tutorials on running your own analytics infrastructure.