Self-hosting Supabase on Ubuntu and Digital Ocean

published on October 03, 2022 by
Kelvin Pompey

Supabase is primarily a cloud-based service for building application backends with PostgreSQL. However, it’s built on open-source tools and offers a way for users to host the service on their own servers. You can have a look at the official guide here.

The guide provides an overview of the Supabase architecture and the steps needed to set it up. So why do you need this article? The official guide is server agnostic and doesn't cover any specific platform, but this article will walk you through the steps required to host Supabase on an Ubuntu server on Digital Ocean. Most of the concepts apply to other Linux distributions as well; the main difference will be the steps required to install the required tools.


So let’s get started. Once your Ubuntu server is ready you’ll need Git, Docker, and Docker Compose installed. If you’re starting from scratch, the easiest way to get these is to use a Droplet with Docker preinstalled. I chose the 2 GB RAM option.

If you have an existing server and need to install Docker Compose manually you can follow this guide.


Once that’s completed you’ll need to create a hostname and then either set a password or an SSH key for your droplet. I’ll set a password for this guide.

With that done, we can log in via SSH.

ssh root@dropletipaddress

Next we add a regular user. This can be anything but I'll use "supabase".

adduser supabase

After you've finished entering the user details you'll want to grant the user sudo privileges.

usermod -aG sudo supabase

Next, login as that user.

su supabase

Now the fun begins.

Clone the Supabase repo and cd into the docker directory.

cd ~
git clone --depth 1 https://github.com/supabase/supabase
cd supabase/docker

Create the .env file from the .env.example template.

cp .env.example .env

This is technically all you need to run Supabase, but it's not secure: it uses the default keys from the repository. Generating your own keys is super easy though. There is a key generator in the Supabase guide.
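Under the hood, each key is just a JSON Web Token signed with your JWT_SECRET using HS256. As a rough sketch of what the generator does (the secret and claims below are illustrative placeholders, not values to use in production):

```shell
# Sketch of what the Supabase key generator does: sign a JWT with
# HS256 using JWT_SECRET. Secret and claims are placeholders.
JWT_SECRET="your-super-secret-jwt-token-with-at-least-32-characters"

# base64url-encode stdin (strip padding, swap +/ for -_)
b64url() { openssl base64 -A | tr '+/' '-_' | tr -d '='; }

header=$(printf '%s' '{"alg":"HS256","typ":"JWT"}' | b64url)
payload=$(printf '%s' '{"role":"anon","iss":"supabase","iat":1641769200,"exp":1799535600}' | b64url)
signature=$(printf '%s.%s' "$header" "$payload" \
  | openssl dgst -sha256 -hmac "$JWT_SECRET" -binary | b64url)

echo "$header.$payload.$signature"
```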

You can generate the ANON_KEY and SERVICE_ROLE_KEY with the "preconfigured payload" dropdown, then replace the default values in the .env file. Replace the JWT_SECRET as well. You'll also need a database password; you can refresh the generator page to get a fresh JWT_SECRET value and use that as the password.
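After replacing the values, the relevant .env entries look something like this (variable names follow the .env.example template; the values are placeholders):

```shell
# Placeholders -- substitute your own generated values.
POSTGRES_PASSWORD=your-generated-database-password
JWT_SECRET=your-generated-jwt-secret-at-least-32-characters
ANON_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.your-anon-token
SERVICE_ROLE_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.your-service-role-token
```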


You also need to set the ANON_KEY and SERVICE_ROLE_KEY in docker/volumes/api/kong.yml:

  - username: anon
    keyauth_credentials:
      - key: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJhbm9uIiwKICAgICJpc3MiOiAic3VwYWJhc2UtZGVtbyIsCiAgICAiaWF0IjogMTY0MTc2OTIwMCwKICAgICJleHAiOiAxNzk5NTM1NjAwCn0.dc_X5iR_VP_qT0zsiyj_I_OZ2T9FtRU2BBNWN8Bu4GE
  - username: service_role
    keyauth_credentials:
      - key: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJzZXJ2aWNlX3JvbGUiLAogICAgImlzcyI6ICJzdXBhYmFzZS1kZW1vIiwKICAgICJpYXQiOiAxNjQxNzY5MjAwLAogICAgImV4cCI6IDE3OTk1MzU2MDAKfQ.DaYlNEoUrrEn2Ig7tqibS-PHK5vgusbcbo7X36XVt4Q
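You can sanity-check which role a key encodes by base64-decoding its payload segment (the part between the two dots). Here is a small sketch using the default demo anon key from the repository:

```shell
# Decode the payload segment of a JWT to inspect its claims.
key="eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJhbm9uIiwKICAgICJpc3MiOiAic3VwYWJhc2UtZGVtbyIsCiAgICAiaWF0IjogMTY0MTc2OTIwMCwKICAgICJleHAiOiAxNzk5NTM1NjAwCn0.dc_X5iR_VP_qT0zsiyj_I_OZ2T9FtRU2BBNWN8Bu4GE"

# Take the second dot-separated segment and undo base64url encoding.
payload=$(printf '%s' "$key" | cut -d. -f2 | tr '_-' '/+')
# Restore the base64 padding that JWTs strip, then decode.
case $(( ${#payload} % 4 )) in
  2) payload="${payload}==" ;;
  3) payload="${payload}=" ;;
esac
claims=$(printf '%s' "$payload" | base64 -d)
echo "$claims"
```

The output should show `"role": "anon"` for the anon key and `"role": "service_role"` for the service key.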

## Email auth

If you're going to be sending emails, for magic links for instance, you'll need to update the SMTP configuration with your provider's details. It's not absolutely necessary to get the system running, so you can come back to it later if you don't need it immediately.

Finally, you need to change the PUBLIC_REST_URL to use the droplet's public IP address so you can access Studio remotely.
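For example, assuming a placeholder public IP of 203.0.113.10 (substitute your droplet's address; the path follows the default value in .env.example):

```shell
# Placeholder IP -- use your droplet's public IP address.
PUBLIC_REST_URL=http://203.0.113.10:8000/rest/v1/
```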


You can now run the container. From the docker directory run:

docker-compose up

If you get a permission error like this one:

docker.errors.DockerException: Error while fetching server API version: ('Connection aborted.', PermissionError(13, 'Permission denied'))

you'll need to add your user to the docker group.

sudo usermod -aG docker supabase

Log out and back in so the group change takes effect, then try docker-compose up again.

The first time takes a while as all the images are downloaded. If all the services run without error then you're good to go.

It will look something like this:

supabase-storage | [1664936808931] INFO (7 on 8883c6916b0d): Server listening at
supabase-storage | Server listening at
supabase-rest | 05/Oct/2022:02:26:50 +0000: Listening on port 3000
supabase-rest | 05/Oct/2022:02:26:50 +0000: Connection successful
supabase-rest | 05/Oct/2022:02:26:50 +0000: Config re-loaded
supabase-rest | 05/Oct/2022:02:26:50 +0000: Listening for notifications on the pgrst channel
supabase-rest | 05/Oct/2022:02:26:50 +0000: Schema cache loaded
supabase-realtime | 2022-10-05 02:27:03.096 [info] Running RealtimeWeb.Endpoint with cowboy 2.8.0 at :::4000 (http)
supabase-realtime | 2022-10-05 02:27:03.097 [info] Access RealtimeWeb.Endpoint at http://localhost:4000
supabase-realtime | 2022-10-05 02:27:06.141 [info] tzdata release in place is from a file last modified Wed, 21 Oct 2020 18:40:20 GMT. Release file on server was last modified Sat, 24 Sep 2022 04:40:44 GMT.
supabase-realtime | 2022-10-05 02:27:07.265 [info] Tzdata has updated the release from 2020d to 2022d

Now, to verify that everything is configured correctly, try to open Studio in your desktop browser.

Navigate to http://dropletipaddress:3000

You should see the Studio home page. Select the Default project and the project dashboard should load.

If, on the other hand, you see a "connecting to database" screen, then there is something wrong with your configuration. Double-check it and try again.

Running docker-compose down shuts down the containers and removes them, which destroys the database. Pressing ctrl+c shuts down the containers without destroying the database. You can also run docker-compose up again without shutting down the containers; this reloads them with any changes to the .env file.

You can connect to the Supabase service directly using your public IP and the Kong port (8000), but for a real project you'll want an SSL-secured domain.
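As a quick smoke test of the direct connection, you can query the REST endpoint through Kong with curl. The values below are placeholders; substitute your droplet's IP and your generated ANON_KEY before running. Requests through Kong need the key both as an apikey header and as a Bearer token.

```shell
# Placeholders -- replace with your droplet IP and generated anon key.
SUPABASE_URL="http://DROPLET_IP:8000"
ANON_KEY="your-anon-key"

curl "$SUPABASE_URL/rest/v1/" \
  -H "apikey: $ANON_KEY" \
  -H "Authorization: Bearer $ANON_KEY" \
  || echo "request failed: replace the placeholders first"
```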

To do this you'll need an HTTP proxy server. Nginx is quite popular and there are plenty of articles covering how to configure it, but my preference is Caddy for its ease of configuration relative to Nginx.

To install it run the following commands:

sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https

curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | sudo gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg

curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | sudo tee /etc/apt/sources.list.d/caddy-stable.list

sudo apt-get update

sudo apt-get install caddy -y

Open the HTTP and HTTPS ports.

sudo ufw allow proto tcp from any to any port 80,443

To configure a domain and secure Studio with a login and password, edit /etc/caddy/Caddyfile:

sudo vim /etc/caddy/Caddyfile

Replace its contents with the following, substituting your own domain:

yourdomain.com {
        reverse_proxy /rest/v1/* localhost:8000
        reverse_proxy /auth/v1/* localhost:8000
        reverse_proxy /realtime/v1/* localhost:8000
        reverse_proxy /storage/v1/* localhost:8000
        reverse_proxy * localhost:3000
        basicauth / {
                yourusername $yourhashedpassword
        }
}
To generate $yourhashedpassword, run:

caddy hash-password

This will prompt you to enter and confirm your password and will generate a hashed version of it. Copy that value to the Caddyfile in place of $yourhashedpassword.

Save the file and restart caddy.

sudo service caddy restart

Update the .env file to use your domain.
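For example (the variable names here follow the .env.example template at the time of writing, so check your copy; substitute your actual domain):

```shell
# Placeholder domain -- use your own.
SITE_URL=https://yourdomain.com
API_EXTERNAL_URL=https://yourdomain.com
PUBLIC_REST_URL=https://yourdomain.com/rest/v1/
```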


Stop the Supabase containers with ctrl+c.

Then start them again:

docker-compose up

Now navigate to your domain and verify that Studio loads and connects to the database.

For a more complete test, configure one of the Supabase sample projects. The user management sample is a good choice as it exercises the REST API, Auth, and Storage.

If you've found this guide useful, kindly leave a tip! 😊