How to deploy SvelteKit Sites to a DigitalOcean droplet

All the recent fuss about a certain cloud provider billing someone a huge amount of money for a simple website that experienced a DDoS attack got me wondering: "how hard would it be to just deploy this stuff myself?". The TL;DR is that it's not really that hard if you're patient.

The Strategy

I decided to use the following tech to make this happen...

  • GitHub and GitHub Actions - for code storage and deployment.

  • DigitalOcean - for a small server.

  • SvelteKit - the framework that I reach for when making a small, simple brochure website.

  • Caddy - the web server that we'll use as a proxy and automatic free SSL certificate generator.

  • PM2 - the process manager that will keep our Node-based SvelteKit apps up and running.

  • NVM - Node Version Manager, allows us to download and switch between Node versions.

  • Express - we'll use this as a small server for each one of our sites.

Steps

Create the droplet

I created a Droplet inside DigitalOcean. For this I used the cheapest Ubuntu one they had on offer, with a mere 1GB of memory and 25GB of disk space, cute. When you create a droplet, you'll receive an email with all of the login details that you need. The password that you're sent will need changing on first login, so be prepared for that. I quite like pwgen for generating passwords (brew install pwgen and then pwgen 32 will give you a list of 32-character passwords to choose from).

I then ssh'd into the newly created droplet. You can do this by using DigitalOcean's own 'Console' if you prefer, or by using the details that you were sent via email when the droplet was created.
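For reference, connecting from your own machine looks something like this (the IP address below is only a placeholder; use the one from your welcome email):

# replace 203.0.113.10 with your droplet's IP address
ssh root@203.0.113.10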

Install Caddy

I used the Ubuntu steps, but if you're running a different OS then the instructions can be found here: https://caddyserver.com/docs/install. Note that I removed the sudo parts of their installation instructions because I was already logged in as root.

apt install -y debian-keyring debian-archive-keyring apt-transport-https curl
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | tee /etc/apt/sources.list.d/caddy-stable.list
apt update
apt install caddy
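If you want to sanity-check the install, Caddy can report its version, and the Ubuntu package sets it up as a systemd service that you can inspect:

caddy version
systemctl status caddy --no-pager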

Install Node and make an Express Server

Now that Caddy, which we will use as our web server and SSL certificate creator, is installed, create a user on the server to run your websites as. For this documentation I'll go with james (it took me aaaaaages to think of that name), by running adduser james.

With that user added you can now switch to become them by running su james. If you use su - james instead then you'll change to the user's home directory at the same time, which is probably where you'd like to be. Alternatively, run cd ~ to get there.

PM2 needs Node, and my preferred way of installing Node is via NVM. I followed their install instructions:

curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash

I then ran their recommended export so that nvm would load as part of my terminal profile going forwards:

export NVM_DIR="$([ -z "${XDG_CONFIG_HOME-}" ] && printf %s "${HOME}/.nvm" || printf %s "${XDG_CONFIG_HOME}/nvm")"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm

After running that export command I ran source ~/.profile to reload my profile, making nvm available right away without having to log out and back in again or open a new terminal.

To get Node downloaded I then simply ran:

nvm install node
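You can confirm what you ended up with; nvm install node grabs the latest Node release and makes it the default for this user, so something like the following should print recent version numbers:

node --version
npm --version
nvm ls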

I then created a dummy project just to get things up and running:

npm create svelte@latest my-app

This gave me a new folder inside my home directory named my-app (/home/james/my-app).

I cd'd into that directory and installed the things that I would need to create an Express server for pm2 to run later:

cd my-app
npm install dotenv express helmet @godaddy/terminus
npm install -D @sveltejs/adapter-node

I then edited the first line of the svelte.config.js file, changing this:

import adapter from '@sveltejs/adapter-auto';

into this:

import adapter from '@sveltejs/adapter-node';

The complete file is here:

import adapter from '@sveltejs/adapter-node';
import { vitePreprocess } from '@sveltejs/vite-plugin-svelte';

/** @type {import('@sveltejs/kit').Config} */
const config = {
        // Consult https://kit.svelte.dev/docs/integrations#preprocessors
        // for more information about preprocessors
        preprocess: vitePreprocess(),

        kit: {
                // adapter-auto only supports some environments, see https://kit.svelte.dev/docs/adapter-auto for a list.
                // If your environment is not supported, or you settled on a specific environment, switch out the adapter.
                // See https://kit.svelte.dev/docs/adapters for more information about adapters.
                adapter: adapter()
        }
};

export default config;

I then created a server.js file by running nano server.js and pasted the following into it:

import 'dotenv/config'

import { handler } from './build/handler.js';
import express from 'express';
import helmet from "helmet";
import http from 'http';
import { createTerminus } from '@godaddy/terminus'

const app = express();

app.use(
  helmet({
    contentSecurityPolicy: {
      directives: {
        ...helmet.contentSecurityPolicy.getDefaultDirectives(),
        "script-src": ["'self'", "'unsafe-inline'"],
      }
    },
    referrerPolicy: {
      policy: ["same-origin"],
    },
  })
)

app.use(handler);

const server = http.createServer(app)

createTerminus(server, {
  signals: ['SIGTERM', 'SIGINT'],
  onSignal: async () => {
    // Call your cleanup functions below. For example:
    // db.shutdown()
  }
})

server.listen(3000, () => {
  console.log('Listening on port 3000');
});

I ran npm run build to build the project, which creates the build directory that server.js imports its handler from, in preparation for getting started with pm2.
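Before handing things over to pm2, it's worth a quick manual check that the Express server actually serves the built site. This is just a sanity check, not part of the deployment proper:

node server.js
# in a second terminal or ssh session as the same user:
curl -I http://localhost:3000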

Install PM2

Run npm install -g pm2 to install pm2 as a global dependency for this user.

At this point you should still be in the /home/james/my-app directory (run pwd to check). From this directory run pm2 start server.js. Your project should now be up and running on port 3000, because that's the port that we defined in the server.js file.
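A few optional pm2 commands are handy at this point: pm2 status and pm2 logs let you check on the process, pm2 save records the current process list, and pm2 startup prints a command (which needs running as root) that makes pm2 bring your apps back up after a reboot:

pm2 status
pm2 logs --lines 50
pm2 save
pm2 startup   # prints a command to run as root to enable boot persistence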

Configure Caddy

If you want to use SSL, which you probably do, you need to go into your hosting DNS records and add an A record for this website, pointing at the IP address of this droplet. It's important to do this now because Caddy will try to provision an SSL certificate after you've done the next step, and it will fail if there is no A record pointing to this droplet.
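You can check that the record has propagated before moving on; dig should print the droplet's IP address (substituting your own domain, of course):

dig +short www.james-nock.co.uk A
dig +short james-nock.co.uk A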

We need to switch back to being the root user for this step. Because we originally logged in as root and then switched to the new user, we can easily get back to being root again by pressing CTRL+D (to log out of the james shell). Now that you're back to being the root user, run nano /etc/caddy/Caddyfile.

Add the following to Caddyfile (changing james-nock.co.uk to your domain name in all places).

www.james-nock.co.uk {
	reverse_proxy * localhost:3000
}
james-nock.co.uk {
	redir https://www.james-nock.co.uk{uri} permanent
}

The first block above forwards requests to www.james-nock.co.uk to port 3000, and the second block redirects non-www traffic to https://www.james-nock.co.uk (the {uri} keeps the requested path intact).

Press CTRL+X to exit nano and hit enter to save. Validate your configuration by running caddy validate (run it from /etc/caddy, or pass --config /etc/caddy/Caddyfile, so it can find the file) and then start Caddy by running caddy start. Caddy will magically generate the SSL certificate for your site and it will start working very soon. You can spy on progress by running journalctl -f to tail the system journal logs. You can easily test whether the SSL certificate is working by running a curl command against your website, including the https, for example curl https://www.james-nock.co.uk in the example above.
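For reference, that whole sequence looks roughly like this; the cd is there because caddy validate looks for a Caddyfile in the current directory unless you pass --config:

cd /etc/caddy
caddy validate
caddy start
journalctl -f
# once the certificate has been issued:
curl -I https://www.james-nock.co.uk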

Voila!

At this point you should have "my-app" live on the internet. Now, at the very least, enable your firewall (on Ubuntu it's ufw). You should also prohibit root login over SSH, and I'd recommend disallowing password-based SSH access entirely and only allowing access by SSH key.
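As a rough sketch, and assuming the default OpenSSH setup on Ubuntu, that hardening looks something like this (make sure key-based login works for your non-root user before disabling passwords, or you can lock yourself out):

ufw allow OpenSSH
ufw allow 80/tcp
ufw allow 443/tcp
ufw enable

# then, in /etc/ssh/sshd_config, set:
#   PermitRootLogin no
#   PasswordAuthentication no
# and restart the service with: systemctl restart ssh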

But that's only the demo app, how do I put MY website up!?

Good, I was just testing that you were still awake. I'll walk you through this now. Firstly, create an ssh key on your server as your "james" user: ssh-keygen -t ecdsa -C "github" and hit the enter key when you're asked if you want to set a passphrase, you don't. You should now have an id_ecdsa.pub and an id_ecdsa file inside your ~/.ssh folder. Run ls -al ~/.ssh to check (list all files, including hidden files). If you're new to ssh keys, just think of them as mega passwords that are really long and so very hard to guess.

You will need to add this public key to your 'authorized_keys' file so that GitHub can 'log in' with the key when the action runs. To do this run cat ~/.ssh/id_ecdsa.pub and copy the output. Then head over to your ~/.ssh folder by running cd ~/.ssh and then run nano authorized_keys to create and open the file. Paste the public key that you now have on your clipboard and then exit by pressing CTRL+X and hitting enter to save your changes. Next, set the permissions on the authorized_keys file by running chmod 600 ~/.ssh/authorized_keys.
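If you prefer, the same thing can be done without nano; this simply appends the public key to authorized_keys and fixes the permissions:

cat ~/.ssh/id_ecdsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys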

Head over to your repo on GitHub (if it's not already up there then follow their steps for pushing it up there). Once your project is up there, navigate to the settings tab and then go into "Secrets and variables" and click on "Actions". You will need to add 3 Repository Secrets:

  • SSH_HOST - the IP address of your DigitalOcean droplet.

  • SSH_USERNAME - in my case james (whatever you used in adduser).

  • SSH_KEY - the PRIVATE key that you just made using ssh-keygen. You will need to run cat ~/.ssh/id_ecdsa and copy the whole output into this secret.

With the repository secrets added, head to the Deploy keys section of your repository settings and paste in the public key that you made earlier (cat ~/.ssh/id_ecdsa.pub) and give it a sensible title such as "digital ocean deploy" or similar.

At this point GitHub should have all of the things it needs for your workflow to run, which we'll make now. Inside your project add a folder at the top level named .github (don't be tempted to use the .git folder that already exists; these are not the same) and inside that make a folder named workflows. Inside that folder make a file called deploy.yml. This is the file that GitHub Actions will run when you push your code up to GitHub. This is probably a blog post in itself, but below is the workflow that I came up with, which seems to work fine:

name: Build & Deploy
on:
  push:
    branches:
      - main
  workflow_dispatch:

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout main branch
        uses: actions/checkout@v4

      - name: Install SSH Key
        uses: shimataro/ssh-key-action@v2
        with:
          key: ${{ secrets.SSH_KEY }}
          known_hosts: "unnecessary"

      - name: Adding Known Hosts
        run: ssh-keyscan -H ${{ secrets.SSH_HOST }} >> ~/.ssh/known_hosts

      - name: Deploy with rsync
        run: rsync -avz --delete . ${{ secrets.SSH_USERNAME }}@${{ secrets.SSH_HOST }}:website/

      - name: Build site
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.SSH_HOST }}
          username: ${{ secrets.SSH_USERNAME }}
          key: ${{ secrets.SSH_KEY }}
          script: |
            cd ~/website
            nvm use || nvm install $(cat .nvmrc)
            npm install
            npm run build
            pm2 reload website || pm2 start server.js --name website

The workflow above will:

On a push to the branch named "main", it will check out the branch, add the SSH key and then rsync the website to your server using the path that you gave it in the deploy step, in this case "website". It will then ssh into the server, change into that directory, use the correct version of Node via nvm, install dependencies and build the site. Once that's done it will reload the pm2 process, or start it if it's not already running. Job done. Note that this is a basic workflow and I'll probably do a follow-up post at some point with improvements (I've now added a new, better one). The slight flaw in the basic one is that there will be a short period of downtime, around 30 seconds or so, each time you deploy.
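One assumption baked into the build step above is that the repository contains a .nvmrc file telling nvm which Node version to use. If your project doesn't have one yet, you can generate it from the Node version you installed earlier and commit it alongside the rest of the project:

node -v > .nvmrc
cat .nvmrc        # e.g. v20.11.1
git add .nvmrc
git commit -m "Pin Node version for deploys"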

If you push main up to GitHub from your machine, or run the workflow manually from inside 'Actions' in your repository on GitHub, you should see it run. If it fails, read through the output and see where it went wrong. I hit one hurdle, which was a problem with nvm: it turns out that the ~/.bashrc file on Ubuntu begins with code that basically says "if this is not an interactive shell, stop running everything below", and the "export" code from earlier was below that line. I had two options really: one was to comment out that code, and the other was to move the export to be before it. I chose the latter, so the ~/.bashrc file on the server now starts with...

# ~/.bashrc: executed by bash(1) for non-login shells.
# see /usr/share/doc/bash/examples/startup-files (in the package bash-doc)
# for examples

export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"  # This loads nvm
[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion"  # This load>

# If not running interactively, don't do anything
case $- in
    *i*) ;;
      *) return;;
esac
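A quick way to confirm the fix (assuming bash is the james user's login shell, which is the Ubuntu default) is to run a non-interactive command over ssh from your own machine. If nvm prints its version here, the GitHub Action will be able to find it too; the IP below is just a placeholder:

# replace the IP with your droplet's address
ssh james@203.0.113.10 'nvm --version'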

Further steps

You could also think about setting up a clone of the server that you just made and putting a load balancer in front of the two of them so that if one of the servers goes down, your website doesn't.