solidweb.app — Full Server Setup Guide (v5)

tags: melvin matthias solid jss debian pm2 nginx letsencrypt

Server: 92.205.60.157 · OS: Debian 12 Bookworm
Domain: solidweb.app
Stack: Debian Bookworm · nvm · Node.js 24.11.0 · PM2 · JavaScript Solid Server (JSS) · Nginx · Let's Encrypt (wildcard) · Netdata · Uptime Kuma

Credits: The JavaScript Solid Server (JSS) is created by
Melvin Carvalho — web pioneer, mathematician, Solid enthusiast,
and long-time contributor to the Solid ecosystem and decentralised web.


Table of Contents

0. Crosscheck Notes (v4 → v5)
1. Architecture Overview
2. DNS Setup
3. Server Preparation
4. Node.js via nvm
5. PM2 Installation
6. JavaScript Solid Server (JSS)
7. Uptime Kuma
8. PM2 Ecosystem Files & Boot Hook
9. Nginx HTTP Scaffolding
10. Let's Encrypt Wildcard Certificate (DNS-01)
11. Nginx HTTPS Final Config
12. Netdata
13. Firewall Rules
14. Nginx Virtual Host Summary
15. Post-Install Checklist
16. Maintenance & Useful Commands
Summary: Port & Service Map
Credits

0. Crosscheck Notes (v4 → v5)

This version was recalculated against the official JSS Docusaurus docs at
javascriptsolidserver.github.io/docs
and the Solid LLM Skills at
github.com/solid/solid-llm-skills.

Changes from v4:

| # | What changed | Why |
|---|--------------|-----|
| 1 | jss init step added before first start | Official CLI docs list jss init as the interactive configuration setup command — useful to run once to confirm config |
| 2 | Pod creation command now includes email + password | Official API docs: with --idp enabled, POST /.pods requires {"name","email","password"} |
| 3 | idpIssuer config key noted as extended feature | Present in the gh-pages README but absent from the canonical config reference table; kept with caveat |
| 4 | Pod structure section added | Official docs fully specify the on-disk pod layout — useful for debugging and for knowing what to expect |
| 5 | jss --help verification step added after install | Official docs recommend this as the install verification step |
| 6 | WAC ACL model noted explicitly | Solid LLM Skills confirm JSS uses WAC (Web Access Control) by default, not ACP |
| 7 | Nginx config enriched with X-Forwarded-Host | Official JSS Nginx snippet lacks this; required for correct IDP issuer URL construction in subdomain mode |
| 8 | Open-registration note clarified against API spec | inviteOnly omitted = open; confirmed against both the config table (no such key listed) and gh-pages docs |

1. Architecture Overview

Internet
│
▼
92.205.60.157 :80 / :443
│
▼
┌──────────────────────────────────────────────────────────────────┐
│  Nginx (reverse proxy + TLS termination, wildcard cert)          │
│                                                                  │
│  solidweb.app            → JSS :3000  (root / login / IDP)      │
│  *.solidweb.app          → JSS :3000  (per-user pods)           │
│  status.solidweb.app     → Uptime Kuma :3001                    │
│  monitor.solidweb.app    → Netdata :19999                       │
└──────────────────────────────────────────────────────────────────┘
         ↑                        ↑
   PM2 (user: jss)          PM2 (user: kuma)
   manages JSS               manages Uptime Kuma
   pm2-jss.service           pm2-kuma.service
   (systemd unit,            (systemd unit,
    auto-generated            auto-generated
    by PM2)                   by PM2)

Process management strategy:

Why one PM2 instance per user and not a shared root PM2?
Running PM2 as root is a security anti-pattern. Separate per-user PM2 daemons give each
service its own isolated process tree, log directory (~/.pm2/logs), and dump file.
Each generates its own systemd unit (pm2-jss.service, pm2-kuma.service).


2. DNS Setup

Create the following records at your DNS registrar (TTL 300 s is fine to start):

| Hostname | Type | Value | Purpose |
|----------|------|-------|---------|
| solidweb.app | A | 92.205.60.157 | Root domain / Solid IDP |
| *.solidweb.app | A | 92.205.60.157 | All user pods + subservices |

One wildcard A record covers everything: alice.solidweb.app, status.solidweb.app,
monitor.solidweb.app, etc. No individual subdomain records needed.

Verify propagation before step 10 (TLS):

dig alice.solidweb.app +short      # → 92.205.60.157
dig status.solidweb.app +short     # → 92.205.60.157
dig monitor.solidweb.app +short    # → 92.205.60.157
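The three dig checks above can be wrapped in a small comparison helper. This is a sketch of our own (check_dns is not a standard tool); the function itself is pure shell, so it is demonstrated below with literal sample answers, and a comment shows how to feed it live dig output on the server.

```shell
# check_dns HOST ANSWER — compare one resolved answer against the server IP.
EXPECTED_IP="92.205.60.157"

check_dns() {
  local host="$1" answer="$2"
  if [ "$answer" = "$EXPECTED_IP" ]; then
    echo "OK   $host -> $answer"
  else
    echo "FAIL $host -> ${answer:-<no answer>} (expected $EXPECTED_IP)"
  fi
}

# Live usage (requires dig and propagated records):
#   for h in alice status monitor; do
#     check_dns "$h.solidweb.app" "$(dig +short "$h.solidweb.app" | head -n1)"
#   done

# Demonstration with literal answers:
check_dns alice.solidweb.app "92.205.60.157"   # → OK
check_dns stale.solidweb.app "203.0.113.9"     # → FAIL (cached old record)
```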

3. Server Preparation

# Update and upgrade
apt update && apt upgrade -y

# Install essential packages
apt install -y \
  curl wget git \
  build-essential \
  ufw \
  nginx \
  certbot \
  apache2-utils

# Set hostname
hostnamectl set-hostname solidweb

build-essential is required on Debian Bookworm before nvm can install Node.js.
apache2-utils provides htpasswd for Netdata basic auth.
python3-certbot-nginx is intentionally not installed — wildcard certs require
the DNS-01 challenge, not the Nginx plugin.


4. Node.js via nvm

4.1 Create dedicated service users

useradd --system --create-home --shell /bin/bash --home-dir /home/jss  jss
useradd --system --create-home --shell /bin/bash --home-dir /home/kuma kuma

Both users get /bin/bash so nvm can install into their home directories.
Services run non-interactively once PM2 is managing them.

4.2 Install nvm for both users

su - jss  -c 'curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.4/install.sh | bash'
su - kuma -c 'curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.4/install.sh | bash'

4.3 Install Node.js 24.11.0

su - jss  -c 'source /home/jss/.nvm/nvm.sh  && nvm install 24.11.0 && nvm alias default 24.11.0'
su - kuma -c 'source /home/kuma/.nvm/nvm.sh && nvm install 24.11.0 && nvm alias default 24.11.0'

Verify:

su - jss  -c 'source /home/jss/.nvm/nvm.sh  && node --version && npm --version'
su - kuma -c 'source /home/kuma/.nvm/nvm.sh && node --version && npm --version'
# Expected: v24.11.0 / 11.x.x  (Node 24 bundles npm 11)

JSS requires Node.js 18+ (official docs). 24.11.0 is fully compatible.


5. PM2 Installation

5.1 Install PM2 for both users

Never sudo npm install -g pm2 — that installs into the system npm, not the nvm one,
causing PATH mismatches at boot. Always install as the service user.

su - jss  -c 'source /home/jss/.nvm/nvm.sh  && npm install -g pm2'
su - kuma -c 'source /home/kuma/.nvm/nvm.sh && npm install -g pm2'

Verify:

su - jss  -c 'source /home/jss/.nvm/nvm.sh  && pm2 --version'
su - kuma -c 'source /home/kuma/.nvm/nvm.sh && pm2 --version'

5.2 Install pm2-logrotate (prevents unbounded log growth)

su - jss  -c 'source /home/jss/.nvm/nvm.sh  && pm2 install pm2-logrotate'
su - kuma -c 'source /home/kuma/.nvm/nvm.sh && pm2 install pm2-logrotate'
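The module's defaults can be tightened with pm2 set. The three keys below (max_size, retain, compress) are documented pm2-logrotate settings; the values are suggestions, not requirements. Run as each service user with nvm sourced:

```shell
# pm2-logrotate module settings (run once per service user)
pm2 set pm2-logrotate:max_size 10M      # rotate once a log reaches 10 MB
pm2 set pm2-logrotate:retain 14         # keep 14 rotated files per log
pm2 set pm2-logrotate:compress true     # gzip rotated logs
```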

6. JavaScript Solid Server (JSS)

6.1 Install JSS

su - jss -c 'source /home/jss/.nvm/nvm.sh && npm install -g javascript-solid-server'

Verify (official recommended check from docs):

su - jss -c 'source /home/jss/.nvm/nvm.sh && jss --help'

6.2 Create data directory

mkdir -p /var/lib/jss/data
chown -R jss:jss /var/lib/jss

6.3 Run jss init (interactive config setup)

The official CLI docs list jss init as the interactive configuration setup command.
Run it once as the jss user to generate a validated baseline config, then we'll
replace/augment it with our production config.json in the next step.

sudo -u jss bash -c '
  source /home/jss/.nvm/nvm.sh
  cd /var/lib/jss
  jss init
'
# Walk through the prompts; this confirms the binary works correctly.
# Output config is written to ./config.json in the cwd — we will not use
# this file directly; it just validates the install.

6.4 Production config file

mkdir -p /etc/jss

Create /etc/jss/config.json:

{
  "port": 3000,
  "host": "127.0.0.1",
  "root": "/var/lib/jss/data",
  "subdomains": true,
  "baseDomain": "solidweb.app",
  "conneg": true,
  "notifications": true,
  "idp": true,
  "idpIssuer": "https://solidweb.app",
  "mashlibCdn": true,
  "defaultQuota": "1GB"
}

Config key reference (crosschecked against official docs):

| Key | Official ref | Our value | Notes |
|-----|--------------|-----------|-------|
| port | ✅ config table | 3000 | JSS default; Nginx proxies externally |
| host | ✅ config table | "127.0.0.1" | Override from default 0.0.0.0 — loopback only |
| root | ✅ config table | /var/lib/jss/data | Persistent data dir |
| subdomains | ✅ config table | true | Pod per subdomain (alice.solidweb.app) |
| baseDomain | ✅ config table | "solidweb.app" | Required for subdomain URI construction |
| conneg | ✅ config table | true | Turtle ↔ JSON-LD content negotiation |
| notifications | ✅ config table | true | WebSocket updates (solid-0.1 protocol) |
| idp | ✅ config table | true | Built-in Identity Provider |
| idpIssuer | ⚠️ gh-pages README only | "https://solidweb.app" | Not in canonical config table; present in extended docs. No trailing slash — must be exact |
| mashlibCdn | ✅ config table | true | SolidOS data browser from unpkg CDN |
| defaultQuota | ✅ gh-pages docs | "1GB" | Per-pod storage limit |

Open registration: The inviteOnly key is absent — omitting it leaves registration
fully open. Anyone visiting https://solidweb.app can self-register and get a pod at
<username>.solidweb.app. Confirmed: this key does not appear in the official config
reference table.

chown -R jss:jss /etc/jss
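Before starting the server it is worth confirming the file is syntactically valid JSON, since a stray comma fails at startup with a less obvious error. python3 -m json.tool ships with Debian's python3. Demonstrated here on a throwaway copy; point it at /etc/jss/config.json on the server:

```shell
# Validate JSON syntax (demo file; substitute /etc/jss/config.json)
cat > /tmp/config-check.json <<'EOF'
{ "port": 3000, "host": "127.0.0.1" }
EOF

if python3 -m json.tool /tmp/config-check.json >/dev/null 2>&1; then
  echo "config OK"
else
  echo "config INVALID"
fi
```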

6.5 Quick sanity test (before PM2)

sudo -u jss bash -c 'source /home/jss/.nvm/nvm.sh && jss start --config /etc/jss/config.json'
# Look for: "Server listening on 127.0.0.1:3000"
# Ctrl+C

7. Uptime Kuma

7.1 Install

su - kuma -c 'source /home/kuma/.nvm/nvm.sh && npm install -g uptime-kuma'

7.2 Create data directory

mkdir -p /var/lib/kuma
chown -R kuma:kuma /var/lib/kuma

7.3 Quick sanity test (before PM2)

sudo -u kuma bash -c 'source /home/kuma/.nvm/nvm.sh && uptime-kuma-server \
  --data-dir /var/lib/kuma --port 3001 --host 127.0.0.1'
# Look for: "Server started on port 3001"
# Ctrl+C

Uptime Kuma has no default password. Admin account is created on first browser visit.


8. PM2 Ecosystem Files & Boot Hook

Read this section fully before executing. Order matters.

8.1 Ecosystem file for JSS

Create /etc/jss/ecosystem.config.js:

module.exports = {
  apps: [
    {
      name: 'jss',

      // Full absolute path to the versioned binary — PM2 at boot does not
      // source .bashrc and cannot resolve nvm shims.
      script: '/home/jss/.nvm/versions/node/v24.11.0/bin/jss',
      args: 'start --config /etc/jss/config.json',
      cwd: '/var/lib/jss',

      // Fork mode is correct — cluster mode is for stateless HTTP apps only.
      exec_mode: 'fork',
      instances: 1,

      autorestart: true,
      watch: false,           // never watch in production
      max_restarts: 10,
      min_uptime: '5s',       // must stay alive 5 s to count as a clean start
      restart_delay: 4000,    // wait 4 s between restart attempts

      max_memory_restart: '512M',

      out_file:   '/home/jss/.pm2/logs/jss-out.log',
      error_file: '/home/jss/.pm2/logs/jss-error.log',
      merge_logs: true,
      log_date_format: 'YYYY-MM-DD HH:mm:ss Z',

      env_production: {
        NODE_ENV: 'production',
        PATH: '/home/jss/.nvm/versions/node/v24.11.0/bin:' + process.env.PATH,
      },
    },
  ],
};
chown jss:jss /etc/jss/ecosystem.config.js

8.2 Ecosystem file for Uptime Kuma

Create /home/kuma/ecosystem.config.js:

module.exports = {
  apps: [
    {
      name: 'uptime-kuma',
      script: '/home/kuma/.nvm/versions/node/v24.11.0/bin/uptime-kuma-server',
      args: '--data-dir /var/lib/kuma --port 3001 --host 127.0.0.1',
      cwd: '/var/lib/kuma',

      exec_mode: 'fork',
      instances: 1,

      autorestart: true,
      watch: false,
      max_restarts: 10,
      min_uptime: '5s',
      restart_delay: 4000,

      max_memory_restart: '256M',

      out_file:   '/home/kuma/.pm2/logs/uptime-kuma-out.log',
      error_file: '/home/kuma/.pm2/logs/uptime-kuma-error.log',
      merge_logs: true,
      log_date_format: 'YYYY-MM-DD HH:mm:ss Z',

      env_production: {
        NODE_ENV: 'production',
        PATH: '/home/kuma/.nvm/versions/node/v24.11.0/bin:' + process.env.PATH,
      },
    },
  ],
};
chown kuma:kuma /home/kuma/ecosystem.config.js

8.3 Start both apps under PM2

sudo -u jss bash -c '
  source /home/jss/.nvm/nvm.sh
  pm2 start /etc/jss/ecosystem.config.js --env production
  pm2 status
'

sudo -u kuma bash -c '
  source /home/kuma/.nvm/nvm.sh
  pm2 start /home/kuma/ecosystem.config.js --env production
  pm2 status
'

Expected from pm2 status for each user:

┌────┬──────────────┬──────┬─────────┬──────────┐
│ id │ name         │ mode │ pid     │ status   │
├────┼──────────────┼──────┼─────────┼──────────┤
│ 0  │ jss          │ fork │ 12345   │ online   │
└────┴──────────────┴──────┴─────────┴──────────┘

8.4 Register PM2 startup hooks (mandatory two-step)

PM2 generates a systemd unit containing the exact PATH with the nvm bin directory.
You must run pm2 startup as the service user first — it prints a sudo env PATH=...
command. Copy-paste that exact command and run it as root.
Skipping this, or running pm2 startup directly as root, produces a broken PATH at boot.

For the jss user:

# Step 1 — run as jss; prints the sudo command
sudo -u jss bash -c \
  'source /home/jss/.nvm/nvm.sh && pm2 startup systemd -u jss --hp /home/jss --service-name pm2-jss'

PM2 prints something like:

[PM2] To setup the Startup Script, copy/paste the following command:
sudo env PATH=$PATH:/home/jss/.nvm/versions/node/v24.11.0/bin \
  /home/jss/.nvm/versions/node/v24.11.0/lib/node_modules/pm2/bin/pm2 \
  startup systemd -u jss --hp /home/jss --service-name pm2-jss
# Step 2 — run the EXACT printed command as root (paths will match your install):
sudo env PATH=$PATH:/home/jss/.nvm/versions/node/v24.11.0/bin \
  /home/jss/.nvm/versions/node/v24.11.0/lib/node_modules/pm2/bin/pm2 \
  startup systemd -u jss --hp /home/jss --service-name pm2-jss

For the kuma user:

# Step 1
sudo -u kuma bash -c \
  'source /home/kuma/.nvm/nvm.sh && pm2 startup systemd -u kuma --hp /home/kuma --service-name pm2-kuma'

# Step 2 — copy and run the exact printed command as root
sudo env PATH=$PATH:/home/kuma/.nvm/versions/node/v24.11.0/bin \
  /home/kuma/.nvm/versions/node/v24.11.0/lib/node_modules/pm2/bin/pm2 \
  startup systemd -u kuma --hp /home/kuma --service-name pm2-kuma

8.5 Save PM2 process lists (mandatory)

pm2 startup only registers the boot hook. pm2 save writes the dump file that lists
which processes to resurrect. Both steps are required.

sudo -u jss  bash -c 'source /home/jss/.nvm/nvm.sh  && pm2 save'
sudo -u kuma bash -c 'source /home/kuma/.nvm/nvm.sh && pm2 save'

8.6 Verify generated systemd units

systemctl status pm2-jss.service
systemctl status pm2-kuma.service

systemctl cat pm2-jss.service    # inspect the generated unit
systemctl cat pm2-kuma.service

Both should be active (running) with WantedBy=multi-user.target.
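If either unit ever fails at boot with "env: node: No such file or directory", the first thing to check is the PATH baked into the generated unit. The pattern test below is shown on a sample Environment= line; on the server, pipe systemctl cat pm2-jss.service into the same grep.

```shell
# Does the unit's PATH include an nvm-versioned bin directory?
sample='Environment=PATH=/home/jss/.nvm/versions/node/v24.11.0/bin:/usr/local/bin:/usr/bin'

if printf '%s\n' "$sample" | grep -q '\.nvm/versions/node/v[0-9.]*/bin'; then
  echo "PATH includes nvm bin dir"
else
  echo "PATH is missing the nvm bin dir; re-run the pm2 startup two-step"
fi

# Live: systemctl cat pm2-jss.service | grep 'Environment=PATH'
```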


9. Nginx HTTP Scaffolding

9.1 Remove default site

rm -f /etc/nginx/sites-enabled/default
mkdir -p /var/www/certbot

9.2 Temporary HTTP catch-all vhost

For wildcard DNS-01 certs the webroot challenge is not needed.
This redirect-only block is enough to keep Nginx running while we get certs.

Create /etc/nginx/sites-available/solidweb.app:

server {
    listen 80;
    listen [::]:80;
    server_name solidweb.app *.solidweb.app;
    return 301 https://$host$request_uri;
}
ln -s /etc/nginx/sites-available/solidweb.app /etc/nginx/sites-enabled/
nginx -t && systemctl reload nginx

10. Let's Encrypt Wildcard Certificate (DNS-01)

Why DNS-01?

*.solidweb.app wildcards cannot be issued via HTTP-01.
Let's Encrypt mandates DNS-01 for all wildcard SANs.

One cert covers everything

| SANs covered | Stored at |
|--------------|-----------|
| solidweb.app + *.solidweb.app | /etc/letsencrypt/live/solidweb.app/ |

This single cert is used by all four Nginx server blocks.

10.1 Request (manual DNS mode)

certbot certonly \
  --manual \
  --preferred-challenges dns \
  --server https://acme-v02.api.letsencrypt.org/directory \
  --agree-tos \
  --email you@example.com \
  -d solidweb.app \
  -d '*.solidweb.app'

10.2 Add two TXT records at your registrar

Certbot pauses twice — once per SAN. Both records must coexist simultaneously.

| Name | Type | Value |
|------|------|-------|
| _acme-challenge.solidweb.app | TXT | <first token> |
| _acme-challenge.solidweb.app | TXT | <second token> |

Do not delete the first record before adding the second.

10.3 Verify propagation before pressing Enter

# In a second terminal:
dig TXT _acme-challenge.solidweb.app +short
# Both token values must appear before proceeding

10.4 Auto-renewal via DNS plugin

The manual method does not auto-renew. Re-issue with your registrar's Certbot plugin:

# Example: Cloudflare
apt install -y python3-certbot-dns-cloudflare

cat > /etc/letsencrypt/cloudflare.ini <<'EOF'
dns_cloudflare_api_token = YOUR_API_TOKEN_HERE
EOF
chmod 600 /etc/letsencrypt/cloudflare.ini

certbot certonly \
  --dns-cloudflare \
  --dns-cloudflare-credentials /etc/letsencrypt/cloudflare.ini \
  --server https://acme-v02.api.letsencrypt.org/directory \
  --agree-tos \
  --email you@example.com \
  -d solidweb.app \
  -d '*.solidweb.app'

Full plugin list: https://certbot.eff.org/docs/using.html#dns-plugins
(Hetzner, DigitalOcean, OVH, Route53, Gandi, Linode, and many more are available.)

10.5 Nginx post-renewal reload hook

cat > /etc/letsencrypt/renewal-hooks/post/reload-nginx.sh <<'EOF'
#!/bin/bash
systemctl reload nginx
EOF
chmod +x /etc/letsencrypt/renewal-hooks/post/reload-nginx.sh

# Verify auto-renewal timer is active
systemctl status certbot.timer

# Dry-run test
certbot renew --dry-run
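For scripted monitoring alongside the dry run, days-to-expiry can be computed from the notAfter= line that openssl prints. A sketch using GNU date (present on Debian); days_until_expiry is our own helper name, not a standard command.

```shell
# days_until_expiry "notAfter=..." — days remaining on a certificate,
# given the output of: openssl x509 -noout -enddate
days_until_expiry() {
  local end="${1#notAfter=}"
  local end_s now_s
  end_s=$(date -d "$end" +%s)   # GNU date parses openssl's date format
  now_s=$(date +%s)
  echo $(( (end_s - now_s) / 86400 ))
}

# Live usage:
#   days_until_expiry "$(echo | openssl s_client -connect solidweb.app:443 \
#     -servername solidweb.app 2>/dev/null | openssl x509 -noout -enddate)"
```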

11. Nginx HTTPS Final Config

11.1 Shared TLS snippet

Create /etc/nginx/snippets/ssl-params.conf:

ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers on;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256;
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:10m;
# Note: Let's Encrypt ended OCSP support in 2025, so stapling no longer
# takes effect for these certs; harmless to keep, but nginx may log a warning.
ssl_stapling on;
ssl_stapling_verify on;
resolver 1.1.1.1 8.8.8.8 valid=300s;
resolver_timeout 5s;
add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;
add_header X-Frame-Options SAMEORIGIN;
add_header X-Content-Type-Options nosniff;

11.2 status.solidweb.app (Uptime Kuma)

Create /etc/nginx/sites-available/status.solidweb.app:

server {
    listen 80;
    listen [::]:80;
    server_name status.solidweb.app;
    return 301 https://status.solidweb.app$request_uri;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name status.solidweb.app;

    ssl_certificate     /etc/letsencrypt/live/solidweb.app/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/solidweb.app/privkey.pem;
    include snippets/ssl-params.conf;

    # Uptime Kuma requires WebSocket for real-time dashboard
    location / {
        proxy_pass          http://127.0.0.1:3001;
        proxy_http_version  1.1;
        proxy_set_header    Upgrade    $http_upgrade;
        proxy_set_header    Connection "upgrade";
        proxy_set_header    Host       $host;
        proxy_set_header    X-Real-IP  $remote_addr;
        proxy_set_header    X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header    X-Forwarded-Proto $scheme;
        proxy_read_timeout  3600s;
    }
}

11.3 monitor.solidweb.app (Netdata)

htpasswd -c /etc/nginx/.htpasswd admin
# Enter a strong password when prompted

Create /etc/nginx/sites-available/monitor.solidweb.app:

server {
    listen 80;
    listen [::]:80;
    server_name monitor.solidweb.app;
    return 301 https://monitor.solidweb.app$request_uri;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name monitor.solidweb.app;

    ssl_certificate     /etc/letsencrypt/live/solidweb.app/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/solidweb.app/privkey.pem;
    include snippets/ssl-params.conf;

    auth_basic           "Netdata — restricted";
    auth_basic_user_file /etc/nginx/.htpasswd;

    location / {
        proxy_pass         http://127.0.0.1:19999;
        proxy_http_version 1.1;
        proxy_set_header   Host              $host;
        proxy_set_header   X-Real-IP         $remote_addr;
        proxy_set_header   X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header   X-Forwarded-Proto $scheme;
    }

    # Netdata streaming API (WebSocket)
    location ~ ^/api/v[0-9]+/stream {
        proxy_pass         http://127.0.0.1:19999;
        proxy_http_version 1.1;
        proxy_set_header   Upgrade    $http_upgrade;
        proxy_set_header   Connection "upgrade";
    }
}

11.4 solidweb.app + all pod subdomains (JSS)

This single server block handles the root IDP and every *.solidweb.app pod,
because JSS in subdomain mode routes internally via the Host header.

Overwrite /etc/nginx/sites-available/solidweb.app:

# ─── HTTP → HTTPS ─────────────────────────────────────────────────────────────
server {
    listen 80;
    listen [::]:80;
    server_name solidweb.app *.solidweb.app;
    return 301 https://$host$request_uri;
}

# ─── HTTPS → JSS ──────────────────────────────────────────────────────────────
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    # Nginx resolves exact server_name matches before wildcard ones.
    # status.solidweb.app and monitor.solidweb.app are caught by their own
    # blocks above; everything else falls here.
    server_name solidweb.app *.solidweb.app;

    ssl_certificate     /etc/letsencrypt/live/solidweb.app/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/solidweb.app/privkey.pem;
    include snippets/ssl-params.conf;

    # Allow large file uploads to pods
    client_max_body_size 512m;

    # WebSocket: solid-0.1 notifications (JSS real-time updates)
    # Header: Updates-Via: wss://alice.solidweb.app/.notifications
    location ~ ^/\.notifications {
        proxy_pass         http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header   Upgrade    $http_upgrade;
        proxy_set_header   Connection "upgrade";
        proxy_set_header   Host       $host;
        proxy_read_timeout 3600s;
    }

    # WebSocket: Nostr relay (if --nostr is later enabled in JSS)
    location ~ ^/relay {
        proxy_pass         http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header   Upgrade    $http_upgrade;
        proxy_set_header   Connection "upgrade";
        proxy_set_header   Host       $host;
        proxy_read_timeout 3600s;
    }

    # All other Solid/LDP requests → JSS
    location / {
        proxy_pass         http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header   Host              $host;
        proxy_set_header   X-Real-IP         $remote_addr;
        proxy_set_header   X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header   X-Forwarded-Proto $scheme;
        # Required for correct IDP issuer URL construction in subdomain mode
        proxy_set_header   X-Forwarded-Host  $host;
        proxy_read_timeout 300s;
        proxy_send_timeout 300s;
    }
}

11.5 Enable all sites and reload

ln -s /etc/nginx/sites-available/status.solidweb.app  /etc/nginx/sites-enabled/
ln -s /etc/nginx/sites-available/monitor.solidweb.app /etc/nginx/sites-enabled/
# solidweb.app was already linked in step 9

nginx -t && systemctl reload nginx

12. Netdata

Netdata is not a Node.js process — PM2 is not involved. It runs under its own native
systemd service.

12.1 Install

wget -O /tmp/netdata-kickstart.sh https://get.netdata.cloud/kickstart.sh
sh /tmp/netdata-kickstart.sh --dont-start-it --stable-channel

12.2 Bind to localhost only

Edit /etc/netdata/netdata.conf:

[web]
    bind to = 127.0.0.1:19999

12.3 Start and enable

systemctl enable --now netdata
systemctl status netdata

12.4 Verify

curl -s http://127.0.0.1:19999/api/v1/info | python3 -m json.tool | head -20

13. Firewall Rules

ufw allow OpenSSH        # do this first — never lock yourself out
ufw allow 'Nginx Full'   # opens ports 80 and 443
ufw --force enable
ufw status verbose

Expected output:

To                         Action      From
--                         ------      ----
OpenSSH                    ALLOW IN    Anywhere
Nginx Full                 ALLOW IN    Anywhere

Ports 3000 (JSS), 3001 (Uptime Kuma), 19999 (Netdata) remain closed to the public.
They are reachable only from 127.0.0.1 via Nginx.
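To double-check that none of the three backends is publicly bound, inspect the listening sockets. The address test itself is plain shell, demonstrated below on literal sample addresses; on the server, feed it the output of ss -tln as shown in the comment.

```shell
# Flag any backend socket that is not bound to loopback.
# Live: ss -tln | awk '$4 ~ /:(3000|3001|19999)$/ {print $4}'
check_bind() {
  case "$1" in
    127.0.0.1:*|"[::1]:"*) echo "loopback only: $1" ;;
    *)                     echo "PUBLICLY BOUND: $1" ;;
  esac
}

check_bind 127.0.0.1:3000   # → loopback only
check_bind 0.0.0.0:19999    # → PUBLICLY BOUND (misconfigured)
```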


14. Nginx Virtual Host Summary

| Incoming request | Nginx match | Backend | Auth |
|------------------|-------------|---------|------|
| https://solidweb.app | exact solidweb.app | JSS :3000 | Solid-OIDC / WAC |
| https://alice.solidweb.app | wildcard *.solidweb.app | JSS :3000 | Solid-OIDC / WAC |
| https://status.solidweb.app | exact (higher priority) | Uptime Kuma :3001 | Kuma login + 2FA |
| https://monitor.solidweb.app | exact (higher priority) | Netdata :19999 | HTTP Basic Auth |
| http://* | all | 301 → HTTPS | n/a |

Nginx exact server_name matches take priority over wildcard matches on the same port.


15. Post-Install Checklist

PM2 health

sudo -u jss  bash -c 'source /home/jss/.nvm/nvm.sh  && pm2 status'
sudo -u kuma bash -c 'source /home/kuma/.nvm/nvm.sh && pm2 status'
systemctl status pm2-jss.service
systemctl status pm2-kuma.service

JSS — open registration + subdomain mode

# Root IDP
curl -I https://solidweb.app
# Expected: HTTP/2 200

# Pod subdomain routing (200 or 401 — both confirm routing works)
curl -I https://alice.solidweb.app/
# 401 Unauthorized is correct if WAC enforces authentication on the pod root

# Confirm subdomain mode: podUri must be at the subdomain, not a path
# NOTE: with --idp enabled, the API requires email + password (official API docs)
curl -s -X POST https://solidweb.app/.pods \
  -H "Content-Type: application/json" \
  -d '{"name":"testpod","email":"test@example.com","password":"changeme123"}' \
  | python3 -m json.tool
# Expected:
# "podUri": "https://testpod.solidweb.app/"
# "webId":  "https://testpod.solidweb.app/index.html#me"  (subdomain mode WebID)

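Hand-writing the JSON body invites quoting mistakes once passwords contain quotes or shell metacharacters. A sketch that delegates the escaping to python3 (already used above for pretty-printing); make_pod_body is our own helper name, not part of JSS.

```shell
# make_pod_body NAME EMAIL PASSWORD — emit a correctly escaped JSON body
# for POST /.pods
make_pod_body() {
  python3 - "$1" "$2" "$3" <<'PY'
import json, sys
print(json.dumps({"name": sys.argv[1], "email": sys.argv[2], "password": sys.argv[3]}))
PY
}

make_pod_body testpod test@example.com 'pa"ss'
# then:
#   curl -s -X POST https://solidweb.app/.pods \
#     -H "Content-Type: application/json" \
#     -d "$(make_pod_body testpod test@example.com 'changeme123')"
```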
Expected pod structure on disk after creation (per official JSS docs):

/var/lib/jss/data/testpod/
├── index.html          ← WebID profile (HTML + embedded JSON-LD)
├── .acl                ← Root WAC access control
├── inbox/              ← LDP inbox (public append)
│   └── .acl
├── public/             ← Public container
├── private/            ← Private container (owner only)
│   └── .acl
└── settings/           ← User preferences
    ├── prefs
    ├── publicTypeIndex
    └── privateTypeIndex
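The .acl files above use WAC (Web Access Control), which the Solid LLM Skills confirm is JSS's default model. As an illustration only, since the exact triples JSS writes may differ, a root ACL granting the owner full control and inheriting to child resources typically looks like this:

```turtle
# Sketch of a WAC root ACL (illustrative, not JSS's literal output)
@prefix acl: <http://www.w3.org/ns/auth/acl#>.

<#owner>
    a acl:Authorization;
    acl:agent <https://testpod.solidweb.app/index.html#me>;  # the pod's WebID
    acl:accessTo <./>;   # applies to the pod root
    acl:default <./>;    # inherited by children without their own .acl
    acl:mode acl:Read, acl:Write, acl:Control.
```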
# WebSocket notifications header — confirms solid-0.1 protocol is active
curl -I https://testpod.solidweb.app/public/
# Should include: Updates-Via: wss://testpod.solidweb.app/.notifications

Wildcard certificate

# Confirm SANs cover both root and wildcard
echo | openssl s_client \
  -connect solidweb.app:443 \
  -servername solidweb.app 2>/dev/null \
  | openssl x509 -noout -text \
  | grep -A2 "Subject Alternative Name"
# Expected: DNS:solidweb.app, DNS:*.solidweb.app

# Check expiry on all three hostnames
for host in solidweb.app status.solidweb.app monitor.solidweb.app; do
  echo "=== $host ==="
  echo | openssl s_client -connect "$host:443" -servername "$host" 2>/dev/null \
    | openssl x509 -noout -dates
done

Uptime Kuma

  1. Open https://status.solidweb.app.
  2. Create admin account (no default password — first-run wizard).
  3. Enable 2FA in Settings → Security.
  4. Add monitors:
    • HTTP(s): https://solidweb.app — interval 60 s
    • HTTP(s): https://alice.solidweb.app/ — interval 60 s
    • HTTP(s): https://status.solidweb.app — interval 60 s
    • HTTP(s): https://monitor.solidweb.app — interval 60 s
    • SSL Certificate: solidweb.app — alert 14 days before expiry
  5. Create a public Status Page.

Netdata

# Open https://monitor.solidweb.app — log in with htpasswd credentials
curl -s -u admin:yourpassword https://monitor.solidweb.app/api/v1/info | head -5

16. Maintenance & Useful Commands

PM2 daily operations

# Interactive live dashboard
sudo -u jss  bash -c 'source /home/jss/.nvm/nvm.sh  && pm2 monit'
sudo -u kuma bash -c 'source /home/kuma/.nvm/nvm.sh && pm2 monit'

# Tail logs
sudo -u jss  bash -c 'source /home/jss/.nvm/nvm.sh  && pm2 logs jss'
sudo -u kuma bash -c 'source /home/kuma/.nvm/nvm.sh && pm2 logs uptime-kuma'

# Graceful reload (zero-downtime restart)
sudo -u jss  bash -c 'source /home/jss/.nvm/nvm.sh  && pm2 reload jss'
sudo -u kuma bash -c 'source /home/kuma/.nvm/nvm.sh && pm2 reload uptime-kuma'

# Hard restart
sudo -u jss  bash -c 'source /home/jss/.nvm/nvm.sh  && pm2 restart jss'
sudo -u kuma bash -c 'source /home/kuma/.nvm/nvm.sh && pm2 restart uptime-kuma'

# Stop
sudo -u jss  bash -c 'source /home/jss/.nvm/nvm.sh  && pm2 stop jss'
sudo -u kuma bash -c 'source /home/kuma/.nvm/nvm.sh && pm2 stop uptime-kuma'

Update JSS

su - jss -c 'source /home/jss/.nvm/nvm.sh && npm update -g javascript-solid-server'
sudo -u jss bash -c 'source /home/jss/.nvm/nvm.sh && pm2 restart jss'

Update Uptime Kuma

su - kuma -c 'source /home/kuma/.nvm/nvm.sh && npm update -g uptime-kuma'
sudo -u kuma bash -c 'source /home/kuma/.nvm/nvm.sh && pm2 restart uptime-kuma'

Update Netdata

/usr/libexec/netdata/netdata-updater.sh

Upgrade Node.js version

Per PM2 documentation: the startup hook must be re-run after every Node version change
because the binary path changes.

NEW=24.12.0   # example

# (loop variable deliberately not named USER — that would shadow the login env var)
for SVC in jss kuma; do
  HOME_DIR="/home/$SVC"
  sudo -u "$SVC" bash -c "
    source $HOME_DIR/.nvm/nvm.sh
    nvm install $NEW
    nvm alias default $NEW
    npm install -g pm2
  "
done

# Re-run pm2 startup for each user; copy-paste the printed sudo env command as root
sudo -u jss  bash -c 'source /home/jss/.nvm/nvm.sh  && pm2 startup systemd -u jss  --hp /home/jss  --service-name pm2-jss'
sudo -u kuma bash -c 'source /home/kuma/.nvm/nvm.sh && pm2 startup systemd -u kuma --hp /home/kuma --service-name pm2-kuma'

# Reload PM2 daemon in-memory without losing running processes
sudo -u jss  bash -c 'source /home/jss/.nvm/nvm.sh  && pm2 update'
sudo -u kuma bash -c 'source /home/kuma/.nvm/nvm.sh && pm2 update'

# Save updated process lists
sudo -u jss  bash -c 'source /home/jss/.nvm/nvm.sh  && pm2 save'
sudo -u kuma bash -c 'source /home/kuma/.nvm/nvm.sh && pm2 save'

# Update ecosystem PATH lines (matches whatever version is currently pinned)
sed -i -E "s|versions/node/v[0-9]+\.[0-9]+\.[0-9]+|versions/node/v${NEW}|g" /etc/jss/ecosystem.config.js
sed -i -E "s|versions/node/v[0-9]+\.[0-9]+\.[0-9]+|versions/node/v${NEW}|g" /home/kuma/ecosystem.config.js

Update PM2 itself

su - jss  -c 'source /home/jss/.nvm/nvm.sh  && npm install -g pm2@latest && pm2 update'
su - kuma -c 'source /home/kuma/.nvm/nvm.sh && npm install -g pm2@latest && pm2 update'

# Re-generate systemd units (binary path may have changed)
sudo -u jss  bash -c 'source /home/jss/.nvm/nvm.sh  && pm2 startup systemd -u jss  --hp /home/jss  --service-name pm2-jss'
sudo -u kuma bash -c 'source /home/kuma/.nvm/nvm.sh && pm2 startup systemd -u kuma --hp /home/kuma --service-name pm2-kuma'
# Copy-paste the printed sudo env ... command as root for each user

Manage storage quotas

sudo -u jss bash -c 'source /home/jss/.nvm/nvm.sh && jss quota show alice'
sudo -u jss bash -c 'source /home/jss/.nvm/nvm.sh && jss quota set alice 2GB'
sudo -u jss bash -c 'source /home/jss/.nvm/nvm.sh && jss quota reconcile alice'
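When choosing quota values it can help to see them in bytes. A small converter sketch; the decimal interpretation (1 GB = 10^9 bytes) is an assumption on our part, not something the quota docs specify here.

```shell
# to_bytes SIZE — expand "512MB" / "2GB" style values into bytes
# (assumes decimal units; adjust if JSS turns out to use binary units)
to_bytes() {
  local n="${1%[KMGT]B}" unit="${1#"${1%[KMGT]B}"}"
  case "$unit" in
    KB) echo $(( n * 1000 )) ;;
    MB) echo $(( n * 1000000 )) ;;
    GB) echo $(( n * 1000000000 )) ;;
    TB) echo $(( n * 1000000000000 )) ;;
    *)  echo "$n" ;;   # no recognised suffix: treat as plain bytes
  esac
}

to_bytes 2GB     # → 2000000000
to_bytes 512MB   # → 512000000
```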

Manual certificate renewal

certbot renew --force-renewal
systemctl reload nginx

Summary: Port & Service Map

| Service | Managed by | User | Port | Public URL |
|---------|------------|------|------|------------|
| JSS | PM2 (pm2-jss) | jss | 3000 | https://solidweb.app + https://*.solidweb.app |
| Uptime Kuma | PM2 (pm2-kuma) | kuma | 3001 | https://status.solidweb.app |
| Netdata | systemd (native) | root | 19999 | https://monitor.solidweb.app |
| Nginx | systemd (native) | root | 80/443 | All of the above |

Credits

The JavaScript Solid Server (JSS) is created by
Melvin Carvalho — web pioneer, mathematician, Solid Protocol
enthusiast, and long-time contributor to the decentralised web. Melvin previously ran
solid.community, one of the original public Solid pod communities, and has been a key
figure in the development of WebID, Solid, and linked data on the web.


Reference documents used for this version:

- JSS official docs: javascriptsolidserver.github.io/docs
- Solid LLM Skills: github.com/solid/solid-llm-skills

v5 — solidweb.app · 92.205.60.157 · Debian 12 Bookworm · Node.js 24.11.0 via nvm · PM2 · March 2026

this document: https://hackmd.io/YnMXe517TlyMMv6L4UKpYg?view