SQLite vs PostgreSQL for Small Projects: When to Use Which
What You’ll Need
- A VPS such as Hetzner, Contabo, or DigitalOcean for hosting PostgreSQL (optional; any provider works)
- Node.js for the application examples, plus Python 3 with psycopg2 for the migration script
- Docker and Docker Compose for running PostgreSQL
- SQLite (bundled with most runtimes) or PostgreSQL (free, open-source)
Table of Contents
- Understanding the Fundamentals
- SQLite: Lightweight and Local
- PostgreSQL: Scalable and Powerful
- Head-to-Head Comparison
- Migration Strategies
- Real-World Use Cases
- Getting Started
Understanding the Fundamentals
I’ve spent the last five years building small projects that started simple and ended up needing serious database muscle. The SQLite vs PostgreSQL decision isn’t academic—it’s the difference between shipping fast today and refactoring in panic mode six months from now.
SQLite is a file-based relational database that lives on your machine. No server, no network calls, no complexity. PostgreSQL is a heavyweight champion: a full-featured relational database server that handles concurrent users, complex transactions, and enterprise-grade reliability.
The wrong choice wastes weeks. The right one lets you scale without rewrites.
SQLite: Lightweight and Local
I reach for SQLite when I’m building a prototype, a CLI tool, or anything that doesn’t need remote access. It ships in Python’s standard library and is a single package install in Node.js and most other runtimes. Installation? You practically already have it.
SQLite stores everything in a single file. Your entire database is just database.db sitting on disk. This means:
- Zero configuration
- Instant backups (copy the file)
- Perfect for local development
- No network overhead
- ACID compliance out of the box
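That “copy the file” backup really is that simple, though copying a database mid-write can snapshot it in an inconsistent state. SQLite’s online-backup API copies a live database safely; here’s a minimal sketch using Python’s bundled sqlite3 module (the file names are just examples):

```python
import sqlite3

# Copy a live SQLite database to a backup file without stopping writers.
src = sqlite3.connect('myapp.db')         # the database the app is using
dst = sqlite3.connect('myapp-backup.db')  # destination file, created if missing

with dst:
    src.backup(dst)  # SQLite's online-backup API: yields a consistent snapshot

dst.close()
src.close()
```

Most language bindings expose the same API; when the database is known to be idle, a plain `cp` of the file works just as well.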
Here’s how you’d initialize a basic SQLite database in Node.js:
const sqlite3 = require('sqlite3').verbose();

const db = new sqlite3.Database('./myapp.db', (err) => {
  if (err) {
    console.error(err.message);
  } else {
    console.log('Connected to SQLite database');
  }
});

db.serialize(() => {
  db.run(`CREATE TABLE IF NOT EXISTS users (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    name TEXT NOT NULL,
    email TEXT UNIQUE NOT NULL,
    created_at DATETIME DEFAULT CURRENT_TIMESTAMP
  )`);

  db.run(`CREATE TABLE IF NOT EXISTS posts (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    user_id INTEGER NOT NULL,
    title TEXT NOT NULL,
    content TEXT,
    created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
    FOREIGN KEY (user_id) REFERENCES users(id)
  )`);

  console.log('Tables created successfully');
});

db.close((err) => {
  if (err) {
    console.error(err.message);
  }
});
Now let’s add some data:
const sqlite3 = require('sqlite3').verbose();
const db = new sqlite3.Database('./myapp.db');

db.serialize(() => {
  const stmt = db.prepare('INSERT INTO users (name, email) VALUES (?, ?)');

  stmt.run('Alice Chen', 'alice@example.com', function (err) {
    if (err) console.error(err);
    else console.log('Inserted user with ID:', this.lastID);
  });

  stmt.run('Bob Martinez', 'bob@example.com', function (err) {
    if (err) console.error(err);
    else console.log('Inserted user with ID:', this.lastID);
  });

  stmt.finalize();

  db.all('SELECT * FROM users', [], (err, rows) => {
    if (err) throw err;
    console.log('All users:', rows);
  });
});

db.close();
SQLite handles single-process or lightly concurrent access beautifully. If you’re building an Electron app, a mobile app’s on-device store, or a data pipeline that runs on one machine, SQLite is your answer.
The catch? SQLite struggles with concurrent writes. In its default rollback-journal mode it locks the entire database file for every write, which works fine for one user but becomes a bottleneck under many simultaneous write requests. It also doesn’t support replication or clustering. The practical ceiling is less about row count than about write concurrency: pile several concurrent writers onto one file and you’ll start hitting “database is locked” errors long before data size itself is a problem.
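Before giving up on SQLite over concurrency, one mitigation is worth knowing: switching the journal to write-ahead logging (WAL) lets readers keep reading while a write is in progress. A sketch using Python’s bundled sqlite3 (the file name is just an example):

```python
import sqlite3

conn = sqlite3.connect('myapp.db')  # any on-disk database; WAL doesn't apply in-memory

# PRAGMA journal_mode returns the mode actually in effect after the switch
mode = conn.execute('PRAGMA journal_mode = WAL').fetchone()[0]
print(mode)  # 'wal' once the switch succeeds

# The setting is persistent: future connections to this file also use WAL
conn.close()
```

WAL removes reader/writer blocking, not writer/writer blocking; there is still exactly one writer at a time.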
PostgreSQL: Scalable and Powerful
PostgreSQL is the opposite: a client-server database that handles thousands of concurrent connections, advanced data types, full-text search, JSON storage, and growth from hobby projects to multi-terabyte workloads.
I use PostgreSQL when:
- Multiple users hit the database simultaneously
- I need advanced queries or complex joins
- I’m building an API that serves a web or mobile app
- I plan to grow beyond prototype stage
- I need features like stored procedures or custom types
Setting up PostgreSQL requires more work. I typically deploy it on a Hetzner VPS or DigitalOcean using Docker:
version: '3.8'
services:
  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: appuser
      POSTGRES_PASSWORD: securepassword123
      POSTGRES_DB: myapp_db
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U appuser"]
      interval: 10s
      timeout: 5s
      retries: 5
volumes:
  postgres_data:
Run this with:
docker-compose up -d
(On current Docker installs, the Compose plugin spells this docker compose up -d.)
Now connect from Node.js using the pg library:
const { Pool } = require('pg');

const pool = new Pool({
  user: 'appuser',
  password: 'securepassword123',
  host: 'localhost',
  port: 5432,
  database: 'myapp_db'
});

pool.on('error', (err) => {
  console.error('Unexpected error on idle client', err);
});

const createTables = async () => {
  const client = await pool.connect();
  try {
    await client.query(`
      CREATE TABLE IF NOT EXISTS users (
        id SERIAL PRIMARY KEY,
        name VARCHAR(255) NOT NULL,
        email VARCHAR(255) UNIQUE NOT NULL,
        created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
      )
    `);

    await client.query(`
      CREATE TABLE IF NOT EXISTS posts (
        id SERIAL PRIMARY KEY,
        user_id INTEGER NOT NULL REFERENCES users(id),
        title VARCHAR(255) NOT NULL,
        content TEXT,
        created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
      )
    `);

    console.log('Tables created successfully');
  } finally {
    client.release();
  }
};

createTables().catch(console.error);
Insert and query data:
const { Pool } = require('pg');

const pool = new Pool({
  user: 'appuser',
  password: 'securepassword123',
  host: 'localhost',
  port: 5432,
  database: 'myapp_db'
});

const addUser = async (name, email) => {
  const result = await pool.query(
    'INSERT INTO users (name, email) VALUES ($1, $2) RETURNING id, name, email, created_at',
    [name, email]
  );
  return result.rows[0];
};

const getAllUsers = async () => {
  const result = await pool.query(
    'SELECT id, name, email, created_at FROM users ORDER BY created_at DESC'
  );
  return result.rows;
};

const getUserWithPosts = async (userId) => {
  const result = await pool.query(`
    SELECT
      u.id, u.name, u.email, u.created_at,
      json_agg(json_build_object('id', p.id, 'title', p.title, 'content', p.content)) AS posts
    FROM users u
    LEFT JOIN posts p ON u.id = p.user_id
    WHERE u.id = $1
    GROUP BY u.id, u.name, u.email, u.created_at
  `, [userId]);
  return result.rows[0];
};

(async () => {
  try {
    const newUser = await addUser('Carol Davis', 'carol@example.com');
    console.log('Created user:', newUser);

    const allUsers = await getAllUsers();
    console.log('All users:', allUsers);

    const userDetail = await getUserWithPosts(newUser.id);
    console.log('User with posts:', userDetail);
  } catch (err) {
    console.error('Database error:', err);
  } finally {
    await pool.end();
  }
})();
PostgreSQL also handles JSON natively, which is perfect if you’re storing flexible data. Notice in the query above I’m using json_agg and json_build_object to fold each user’s posts into a JSON array. SQLite’s built-in JSON functions offer rough equivalents, but PostgreSQL goes further with the binary JSONB type, which supports indexing and efficient querying inside documents.
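For fairness, SQLite can approximate that aggregation with json_group_array and json_object, just without JSONB’s indexing. A sketch using Python’s bundled sqlite3, mirroring the article’s users/posts schema (assumes an SQLite build with the JSON functions enabled, the default in modern builds):

```python
import sqlite3
import json

conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE posts (id INTEGER PRIMARY KEY, user_id INTEGER, title TEXT);
    INSERT INTO users VALUES (1, 'Alice Chen');
    INSERT INTO posts VALUES (1, 1, 'Hello'), (2, 1, 'Again');
""")

# json_group_array/json_object are SQLite's rough analogue of
# PostgreSQL's json_agg/json_build_object
name, posts_json = conn.execute("""
    SELECT u.name,
           json_group_array(json_object('id', p.id, 'title', p.title))
    FROM users u
    LEFT JOIN posts p ON p.user_id = u.id
    WHERE u.id = 1
    GROUP BY u.id
""").fetchone()

print(name, json.loads(posts_json))
conn.close()
```

The difference shows up at scale: PostgreSQL can index expressions inside JSONB documents, while SQLite re-parses the JSON text on each query unless you build expression indexes by hand.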
Head-to-Head Comparison
Here’s a decision matrix I use:
Choose SQLite if:
- Your app is a single-user tool or runs on one machine
- You’re prototyping or building a proof of concept
- You need zero DevOps burden
- File-based storage works for your architecture
- You’re embedding a database in a game, desktop app, or CLI tool
- You have fewer than 10,000 daily transactions
Choose PostgreSQL if:
- Multiple users access the database simultaneously
- You’re building a web API or SaaS product
- You need replication, backups, or high availability
- Your data is complex or relational with many joins
- You plan to scale beyond prototype stage
- You need features like JSONB, full-text search, or arrays
- Your infrastructure already supports Docker or cloud deployments
I’ve also seen projects successfully run SQLite at scale when stored on high-performance NVMe drives and accessed by a single process (think a data warehouse that loads once daily). Don’t assume SQLite is always “too small”—it’s about your access pattern, not your dataset size.
Migration Strategies
I’ve migrated three projects from SQLite to PostgreSQL. Here’s how to do it without losing sleep.
First, export your SQLite data as SQL:
sqlite3 myapp.db .dump > backup.sql
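The same dump can be produced from code: Python’s bundled sqlite3 module exposes it as iterdump(), which is handy inside a migration script (file names are the article’s examples):

```python
import sqlite3

conn = sqlite3.connect('myapp.db')

with open('backup.sql', 'w') as f:
    # iterdump() yields the same SQL statements as the .dump command
    for statement in conn.iterdump():
        f.write(statement + '\n')

conn.close()
```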
This creates a portable SQL file. Now here’s the tricky part: the two dialects differ. SQLite’s AUTOINCREMENT has no PostgreSQL equivalent (you want SERIAL or an identity column), DATETIME columns map to TIMESTAMP, and the dump is wrapped in SQLite-specific statements like PRAGMA foreign_keys=OFF that PostgreSQL will reject.
Rather than hand-editing the dump, recreate the schema with native PostgreSQL DDL (as in createTables above) and use a script to copy the data across:
import sqlite3
import psycopg2

sqlite_db = 'myapp.db'
pg_conn = psycopg2.connect(
    host='localhost',
    user='appuser',
    password='securepassword123',
    database='myapp_db'
)
pg_cursor = pg_conn.cursor()

sqlite_conn = sqlite3.connect(sqlite_db)
sqlite_cursor = sqlite_conn.cursor()

# List user tables, skipping SQLite internals like sqlite_sequence.
# sqlite_master returns tables in creation order, which matches the
# foreign-key dependencies for this schema (users before posts).
sqlite_cursor.execute(
    "SELECT name FROM sqlite_master WHERE type='table' AND name NOT LIKE 'sqlite_%'"
)
tables = [row[0] for row in sqlite_cursor.fetchall()]

for table in tables:
    sqlite_cursor.execute(f'SELECT * FROM "{table}"')
    rows = sqlite_cursor.fetchall()
    if not rows:
        continue
    # Assumes the PostgreSQL tables were created with columns
    # in the same order as their SQLite counterparts.
    placeholders = ', '.join(['%s'] * len(rows[0]))
    pg_cursor.executemany(f'INSERT INTO "{table}" VALUES ({placeholders})', rows)
    # Bump the SERIAL sequence past the imported IDs so new inserts don't collide.
    # pg_get_serial_sequence returns NULL (a no-op here) for tables without one.
    pg_cursor.execute(
        f"SELECT setval(pg_get_serial_sequence('{table}', 'id'), "
        f'(SELECT COALESCE(MAX(id), 1) FROM "{table}"))'
    )
    print(f'Copied {len(rows)} rows into {table}')

pg_conn.commit()
sqlite_conn.close()
pg_conn.close()