
Patolake: A DuckDB Workspace in a Single Binary

A Go server with an embedded Svelte frontend that gives teams a SQL editor, dashboards, scheduling, governance, and an AI copilot — all backed by DuckDB and shipped as one executable.

#open-source #go #duckdb #svelte #self-hosted #data-platform #sql

Every data team I’ve worked with ends up stitching together the same stack: a SQL client, a dashboard tool, a scheduler, some governance spreadsheet, and maybe a chatbot bolted on top. Five tools, five logins, five billing pages. The query layer alone usually requires a dedicated ops person.

I built Patolake to collapse all of that into a single binary you can download and run in seconds.

The Architecture

Patolake is a Go backend with an embedded Svelte frontend. The server starts on port 3488, serves the UI, and manages two embedded databases: DuckDB for the analytical workload and SQLite for application state.

No external dependencies. No Docker requirement. No cluster. One process, one port, one data directory.

curl -L -o patolake https://github.com/caioricciuti/pato-lake/releases/latest/download/patolake-darwin-arm64
chmod +x patolake
./patolake

Open http://localhost:3488 and you’re in.

What’s Inside

The community edition (Apache 2.0) gives you a multi-tab SQL editor with result views, a database and table explorer, and saved queries. That alone replaces DBeaver or TablePlus for DuckDB work.

The pro tier adds the features that make it a platform: dashboards, query scheduling, governance, and the AI copilot.

Why Go + Svelte

The codebase is roughly 837k lines of Go and 723k lines of Svelte with TypeScript. Go handles everything server-side: HTTP routing, DuckDB and SQLite interactions, auth, scheduling, and static file embedding. Svelte handles the frontend — fast reactivity, small bundle, and a natural fit for the kind of interactive data exploration UI that Patolake needs.

The build produces a single statically linked binary. No Node runtime, no Java, no Python — just download and run. Binaries ship for Linux and macOS, both amd64 and arm64. Docker images are also available if that’s your preference:

docker run --rm -p 3488:3488 -v patolake-data:/app/data ghcr.io/caioricciuti/pato-lake:latest

The DuckDB Bet

If you’ve followed my other projects — CH-UI for ClickHouse, Duck-UI for browser-based DuckDB, Glintlog for observability — you’ll notice a pattern. I keep reaching for embedded databases and single-binary deployments.

DuckDB is the engine that makes Patolake possible. It handles analytical queries on local files, remote Parquet on S3, and everything in between. The extension ecosystem means you can attach new data sources without writing integration code. And because it’s embedded, there’s no network hop between the query engine and the application server.

The limitation is the same as always: this won’t replace Snowflake for multi-petabyte workloads. But for the majority of teams running analytics on gigabytes, not terabytes, it’s faster to set up, cheaper to run, and simpler to operate than any managed alternative.

Where It Stands

Patolake is in early alpha — v0.0.8 at the time of writing, with 8 releases shipped in the first week. The core is stable enough for local and small-team use, but expect rough edges. The roadmap includes more connector types, improved lineage visualization, and deeper AI integration.

If you’re tired of juggling five tools to query your data, check it out on GitHub. Download the binary, point it at your files, and see how far a single process can take you.
