
Glintlog: Self-Hosted Observability That Doesn't Need a Cluster

A single Go binary that ingests logs and traces via OpenTelemetry, stores them in DuckDB, and gives you dashboards — no Elasticsearch, no Kafka, no YAML hell.

#observability #open-source #go #duckdb #opentelemetry #self-hosted

Most observability stacks look like this: a collector, a queue, a storage backend, a query layer, and a UI. That’s five services minimum before you’ve seen a single log line. Datadog costs a fortune. Grafana + Loki is free but demands serious operational investment. For small teams and solo engineers, the cost of observing your system often rivals the cost of running it.

I built Glintlog to answer a question: how simple can observability get without being useless?

One Binary, Three Ports

Glintlog is a Go backend with an embedded React frontend. You download it, run it, and open your browser. That’s the full setup.

curl -fsSL https://raw.githubusercontent.com/caioricciuti/glintlog/main/scripts/install.sh | bash
glintlog

It exposes three ports: one for the web UI and dashboards, and two for OTLP ingestion (gRPC and HTTP).

If your app already speaks OpenTelemetry — and it should — you just point your SDK at Glintlog. No proprietary agents, no custom formats.
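In practice, "pointing your SDK at Glintlog" usually means setting the standard OTEL_EXPORTER_OTLP_ENDPOINT environment variable to wherever Glintlog is listening. For the curious, here is what travels over the wire: a minimal OTLP/HTTP JSON log record, built by hand with only the standard library. The endpoint URL is an assumption — the post doesn't list Glintlog's port numbers, so this uses the OTLP/HTTP default (4318) and the spec's /v1/logs path; `GLINTLOG_LOGS_URL` and `otlp_log_payload` are names invented for this sketch.

```python
import json
import urllib.request

# Assumed endpoint: OTLP/HTTP default port 4318, spec-defined /v1/logs path.
GLINTLOG_LOGS_URL = "http://localhost:4318/v1/logs"

def otlp_log_payload(service: str, body: str, severity: str = "INFO") -> dict:
    """Build a minimal OTLP/HTTP JSON log record."""
    return {
        "resourceLogs": [{
            "resource": {"attributes": [{
                "key": "service.name",
                "value": {"stringValue": service},
            }]},
            "scopeLogs": [{"logRecords": [{
                "severityText": severity,
                "body": {"stringValue": body},
            }]}],
        }]
    }

def send(payload: dict) -> None:
    """POST the record to the collector (here: Glintlog)."""
    req = urllib.request.Request(
        GLINTLOG_LOGS_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=2)

payload = otlp_log_payload("checkout", "order 42 processed")
# send(payload)  # uncomment with Glintlog running locally
```

You'd never write this by hand in a real service — the OTel SDK does it for you — but it shows there's no proprietary format in the pipeline, just the standard OTLP shape any collector accepts.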

DuckDB as the Storage Engine

This is the part that raises eyebrows. Elasticsearch is the default choice for log storage. It’s also a distributed system that needs tuning, heap sizing, shard management, and a dedicated operations person.

DuckDB is an embedded analytical database. No server process, no networking overhead, no cluster coordination. It sits inside the Glintlog binary and writes to a single file. For log workloads — append-heavy, column-scan-heavy, rarely updated — it’s a natural fit.
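The embedded, single-file model is easy to demonstrate. This sketch uses Python's built-in sqlite3 as a stand-in — SQLite is row-oriented where DuckDB is columnar, so this illustrates only the operational shape (no server process, the database is just a file), not the analytical performance. The schema and file name are invented for the example.

```python
import os
import sqlite3
import tempfile

# The entire "storage backend" is one file on disk — no server, no cluster.
path = os.path.join(tempfile.mkdtemp(), "glint.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE logs (ts INTEGER, severity TEXT, body TEXT)")

# Log workloads are append-heavy: batch inserts, rarely any updates.
conn.executemany(
    "INSERT INTO logs VALUES (?, ?, ?)",
    [(1_700_000_000 + i, "INFO", f"request {i} handled") for i in range(100)],
)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM logs").fetchone()[0]
```

DuckDB's Python and Go bindings follow the same open-a-file, run-SQL pattern, which is exactly what makes it viable to ship inside a single binary.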

The compression alone makes this worth it. DuckDB's columnar format compresses log data aggressively. The project description claims 140x compared to Elasticsearch, and from my testing that's in the right ballpark for structured, repetitive log bodies. At that ratio, a terabyte of Elasticsearch-resident logs fits in roughly 7 GB — your disk bill goes down by orders of magnitude.
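For a rough intuition of why repetitive structured logs compress so well, here's a sketch using gzip on row-oriented JSON. This is not DuckDB's columnar encoding — real columnar compression does considerably better by grouping like values together — but even a generic byte-level compressor exposes how much redundancy log data carries.

```python
import gzip
import json

# Synthesize 10,000 log lines that differ only in timestamp and message id —
# the repetitive shape typical of structured service logs.
lines = [
    json.dumps({
        "ts": 1_700_000_000 + i,
        "level": "info",
        "service": "api",
        "msg": f"request {i} handled",
        "status": 200,
    })
    for i in range(10_000)
]
raw = "\n".join(lines).encode()

compressed = gzip.compress(raw)
ratio = len(raw) / len(compressed)
print(f"raw={len(raw)}B compressed={len(compressed)}B ratio={ratio:.0f}x")
```

A columnar store takes the same idea further: all the `level` values sit next to each other, all the `status` values next to each other, so run-length and dictionary encoding collapse them almost to nothing.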

The trade-off is scale. DuckDB won’t handle petabytes of logs across a fleet of machines. But for the 90% of teams running a handful of services on a few servers, it’s more than enough — and it’s operationally free.

What You Get

Over 18 alpha releases in about three weeks, the feature set grew fast.

The dashboard system supports stat cards, histograms, and sparkline trends. It’s not Grafana, but it covers the operational monitoring most teams actually need — without requiring a separate visualization layer.

The Stack

The language breakdown tells the story: roughly 700 KB of TypeScript for the frontend and 510 KB of Go for the backend. The frontend is React with a component library built around shadcn/ui patterns. The backend handles OTLP ingestion, DuckDB queries, auth, and serves the embedded UI — all in a single process.

Binaries ship for Linux amd64 and arm64. A Makefile handles the build: Bun compiles the frontend, Go embeds the static output and produces the final binary. CI builds and releases are automated through GitHub Actions.

Who It’s For

If you’re running a SaaS product with 50 microservices and terabytes of daily logs, use Datadog or Grafana Cloud. Glintlog isn’t competing there.

But if you’re a small team, a solo founder, or someone running side projects that still need observability — Glintlog gives you 90% of what you need at a fraction of the cost and complexity. Point your OTLP exporters at it, set up a dashboard, and move on.

The project is in alpha, actively developed, and open-source. Check it out on GitHub.
