VictoriaMetrics: When You Need to Push Metrics Fast Without Breaking Things
Let’s be honest — most time series databases start falling apart once you throw real data at them. Either ingestion slows to a crawl, storage eats the disk, or queries turn into a waiting game. That’s where VictoriaMetrics steps in. It’s a tool that doesn’t try to be flashy — just efficient, predictable, and brutally fast when it counts.
It’s written in Go. One binary. No dependencies. Drop it in, and it starts listening on port 8428. Feeds on remote_write from Prometheus like it was born for it. Query response times? Sharp. Disk footprint? Surprisingly small. Stability? Better than expected, even with tens of millions of active time series.
So How Does It Work Behind the Scenes?
You’ve got exporters scraping data every 15 seconds — or faster — from hundreds of nodes. Normally, that’s when things get ugly. But VictoriaMetrics uses a write-ahead log, custom storage layout, and aggressive deduplication to keep performance steady.
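To see what deduplication buys you, here's a toy illustration — plain awk, nothing to do with VictoriaMetrics' actual storage code: samples that repeat the same series and timestamp collapse down to one.

```shell
# Toy illustration only: drop repeated samples that share the same
# metric name and timestamp, keeping the first value seen.
printf 'node_cpu 1717000000 0.5\nnode_cpu 1717000000 0.5\nnode_cpu 1717000015 0.7\n' \
  | awk '!seen[$1" "$2]++'
# Output:
# node_cpu 1717000000 0.5
# node_cpu 1717000015 0.7
```

In VictoriaMetrics itself this is governed by configuration (e.g. a minimum scrape interval), but the effect is the same: redundant points never make it to disk.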
You can run it as a single-node daemon — good enough for many setups. Or split it into vmstorage, vmselect, and vminsert if you need cluster-grade throughput. It doesn’t need ZooKeeper or sidecars to scale. Just add nodes and point them at each other.
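"Point them at each other" really is about the whole story. A hedged sketch of what a minimal cluster launch looks like — hostnames here are placeholders, and the default ports shown should be double-checked against the cluster docs for your release:

```shell
# Storage node: accepts inserts on :8400, serves selects on :8401 (defaults)
./vmstorage -storageDataPath=/var/lib/vmstorage

# Insert frontend: fans writes out to every storage node
./vminsert -storageNode=vmstorage-1:8400 -storageNode=vmstorage-2:8400

# Query frontend: fans reads across the same storage nodes
./vmselect -storageNode=vmstorage-1:8401 -storageNode=vmstorage-2:8401
```

No coordination service in sight: each component is told where its peers are and that's the whole topology.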
The best part? It speaks Prometheus — remote write, remote read, and the same query language (PromQL). So swapping out the default TSDB in a Prometheus setup takes minutes, not hours.
(The remote read protocol is supported too, though most setups just query VictoriaMetrics directly from Grafana.)
Where It Just Works
– You’ve got a bunch of Prometheus instances across clusters and want a central place to send data
– Grafana dashboards are timing out and you need queries that don’t choke
– Long-term storage is chewing too much disk or too much memory
– InfluxDB’s quirks are making you consider switching
– You just want something that runs, logs to stdout, and keeps doing its job
What Makes It Practical
| It Can Do | What That Means in Real Use |
| --- | --- |
| Handle millions of series | Great for K8s, VM fleets, IoT — without blowing up RAM |
| Accept Prometheus writes | No exporter rewrites, no config gymnastics |
| Fast lookups on disk | You can query 6-month-old metrics and get results fast |
| Run from a single binary | No dependencies, no init scripts, just run the thing |
| Keep storage tight | Low disk use even at high cardinality |
| Cluster, if you need it | Starts simple, grows when it has to |
Typical Setup Steps (the Way It Usually Goes)
1. Download the latest release from GitHub
2. Start with something like:
./victoria-metrics -retentionPeriod=12 -storageDataPath=/var/lib/vm
(the retention unit defaults to months, so that's a year of data)
3. Point Prometheus to it:
remote_write:
  - url: http://vmhost:8428/api/v1/write
4. Add it as a Prometheus-compatible source in Grafana
5. Done. No rocket science.
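Once Prometheus is writing, a quick sanity check against the Prometheus-compatible query API (same placeholder hostname as in the steps above) confirms data is landing:

```shell
# Ask VictoriaMetrics whether the classic "up" series has arrived.
curl -s 'http://vmhost:8428/api/v1/query?query=up'
# A JSON body with "status":"success" and a non-empty result
# means remote_write is flowing.
```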
How It Holds Up Against Others
| Tool | Pain You’ll Hit | Where VM Feels Cleaner |
| --- | --- | --- |
| Prometheus | No built-in HA, local-only storage | VM handles remote_write and clustering |
| InfluxDB | Eats RAM, inconsistent queries | VM is faster, simpler, more stable |
| Thanos | Complex to deploy, object storage (S3) required | VM runs standalone or clusters without cloud storage |
| TimescaleDB | Feels like a general database, not a TSDB | VM is focused, built just for metrics |
Final Word
VictoriaMetrics isn’t a toolbox full of surprises. It’s a blunt instrument that does one job well — absorb, store, and serve up time series data at scale. It doesn’t care if you’re piping in metrics from Prometheus, Telegraf, or a custom collector. It just works. And in a world full of overcomplicated solutions, that’s a win.