How do you monitor your server containers, disks, load…?
Do you use an easy-to-use web interface? Do you do everything via SSH? Or maybe you’ve got a more complicated setup?
I want to change my setup and I’m looking for new ideas. I’ve been using Cockpit for some years, and some of its plugins are really outdated (ZFS, for example) while others are completely broken (docker-compose).
My own server? YOLO
At work? Grafana, KOBS, Victoria Metrics, Jaeger, OpsGenie, …
I can’t figure out whether there’s a monitoring tool called YOLO or you don’t monitor anything.
Now I am intrigued to develop one that is called YOLO.
But just in case: no, I don’t monitor my server. If I notice something not working, I ssh into the machine and check what’s up. I don’t want to deal with another zoo of services for the monitoring part.
You are me
Yes.
This is the first time I’ve heard of Victoria Metrics. It looks like it has a similar use case as Prometheus, is that correct? If so, what made you or your team choose one over the other?
IIRC it had better performance than Prometheus. We also ditched Elasticsearch in favor of ClickHouse to keep up with log ingestion.
I can second that. We had some really good experiences with ClickHouse and its performance. If it fits the bill, it’s a very nice piece of software.
Thanks for the info! Looks pretty cool, I’ll have to check it out.
My clients when they text me the server is down.
This has the same energy as my spouse yelling at me because jellyfin went down
Or my partners greeting me in the morning “Home assistant went down again, so the lights are all manual”
Thankfully that one is mostly solved.
So damn accurate ahhaha
“Huh weird, I tried to use <insert service here> and it’s not working. Welp, guess I better fix it…”
I’m a huge fan of Netdata, very configurable and monitors just about anything you could want. Great interface and alerts too - https://www.netdata.cloud/
Same, been running Netdata for years. They’re monetizing now, where it used to just be free. Good for them, it’s a great product. And it’s FOSS.
I was looking for something free that I could host on my machine but thanks, I didn’t know about it
Netdata is free and can be run standalone. Just install it and do not configure the cloud integration. You can see your dashboard on localhost:19999
Oh that’s neat, will take a look! Can you run it on docker?
As others stated, you can run and access the interface locally (or setup your own reverse proxy) for free. Their Cloud dashboard is also free for up to 5 nodes. They recently added a flat-rate “Homelab” plan as well, if you want to remove the limit. It’s all quite usable for $0 otherwise though!
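For the Docker question above: a minimal sketch based on the Netdata docs (the read-only host mounts are what give the container visibility into the host; the official docs list a few more optional flags and volumes):

```sh
# Rough sketch from the Netdata docs -- the dashboard is then on localhost:19999.
# The docs add a few more flags (pid/caps, persistent config volumes) worth checking.
docker run -d --name=netdata \
  --restart unless-stopped \
  -p 19999:19999 \
  -v /proc:/host/proc:ro \
  -v /sys:/host/sys:ro \
  -v /etc/os-release:/host/etc/os-release:ro \
  netdata/netdata
```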
Netdata 100%
It feeds my itch for more data than I know what to do with and it’s presented in one of the cleanest ways I’ve ever seen for so much info.
I love how easy NetData is to use, but when running it on my home servers it destroys their performance lol. Every once in a while I check in to see if it runs better.
That’s strange, I’ve run it fine on some very underpowered hardware. Are you adding a specific monitoring integration with it, or just out of the box settings?
Just out of the box. I am usually running it as a container on UnRAID on an x86 machine. It seems primarily to just be a big memory hog when I’ve tried to use it.
Weird! For reference one VM I run on only has 1 GB of memory, and Netdata uses 100-200 MB. Could be something going on with UnRAID though. Definitely some sort of bug I’d think, since normally resource usage should be very low across the board.
Node exporter on hosts, OpenTelemetry collector to scrape metrics and collect logs, shipping them to Prometheus and Loki, visualising with Grafana.
Day job is for an observability platform where we heavily encourage the use of (and also contribute to) the OpenTelemetry Collector project, hence my use of it.
Try VictoriaMetrics. Basically the same feature set as Prometheus, but so much more resource friendly for homelab scale. I store some metrics for 12 months now, because it’s easy.
Do you have a name for the opentelemetry collector? I’m interested.
Use the Contrib version of the collector, it has many more receivers, processors and exporters
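To make the pipeline above concrete, here’s a rough sketch of a contrib collector config: scrape node_exporter, tail some log files, push metrics to Prometheus and logs to Loki. Endpoints and paths are placeholders, and component names match the contrib distribution as I remember it, so double-check against the current docs (the project moves fast, as noted below).

```sh
# Rough sketch of an OpenTelemetry Collector (contrib) config for the pipeline above.
cat > /etc/otelcol-contrib/config.yaml <<'EOF'
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: node
          static_configs:
            - targets: ['localhost:9100']   # node_exporter
  filelog:
    include: [/var/log/*.log]

exporters:
  prometheusremotewrite:
    # Prometheus needs --web.enable-remote-write-receiver for this
    endpoint: http://prometheus:9090/api/v1/write
  loki:
    endpoint: http://loki:3100/loki/api/v1/push

service:
  pipelines:
    metrics:
      receivers: [prometheus]
      exporters: [prometheusremotewrite]
    logs:
      receivers: [filelog]
      exporters: [loki]
EOF
```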
Similar setup here with additional exporters like cadvisor for container metrics and other components.
OpenTelemetry is awesome, but still a very fast-moving project, so expect more frequent updates and changes than with older, more established projects.
I just use homepage as my homepage :D
I can see simple CPU/RAM/storage stats and have widgets for almost all services; one of them is Portainer, so I can see if any service is stopped (most of them are running in Docker). A few services also send notifications on errors or updates.
I know it’s not really a monitoring tool, but it works well enough for me.
Zabbix for agent/SNMP-based statistics.
Uptime Kuma for up/down states with a webhook notification into Discord so I get instant alerts on my phone when one goes down.
How has nobody in this thread said check_mk yet?
It’s free, you host it yourself. It’s built off of nagios, compatible with nagios plugins, supports snmp or agent based checks. It can email, SMS, slack or discord you when something breaks, you can write your own custom checks in any language that can output to a local console… I could never imagine even looking for something else.
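To give a feel for how simple those custom checks are, here’s a rough sketch of a Checkmk “local check”: an executable dropped into the agent’s local/ directory that prints one line per service. The path, threshold and APT command here are just examples, so verify against the Checkmk docs.

```sh
#!/bin/sh
# Rough sketch of a Checkmk local check. Drop an executable script into the
# agent's local/ directory (commonly /usr/lib/check_mk_agent/local/) and it
# appears as its own service. Output: <status> <name> <metrics|-> <text>,
# with 0=OK, 1=WARN, 2=CRIT, 3=UNKNOWN.
updates=$(apt list --upgradable 2>/dev/null | grep -c upgradable)
if [ "$updates" -gt 20 ]; then
    echo "1 Pending_updates updates=$updates $updates packages can be upgraded"
else
    echo "0 Pending_updates updates=$updates $updates packages can be upgraded"
fi
```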
+1 for check_mk.
It’s got a scriptable config file that begs for automation like mgmtConfig and it does SNMP. For me, that’s it. SNMP->MQTT->SNMP next year.
I started using Checkmk recently after it was mentioned here and I really like it. I’d used Zabbix a bit but was annoyed at how much work it took to get it to do what I want. Checkmk was a lot better right out of the box.
I like monit. It’s simple to setup and pretty flexible.
I used it as well until I found out I could just do it with systemd: https://www.baeldung.com/linux/systemd-service-fail-notification
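The gist of that approach, sketched out below; the ntfy topic and service name are placeholders, so swap in mail, a Discord webhook, or whatever you already use.

```sh
# Rough sketch of the OnFailure pattern from the article above.
# 1) A template unit that sends a notification (the ntfy topic is a placeholder):
cat > /etc/systemd/system/notify-failure@.service <<'EOF'
[Unit]
Description=Send a failure notification for %i

[Service]
Type=oneshot
ExecStart=/usr/bin/curl -d "%i failed on %H" https://ntfy.sh/my-placeholder-topic
EOF

# 2) Attach it to any service you care about (myservice is a placeholder):
mkdir -p /etc/systemd/system/myservice.service.d
cat > /etc/systemd/system/myservice.service.d/onfailure.conf <<'EOF'
[Unit]
OnFailure=notify-failure@%n.service
EOF

systemctl daemon-reload
```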
I’ve been using Uptime Kuma recently and it’s great, but it works better outside of Docker.
Inside Docker I’d get a lot of false “down” positives, from what I assume was Docker throttling the checks.
Plus it works with email, Telegram, and Matrix chat alerts. I monitor all my clients’ sites with it, and it’s bulletproof behind Caddy.
For light-touch monitoring this is my approach too. I have one instance in my network, and another on fly.io for the VPSs (my most common outage is my home internet). To make it a tiny bit stronger, I wrote a Go endpoint that exposes the disk and memory usage of a server, including mem_okay and disk_okay keywords, and I have Kuma checking those.
I even have the two Kuma instances checking each other by making a status page and adding checks for each other’s ‘degraded’ state. I have ntfy set up on both so I get the Kuma change notifications on my iPhone. I love ntfy so much I donate to it.
For my VPSs this is probably not enough, so I’m considering the more complicated solutions (I’ve started wanting to know about things like an influx of fail2ban bans, etc.).
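For anyone who wants to steal the idea: not my actual Go code, but the same thing fits in a few lines of shell (thresholds are made up). Print mem_okay / disk_okay when usage is under a limit, serve it over HTTP however you like, and point a Kuma keyword check (the HTTP(s) - Keyword monitor type) at it.

```sh
#!/bin/sh
# Rough shell equivalent of the "health keywords" idea (the original is a Go
# endpoint). Thresholds are made up; Uptime Kuma's keyword check just needs to
# find "mem_okay" / "disk_okay" in the response body.
mem_used=$(free | awk '/^Mem:/ {printf "%d", $3 / $2 * 100}')
disk_used=$(df --output=pcent / | tail -n 1 | tr -dc '0-9')

[ "$mem_used" -lt 90 ]  && echo "mem_okay ${mem_used}% used"   || echo "mem_high ${mem_used}% used"
[ "$disk_used" -lt 90 ] && echo "disk_okay ${disk_used}% used" || echo "disk_high ${disk_used}% used"
```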
I just do web hosting for clients’ sites and use Kuma to monitor uptime and SSL certificates.
I’ve got multiple Kuma instances running as well.
At home, libreNMS. Just SNMP everything.
For work, whatever the tool of the day is from management.
Zabbix
Second Zabbix. Been using it for years and it just works.
Adding my vote for Zabbix. It was a bit of a bear to set up and I had to write custom scripts to install the agents with TLS settings that were secure enough for me, but once it’s all set up it’s amazingly easy and intuitive to use and incredibly customizable.
At home, Nagios; at work, colleagues. (I finally escaped the admin rat race.)
Grafana set up to run on the server locally, then I connect to it via SSH forwarding. Then I can view all kinds of metrics in my browser in a neat interface.
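The forwarding bit is a one-liner (3000 being Grafana’s default port, user@myserver obviously a placeholder):

```sh
# Forward local port 3000 to Grafana's default port on the server, then open
# http://localhost:3000 in the browser. Nothing gets exposed on the network.
ssh -N -L 3000:localhost:3000 user@myserver
```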
I liked Grafana a lot, but I can’t monitor things like ZFS pools with it, right?
I don’t know as I don’t use zfs pools, but a simple search led me to this https://grafana.com/grafana/dashboards/15362-zfs-pool-metrics/
Nevermind then! Will take a look at it ^^
Prometheus and Alertmanager