For what it’s worth you can convert the database to postgres if you want. I tried it out a few weeks ago and it went flawlessly.
https://docs.nextcloud.com/server/latest/admin_manual/configuration_database/db_conversion.html
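If it helps anyone, the whole thing boiled down to one occ command for me (the user/host/db names below are placeholders for your own values, and if you’re on docker you’d wrap it in a docker exec):

```
# Convert from MySQL/SQLite to Postgres; --all-apps includes app tables.
# nc_user / db.host.local / nc_db are placeholders for your own setup.
sudo -u www-data php occ db:convert-type --all-apps pgsql nc_user db.host.local nc_db
# It should prompt for the Postgres password (or pass --password), and
# per the linked docs it updates config.php to the new db when it's done.
```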
Yeah I’ve been using it for about a year and a half or so on my main devices and it’s been wonderful. I’m likely going to go down the list of supported providers from the gluetun docs and decide from there. Throwing my torrents and all that behind a VPN was the catalyst for signing up, so I’ll continue to look for that support first and everything else is secondary.
I’m pretty sure it’s entirely disabled. Their announcement post says it’s being removed and doesn’t call out any exceptions.
I run my clients through a gluetun container with forwarding set up and ever since their announced end of support date (July I think?) I have had 0B uploaded for any of my trackers.
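If anyone wants a starting point, mine looks roughly like this (provider, key, and client image are just examples; your provider’s exact variables are in the gluetun wiki):

```
# gluetun with port forwarding turned on (provider shown is an example)
docker run -d --name gluetun \
  --cap-add=NET_ADMIN --device /dev/net/tun \
  -e VPN_SERVICE_PROVIDER=protonvpn \
  -e VPN_TYPE=wireguard \
  -e WIREGUARD_PRIVATE_KEY=<your_key> \
  -e VPN_PORT_FORWARDING=on \
  qmcgaw/gluetun

# The client shares gluetun's network stack, so all of its traffic
# (including the forwarded port) rides the VPN
docker run -d --name qbittorrent --network=container:gluetun \
  lscr.io/linuxserver/qbittorrent
```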
E: realized you may be asking about proton, oops
Wow this is great. I’ve been having trouble getting exit nodes working properly with these two. Sad that mullvad dropped port forwarding though so I’m not sure if I’ll stay with them.
I thought about setting one up for my main server because every time the power went out I’d have to reconfigure the bios for boot order, virtualization, and a few other settings.
I’ve since added a UPS to the mix but ultimately the fix was replacing the CMOS battery lol. Had I put one of these together it would be entirely unused these days.
It’s a neat concept and if you need remote bios access it’s great, but people usually overestimate how useful that really is.
Why do you think AdGuard is better than Pihole? I’m not upset with the job Pihole is doing but always looking for improvements.
Yeah I use different VMs to separate out the different containers into arbitrary groups I decided on.
I run my docker containers inside different Debian VMs that are on a couple different Proxmox hosts.
I can’t speak for everyone else, but I run about 6 different VMs solely to run different docker containers. They’re split out by use case, so super critical stuff on one VM, *arr stuff on another, etc. I did this so my tinkering didn’t take down Jellyfin and other services for my wife and kids.
Beyond that I also have two VMs for virtualized pihole running gravity sync on different hosts, and another I intend to use for virtualized opnsense.
Everything is managed via ansible with each docker project in its own forgejo repo.
You lose comment history and all that jazz too but it’s better than nothing. I’m not sure if devs plan to implement a way to do it but it’s one of the reasons I decided to roll my own instance. Nothing more frustrating than using someone else’s and losing access while they take days to get it back up.
I’m assuming you installed it directly to the container vs running docker in there?
I have been debating making the jump from docker in a VM to a container, but I’ve been maintaining Nextcloud in docker the entire time I’ve been using it and haven’t had any issues. The interface can be a little slow at times but I’m usually not in there for long. I’m not sure it’s worth it to have to essentially rearchitect my setup for that.
All that aside, I also map an NFS share to my docker container that stores all my files on my NAS. This could be what causes the interface slowness I sometimes see, but last time I looked into it there wasn’t a non-hacky way to mount a share to an LXC container, has that changed?
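For reference, the docker side of my NFS mapping is just a named volume, roughly like this (the NAS address, export path, and image tag are placeholders):

```
# Named volume backed by NFS; docker mounts it from the NAS when
# the container starts (address and export path are placeholders)
docker volume create --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.1.50,rw,nfsvers=4 \
  --opt device=:/export/nextcloud \
  nextcloud_data

# Then it attaches like any other volume
docker run -d -v nextcloud_data:/var/www/html/data nextcloud:27
```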
Yikes! I pay a couple bucks more for uncapped gigabit. I’m fortunate in that there’s two competing providers in my area that aren’t in cahoots (that I can tell). I much prefer the more expensive one and was able to get them to match the other’s price.
My wife has been dropping hints she wants to move to another state though and I’m low key dreading dealing with a new ISP/losing my current plan.
The growth is happening mostly in the pictrs and db containers. I know pictrs is optional if you’re not uploading pics yourself, but I didn’t want to limit myself on that. I haven’t dived into where the db growth is happening yet either. Right now my hurdle is there doesn’t seem to be any baked in maintenance tools, so it’s all going to be me editing the database directly. I’m okay with doing it but need to figure out how to not purge content I have saved via Lemmy.
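When I do dig in, I’ll probably start with something like this to see which tables are actually growing (container name and credentials here are guesses; use whatever your compose file defines):

```
# Top 10 tables in the lemmy db by total size, including indexes
docker exec -it lemmy_postgres psql -U lemmy -d lemmy -c "
  SELECT relname,
         pg_size_pretty(pg_total_relation_size(relid)) AS total_size
  FROM pg_catalog.pg_statio_user_tables
  ORDER BY pg_total_relation_size(relid) DESC
  LIMIT 10;"
```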
As far as NSFW stuff goes, there’s a checkbox in the instance settings for enabling NSFW content instance-wide. I have it unchecked and haven’t seen a single NSFW post browsing through my instance. It does rely on things being marked as such, though. I’ll probably go the extra step and defederate the porn instances just to add another layer.
Please let me know if you find anything useful for maintaining the instance.
I have this one on a Hetzner server that runs me like $6/mo. I’m not comfortable with the federated nature of things potentially putting CSAM or other illegal content on disk in my home.
I use tailscale so I can still hit my internal (at home) git repos and all that. The rest of my stuff is all hosted on an old gaming PC I turned into a Proxmox host that sits in my spare bedroom. Of those services, I only expose like 3 things to the outside world. Nextcloud being the main one. I don’t route it through my VPS, just proxy it through cloudflare.
Yeah I haven’t found anything for cleanup maintenance. Right now with just me my disk usage is increasing ~300MB per day. I’m debating purging stuff older than 30 days or something. The only stuff where my server is the source of truth is my profile and communities on my instance.
We’ll see though, this is just a fun little side thing I’m not taking too seriously.
2FA was in place at the time. IIRC the JWT was granted after 2FA, so it didn’t matter.
You’ve got a point though, small instances aren’t gonna be nearly as useful as a giant one to threat actors. Assuming you don’t give them a reason to go after you specifically they wouldn’t have a reason to target such a tiny server.
Still though, I don’t need that shiny A next to my name so I’m good with how I have it set up.
I do a separate container for each service that requires a db. It’s pretty baked into my backup strategy at this point: the script I wrote pulls what it needs for dumps from environment variables, so I don’t have to update it for every new service I deploy.
If the container name ends in -dbm it’s MySQL, -dbp is postgres, and -dbs would be SQLite if it ever needed its own container. The suffix triggers the appropriate backup command, which pulls the user, password, and db name from environment variables in the container.
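A stripped-down sketch of the dispatch logic, for the curious (the env var names are the ones the official mysql/postgres images use, and the SQLite path is made up; adjust for your images):

```
#!/usr/bin/env bash
# Dump every running db container based on its name suffix
backup_dir=/backups/$(date +%F)
mkdir -p "$backup_dir"

for c in $(docker ps --format '{{.Names}}'); do
  case "$c" in
    *-dbm)  # MySQL/MariaDB
      docker exec "$c" sh -c \
        'exec mysqldump -u"$MYSQL_USER" -p"$MYSQL_PASSWORD" "$MYSQL_DATABASE"' \
        > "$backup_dir/$c.sql"
      ;;
    *-dbp)  # Postgres
      docker exec "$c" sh -c \
        'exec pg_dump -U "$POSTGRES_USER" "$POSTGRES_DB"' \
        > "$backup_dir/$c.sql"
      ;;
    *-dbs)  # SQLite: just copy the db file out (path is a placeholder)
      docker cp "$c:/data/app.db" "$backup_dir/$c.db"
      ;;
  esac
done
```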
I’m not too concerned about system overhead. I’ve debated doing a single container per db type just to do it, but I also like not having a single point of failure for all my services (I even run different VMs to keep stable services from being impacted by me testing random stuff out.)
Exactly. I went one step further and decided not to use my admin account as my main. I don’t run around as root on servers so I try not to do that with apps. It’s easier with Lemmy because once it’s set up all the admin tasks hit my email.
I also wanted to avoid that vulnerability that hit Lemmy World a few weeks ago that was only possible because the server admin got their jwt stolen, which wouldn’t have been so impactful if they weren’t on the admin account.
It took a little bit of work but I rolled my own docker compose and it’s been pretty solid. I pin the specific nextcloud version in my compose file (I don’t like using :latest for things) and updating is as simple as incrementing the version, pulling the new image, and restarting the container. I’ve been running this way for a couple years now and I couldn’t be happier with it.
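The whole update flow is basically this (version numbers and the service name are just examples):

```
# Bump the pinned tag in the compose file (example versions)
sed -i 's/nextcloud:27.1.2/nextcloud:27.1.3/' docker-compose.yml

# Pull the new image and recreate the container
docker compose pull nextcloud
docker compose up -d
```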
I host forgejo internally and use that to sync changes. .env and data directories are in .gitignore (they get backed up via a separate process)
All the files are part of my docker group so anyone in it can read everything. Restarting services is handled by systemd unit files (so sudo systemctl stop/start/restart); any user that needs to manipulate containers has the appropriate sudo access.
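The units themselves are dead simple, roughly this shape (service name and directory are placeholders):

```
# One oneshot unit per compose project (names/paths are placeholders)
sudo tee /etc/systemd/system/myapp.service >/dev/null <<'EOF'
[Unit]
Description=myapp compose stack
Requires=docker.service
After=docker.service

[Service]
Type=oneshot
RemainAfterExit=true
WorkingDirectory=/opt/stacks/myapp
ExecStart=/usr/bin/docker compose up -d
ExecStop=/usr/bin/docker compose down

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now myapp.service
```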
It’s only me that does all this though, I set it up this way for funsies.
A lot of people self host so they are in control. This is Plex taking away that control, plain and simple.
I don’t know how many people host completely legitimately acquired content in their libraries, but your reasoning is such a cop out. Are you gonna defend them if they start scanning libraries for potentially illegally obtained content and blocking that because it could “put them in legal hot water?”