This is already implemented on a lot of the settings pages on 11.
Edit: just wanted to add that I don’t think it’s implemented well. I use it at work.
lol I would open every port on my router and route them all to wireguard before I would ever consider doing this
I use Nextcloud with Nginx Proxy Manager and just use NPM to handle the reverse proxy, nothing in Nextcloud other than adding the domain to the config so it’s trusted.
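For anyone setting this up, that trusted-domain step is one occ command (a sketch: cloud.example.com is a placeholder, and the array index depends on what’s already in your config):

# run from the Nextcloud install dir
sudo -u www-data php occ config:system:set trusted_domains 1 --value=cloud.example.com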
I use Plex instead of Jellyfin, but I stream it through NPM with no issues. I can’t speak to the tunnel though, I prefer a simple wireguard tunnel for anything external so I’ve never tried it.
Edit: unless that’s what you mean by tunnel. I was assuming you meant traefik or tailscale or one of the other solutions I see posted more often, but Tailscale at least does use wireguard under the hood.
I have a feeling the people making fiber internet faster aren’t the same people installing it in neighborhoods.
The product was an LLM.
I never switched to Proton for exactly this reason. I’d much rather use a service that does one thing really well than one that does 20 things okay.
It’s all just to keep you locked into your subscription. Now they want you to keep other money tied up in it too.
The issue is that the docker container will still be running as the LXC’s root user even if you specify another user in the docker compose file or run command, so if that root user doesn’t have access to the dir, the container will always fail.
The solution is to remap the unprivileged LXC’s root user, via the LXC’s config file, to a user on the Proxmox host that does have access to the dir, mount the container’s filesystem with pct mount, and then chown everything in the container still owned by the default mapped root user (100000).
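Here’s roughly what that remap looks like, assuming LXC 101 and host UID 1005 (both placeholders, use your own IDs):

# /etc/pve/lxc/101.conf — map container root to host UID/GID 1005, shift the rest
lxc.idmap: u 0 1005 1
lxc.idmap: g 0 1005 1
lxc.idmap: u 1 100001 65535
lxc.idmap: g 1 100001 65535
# the host also has to allow the mapping: add root:1005:1 to /etc/subuid and /etc/subgid
pct mount 101     # rootfs shows up under /var/lib/lxc/101/rootfs
# ...run the chown pass (commands below) in between...
pct unmount 101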
These are the commands I use for this:
# chown files, dirs, and symlinks previously owned by the default mapped root user:
find /var/lib/lxc/xxx/rootfs -user 100000 -type f -exec chown username {} +
find /var/lib/lxc/xxx/rootfs -user 100000 -type d -exec chown username {} +
find /var/lib/lxc/xxx/rootfs -user 100000 -type l -exec chown -h username {} +
# and the same for group ownership:
find /var/lib/lxc/xxx/rootfs -group 100000 -type f -exec chown :username {} +
find /var/lib/lxc/xxx/rootfs -group 100000 -type d -exec chown :username {} +
find /var/lib/lxc/xxx/rootfs -group 100000 -type l -exec chown -h :username {} +
(Replace xxx with the LXC number and username with the host user/UID)
If group permissions are involved you’ll also have to map those groups in the LXC config, create them in the LXC with the corresponding GIDs, add them as supplementary groups to the root user in the LXC, and then add them to the docker compose yaml using group_add.
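A sketch of those group steps, with a made-up “media” group at GID 1005:

# 1. carve GID 1005 out of the shifted range in /etc/pve/lxc/xxx.conf using the
#    same lxc.idmap mechanism as above, and allow it in /etc/subgid
# 2. inside the LXC, recreate the group and add root to it:
groupadd -g 1005 media
usermod -aG media root
# 3. in the docker compose yaml, hand the GID to the container:
#      group_add:
#        - "1005"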
It’s super confusing and annoying, but this is the workflow I use now to avoid tying up resources in VMs unnecessarily.
I’ve been doing this for at least a decade now and the drives are just as reliable as if you bought them normally. The only downside is having to block the 3.3 V power-disable pin on the SATA power connector with kapton tape for the drive to work.
It acts as a wildcard for any directories that exist between arteries and clot.
I like the workflow of having a DNS record on my network for *.mydomain.com pointing to Nginx Proxy Manager, and just needing to plug in a subdomain, IP, and port whenever I spin up something new for super easy SSL. All you need is one let’s encrypt wildcard cert for your domain and you’re all set.
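If your local DNS is dnsmasq (or Pi-hole, which wraps it), the wildcard record is one line. Sketch below with placeholder names, NPM assumed to live at 192.168.1.10:

echo 'address=/mydomain.com/192.168.1.10' >> /etc/dnsmasq.d/02-wildcard.conf
systemctl restart dnsmasq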
IIRC from running into this same issue, this won’t work the way you have the volume bind mounts set up, because docker will treat the movies and downloads directories as two separate filesystems inside the container, and hardlinks don’t work across filesystems.
If you bind mounted /media/HDD1:/media/HDD1 it should work, but then the container will have access to the entire drive. You might be able to get around that by running the container as a different user and only giving that user access to those two directories, but docker is also really inconsistent about that in my experience.
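Easy to verify from inside the container (paths here are just illustrative):

# two separate bind mounts look like two devices inside the container:
ln /downloads/movie.mkv /movies/movie.mkv
# ln: failed to create hard link ... Invalid cross-device link

# a single bind mount (-v /media/HDD1:/media/HDD1) is one device, so this works:
ln /media/HDD1/downloads/movie.mkv /media/HDD1/movies/movie.mkv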
lol Japan invents the three major optical disc storage media that became ubiquitous and their government says fuck that and just keeps on using floppy disks
If you want Proxmox to dynamically allocate resources you’ll need to use LXCs, not VMs. I don’t use VMs at all anymore for this exact reason.
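e.g. you can resize a running LXC on the fly (101 is a made-up container ID):

pct set 101 --cores 4 --memory 2048   # applies live, no reboot; unused RAM stays available to the host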
I also take money from possible fascists because I need it to survive. It’s called having a job.
Am I missing something in this article? I’m not defending either company, but it doesn’t seem like they actually have any evidence to confirm either is doing this.
The world’s top two AI startups are ignoring requests by media publishers to stop scraping their web content for free model training data, Business Insider has learned.
The article claims this, but then says this about the source of the info:
TollBit, a startup aiming to broker paid licensing deals between publishers and AI companies, found several AI companies are acting in this way and informed certain large publishers in a Friday letter, which was reported earlier by Reuters. The letter did not include the names of any of the AI companies accused of skirting the rule.
So their source doesn’t actually say which companies are doing this, but then they jump straight into this:
AI companies, including OpenAI and Anthropic, are simply choosing to “bypass” robots.txt in order to retrieve or scrape all of the content from a given website or page.
So they’re just concluding that based on nothing and reporting it as fact?
You’re both speculating about what triggered the lawsuit; the only people who know for sure are the publishers, and they aren’t talking.
If all public libraries are using CDL, and the publishers have only sued the IA, who flagrantly violated CDL, and they sued just two months after the violations started, then that certainly seems like a very plausible factor in the lawsuit, right?
Visual discomfort because it looks like a slightly older app? What kind of issue is that???
You’ve met an iOS user.
Exactly, why didn’t they just ask Lynn Conway for her preference when writing the article?
Absolutely, if it was anything I needed or even really wanted to be sure was reliably available I’d never put it on a free VPS.
Now, something trivial like this that just requires installing wireguard and nginx, copying over some configs, and changing a DNS record? Hard to beat free.
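The whole VPS side is roughly this (a sketch: Debian-ish box, and the 10.8.0.x tunnel IPs and key placeholders are made up):

apt install wireguard nginx
cat > /etc/wireguard/wg0.conf <<'EOF'
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <vps-private-key>

[Peer]
PublicKey = <home-server-public-key>
AllowedIPs = 10.8.0.2/32
EOF
systemctl enable --now wg-quick@wg0
# nginx proxies to 10.8.0.2, and the public DNS record points at the VPS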
Sounds like you maybe just have a habit of entering conversations on topics you don’t know much about (and in this case, by your own admission, don’t even care about), so you get a lot of people who are more informed and do care expressing their disagreement with you?
Have you considered just not doing that?