CallMeButtLove@lemmy.world to Selfhosted@lemmy.world • Self-Hosted AI is pretty darn cool
1 month ago
I really hate when companies do that kind of crap. I just imagine a little toddler stomping around going “No! No! Nooo!”
Is there a way to host an LLM in a Docker container on my home server but still leverage the GPU on my main PC?
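One way this can work (a rough sketch, not a tested setup): run the model server on the GPU PC and expose its API over the LAN, then run just the web frontend in a container on the home server, pointed at the PC. This example assumes Ollama plus Open WebUI; the IP `192.168.1.50` and ports are placeholders for your own network.

```shell
# On the main PC (the one with the GPU): expose Ollama's API on the LAN.
# By default Ollama only listens on localhost, so bind to all interfaces.
OLLAMA_HOST=0.0.0.0:11434 ollama serve

# On the home server: run Open WebUI in Docker, pointing at the PC's Ollama.
# Replace 192.168.1.50 with your GPU machine's LAN IP.
docker run -d --name open-webui -p 3000:8080 \
  -e OLLAMA_BASE_URL=http://192.168.1.50:11434 \
  -v open-webui:/app/backend/data \
  ghcr.io/open-webui/open-webui:main
```

The inference itself still happens on the GPU PC, so that machine has to be on whenever you want to use it.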
You’re not wrong about how important those keys are, and he definitely should have known better. But I at least have a little sympathy for the guy. Everyone makes mistakes from time to time, even with important stuff. Hopefully most people are lucky enough not to lose $40k on one, but unfortunately he wasn’t. Whether he should have known better or not, that just plain sucks.
Thank you for that answer! That makes sense.
Lurking beginner here, why is this bad?
I have a similar setup, except I use pfSense as my router and Pi-hole for DNS, but I’m sure you can get the same results with your setup. I’m running HAProxy as my reverse proxy, with a config entry for each of my Docker containers, so any traffic on 443 or 80 gets sent to the container IP on whatever unique port it uses. I then have DNS entries for each URL I want to access a container by, with all of those entries just pointing to HAProxy. Works like a charm.
I have HAProxy running on the Pi-hole itself, but there’s no reason you couldn’t just run it in its own container. pfSense also lets you install an HAProxy package to handle it on the router itself. I don’t know if OPNsense supports packages like that, though.
You can even get fancy and do SSL offloading to access everything over HTTPS.
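For anyone curious what that looks like, here’s a rough sketch of the relevant `haproxy.cfg` pieces. The hostnames, backend IPs/ports, and cert path are all made up for illustration; swap in your own:

```
frontend https_in
    bind *:80
    bind *:443 ssl crt /etc/haproxy/certs/wildcard.pem
    # Redirect plain HTTP to HTTPS (SSL offloading happens here)
    http-request redirect scheme https unless { ssl_fc }
    # Route by hostname to the matching container backend
    use_backend jellyfin  if { req.hdr(host) -i jellyfin.home.lan }
    use_backend nextcloud if { req.hdr(host) -i nextcloud.home.lan }

backend jellyfin
    server jellyfin1 192.168.1.20:8096

backend nextcloud
    server nextcloud1 192.168.1.20:8081
```

HAProxy terminates TLS at the proxy and talks plain HTTP to the containers, which is what makes the single-cert HTTPS setup so convenient.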