Uhh, that’s interesting, I miss that feature a lot, but the plugin is always out of date.
No Linux client? I guess I understand, not enough DRM-free Linux games, but still… Not for me.
Dozzle sounds awesome, definitely adding it to my stack
Even considering your edits, it’s still a stupid argument. By that same logic nothing should be preserved. Watching LotR now is not the same as watching it when it first came out, so by your reasoning the films should never have been made, because by then the book should already have been destroyed, since you wouldn’t want to preserve it for 50 years. For that matter, Tolkien shouldn’t even have written it: it was based on ideas and drafts he did during the First World War exploring how war changes men and how power corrupts, which obviously are only valid in that context and nowhere else, so they should have been destroyed too, since preserving them would be “invasive and destructive”, no?
Preserving something can never be destructive; it’s the opposite. If the Mona Lisa had been destroyed you wouldn’t even know it existed, so how can having preserved it be destructive when the alternative is oblivion?
And I agree that the Mona Lisa is no big deal. You know who else agrees? People from that time. It’s widely known that the Mona Lisa was one of Da Vinci’s less famous works, and until Napoleon made a big deal out of it, it was just a random painting in a random museum. So I get part of your point: people who make a big deal out of the Mona Lisa are only there to see the famous painting. But that doesn’t mean there’s no reason to preserve it, or that nobody goes there to see the actual Mona Lisa.
One important thing: make sure the drive is CMR. The reason is that you likely want RAID, and non-CMR (SMR) disks are so slow to rebuild across the entire disk that the chance of a second failure while recovering from a disk failure becomes significant.
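To put rough numbers on that, here’s a quick back-of-envelope sketch; the failure rate and rebuild speeds below are made-up illustrative values, not from any spec sheet:

```python
# Back-of-envelope: chance that a second drive dies while a degraded
# array rebuilds. All numbers are illustrative assumptions.

HOURS_PER_YEAR = 24 * 365

def p_second_failure(afr, capacity_tb, rebuild_mb_s, surviving_drives):
    """Probability that any surviving drive fails during the rebuild window."""
    rebuild_hours = capacity_tb * 1e6 / rebuild_mb_s / 3600
    p_one = afr * rebuild_hours / HOURS_PER_YEAR  # per-drive failure chance
    return 1 - (1 - p_one) ** surviving_drives

# 4x 16TB array with an assumed 2% annualized failure rate:
print(f"CMR @150MB/s: {p_second_failure(0.02, 16, 150, 3):.3%}")
print(f"SMR @ 30MB/s: {p_second_failure(0.02, 16, 30, 3):.3%}")
```

The absolute probabilities depend entirely on the assumptions, but the multiplier you get from the slower rebuild is the point.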
That being said, how are you keeping track of the disks’ health? I built my RAID recently, and your post made me realize that I have nothing to notify me if one of the disks shows early signs of trouble.
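The obvious starting point is probably smartmontools. A minimal sketch of the idea as a cron job, assuming smartctl is installed and this runs with root privileges:

```python
#!/usr/bin/env python3
# Minimal SMART health check for a cron job. Assumes smartmontools is
# installed and the script runs as root.
import subprocess

DISKS = ["/dev/sda", "/dev/sdb"]  # adjust to your drives

for disk in DISKS:
    out = subprocess.run(["smartctl", "-H", disk],  # -H prints the health verdict
                         capture_output=True, text=True).stdout
    if "PASSED" not in out:
        # plug in your alerting of choice here: email, ntfy, Telegram, ...
        print(f"WARNING: {disk} did not report PASSED:\n{out}")
```

smartd does this natively (scheduled self-tests and email alerts included), so a custom script like this only makes sense if you want your own alerting.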
This will be almost impossible. The short answer is that those pictures might be 95% similar but their binary data might be 100% different.
Long answer:
Images are essentially a long list of pixels, where each pixel is 3 numbers for Red, Green and Blue (and optionally Alpha if you’re dealing with a transparent image, but you’re talking pictures so I’ll ignore that). This is a simple but very wasteful way to store an image, because the image will very likely use the same color in multiple places. So instead you can list all of the colors an image uses and represent each pixel as its index in that list, which makes images occupy a LOT less space.

Some formats go further: because your eye can’t see the difference between two very close colors, they group all similar colors into one single color, making the list of colors used in the image WAY smaller, and thus the entire image a LOT more compressed (but notice we lost information in this step). Because of this, it’s possible that one image picks color X for a given pixel while the other picks color Z for the matching pixel. The binaries are now completely different, but an image comparison tool can tell you that X and Z are similar enough to count as the same, and work out what percentage of the two images matches. Outside of image software, though, nothing else knows that these two completely different binaries are nearly the same picture.

If you hadn’t lost data by compressing the images in the first place, you could theoretically use data from different images to compress them together (but the results wouldn’t be great, since even uncompressed images aren’t as similar as you’d think). Images can be compressed a LOT more by throwing away unimportant data, though, so the trade-off isn’t worth it, which is why JPEG is so ubiquitous nowadays.
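If you want to see this for yourself, here’s a small sketch; it assumes Pillow is installed, and photo_a.jpg / photo_b.jpg are placeholders for two near-identical shots:

```python
import hashlib
from PIL import Image, ImageChops, ImageStat

# Byte level: the two files look completely unrelated.
a_bytes = open("photo_a.jpg", "rb").read()
b_bytes = open("photo_b.jpg", "rb").read()
print(hashlib.sha256(a_bytes).digest() == hashlib.sha256(b_bytes).digest())
# -> False: to any file-level tool they share nothing

# Pixel level: after decoding, they are almost the same.
a = Image.open("photo_a.jpg").convert("RGB")
b = Image.open("photo_b.jpg").convert("RGB").resize(a.size)
diff = ImageStat.Stat(ImageChops.difference(a, b))
print(f"mean per-channel difference: {sum(diff.mean) / 3:.1f} / 255")
# -> a small number: the decoded pixels are nearly identical
```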
All of that being said, a compression algorithm specifically designed for images could take advantage of this, but no general-purpose compressor can, and it’s unlikely someone went to the trouble of building one for this specific case: when each image is already compressed, there’s little to be gained from something that considers colors across multiple images, has to decide whether an image is similar enough to be bundled into a given group, and so on. It’s an interesting question, though, and I wouldn’t be surprised if Google has such an algorithm to store together all the photos you snap in a burst, which it already knows will be sequential. But for a home NAS I think it’s unlikely you’ll find anything.
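It’s easy to convince yourself of the general-purpose part (same placeholder file names as above):

```python
import zlib

# Compress two similar JPEGs separately vs. concatenated. zlib finds no
# shared redundancy to exploit, because JPEG data is already near-random
# (and zlib's 32KB window couldn't reach across files anyway).
a = open("photo_a.jpg", "rb").read()
b = open("photo_b.jpg", "rb").read()

separate = len(zlib.compress(a, 9)) + len(zlib.compress(b, 9))
together = len(zlib.compress(a + b, 9))
print(separate, together)  # practically the same number
```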
Besides all of this, storage is cheap. Just buy an extra disk and move some files over there; that’s likely to be your best way forward anyway.
If all you want is SSH, the easiest and cheapest way might be to rent a VPS, connect to it, and connect it to Tailscale. Just make sure you have very strict rules on SSH and you should be safe enough.
Exposing web services in this manner is also easy using Caddy, but be careful since the services would then be publicly available.
Use Docker; once you’re comfortable with it, switch to Podman. Podman has a few more complications, so it’s easier to get the basic thing running with the most common tool and work from there.
If you don’t give Immich write access to photos you lose one of its biggest advantages, i.e. having your phone upload the photos directly. You’d then need something else like Syncthing to do that job, which is not as elegant.
That’s interesting, although most of it is directed at people building the images. The fact that pushing without a tag sets latest is something I did not know, and something where I could see human error causing a problem.
Why? Latest means the latest stable release for most services.
That’s the thing: if the project is too early to have a structure stable enough to allow for programmatic updates, then it’s probably too early to offer something “perpetual”.
I agree, and I’m not trying to badmouth the project. I just feel that they shouldn’t switch from a donation structure until they have a stable version of the product.
Yeah, I have high hopes for the project; it ticks almost every box for me. I would still prefer to be able to store tags in the actual image files and use them, and also to be able to recover a library that’s already in the proper folder structure (so that in the case of a catastrophic failure, reimporting the full library is a matter of minutes, not days, not to mention having to retag people, etc.).
My point is that projects should ask for donations when they’re this early in development; asking for a subscription implies you have a stable product.
Yup, and I’m fine with that, but I think that switching from a donation to a subscription model before then is wrong.
I don’t mind this model. That being said, for me Immich is great but has a fatal flaw that has prevented me from using it: it doesn’t do updates.
For me that’s a big one. For everything else I self-host I have a docker compose pointing to latest, so every so often I do a pull and an up and I’m done, running the latest version of the thing. With Immich this is not possible; I discovered the hard way that releases are not backwards compatible, and that you need to keep track of their release notes to know what you have to do manually to update.
I haven’t settled on a self-hosted photo manager because of this. In theory Immich has almost everything I want (or more specifically, every other solution I found lacks something), but having to keep track of releases to do manual upgrades is stupid. This is software; it should be easy to have it check the version on start and perform migration tasks if needed.
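What I mean is nothing fancier than a version stamp checked on boot. A toy sketch (not Immich’s actual code; the table names and migration steps are made up):

```python
import sqlite3

# Toy "check schema version on start, run pending migrations" loop.
# Table/column names are invented for illustration.
MIGRATIONS = {
    1: "CREATE TABLE IF NOT EXISTS assets (id INTEGER PRIMARY KEY, path TEXT)",
    2: "ALTER TABLE assets ADD COLUMN checksum TEXT",
}

def migrate(db):
    current = db.execute("PRAGMA user_version").fetchone()[0]
    for version in sorted(v for v in MIGRATIONS if v > current):
        db.execute(MIGRATIONS[version])
        db.execute(f"PRAGMA user_version = {version}")
        print(f"applied migration {version}")
    db.commit()

db = sqlite3.connect("app.db")
migrate(db)  # no-op when already up to date, so it's safe on every start
```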
When my server was a laptop it was on 24/7. When my server was a desktop I had a cron job to turn it off at 2AM. Now that it’s specialized hardware, it’s on 24/7.
Being constantly on is very convenient, but if your services start quickly it’s not the end of the world to have to turn the machine on for them.
Thanks, I’m checking that out, but I can’t find any “add services” button. Also, someone mentioned IONOS, which is local to me and doesn’t seem to have bandwidth limits… I was trying to find the catch, and they require lots of personal info just to get the account set up, so I’m still a bit torn there.
That would be awesome. Currently it’s 500GB for their cheaper option, which starts at 23/year, and I didn’t find an option to increase the bandwidth before completing the order. Also, it needs to be deployed in NY (which would probably be slow for me in Europe). Finally, their ISOs are somewhat old; the latest Ubuntu they have is 20.04 (which hits EoL next year).
All that being said, 23/year is very cheap for a VPS, and for people in the US who use less than 500GB/month that’s the best deal I’ve ever seen.
I use https://silverbullet.md and love it, it’s a bit more than a note taking app, but it’s definitely worth it.