I think the massive privacy benefits outweigh things like that, which should be documented properly anyways
To actually keep data persistent on IPFS and prevent it from being deleted by the garbage collector, you need one or more servers to pin the content (its CID) so the collector skips it.
You either host these servers yourself, or pay providers to store it for you.
And at that point you’ve just reinvented a server hosting your data, but with extra steps.
There’s a big issue with this.
If malicious content like CP gets uploaded onto a server, the other servers obviously don’t want it replicated to them. So how would you solve this problem? They could give all moderation power to the original server they’re replicating, but that could be far too slow, moderators there could miss malicious content like this, or they might even disagree about taking certain things down.
Another solution is letting any server participating in the content mirroring take it down, either just for themselves or for all the other members as well. The issue here is that you’re now expanding moderation powers while also giving the other servers much more responsibility.
It’s not as simple as wanting to replicate content. If you host it, you are responsible for any illegal content a user may upload to it, and laws vary by country as well. Ignoring the technical challenges here, any server that replicates another server’s data is also choosing to be responsible for whatever gets uploaded there. And that is a really big ask. The law doesn’t care about the technical reasons; it’ll just see illegal content uploaded to your server.
Personally I’m in the camp that I want history to be lost. That’s part of the appeal to me. In fact my favorite feature in the fedi is Mastodon’s option to enable auto-deleting posts of a certain age.
Only content that is explicitly pinned or reaches a certain number of interactions should be saved imo, since that’s the stuff you’d actually want to preserve rather than the 99% of forgettable content, and it would also drastically cut down on file hosting.
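A retention rule like that can be sketched in a few lines. This is purely a hypothetical illustration: the `Post` class, the interaction threshold, and the cutoff age are all made-up values, not taken from any real fediverse software.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Assumed, illustrative policy values (not from any real implementation)
MIN_INTERACTIONS = 100          # posts at or above this count are kept
MAX_AGE = timedelta(days=14)    # everything else expires after this age

@dataclass
class Post:
    created: datetime
    interactions: int = 0
    pinned: bool = False

def should_retain(post: Post, now: datetime) -> bool:
    """Keep a post only if it is pinned, popular, or still fresh."""
    if post.pinned or post.interactions >= MIN_INTERACTIONS:
        return True                       # explicitly or implicitly preserved
    return now - post.created < MAX_AGE   # forgettable content expires

now = datetime(2024, 1, 20)
old_popular = Post(created=datetime(2024, 1, 1), interactions=250)
old_quiet = Post(created=datetime(2024, 1, 1), interactions=3)
fresh = Post(created=datetime(2024, 1, 18))

print(should_retain(old_popular, now))  # True: kept by interaction count
print(should_retain(old_quiet, now))    # False: expired, forgettable
print(should_retain(fresh, now))        # True: still within MAX_AGE
```

A real server would run a check like this on a schedule and garbage-collect whatever fails it, which is exactly the "expire by default" behavior described above.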
Another thing is that a federation should only act as the exchange between users on ActivityPub. It should only cache relevant information and not be expected to store everything, like I wrote before. The user should be a portable account that is stored on a device. The federation server would sync your account between your devices, but not store it. You send your content to the federation, and then the federation sends it out into the world where they choose to do what they want with it. The federation shouldn’t hoard it indefinitely.
Also this makes sense from a privacy perspective. If you care about privacy, why would you also want all your data stored indefinitely? Unless certain things are relevant and explicitly kept, data should be expected to expire and be lost by default. Where did we get this expectation that data should be stored forever? And do you really expect it to be stored forever and somehow never get trained on by AI?
This comment for example: after about a week or two, most of its visibility and interaction will drop to zero. At that point, this comment should expire and no longer exist. I wrote this comment, it reached some people, it served its purpose, and it should expire. I’m not going to pretend this comment is some kind of historic document that should be indefinitely preserved, nor do I expect or want it to be.
Why would you need to host this? Why not just have a client that does backups?
I tried and couldn’t find it on my system. I run Linux btw.
The free market is going very well here
Also all the ad-blocking extensions would have to keep maintaining forks of their own projects for increasingly obscure Manifest V2 Chromium browsers.
yeah but it’s GAMER so it’s okay
Looks like the bigger issue is HEVC itself. Support is also extremely spotty across the other browsers, and even Chrome only has limited support depending on your hardware.
Or just use AV1 instead. I’ve literally never run into this as an issue before lol.
They’re already a fork of Chromium… Also it doesn’t matter much since they use the Google extension store, which disabled uBO.
You could probably still install a Manifest V2 extension by loading the XPI file manually. But as a developer, the users who would actually do this are a small fraction of the previous user base.
So how do you justify your limited manpower to be spent on that increasingly obscure user base? It may as well be removed anyways at that point.
Eh, I’d still take Chromium anything over the dumpster fire that is Safari
What you’re talking about is webcompat, and it’s a very complicated issue. I’ve talked to some Mozilla devs who gave me multiple examples of Chromium rendering something wrong, where they’d have to intentionally break Firefox to render it incorrectly too, just so the end user would get a more consistent experience. Of course these issues happen more and more when things are only tested against one browser.
That depends on the DE, not the distro.
If you’re following citations, may as well just search for the citations themselves… aka just a regular search engine.
Not sure why they mention AI search, as it’s practically non-existent right now.
Glad to see it is built on top of the Solid protocol. Cuz I was going to say it sounded familiar!
TL;DR for those who are confused: since Android already supports sideloading and even seamless updates for third-party app stores (like Droid-ify, etc.), these are mostly legal changes.
Basically Google can’t force Google IAP as the only payment method in apps anymore, and can’t block companies from advertising how to find them on non-Play Store Android app stores. So good changes overall.
Also, when you install third-party APKs on Android, while it’s still relatively easy to do, it gives a bit of a scary warning saying any security issues are on the user. This creates the assumption that the Play Store is the only secure way to get apps on Android, and the OS grants the Play Store all sorts of special security exceptions for that. Obviously other secure app stores can exist, so this can be seen as anti-competitive, since Google is exempt from its own scary APK install message.
What is he talking about? Public Wi-Fi can easily poison and monitor your DNS requests (most people don’t know about or use encrypted DNS), and there’s still tons of non-HTTPS traffic leaking all over the place in plain text. Even when traffic is encrypted, there’s still deep packet inspection. VPNs can mitigate DPI techniques and shift the trust from an easily snoopable public Wi-Fi network to the VPN’s more trustworthy exit servers.
This guy really needs to elaborate on what he’s trying to say, because the cybersecurity field very much disagrees with this stance. I’m not a huge fan of Proton, but they aren’t doing anything wrong here. You should use a VPN on public Wi-Fi.
Cool, but the proper solution is that they shouldn’t have access to this data at all. It should either be stored locally or encrypted on their servers. Companies not being able to access their customers’ data should be the default.
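As a sketch of what "encrypted on their servers" means in practice: the client encrypts before upload and keeps the key, so the server only ever holds opaque bytes it cannot read. This is a toy one-time-pad XOR purely for illustration; real software should use a vetted crypto library (e.g. AES-GCM), not this.

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # XOR is its own inverse: applying the same key twice restores the input
    return bytes(a ^ b for a, b in zip(data, key))

message = b"private note: dentist appointment tuesday"
key = secrets.token_bytes(len(message))  # stays on the client, never uploaded

server_copy = xor_bytes(message, key)    # all the server ever stores
restored = xor_bytes(server_copy, key)   # client decrypts locally

assert restored == message
print("server holds", len(server_copy), "opaque bytes; client restored the note")
```

The point of the design: without the key the server’s copy is useless to the company, to a breach, and to a subpoena alike, which is what "not being able to access their customers’ data by default" looks like.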