Just your average Reddit refugee.

  • 0 Posts
  • 9 Comments
Joined 1 year ago
Cake day: June 13th, 2023

  • Now I think I see what you are saying. People have suggested that Lemmy needs a separate protocol for synchronizing with other Lemmy instances more efficiently, and Gossipsub could do that (there is a toy sketch of the fan-out idea at the end of this comment). It would also be nice if each Lemmy instance only needed to keep a minimal amount of data at any one time to serve the clients that connect to it, while the rest lives in the swarm.

    I still don’t think that you would want a phone to function as both your server and your client, though. All that coordination takes bandwidth and processing power, and phones are ill-equipped for that. Also, to do p2p effectively you usually need to be able to make direct connections through firewalls. Opening your phone directly to the Internet would be a bad idea, and I doubt any phone companies would let you do that anyway. Without a direct connection, you would need to proxy your connection through some server somewhere and deal with the bandwidth costs. Might as well just connect to a server as a client.

    Maybe the final solution is software like Lemmy running with decentralized identities via the Nostr protocol, federated out using Gossipsub.
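
    Just to make the Gossipsub idea concrete, here is a toy gossip fan-out simulation in Rust. It is not the real Gossipsub/libp2p API; the peer-selection rule and the numbers are made up for illustration. It only shows that a post published on one instance can reach every instance even though each one relays to just a few peers.

    ```rust
    // Toy gossip fan-out: each instance that receives a new post relays it to a
    // few peers, and the post still spreads to the whole network. Hand-rolled
    // sketch, not the real Gossipsub/libp2p API.
    use std::collections::{HashSet, VecDeque};

    fn gossip(num_instances: usize, peers_per_hop: usize, origin: usize) -> HashSet<usize> {
        let mut seen = HashSet::new();
        seen.insert(origin);
        let mut queue = VecDeque::new();
        queue.push_back(origin);

        while let Some(instance) = queue.pop_front() {
            // Deterministic "neighbors" just for the sketch; a real mesh would
            // pick peers from its peer table.
            for step in 1..=peers_per_hop {
                let peer = (instance + step) % num_instances;
                if seen.insert(peer) {
                    queue.push_back(peer); // this peer relays the post onward
                }
            }
        }
        seen
    }

    fn main() {
        // 50 instances, each relaying to only 3 peers, starting from instance 0.
        let reached = gossip(50, 3, 0);
        assert_eq!(reached.len(), 50);
        println!("post reached {} of 50 instances", reached.len());
    }
    ```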


  • Then the p2p network is really the “server” and the phone is still just a client. I’m also not sure that a p2p network could be queried very well, because something has to be able to produce aggregated and sorted results. It isn’t like pulling one file from a swarm; it would be more like a blockchain, where the phone has to download the whole dataset from the p2p network before running queries on it.

    What you are talking about sounds kind of like the Nostr protocol. It is a distributed social network trying to solve the same problem as ActivityPub, but in a slightly different way. All the events are cached on multiple relays, and the client applications query those relays for events, which get aggregated and sorted on the client however it wants (see the sketch below).
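
    Roughly, that client-side aggregation would look something like this Rust sketch. The Event struct and the sample data are made up (real Nostr events have more fields and the id is a hash of the event), but it shows the idea: merge events from several relays, drop the duplicates, and sort the result however the client wants.

    ```rust
    // Merge events cached on several relays, dedupe, and sort client-side.
    use std::collections::HashSet;

    #[derive(Clone)]
    struct Event {
        id: String,      // in Nostr this would be a hash of the event
        created_at: u64, // unix timestamp
        content: String,
    }

    fn aggregate(relay_responses: Vec<Vec<Event>>) -> Vec<Event> {
        let mut seen = HashSet::new();
        let mut merged: Vec<Event> = relay_responses
            .into_iter()
            .flatten()
            // the same event is usually cached on several relays, so keep one copy
            .filter(|e| seen.insert(e.id.clone()))
            .collect();
        // sorting is entirely up to the client; newest-first is just one choice
        merged.sort_by(|a, b| b.created_at.cmp(&a.created_at));
        merged
    }

    fn main() {
        let relay_a = vec![
            Event { id: "e1".into(), created_at: 100, content: "hello".into() },
            Event { id: "e2".into(), created_at: 200, content: "world".into() },
        ];
        let relay_b = vec![
            // duplicate of e2, plus one event relay_a never saw
            Event { id: "e2".into(), created_at: 200, content: "world".into() },
            Event { id: "e3".into(), created_at: 150, content: "again".into() },
        ];

        let timeline = aggregate(vec![relay_a, relay_b]);
        assert_eq!(timeline.len(), 3);
        assert_eq!(timeline[0].id, "e2"); // newest first
        println!("newest event says: {}", timeline[0].content);
    }
    ```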


  • ActivityPub is all about pushing content around to subscribing servers. It sort of expects the subscribers to always be online, which would not work for a phone. Servers could resend missed events, but essentially you would miss every event that occurs while the phone is asleep or doesn’t have the app running.

    Also, every event that occurs needs to be processed and stored whether or not you are actively looking at it, so it would be a huge battery drain while it was running.

    It is definitely a service best run on an always-on server, with a client application on the phone just asking the server for the latest stuff on demand (roughly like the sketch below).
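
    Here is a stripped-down sketch of that difference, in Rust. None of this is real ActivityPub (no signatures or HTTP inboxes, and the ids are just sequential numbers); it only illustrates why push delivery drops events while the phone is asleep, whereas a pull client can catch up from whatever it saw last.

    ```rust
    // Push vs. pull, reduced to the bare minimum.
    #[derive(Clone)]
    struct Event {
        id: u64,
        body: String,
    }

    struct Subscriber {
        online: bool,
        inbox: Vec<Event>,
        last_seen: u64,
    }

    // Push model: the server delivers each event as it happens; an offline
    // subscriber simply never sees it unless the server re-sends later.
    fn push(event: &Event, sub: &mut Subscriber) {
        if sub.online {
            sub.inbox.push(event.clone());
        }
    }

    // Pull model: whenever the client is awake, it asks for everything newer
    // than what it has already seen.
    fn pull(server_log: &[Event], sub: &mut Subscriber) {
        let since = sub.last_seen;
        for e in server_log.iter().filter(|e| e.id > since) {
            sub.last_seen = e.id;
            sub.inbox.push(e.clone());
        }
    }

    fn main() {
        let log: Vec<Event> = (1..=5)
            .map(|id| Event { id, body: format!("post {id}") })
            .collect();

        let mut phone = Subscriber { online: false, inbox: Vec::new(), last_seen: 0 };

        // The phone is asleep while events 1-5 are pushed: it gets nothing.
        for e in &log {
            push(e, &mut phone);
        }
        assert!(phone.inbox.is_empty());

        // With pull, the phone catches up as soon as the app opens.
        phone.online = true;
        pull(&log, &mut phone);
        assert_eq!(phone.inbox.len(), 5);
        println!("caught up on {} posts, starting with '{}'", phone.inbox.len(), phone.inbox[0].body);
    }
    ```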


  • You can do that, but there are a few things to keep in mind.

    Different apps may only be compatible with certain database products and versions. It can be a real pain if you have to spin up a new version of a database and migrate just because one service updated its dependencies, or if you have to keep an old database version around for legacy software.

    If you stop using a service, its data is still in the database, which gets bloated after a while. If the database is only for one service, wiping it out when you are done isn’t a big deal. However, if you use a shared database, you likely have to go in and remove schemas, tables, and users manually, praying you don’t mess something up for another service.

    When each service has its own database, moving it to another instance is as easy as copying all the files. If the database is shared, you need to make sure the database connection is exposed to all the systems that are trying to connect to it. If it’s all local, that’s pretty safe, but if you have services on different cloud providers then you have to be more careful not to expose your database to the world.

    Single-use databases don’t typically consume a lot of resources unless the service using them is massive. Overall, it is easier to let each service have its own database.



  • I’m not super concerned. It’s been a little over a week since stuff hit the fan, and contributors need time to learn the code base. People are starting to help with the easy stuff, but the two main devs still need to check everything because they are the only ones who understand how those changes affect the long-term vision. Also, the urgent fixes are all somewhat-breaking changes, which is why it’s looking like the next release is going to be 0.18 instead of 0.17.5. It makes sense to get as many urgent breaking changes in as they can before publishing, and they’ve only had 8 days since the last release to identify, code, and test them.


  • For personal projects this is fine, but I’m curious why you feel the need to have every crate be the newest. Once you have it compiling, why upgrade dependencies at all unless you have to? Compiling a new binary is way more work than just running the one that is already compiled. You talk about minimizing build times with this method, but it isn’t clear why recompiling with newer dependencies is beneficial at all.

    Theoretically, every update to a crate is better than the last, but sometimes it’s just adding non-breaking features that you weren’t using anyway. You could just check for crate updates every once in a while, looking for performance gains or features you would like to make use of.