• 2 Posts
  • 100 Comments
Joined 1 year ago
Cake day: July 18th, 2023

  • > vyatta and vyatta-based (edgerouter, etc) I would say are good enough for the average consumer.

    WTF? What galaxy are you from? Literally zero average consumers use that. They use whatever router their ISP provides, whatever is currently advertised in tech media, or whatever is sold at retailers.

    I’m not talking about budget routers. I’m talking about the software running on ALL consumer routers. It’s all dogshit: closed-source, burn-and-churn firmware that barely receives security updates even while the hardware is still in production.

    > Also you don’t need port forwarding and ddns for internal routing. … At home, all traffic is routed locally

    That is literally the recommended config for consumer Tailscale and any mesh VPN. Do you even know how they work? The “external dependency” you’re referring to (their coordination servers) basically operates like DDNS, supplying the DNS and routing information that lets mesh clients find each other. Beyond that, all comms are P2P, including LAN access.
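    A toy sketch of that coordination model (illustrative only, not Tailscale’s actual protocol; all names here are made up): clients register their current endpoint with a coordinator, peers look each other up, and everything after that is direct P2P.

    ```python
    # Toy coordination-server-as-DDNS sketch (illustrative only, NOT
    # Tailscale's real protocol). The coordinator maps node names to
    # current endpoints; it never relays the actual traffic.
    registry: dict[str, str] = {}  # node name -> "ip:port", like a DDNS zone

    def register(name: str, endpoint: str) -> None:
        """A node phones home whenever its endpoint changes."""
        registry[name] = endpoint

    def lookup(name: str) -> str | None:
        """A peer resolves a node name to its current endpoint."""
        return registry.get(name)

    register("homeserver", "203.0.113.7:41641")  # stays put
    register("laptop", "198.51.100.23:41641")    # roams between networks

    # The laptop asks the coordinator where homeserver is right now, then
    # opens an encrypted tunnel directly to that endpoint (P2P from here on).
    print(lookup("homeserver"))  # -> 203.0.113.7:41641
    ```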

    Everything else you mention is moot because Tailscale, Nebula, etc. all have open-source server alternatives that are far more robust and foolproof than rolling your own VPS and WireGuard mesh.

    My argument is that the LAN itself, with all the “smart” devices and IoT surveillance-capitalism spyware on it, is the weakest link, and relying on mesh VPN software to carve out an isolated overlay network is significantly more secure than relying on open LAN access handled by consumer routers.

    Just because you’re commenting on selfhosted on Lemmy doesn’t mean you should recommend the most complex and convoluted approach, especially if you don’t even know how the underlying tech actually works.


  • What is the issue with the external dependency? I would argue that consumer routers have near-universally shit security, that networking is too complex for the average user, and that there’s greater risk in opening ports and provisioning your own VPN server (on consumer software/hardware). The port forwarding and DDNS that approach requires are essentially “external dependencies” too.

    Mesh VPN clients are all open source. I believe Tailscale is currently implementing a feature where new devices can’t connect to your mesh without pre-approval from your own already-authorized devices, even if they pass external authentication and 2FA (removing the dependency on Tailscale’s servers for granting authorization, post-authentication).


  • Programs have “learned” how to play games without instruction (most recently, rat neurons playing Doom), and that’s what they’ll attempt to do with LLMs. Such a system must learn from feedback, experience, and interaction, because we could never hand-code something that complex.

    I don’t believe LLMs can achieve general AI, but humans are just organic pattern-recognition devices at the end of the day: a brain in an organic machine that can sense only a fraction of the world around it.

    The problem is that LLMs are dumb, and thus dangerous when given autonomy. They’ll be used to wage war, and the military-industrial complex is more likely to destroy us with autonomous LLM killbots than to achieve general AI.




  • I’ll bite, too. The reason the status quo allows systemic wage stagnation for existing employees is very simple. Historically, the vast majority of employees do not hop around!

    Most people are not high performers and will settle for job security (or the illusion of it) and sunk costs over the opportunity of making 10-20% more money elsewhere. Most people don’t build extensive networks, hate interviewing, and hate the pressure and uncertainty of having to establish themselves in a new company. Plus, once you have a mortgage or kids, you don’t have the time or energy to job hunt and interview, let alone the savings to cover lost income if the job transition fails.

    Obviously this is a gamble for businesses, and it can turn out foolish for high-skilled, in-demand roles (we’ve all seen products stagnate and get destroyed by competition). But the status quo also means that corporations are structured, managerially and financially, toward acquisition, so the data they capture to make decisions, and the decision makers themselves, neglect the fact that their business is held together by the 10-30% of underappreciated, highly experienced staff.

    It’s essentially the exact same reason companies offer the best deals to new customers, instead of rewarding loyalty. Most of the time the gamble pays off, and it’s ultimately more profitable to screw both your employees and customers!



  • I believe this is what some compression algorithms can exploit if you compress the similar photos into a single archive. It sounds like that’s what you want: archive each day’s photos together, have Immich cache the thumbnails, and only decompress the originals when you view them at full resolution. Maybe benchmark an algorithm like zstd against a group of similar photos compressed together vs individually?
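    A rough sketch of that benchmark (assumes `pip install zstandard`; the photo directory is hypothetical):

    ```python
    # Does compressing similar photos together ("solid") beat compressing
    # each file individually? Compare total compressed sizes.
    from pathlib import Path
    import zstandard

    photos = sorted(Path("photos/2023-07-18").glob("*.jpg"))  # hypothetical path
    blobs = [p.read_bytes() for p in photos]

    cctx = zstandard.ZstdCompressor(level=19)
    original = sum(len(b) for b in blobs)
    individual = sum(len(cctx.compress(b)) for b in blobs)
    solid = len(cctx.compress(b"".join(blobs)))

    print(f"original:   {original} bytes")
    print(f"individual: {individual} bytes ({individual / original:.1%})")
    print(f"solid:      {solid} bytes ({solid / original:.1%})")
    ```

    If you test with the zstd CLI instead, its `--long` mode enlarges the match window, which is what helps most when the redundancy is spread across many files.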

    FYI, file deduplication works on content hashes: only exact, bit-for-bit identical files share the same hash, so near-identical photos won’t dedupe.
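    A minimal illustration of that (stdlib only; the directory is hypothetical):

    ```python
    # Group files by the SHA-256 of their bytes; only exact binary
    # duplicates end up in the same group.
    import hashlib
    from collections import defaultdict
    from pathlib import Path

    def file_hash(path: Path) -> str:
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
                h.update(chunk)
        return h.hexdigest()

    groups: dict[str, list[Path]] = defaultdict(list)
    for p in Path("photos").rglob("*.jpg"):  # hypothetical directory
        groups[file_hash(p)].append(p)

    for digest, paths in groups.items():
        if len(paths) > 1:
            print(digest[:12], [str(p) for p in paths])
    ```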

    Also, modern image and video codecs are already about as heavily optimized as computer scientists can currently achieve on consumer hardware, leaving almost no redundancy behind, which is why re-compressing a jpg or mp4 offers negligible savings and sometimes even increases the file size.
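    Easy to verify with the stdlib (the file name is hypothetical):

    ```python
    # Re-compressing an already-compressed JPEG barely shrinks it, and
    # the extra container overhead can even make it larger.
    import zlib
    from pathlib import Path

    data = Path("photo.jpg").read_bytes()  # hypothetical file
    recompressed = zlib.compress(data, level=9)
    print(f"original:     {len(data)} bytes")
    print(f"recompressed: {len(recompressed)} bytes "
          f"({len(recompressed) / len(data):.1%} of original)")
    ```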



  • All of which are heavily based on open-source software, donations, and, in the case of Wikipedia, user-generated and user-moderated content.

    The solution is not centralization; it’s decentralization. A decentralized internet archive could not be held to account, or taken down, by any individual government. It would remain active and fault-tolerant as long as enough users kept enough storage allocated to maintain replication and redundancy. One architected with zero-knowledge encryption as the backbone (e.g. IPFS + I2P) could even operate within the jurisdiction of hostile governments.
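    The property that makes this work is content addressing. A simplified sketch of the idea (this is not the actual IPFS CID scheme, which adds multihash and encoding metadata):

    ```python
    # Content is named by its hash, so any node holding the same bytes
    # serves the same "address", and replication is just more nodes
    # pinning the same hash.
    import hashlib

    def content_address(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    page = b"<html>an archived public webpage</html>"
    addr = content_address(page)

    # Two independent mirrors storing the same bytes advertise the same
    # address; content fetched from either can be verified against it.
    mirror_a = {addr: page}
    mirror_b = {addr: page}
    assert content_address(mirror_a[addr]) == addr == content_address(mirror_b[addr])
    print(addr)
    ```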


  • Exactly. This is why the Internet Archive should be a universal, publicly funded endeavor. It’s just as important as the world’s libraries.

    I’m really hoping the Internet Archive shifts to some distributed, P2P-style model (IPFS, Tahoe-LAFS, etc.) where anyone can assign a hard drive as tribute, archive any public webpage to it, and have it replicated around the world while still accessible through a single protocol. You can’t stop the signal!