• 1 Post
  • 13 Comments
Joined 1 year ago
Cake day: July 3rd, 2023

  • On the other side of the same coin: when I mass-edited my comments before quitting Reddit, I got site-banned. My first account’s automated edits got it auto-banned from several subs with pro-spez mods; those subs had configured their automod to detect the more popular auto-editing tools and ban anyone using them. Then when I did the same with my second (and third, and fourth, and fifth, etc…) account, it almost immediately got site-banned for ban evasion.

    Basically, account 1 was banned from a sub, so when account 2 started doing the same thing on the same IP address, it was flagged as ban evasion. And ban evasion is one of the few things that will get you banned site-wide instead of just from a specific sub.

    I went back and checked a few months ago, and all of those site bans had been lifted and the edits undone. Likely because a site ban hides the account’s comments (which hurts Reddit’s bottom line, since threads show up as a wall of [removed] instead), but it also prevented any of my edits from actually being published. So when they lifted the site bans (to get those old comments showing again), it was as if I had never edited them at all. I had probably a million karma spread across my various accounts and was extremely active at one point, so Reddit had a direct incentive to unban accounts with literal thousands of comments.


  • Yup. rand() picks a new random float for each entry; by default I believe it’s anywhere between 0 and 1. So it might divide the first bill by 0.76, the second by 0.23, the third by 0.63, etc… You’d end up with a completely garbage database, because you can’t even undo it by multiplying all of the numbers by one set value.
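
    A minimal sketch (Python, with invented bill amounts) of why that’s unrecoverable: every row gets divided by its own random factor, so no single multiplier can restore the originals.

    ```python
    import random

    bills = [100.00, 250.00, 75.50]

    # Each row is divided by a DIFFERENT random float, mimicking a per-row
    # rand() call in SQL. random.random() returns a value in [0, 1); hitting
    # exactly 0.0 is astronomically unlikely.
    corrupted = [b / random.random() for b in bills]

    # There is no single constant c with corrupted[i] * c == bills[i] for
    # every i, because each row used its own factor. The originals are gone
    # unless you logged the factors or have a backup.
    print(corrupted)
    ```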



  • It isn’t compressible at all, really. As far as a compression algorithm is concerned, it just looks like random data.

    Imagine trying to compress a text file. Each letter normally takes 8 bits to represent, and the computer reads 8 bits at a time to know which character to display. It has to read all 8 bits even when most of them are “empty,” simply because there’s no way of marking where one letter stops and the next begins; it’s all just 1’s and 0’s, so you can’t insert “next letter” flags in the stream. But we can cut that down.

    One of the easiest ways to do this is to count all the letters, then sort them from most to least common. Then we build a tree, with a fork at each character. You start at the top of the tree and follow it down: on a 0 you move down to the next fork, and on a 1 you stop and read the letter at your current fork. So for instance, if the letters are sorted “ABCDEF…” then “0001” would be D. Now D is represented with only 4 bits instead of 8. After reading the 1, you return to the top of the tree and start over. So “01000101101” would be “BDBAB”. Normally that sequence would take 40 bits to represent (because each character would be 8 bits long), but we just did it in 11 bits total. (There’s a short code sketch of this scheme after this comment.)

    But notice that this also has the potential to produce letters that are MORE than 8 bits long. If we follow that same pattern I listed above, “I” would be 9 bits, “J” would be 10, etc… The reason we’re able to achieve compression is that the more common (shorter) letters get used a lot and the less common (longer) letters only rarely.

    Encryption undoes this completely, because as far as compression is concerned the data is completely random. When you look at random data with no discernible pattern, counting the characters and sorting by frequency is a lesson in futility: all of the letters get used about equally, so even the “most frequent” characters are only ahead by a little bit of random chance. Even if the frequencies still happen to line up with my earlier pattern, the number of Z’s is so close to the number of A’s that the file ends up even longer than before. Remember, the compression only works when the most frequent characters are actually used far more often than the rest. Since plenty of characters now have codes longer than 8 bits, and those characters get used just as much as the short ones, the compression method fails and actually produces a file that is larger than the original. (The second sketch below shows this on real random bytes.)
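
    To make the tree-walk concrete, here’s a minimal Python sketch of the scheme described above (the n-th most common letter is written as n-1 zeros followed by a single 1). The sample string and function names are my own illustration; real compressors use Huffman coding, which picks codeword lengths from the actual frequencies rather than just the rank order, but the idea is the same.

    ```python
    from collections import Counter

    def build_code(text):
        # Rank symbols from most to least common; rank n gets n zeros then "1".
        ranked = [sym for sym, _ in Counter(text).most_common()]
        return {sym: "0" * rank + "1" for rank, sym in enumerate(ranked)}

    def encode(text, code):
        return "".join(code[sym] for sym in text)

    def decode(bits, code):
        # Every codeword ends at its first "1" (the "read the letter" step),
        # so we can decode one bit at a time with no ambiguity.
        by_bits = {v: k for k, v in code.items()}
        out, cur = [], ""
        for b in bits:
            cur += b
            if b == "1":
                out.append(by_bits[cur])
                cur = ""
        return "".join(out)

    text = "BDBAB" + "AB" * 10  # mostly A's and B's, like real prose
    code = build_code(text)
    bits = encode(text, code)
    print(code)                 # {'B': '1', 'A': '01', 'D': '001'}
    print(len(bits), "bits, versus", 8 * len(text), "bits at 8 bits per letter")
    assert decode(bits, code) == text
    ```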
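
    And a quick sketch of the “random data doesn’t compress” point, using Python’s zlib as a stand-in for any general-purpose compressor (exact byte counts will vary from run to run):

    ```python
    import os
    import zlib

    english = b"the quick brown fox jumps over the lazy dog " * 500
    random_data = os.urandom(len(english))  # stands in for well-encrypted data

    # Repetitive text shrinks dramatically; random bytes come out slightly
    # LARGER than the input, because the compressor finds no skewed
    # frequencies or repeats to exploit and still pays its header overhead.
    print("text:  ", len(english), "->", len(zlib.compress(english)))
    print("random:", len(random_data), "->", len(zlib.compress(random_data)))
    ```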



  • Not only that; you have to pay for updates too. Supposedly it’s because Apple takes time to verify that the app is legit and not going to do nefarious things: they don’t want a bad actor to get a legit app onto the store, then later push an update that infects everyone with a virus.

    But apparently a company did a study and found that app review rarely made it past the main page, with testers spending roughly 15-20 seconds per app. They’d basically open it, and if it looked like it did what it claimed, they didn’t bother digging any deeper.



  • The downside is ease of use. Not everyone wants to set up a Mastodon feed or a Lemmy feed, and lots of users only want one specific type of post.

    For instance, I hate the Twitter-style microblog. I choose to use Lemmy because I specifically want to exclude Mastodon posts from my feed.

    There’s also the issue of app development. Apps for Lemmy have undergone a lot of development in the past few weeks, while apps for kbin are basically non-existent. That could be solved with time and the right developer(s), but as it currently stands, a mobile user is better off using kbin in their browser. So if someone is looking for a more seamless transition from Reddit, the natural move is to Lemmy.