• 0 Posts
  • 32 Comments
Joined 10 months ago
Cake day: November 22nd, 2023


  • Another Millennial here, so take that how you will, but I agree. I think Gen Z is very tech literate, but only in specific areas that don’t necessarily translate into the broader competencies we mean when we say “tech savvy” - especially when you start talking about job skills.

    I think Boomers especially see anybody who can work a smartphone as some sort of computer wizard, while the truth is that Gen Z grew up with it and were immersed in the tech, so of course they’re good with it. What they didn’t grow up with was having to type on a physical keyboard and monkey around with the finer points of how a computer works just to get it to do the thing, so of course they’re not as skilled at it.


  • Because we’re talking pattern-recognition levels of learning. At best, they’re the equivalent of parrots mimicking human speech. They take inputs and produce outputs based on statistical averages over their training sets - collaging pieces of their training data into what they “think” is the right answer. And I use the word “think” loosely here, since it’s the same kind of purely mechanical, statistical process that the Gaussian blur tool in Photoshop uses.

    This matters in the context of the fact that these companies are trying to profit off of the output of these programs. If somebody with an eidetic memory tried to sell pieces of works they’d consumed as their own - or even somebody copy-pasting bits from CliffsNotes - then they should get in trouble, and the same goes for these companies.

    Given A and B, a human can understand C. But an LLM will only ever give you AB, A(b), and B(a). And they’ve even been caught spitting out A and B wholesale, proving that they retain their training data and will regurgitate copyrighted material in its entirety.
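The “statistical averages” point above can be illustrated with a toy sketch. This is my own minimal example (a bigram model over a made-up three-sentence corpus - nothing like the scale or architecture of a real LLM), showing generation driven purely by counted patterns from the training set, with no understanding behind it:

```python
# Toy illustration (my own sketch, not how any real LLM is implemented):
# a bigram model that "generates" text purely from statistical counts
# over its training set, with no grasp of meaning.
from collections import Counter, defaultdict

training_set = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
]

# Count which word follows which in the training data.
follows = defaultdict(Counter)
for sentence in training_set:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1

def next_word(word):
    """Pick the statistically most common continuation - pure pattern matching."""
    return follows[word].most_common(1)[0][0]

# "Generate" a sentence by collaging the most frequent patterns together.
word, out = "the", ["the"]
for _ in range(5):
    word = next_word(word)
    out.append(word)
print(" ".join(out))  # fluent-looking, but meaningless: "the cat sat on the cat"
```

The output looks grammatical because the patterns it stitched together came from grammatical sentences, not because the model understands anything - which is the parrot analogy in a nutshell.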



  • The argument that these models learn in a way that’s similar to how humans do is absolutely false, and the idea that they discard their training data and produce new content is demonstrably incorrect. These models can and do regurgitate their training data, including copyrighted characters.

    And these things don’t learn styles, techniques, or concepts. They effectively learn statistical averages and patterns and collage them together. I’ve gotten to the point where I can guess which model of image generator was used based on the same repeated mistakes they make every time. Take a look at any generated image, and you won’t be able to identify where the light source is because the shadows come from all different directions. These things don’t understand the concept of a shadow or lighting; they just know that, statistically, lighter pixels are followed by darker pixels of the same hue, and that some places have collections of lighter pixels.

    I recently heard about an AI that scientists had trained to identify pictures of wolves, and it was working with incredible accuracy. When they went in to figure out how it was distinguishing wolves from dogs like huskies so well, they found that it wasn’t looking at the wolves at all. 100% of the images of wolves in its training data had snowy backgrounds, so it was simply searching for concentrations of white pixels (and therefore snow) to determine whether a picture was of a wolf or not.
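The wolf story is a classic case of spurious correlation. Here’s a minimal sketch of my own (synthetic numbers, not the actual study): a logistic-regression classifier is trained on two features, and because the “snow in the background” feature perfectly tracks the wolf label while fur color is pure noise, the model leans almost entirely on snow - and confidently mislabels a husky photographed in snow:

```python
# Toy illustration (my own construction, not the real wolf study):
# a classifier that "learns" wolves vs. huskies, but really just learns snow.
import numpy as np

rng = np.random.default_rng(0)
n = 200

is_wolf = rng.integers(0, 2, n)
# Feature 0: fur darkness - pure noise, the same for both animals.
fur = rng.normal(0.5, 0.2, n)
# Feature 1: fraction of white "snow" pixels - every wolf photo has snow.
snow = np.where(is_wolf == 1,
                rng.uniform(0.7, 1.0, n),   # wolves: snowy backgrounds
                rng.uniform(0.0, 0.3, n))   # huskies: no snow
X = np.column_stack([fur, snow])

# Train logistic regression with plain gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    grad = X.T @ (p - is_wolf) / n
    w -= 0.5 * grad
    b -= 0.5 * (p - is_wolf).mean()

print("weights [fur, snow]:", w)  # the snow weight dwarfs the fur weight

# A husky photographed in snow gets confidently called a wolf.
husky_in_snow = np.array([0.5, 0.9])
prob_wolf = 1 / (1 + np.exp(-(husky_in_snow @ w + b)))
print("P(wolf | husky in snow) =", round(prob_wolf, 3))
```

The model achieves high accuracy on its training data while attending to the wrong thing entirely - accuracy alone never told anyone it was a snow detector.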


  • So the way Tumblr works is that your account is basically a blog, with your home page on the site being populated with posts from the accounts that you follow. You can reblog posts onto your own account and comment on them to create individual conversation threads like this one. At one point, there was a bug in the edit post system that let you edit the entirety of a post when you reblogged it, including what other people had said previously, and even the original post. This would only affect your specific reblog of it, of course, but you could edit a post to say something completely different from the original and create a completely unrelated comment chain.









  • The short of it is: why is he making that much money in the first place, especially at a time when the games industry has seen record-breaking layoffs for the past 2 years - worse than during the 2008 financial crash?

    The long of it is that they’re symptoms of the same problem and show the ever-increasing wealth disparity between the aristocracy and the commoners in the US. In 2020, the wealth disparity in the US was said to be on par with France just before the French Revolution, when the price of a loaf of bread hit a full day’s wages for the average worker.

    To add to this, at least one of the people laid off was due to start scheduled maternity leave the next day, which is probably a violation of some workers’ rights law - but because the majority of states are “at-will” employment states, Bungie won’t face any consequences. The average time for people in the industry to find a new job is 2-4 months, and with all the layoffs, plenty of these people will never work in the industry again.

    And on top of that, these workers are already exploited so badly for their passion for making games that they could see a 50% or more pay increase, with lower responsibilities for the same skill set, just by changing industries. There are people working at Activision-Blizzard-King who are living out of their cars because they don’t get paid enough to afford rent within commuting distance of the studio.

    People are waking up to the fact that the boss makes 10 grand while we make a dime, and they’re getting pretty pissed about it.



  • But the fear isn’t that rational. It’s the fear that the cocktail in your example will replace the original vodka whether they want the cocktail or not, or that the vodka will be so diluted by the seltzer that it functionally ceases to exist.

    It’s like a fear of gentrification of the country as a whole.

    It’s also important to remember that the US is a huge exception in this regard. Most other countries are 90%+ native population, and immigrant populations tend to be somewhat isolated from the wider national culture due to things like language barriers, so they often set up little “bastions” of their native culture wherever they live.

    We see plenty of that in the US as well. While there are many distinctly US cultures across the country derived from a variety of backgrounds, there are tons of “enclaves” of European culture that make it blatantly clear where immigrants from certain countries settled. In Boston, the culture of Chinatown is distinctly unique and separate from the wider culture of the city, which largely has ties back to Ireland (and is very proud of it). And both of those are distinctly different from where the Italian immigrants settled - they effectively built their own districts of Italian-descended culture wherever they ended up.





  • That’s what I was thinking. Apart from the porn locked up in the Disney vault, big companies aren’t in the business of making porn, and the companies that do aren’t going to be interested in deepfakes. The people using Photoshop to create porn are small fries to Adobe. Deepfake porn has been around as long as photo manipulation has, and Adobe hasn’t cared before.

    Bearing that in mind, I don’t think this policy has anything to do with AI deepfakes or porn. I think it’s more likely to be about some new revenue source, like farming data for LLM training or something. They could go the Tumblr route and use AI to censor content, but considering Tumblr’s filter couldn’t tell the difference between the Sahara Desert and boobs, I think that approach is one fuck-up with a major company away from litigation hell. The only way deepfakes make sense as a motive for Adobe is if they believe governments are going to start holding companies liable for the content people make with their products.