No, it’s a huge one, because it’s the most likely application of AI. AI site moderation will be the start of AI digital policing, a field which risks growing larger and larger until it manifests as actual legal policing.
The part where they were saying they don’t like the current AIs they know about. Showing disapproval of the trend.
Try Venice AI, free to use, won’t try to censor your topics. Still just a chatbot though (although I think it does image generation too).
I think less restrictive AIs that are free, like Venice AI (you can ask it pretty much anything and it won’t stop you), will be around for longer than the ones that went with restrictive subscription models, and that eventually those other ones will become niche.
New technology always propagates further the freer it is to use and experiment with, and ChatGPT and OpenAI are quite restrictive and money hungry.
Do we know how human brains reason? Not really… Do we have an abundance of long chains of reasoning we can use as training data?
…no.
So we don’t have the training data to get language models to talk through their reasoning then, especially not in novel or personable ways.
But also - even if we did, that wouldn’t produce ‘thought’ any more than a book about thought can produce thought.
Thinking is relational. It requires an internal self-awareness. We can’t discuss that in text so thoroughly that a book suddenly becomes conscious.
This is the idea that “sentience can’t come from semantics”… More is needed than that.
Guarantee it would be a widely used substance if it wasn’t for the smell… People would be making sculptures out of it and fixing up cracks in their homes. It would be considered innocent and fun, and some would alter their diets to get a particular consistency.
Incredibly gross to us, and probably still unhygienic. Maybe that’s why it smells, to keep us away from it!
It’s time to take CEOs’ money away!
The problem is you have to know the approximate timeframe of your last visit - e.g. if it was in the past week it may not be searchable in other categories such as “more than 3 months ago”.
… likewise, if you can still find it by going to a currently open tab and hitting “back” enough times, it may not have been added to history yet.
Firefox’s history is a little idiosyncratic. One of the less polished parts of the browser.
I downloaded Librewolf today - the privacy-oriented fork of Firefox!
Good to see there are browser variants that aren’t just Chrome.
Trump is going to steal it. He’s going to commit fraud in front of everyone.
He’s Mr. Burns but less lovable and entertaining.
I agree, autocracy sucks!
There are already censorship-free versions of Stable Diffusion available. You can run them on your own computer for free.
https://en.m.wikipedia.org/wiki/Corporate_personhood
Corporate Personhood, all the rights of people, none of the responsibilities or jail time.
…and not a status an average person would grant them, but for some reason judges and politicians are just that anti-humanist.
How about something autonomous that makes choices of its own will, and performs long term learning that influences the choices it makes, just as a flat benchmark.
LLMs don’t qualify: they’re trained, they retain information within a conversation, then they forget it once the conversation is closed. They don’t do any long-term learning after their initial training, so they’re basically forever trapped in the mode of regurgitating within the parameters set by the training data at the time they were trained.
That’s just a very fancy way to search and read out the training data. Definitely not an active intelligence in there.
They also don’t have any autonomy, they’re not active of their own accord when they’re not being addressed. They’re not sitting there thinking, so they have no internal personal landscape of thought. They have no place in which a private intelligence can be at play.
They’re inert.
“What am I supposed to call LLMs if not calling them AIs?”
…really dude? They’re large language models, not artificial intelligences. So that’s what you call them. Because that’s what they are.
The fact that they came from research into artificial intelligence doesn’t factor in. Microwave ovens came from radar research, doesn’t mean we call them radars, does it?
Oof, programmers calling LLMs “AI” - that’s embarrassing. Glorified text generators don’t need ethics, what’s the risk? Making the Internet’s worst texts available? Who cares.
I’m from an era when The Anarchist Cookbook and the Unabomber’s Manifesto were both widely available - and I’m betting they still are.
There’s no obligation to protect people from “dangerous text” - there might be an obligation to allow people access to them though.
Amazon has a healthcare company now too…
…and they own twitch.
Good, it’s the only reliable sign of intelligent self-awareness there is, to the point that all children progress through it, starting out as bad liars, and getting better at it.
LLMs however might just be stupid, or stochastically incorrect.
The article makes it clear that the Chinese botnet is targeting Microsoft Azure accounts, usually for large organizations involved with governments, infrastructure, legal professionals, science, and technology.
It also states that the attacks can be disinfected by regularly restarting your router, but that this doesn’t prevent reinfection later.
The US intelligence services also say you should regularly restart your phone.
This is Microsoft’s posting about it which other news sources are quoting from: https://www.microsoft.com/en-us/security/blog/2024/10/31/chinese-threat-actor-storm-0940-uses-credentials-from-password-spray-attacks-from-a-covert-network/
It has a recommendations section which suggests “credential hygiene” and strong passwords help.