- We have no conclusive evidence that gravity is propagated by particles. We currently think it very likely is, but we have not come up with a working model that quantizes gravity. You would win a Nobel Prize if you did that.
- Watch this
I saw this here as well haha. However, the last time I linked his videos, people didn’t seem to like them. Regardless, I guess it would be better to link his vids in the TLDR.
It’s so dumb, ugh. Getting the same power output as the sun would need a mirror with a MINIMUM surface area the size of the area on Earth it illuminates.
So say the use case is extending daylight time in Anchorage, Alaska during winter. You would need a mirror with a MINIMUM surface area that of Anchorage itself. Somehow, it would also need to be in an orbit that can reliably reflect light onto Anchorage at all times.
Then, it would most likely have to be in low Earth orbit, since putting it higher would require an even bigger mirror. But if you are in LEO, you are also moving incredibly fast. You would thus need an array of these super large mirrors.
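To put rough numbers on the “moving incredibly fast” point, here’s a back-of-envelope sketch (the altitude and city width are my own assumed values, not anything from the thread): a single mirror in LEO is only over a given city for a handful of seconds per pass.

```python
import math

# Back-of-envelope: how long is a LEO mirror actually over one city?
GM = 3.986e14        # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6    # mean Earth radius, m
ALT = 400e3          # assumed ISS-like altitude, m

r = R_EARTH + ALT
v_orbit = math.sqrt(GM / r)         # circular orbital speed, m/s (~7.7 km/s)
v_ground = v_orbit * R_EARTH / r    # speed of the ground track, m/s

city_width = 50e3                   # assumed ~50 km wide target area
pass_time = city_width / v_ground   # seconds the mirror is roughly overhead

print(f"orbital speed: {v_orbit / 1000:.1f} km/s")
print(f"time over the city per pass: {pass_time:.0f} s")
```

A few seconds of coverage per orbit is why a single mirror can’t work and you’d need a whole constellation of them.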
All of this for what? Something that an LED can do incredibly easily?
Yeah, I used an LLM for the TLDR. The article was too long for me to just sit there writing it haha
Yeah. I can’t see why people are defending copyrighted material so much here, especially considering that a majority of it is owned by large corporations. Fuck them. At least open-sourced models trained on it would do us more good than large corps hoarding art.
Honestly, I find it really hard to imagine a humanoid robot (at least without muscle-driven mobility) maintaining an aircraft, plumbing, and so on.
I can imagine the use case. I don’t see this tech getting anywhere close to maturity this decade, though. The amount of processing power needed for these tasks would be CRAZY, no? Computer vision plus motor skills, plus actual mobility on top of that. And what would the power source be? Definitely not a battery! Would it be a cable connected to a wall outlet or something?
What even is the purpose of humanoid robots? Is there even a use case for it?
The main problem is the definition of what “us” means here. Our brain is a biological machine guided by the laws of physics. We have input parameters (stimuli) and output parameters (behavior).
We respond to stimuli. That’s all that we do. So what does “we” even mean? The chemical reactions? The response to stimuli? Even a worm responds to stimuli. So does an amoeba.
There sure is complexity in how we respond to stimuli.
The main problem here is an absent objective definition of consciousness. We simply don’t know how to define consciousness (yet).
This is primarily what leads to questions like the one you raised just now.
Interesting perspective, although I don’t see how some of your points might add up. Regardless, thank you for the elaboration! :)
These things are like arguing about whether or not a pet has feelings…
Mhm. And what’s fundamentally wrong with such an argument?
I’d say it’s far more likely for a cat or a dog to have complex emotions and thoughts than for the human-made LLM to actually be thinking.
Why?
I’m in the camp that thinks the LLMs are by and large a huge grift (that can produce useful output for certain tasks) by virtue of extreme exaggeration of the facts, but maybe I’m wrong.
Why?
I too see how grifters use AI to further their scams. That’s the case with any new tech that pops up. It doesn’t make LLMs uninteresting, though.
I am not in disagreement, and I hope you won’t take offense at what I am saying, but you strike me as someone quite new to philosophy in general.
Nah no worries haha. And yeah, I am relatively new to philosophy. I’m not even that well read on the matter as I would like to be. :(
Personal philosophy
I see philosophy (what we mean by philosophy TODAY) as putting up some axioms and seeing how the logic follows. The scientific method differs in that its axioms have to be empirically tested.
I would agree with you on the personal-philosophy point when it comes to the ethics branch of philosophy. Different ethical frameworks always revolve around axioms that are untestable in the first place. Everything suddenly becomes subjective, with no capacity for objectivity. That is what makes this part of philosophy personal, imo.
As for other branches of philosophy, though (like metaphysics), I think it’s just a game of logic. It doesn’t matter who plays the game: assume an untested/untestable axiom, build upon it using logic, and see the beauty you’ve created. If the laws of logic are followed and the assumed axioms are the same, anyone can reach the same conclusion. So I don’t see this as personal, really.
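That axiom game can be made literal in a proof assistant. A toy sketch in Lean (the axiom names are my own invention, purely illustrative): once the axioms are fixed, the rules of logic force the same conclusion on everyone who plays.

```lean
-- Toy "metaphysics": untestable axioms, stated up front.
axiom MindIsPhysical : Prop
axiom PhysicalIsCausal : Prop
axiom h1 : MindIsPhysical                      -- assumed premise
axiom h2 : MindIsPhysical → PhysicalIsCausal   -- assumed rule

-- Given the same axioms, anyone following the rules reaches this.
theorem mind_is_causal : PhysicalIsCausal := h2 h1
```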
but i would suggest first studying human consciousness before extrapolating psychology from ai behavior
Agreed
Personally, I got my first taste of that knowledge pre-AI from playing video games
Woah that’s interesting. Could you please elaborate upon this?
ChatGPT says this itself. But why does an intention have to originate with ChatGPT itself? Our intentions are often trained into us by others. Take propaganda: political propaganda, corporate propaganda (advertisements), and so on.
It’s just because AI stuff is overhyped pretty much everywhere as a panacea to solve all of capitalism’s ills. It seems every other article, no matter the subject or demographic, is about how AI is changing/ruining it.
Agreed :(
You know what’s sad? Communities that look at this from a neutral, objective position (while still being fun) exist on Reddit. I really don’t want to keep using it though. But I see nothing like that on Lemmy.
No, that definition does not exclude dogs or apes. Both are significantly more intelligent than an LLM.
Again, it depends on what type of intelligence we are talking about. Dogs can’t write code. Apes can’t write code. LLMs can (and in my experience, not bad code for low-level tasks). Dogs can’t summarize huge pages of text. Heck, they can’t even manage a vocabulary of more than a few thousand words. All of this definitely puts LLMs above dogs and apes on that scale of intelligence.
Pseudo-intellectual bullshit like this being spread as adding to the discussion does meaningful harm. It’s inherently malignant, and deserves to be treated with the same contempt as flat earth and fake medicine should be.
Your comments are incredibly reminiscent of self-righteous Redditors. You make bold claims without providing any supporting explanation. Could you explain how any of this is pseudoscience? How does any of this not follow the scientific method? How is it malignant?
LLMs have no agency.
Define “agency”. Why do you have agency but an LLM doesn’t?
“Intentionally” doing anything isn’t possible.
I see “intention” as a goal in this context. ChatGPT explained that the goal was to make the conversation appear “natural” (which means human like). This was the intention/goal behind it lying to Alex.
A conscious system has to have some baseline level of intelligence that’s multiple orders of magnitude higher than LLMs have.
Does it? By that definition, dogs aren’t conscious. Apes aren’t conscious. Would you say they both aren’t self aware?
If you’re entertained by an idiot “persuading” something less than an idiot, whatever. Go for it.
Why the toxicity? You might disagree with him, sure. Why go further and berate him?
Logic
Please explain your reasoning.
For the same reason I can’t get a date with Michelle Ryan: it’s a physical impossibility.
Huh?
Exactly. Which is what makes this entire thing quite interesting.
Alex here (the interrogator in the video) is involved in AI safety research. Questions like “do the ethical frameworks of AI match those of humans?” and “how do we keep AI from misinterpreting inputs and doing something dangerous?” are very important to answer.
Following this comes the idea of consciousness. Can machine learning models feel pain? Can we unintentionally put such models into immense eternal pain? What even is the nature of pain?
Alex demonstrated that ChatGPT was lying intentionally. Can it lie intentionally for other things? What about the question of consciousness itself? Could we build models that intentionally fail the Turing test? Should we be scared of such a possibility?
Questions like these are really interesting. Unfortunately, they are shot down immediately on Lemmy, which is pretty disappointing.
Our brains aren’t really black boxes either. A little bit of hormone variation leads to incredibly different behavior. Does a conscious system HAVE to be a black box?
The reason I asked “do you” was the point I was trying to make: do you HAVE to understand (or not understand) the functioning of a system to determine its consciousness?
What even is consciousness? Do we have a strict scientific definition for it?
The point is, I really hate people here on Lemmy making definitive claims about anything AI related by simply dismissing it. Alex (the interrogator in the video) isn’t making any claims. He’s simply arguing with ChatGPT. It’s an argument I found to be quite interesting. Hence, I shared it.