• 1 Post
  • 27 Comments
Cake day: September 6th, 2024


  • Full self-driving should only be implemented when the system is good enough to completely take over all driving functions, and it should only be available in vehicles without steering wheels. The Tesla solution of offering “self-driving” while relying on the cop-out of requiring constant user attention and feedback is ridiculous. Only when a system can truly drive 100% autonomously, at a level statistically far better than a human, should any kind of self-driving be allowed on the road. Systems like Tesla’s FSD officially require you to always be ready to intervene at a moment’s notice. Tesla knows its system isn’t ready for independent use yet, so it requires that manual input. But of course this encourages disengaged driving; no one actually watches the road closely enough to intervene at a moment’s notice. Tesla’s FSD imitates true self-driving, but it pawns the liability off on drivers by requiring them to pay attention at all times. This should be illegal. Beyond mere lane-assistance technology, no self-driving tech should be allowed except in vehicles without steering wheels. If your AI can’t truly perform better than a human, it’s better for humans to be the only ones actively driving the vehicle.

    This also solves the civil liability problem. Tesla’s current system has a dubious liability structure designed to pawn liability off on the driver. But if there isn’t even a steering wheel in the car, then liability must fall entirely on the vehicle manufacturer. The manufacturer is, after all, 100% responsible for the algorithm that controls the vehicle, and you should ultimately have legal liability for the algorithms you create. Is your company not confident enough in its self-driving tech to assume full legal liability for the actions of your vehicles? No? Then your tech isn’t good enough yet. There can be a process for car companies to subcontract out the payment of legal claims against them; they can hire State Farm or whoever to handle insurance claims. But ultimately, legal liability falls on the company.

    This also avoids criminal liability. If you only allow full self-driving in vehicles without steering wheels, there is zero doubt about who is in control of the car. There isn’t a driver anymore, only passengers. Even if you’re sitting in the seat that would normally be the driver’s seat, it doesn’t matter; legally, you are just a passenger. You can be as tired, distracted, drunk, or high as you like, and you’re not getting any criminal liability for driving the vehicle. The bright line is so clear - there is literally no steering wheel - that it is absolutely undeniable you have zero control over the vehicle.

    This actually works under the same theory as existing drunk-driving law. People can get ticketed for drunk driving for sleeping in their cars. Even if the cops never see you driving, you can get charged if they find you in a position where you could drunk drive. So if you have your keys on you while sleeping drunk in a parked car, you can be charged with drunk driving. But not having a steering wheel at all is the equivalent of not having the keys to the vehicle - you are literally incapable of operating it. And if you are not capable of operating it, you cannot be criminally liable for any crime relating to its operation.


  • I think we should indict Sam Altman on two sets of charges:

    1. A set of securities fraud charges.

    2. 8 billion counts of criminal reckless endangerment.

    He’s out on podcasts constantly saying that OpenAI is near superintelligent AGI, that there’s a good chance they won’t be able to control it, and that human survival is at risk. How is gambling with human extinction not a massive act of planetary-scale criminal reckless endangerment?

    So either he is putting the entire planet at risk, or he is lying through his teeth about how far along OpenAI is. If he’s telling the truth, he’s endangering us all. If he’s lying, he’s committing securities fraud against his shareholders. Either way, he should be in prison. I say we indict him for both simultaneously and let the courts sort it out.



  • “What is he trying to hide‽” I dunno, man. Maybe he recognizes that there’s a bunch of unhinged weirdos who are hellbent on stalking “Satoshi,” and he doesn’t want to be harassed?

    Forget being harassed. Honestly, being kidnapped is a serious concern. Whoever or whatever group Satoshi is, it’s estimated he, she, or they own something like a million bitcoins.

    Kidnapping is normally a pretty poor choice of crime for a criminal gang to undertake. It had its heyday back in the early 20th century, but as the FBI really got going, and we got better at tracking people down across state lines and internationally, kidnapping became much more difficult to pull off. Kidnapping someone - physically abducting them - is the easy part. Sending the family a ransom letter and collecting the money in a way that can’t be traced back to you? That’s a whole different matter. Getting that ransom into a form you can spend, all without getting caught? Nearly impossible in this day and age.

    But someone with a million bitcoins? It’s entirely possible that everything needed to access those funds lives within that one person’s skull: either the private keys themselves, or some way to access or generate them.

    Someone with that many bitcoins is at incredible risk of kidnapping by an organized crime outfit. We’re talking about $65 billion worth of assets that can be obtained by kidnapping one person and torturing them until they give up their private keys. Once you have the keys, the coins can be transferred to another wallet and washed through numerous transactions until they’re untraceable. And the poor bastard who gets kidnapped for this never leaves their captors alive.
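    (For scale: that $65 billion figure is just the commonly cited million-coin estimate multiplied by a bitcoin price of roughly $65,000. Both inputs are rough assumptions on my part, not verified holdings; here’s a minimal sketch of the arithmetic.)

    ```python
    # Back-of-envelope only: both inputs below are rough assumptions.
    satoshi_btc = 1_000_000   # commonly cited estimate of Satoshi's holdings
    usd_per_btc = 65_000      # assumed price implied by the $65B figure above
    total_usd = satoshi_btc * usd_per_btc
    print(f"~${total_usd / 1e9:.0f} billion")  # -> ~$65 billion
    ```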

    And even if they keep their keys in their home instead of in their head? Now they’re at risk of burglary, or of being held hostage during a nighttime break-in.

    Hell, even just being suspected of being Satoshi would be incredibly dangerous. That’s an even more horrifying scenario: imagine an organized crime outfit wrongly concludes you’re Satoshi, abducts you, and tortures you, demanding you hand over something you are simply incapable of providing…




  • Wouldn’t just keeping your phone in a metal box prevent it from communicating with anything? Keep it in the box and take it out only when you need it, and only in a location that isn’t sensitive. Or hell, just make a little sleeve out of aluminum foil. Literally wrapping your phone fully in foil should prevent it from connecting to anything. A tinfoil hat won’t serve as an effective Faraday cage for your brain, but a complete foil wrap around a phone should do the job. Even better, since it’s a phone, such a sleeve is easy to test: build it, put your phone inside, and try texting and calling it. If the phone is fully surrounded by a conductive material, it should be completely incapable of sending or receiving signals.





  • Something you should keep in mind is that being a monopoly is not illegal, and it never has been. If you make a great widget and, through honest competition, corner that widget market, that’s perfectly legal.

    What ISN’T legal is using your market power to engage in anti-competitive behavior. It’s not illegal for Apple to dominate the phone market. It is likely illegal for Apple to use that dominance to prohibit competing app stores from being installed on its phones. Apple operates in two distinct businesses - phone manufacturing and software retailing - and it is using its market dominance as a phone manufacturer to gain an unfair advantage as a software retailer.

    This is a pretty damning violation of federal antitrust law.


  • I say we indict Sam Altman for both securities fraud and 8 billion counts of reckless endangerment. He and other AI boosters are running around shouting that AGI is just around the corner, that OpenAI is creating it, and that there is a very good chance we won’t be able to control it and it will kill us all. Well, the way I see it, there are only two possibilities:

    1. He’s right. In that case, OpenAI is literally endangering all of humanity by its very operation, and the logical thing to do would be for the rest of us to arrest everyone at OpenAI, shove them in a deep hole where they never see the light of day again, and burn all their research and work to ashes. When someone says, “superintelligent AI cannot be stopped!” I say, “You sure about that? Because it’s humans that are making it. And humans aren’t bullet-proof.”

    2. He’s lying. This is much more likely. In that case, he is guilty of fraud: he’s making claims his company has no ability to achieve, and taking in billions in investor money based on those lies.

    He’s either a conman, or a man so dangerous he should literally be thrown in the darkest hole we can find for the rest of his life.

    And no, I REALLY don’t buy the argument that, if the tech allows it, superintelligent AI is just some inevitable thing we can’t choose to stop. The proposed methods for creating it all rely on giant data centers that consume gigawatts of energy. You’re not hiding that kind of infrastructure. If it turns out superintelligence really is possible, we pass a global treaty to ban it and simply shoot anyone who attempts to create it. I’m sorry, but if you are legitimately threatening the survival of the entire species, I have zero qualms about putting you in the ground. We don’t let people build nuclear reactors in their basements, and if this tech really is that capable and that dangerous, it should be regulated as strictly as nuclear weapons. If OpenAI really is trying to build a super-AGI, it should be treated no differently than a terrorist group attempting to build its own nuclear weapon.

    But anyway, I say we just indict him on both charges. Charge Sam Altman with both securities fraud and 8 billion counts of reckless endangerment. Let the courts figure out which one he is guilty of, because it’s definitely one or the other.



  • Bezos also has a rocket company. Plus there’s Richard Branson, and others. And then you have private jet travel, massive mega-yachts, and countless other extravagances. For a certain class of billionaire, a private rocket company is a vanity project for rich sci-fi nerds. Yes, these companies have done some really good technical work, but they only exist because their founders were willing to sink billions into them without any proof they would ever turn a profit.

    What you are missing is that as people’s wealth increases, their resource use just keeps going up and up. When people are wealthy enough, they use orders of magnitude more energy and resources than the average citizen of even a developed country. Billionaires have enough wealth that they can fly rockets just because they think rockets are cool, even with no real path to profitability.

    And no, the hypothetical of the robot skyscrapers is not “meaningless.” You just have a poor imagination. To have that type of world we only need one thing: a robot that can build a copy of itself from raw materials, or a series of robots that can collectively reproduce themselves from raw materials gathered in the environment. Once you have self-replicating robots, scaling up to that kind of consumption becomes easy. The only real limit on the total number of robots the planet can support is the total amount of sunlight available to power them all, as the rough sketch below illustrates.
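    To put a rough number on that “limited only by sunlight” point, here’s a quick back-of-envelope sketch. The per-robot power draw is a pure guess on my part; only the solar constant and Earth’s radius are standard figures:

    ```python
    import math

    # Rough ceiling on a solar-powered robot population.
    SOLAR_CONSTANT = 1361      # W/m^2, sunlight at the top of the atmosphere
    EARTH_RADIUS = 6.371e6     # meters
    WATTS_PER_ROBOT = 1_000    # hypothetical average draw per robot (pure guess)

    # Earth intercepts sunlight over its cross-sectional disk.
    total_watts = SOLAR_CONSTANT * math.pi * EARTH_RADIUS**2   # ~1.7e17 W
    print(f"Sunlight intercepted by Earth: {total_watts:.2e} W")
    print(f"Ceiling at {WATTS_PER_ROBOT} W/robot: {total_watts / WATTS_PER_ROBOT:.2e} robots")
    ```

    Even at a generous kilowatt per robot, that’s a ceiling on the order of a hundred trillion machines; energy, not headcount, is the binding constraint.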

    The real point isn’t the specific examples I gave. The point, which you are missing entirely, is that total resource use is a function of wealth and technological capability; raw population has very little impact on it. If our automation got a lot better, or something else made us much wealthier, we would see vast increases in total resource use even if our population were cut in half.


  • “The problem is too many people. If standard of living is to increase then the resource requirement is due to massive unsustainable population growth.”

    They’re both important. And crucially, people in developed countries use far more resources than those in undeveloped countries. Just look at the resource utilization of our richest people: we have billionaires operating private rocket companies! If orbital rockets somehow became cheap enough for the average person to afford, say through really, really good automation, we would have middle-class people regularly launching rockets into space and taking private trips to the Moon. Just staggering levels of resource use. And if advanced robotics let us build and maintain homes very cheaply, the average person would live in a private skyscraper. Imagine the average suburban lot, except with a 100-story tower built on it. If that sort of thing were cheap enough to build and maintain, it absolutely would become the norm.



  • Sure. But those works have affected the discipline of AI development collectively. There’s an entire field of study on AI ethics and alignment, but it’s shaped by the combined effects of many works and authors. Planet of the Apes really is unique in that it is the sole example anyone would bring up of why you shouldn’t experiment on apes to try to make them more intelligent.

    And to my knowledge, no one has attempted to engineer apes to be more intelligent. Obviously there is simply less economic drive to do so; it’s easier to be concerned about ethics when there’s no ready path to profitability. But if some geneticist tomorrow put out a paper proposing that we tinker with chimp DNA to make smarter chimps, I can guarantee you every single headline would reference Planet of the Apes. It’s similar to how you can’t write an article about resurrecting the woolly mammoth without throwing in a reference to Jurassic Park. Some singular works of fiction really do have a substantial effect on how the public understands an entire field of research.

    No one has ever actually tried it, though I assume there might be a lot to be gained scientifically by doing so; we could probably learn quite a lot about the evolution of language, and human evolution in general, by experimenting with engineering smarter apes. The lack of profit is obviously a big factor in why it hasn’t happened, but I guarantee you, accidentally creating Planet of the Apes would be on the mind of anyone seriously contemplating that sort of scientific endeavor.