Their “manifesto”:
Superintelligence is within reach.
Building safe superintelligence (SSI) is the most important technical problem of our time.
We have started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence.
It’s called Safe Superintelligence.
SSI is our mission, our name, and our entire product roadmap, because it is our sole focus. Our team, investors, and business model are all aligned to achieve SSI.
We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead.
This way, we can scale in peace.
Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.
We are an American company with offices in Palo Alto and Tel Aviv, where we have deep roots and the ability to recruit top technical talent.
We are assembling a lean, cracked team of the world’s best engineers and researchers dedicated to focusing on SSI and nothing else.
If that’s you, we offer an opportunity to do your life’s work and help solve the most important technical challenge of our age.
Now is the time. Join us.
Ilya Sutskever, Daniel Gross, Daniel Levy
Lmao, no it’s not in reach.
More tech-bro bullshit just to get fools to invest and make him rich (which will work).
LLMs are a start. I can see them becoming the machine/human interface to a broad array of specialized applications. If we want to see true AI we’ll need to add efficient complexity, and perhaps, if we don’t want it to be contained exclusively in a Platonic-type realm, we’ll need to give our programs direct access to our physical one so they can explore it.
He better watch out for the piledriver
He was careful not to mention AI…
This honestly looks like a grift to get a nice salary for a few years on VC money. These are not random sales goons peddling shit they don’t understand. They don’t even bother to define “superintelligence”, let alone what they mean by “safe superintelligence”.
I find it hard to believe this wasn’t written with malicious intent. But maybe I am too cynical and they are so used to people kissing their asses that they think their shit doesn’t smell. But money definitely plays some role in this; they would be stupid not to cash in while the AI hype is hot.
There are very few people in the world who understand LLMs on as deep a technical level as Ilya.
I honestly don’t think there is much else in the world he is interested in doing other than working on aligning powerful AI.
Whether his almost anti-commercial style ends up accomplishing much, I don’t know, but his intentions are literal and clear.
What do you mean by anti-commercial style? I am not from North America, but this seems like pretty typical PR copytext for local tech companies. Lots of pomp, banality, bombast and vague assertions of caring about the world. It almost reads like satire at this point, like they’re trying to take the piss.
If his intentions are literal and clear, what does he mean by “superintelligence” (please be specific) and in what way is it safe?
Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.
This is the guy who turned against Sam for being too focused on releasing product. I don’t think he plans on delivering much product at all. The reason to invest isn’t to gain profit but to avoid losing to an apocalyptic event, which you may or may not personally believe in; many Silicon Valley types do.
A safe AI would be one that does not spell the end of humanity or the planet. Ilya is famously obsessed with creating what’s basically a benevolent AI god-mommy and deeply afraid of an uncontrollable, malicious Skynet.
I don’t consider tech company boardroom drama to be an indicator of anything (in and of itself). This is not some complex dilemma around morality and “doing the right thing”.
Is my take on their PR copytext unreasonable? Is my interpretation purely a matter of subjectivity?
Why should I buy into this “AI god-mommy” and “skynet” stuff? Guy can’t even provide a definition of “superintelligence”. Seems very suspicious for a “top mind in AI” (paraphrasing your description).
Don’t get me wrong, I am not saying he acts like a movie antagonist IRL, but that doesn’t mean we have any reason to trust his motives or ignore the long history of similar proclamations.
No, I applaud a healthy dose of skepticism.
I am anything but in favor of idolizing Silicon Valley gurus and tech leaders, but from Sutskever I have seen enough to know he is one of the few actually worth paying attention to.
Artificial Superintelligence, or ASI, is the step beyond AGI (artificial general intelligence).
The latter is equal or better in capacity to a real human being in almost all fields.
Artificial Superintelligence was defined (long before OpenAI was a thing) as transcending human intelligence in every conceivable way. At that point it is a fully independent entity that can no longer be controlled or shut down.
Thank you for the clarification regarding ASI. That still leaves the question of the definition of “safe ASI”, a key point that is emphasized in their manifesto.
To use your example, it’s like an early mass-market car industry professional (say in 1890) discussing road safety and ethical dilemmas on roads dominated by regular drivers and a large share of L4/L5 cars (with some of them being used as part-time taxis). I just don’t buy it.
Mind you, I am not anti-ML/AI. I am an avid user of “AI” (ML?) upscaling (specifically video) and, to a lesser extent, Stable Diffusion. While AI video upscaling is very fiddly and good results can be hard to get right, it is clearly on another level with respect to quality compared to “classical” upscaling algorithms. I was truly impressed when I was able to run my own SD upscale with good results.
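For the curious, a minimal sketch of what such an SD upscale run can look like, assuming Hugging Face’s diffusers library, a CUDA GPU, and the stabilityai/stable-diffusion-x4-upscaler checkpoint (file names and the prompt are placeholders, not from the original post):

```python
# Minimal Stable Diffusion upscaling sketch using Hugging Face diffusers.
# Assumes a CUDA GPU; file names and prompt are illustrative placeholders.
import torch
from diffusers import StableDiffusionUpscalePipeline
from PIL import Image

# Load the 4x upscaler checkpoint in half precision to save VRAM.
pipe = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Low-resolution input frame; the pipeline outputs 4x the resolution.
low_res = Image.open("frame_low_res.png").convert("RGB")

# A short prompt describing the content helps guide the upscale.
upscaled = pipe(prompt="a detailed photo", image=low_res).images[0]
upscaled.save("frame_upscaled.png")
```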
What I am opposed to is oligarchs, oligarch-wannabes, and shallow-sounding proclamations of grandiose this or that. As far as I am concerned it’s all bullshit, and they are all, to one degree or another, soulless ghouls that will eat your children alive for the right price and the correct mental excuse model (I am only partially exaggerating, happy to clarify if needed).
If one has all these grand plans for safe ASI, concern for humanity and whatnot, set up a public repo and release all your code under the GPL (and all relevant documentation, patent indemnification, no trademark tricks, etc.). Considering Sutskever’s status as AI royalty who is also allegedly concerned about humanity, he would be the ideal person to pull this off.
If you can’t do that, then chances are you’re lying about your true motives. It’s really as simple as that.
No need to clarify what you meant with the oligarchs; there’s barely any exaggeration there. Ghouls is quite accurate.
Considering the context of a worst-case scenario (hostile takeover by an artificial superior), which honestly is indistinguishable from general end-of-the-world doomerism prophecies but very much alive in Sutskever’s circles, I believe safe AI consists of the very low bar of
“humanity survives while AGI improves standards of living worldwide.” Of course, for this I am reading between the lines based on previously acquired information.
One could argue that if ASI is created, the possibilities become very black and white:
- ASI is indifferent to human beings and pursues its own goals, regardless of the consequences for the human race. It could even find a way off the planet and just abandon us.
- ASI is misaligned with humanity and we become but a resource, treated no differently than we have historically treated animals and plants.
- ASI is aligned with humanity and has the best intentions for our future.
In any of these scenarios it would be impossible to calculate its intentions, because by definition it is more intelligent than all of us. It is possible that some things we understand as moral may be immoral from a better-informed perspective, and vice versa.
The scary thing is we won’t be able to tell whether it is malicious and pretending to be good, or benevolent and trying to fix us. Would it respect consent if, say, a racist refuses therapy?
Of course, we could just as easily hit a roadblock next week and the whole hype could die out for another 10 years.
I don’t know about y’all, but a company called “safe super intelligence” sure doesn’t sound like it could ever do anything sinister. Should probably go ahead and let this one train on government databases.
My “not involved in human trafficking” t-shirt is raising a lot of questions already answered by my shirt.
Just noticed that the cropped image makes it look like he is doing a Nazi salute, and then the first sentence of their “manifesto” is “Superintelligence is within reach.” :)
The number of AI companies is slowly passing the number of cryptocurrencies. What’s gonna be the new flavor of the year?
I feel kind of bad commenting on his physical appearance, but as a guy who balded in the same pattern… fuckin shave your head, dude. Or, since you’re rich as fuck, spend the 10k for a transplant. It looks so bad, and not in a “wow it’s ugly but he can sure pull it off” kind of way. More like an “I never rescheduled the appointment I missed at the barber” kind of way.
Pull an Elon Musk move then.
Call me skeptical, but haven’t you guys watched Terminator 2?
These guys will end up blowing up their own facility with a hand detonator when Skynet becomes our overlord. You want to join that crowd?
I’m always going to self-host my shit. Tell you that right now.
Unfortunately, Terminator 2 is a bit childish and naive in its script.
More realistically, these individuals will try to fly away with oligarch Peter Thiel to his end-of-the-world bunker in New Zealand.
If in some fucked up reality this ever happens (IMO there are far more pressing problems in the world), I hope the New Zealanders will have a very long and unpleasant surprise in store for these individuals.