Well, I’m no stockologist, but when your company has a perpetual sales backlog and a 15-year head start on the competition, that should lead to a pretty high valuation.
They’re not building them for themselves; they’re selling GPU time and SuperPods. Their valuation is high because there’s STILL a lineup a mile long for their flagship GPUs. I get that people think AI is a fad, and its public face may be, but there are thousands of GPU-powered projects going on behind closed doors that will consume whatever GPUs get made for a long time.
Your financial problems are not my concern!
The lesson there is: Spare no expense on your IT budget!
Nothing bad will happen, as long as they spare no expense.
There’s not much cost to S3-style object storage. You can self-host it on plain Linux (tools like MinIO serve the same API off an ordinary filesystem), and S3 replication is a de facto protocol standard.
Use object storage for media and backups, then use S3 replication to keep a copy somewhere else.
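On AWS specifically, cross-bucket replication is a bucket-level setting. A minimal sketch of the replication configuration (bucket names, account ID, and role name are placeholders; versioning must be enabled on both buckets):

```json
{
  "Role": "arn:aws:iam::111122223333:role/s3-replication-role",
  "Rules": [
    {
      "ID": "media-and-backups",
      "Status": "Enabled",
      "Priority": 1,
      "Filter": {},
      "DeleteMarkerReplication": { "Status": "Disabled" },
      "Destination": {
        "Bucket": "arn:aws:s3:::example-backup-bucket"
      }
    }
  ]
}
```

You’d apply it with something like `aws s3api put-bucket-replication --bucket example-media-bucket --replication-configuration file://replication.json`, and S3 then copies new objects to the destination bucket on its own.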
So a really big death ray? Perhaps some sort of doomsday device?
So you’re saying a space-orbiting death ray would cover an area large enough to generate solar power, eh? HEY ELON!
Golly, thanks, Apple. It’s not like I can go buy a 256GB DIMM right now. 16GB? What a joke.
So they shouldn’t lease buildings, or subscribe to water and power? Should they also not use document archival and storage services that have existed for decades?
If you have enough users and systems that this is a problem, then you should be centrally managing it. I get that you want to inventory what you have, but I’m saying you’re probably doing it wrong right now, and your ask is solved by a central IAM system.
It sounds like you’re looking for a centrally managed identity system, where credentials and access live in one place: directory services like Active Directory or LDAP, with SAML-based SSO layered on top where you need federation.
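As a sketch of what central lookup buys you, one directory query can enumerate every user and their group memberships across all systems that authenticate against it. Assuming a hypothetical AD domain controller at `dc01.example.com` and a read-only service account (all names here are placeholders):

```shell
# Query a hypothetical AD/LDAP directory for users and their groups.
# Server, bind DN, and base DN are placeholders for your environment.
ldapsearch -x -H ldap://dc01.example.com \
  -D "cn=svc-audit,ou=Service Accounts,dc=example,dc=com" -W \
  -b "dc=example,dc=com" \
  "(objectClass=user)" sAMAccountName memberOf
```

That’s the inventory you were after, generated from the source of truth instead of scraped from each system.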
Is there a benefit to doing CoW with Pandas vs. offloading it to the storage? Practically all modern storage systems support CoW snapshots. The pattern I’m used to (infra, not big data) is to leverage storage APIs to offload storage operations from client systems.
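For what it’s worth, they solve different problems: storage snapshots protect data at rest, while pandas CoW is about in-memory semantics, so derived frames stop silently mutating their parents. A minimal sketch of the pandas side (requires pandas; CoW is opt-in on 1.5–2.x and the default from 3.0):

```python
import pandas as pd

# Opt in to copy-on-write (default from pandas 3.0; opt-in on 1.5-2.x).
try:
    pd.set_option("mode.copy_on_write", True)
except Exception:
    pass  # newer pandas: CoW is already the default

df = pd.DataFrame({"a": [1, 2, 3]})
view = df["a"]      # no data copied yet
view.iloc[0] = 99   # the copy happens lazily, right here
print(df["a"].iloc[0])   # parent frame is untouched: still 1
```

So the storage array can’t help you here; this CoW is about correctness of in-process dataframe operations, not durability.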
Well, 1ms of latency is roughly 200–300km of distance (light in fiber covers about 200km per millisecond one-way), so unless something is really misconfigured or overloaded, or you’re across the country, latency shouldn’t be an issue. 10–20ms is normally the high-water mark for most synchronous replication, so you can go a long way before a protocol like DNS becomes an issue.
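The arithmetic, as a quick sketch (the 200 km/ms figure is light in fiber, one-way, ignoring routing and queuing overhead):

```python
# Back-of-the-envelope: light in fiber covers roughly 200 km per
# millisecond one-way (about 300 km/ms in vacuum). Real paths add
# routing, queuing, and serialization delay on top of this.
FIBER_KM_PER_MS = 200

def max_fiber_distance_km(one_way_latency_ms: float) -> float:
    """Upper bound on one-way fiber path length for a latency budget."""
    return one_way_latency_ms * FIBER_KM_PER_MS

print(max_fiber_distance_km(1))   # 200 km budget for 1 ms
print(max_fiber_distance_km(20))  # 4000 km at the sync-replication ceiling
```

In other words, even the conservative fiber figure puts a 10–20ms budget at continental scale.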
The whole point of Asimov’s Three Laws was to show that they could never work in reality, because they would be very easy to circumvent.
Don’t worry, companies found a way to get around Moore’s law: Buy more systems and build more datacenters.
You seriously think we’re going to slow down on infrastructure?
I find a lot of stuff uses Docker Compose, which works with Podman, but using straight Docker is easier, especially if nothing is web-facing.
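For what it’s worth, a compose file small enough to show the point (image and ports are arbitrary placeholders) runs unchanged under both engines:

```yaml
# docker-compose.yml — a minimal sketch; image and port are placeholders
services:
  app:
    image: docker.io/library/nginx:alpine
    ports:
      - "8080:80"
```

Both `docker compose up -d` and `podman compose up -d` accept this file; recent Podman ships a `compose` subcommand that delegates to an external provider such as `podman-compose` or `docker-compose`.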