Finally Apple is ready to use all that training data they say they don’t collect.
Article says it’s likely an OpenAI partnership.
Then is this even a race?
Depends. If they get access to the code OpenAI is using, they could absolutely try to leapfrog them. They could also just be looking at ways to get near GPT-4 performance locally, on an iPhone. They'd need a lot of tricks, but succeeding there would be a pretty big win for Apple.
People are really racing to destroy the planet so their phone can make a crappy summary of what's on Wikipedia.
Not even a summary of what’s on Wikipedia, usually a summary of the top 5 SEO crap webpages for any given query.
Well yeah, but to be fair we now know exactly how much glue to put into our zesty pizza sauce.
Last I checked, iPhones don't have terabytes of RAM. Nothing that runs on a small battery-powered device is ever going to be in the ballpark of ChatGPT. At least not in the foreseeable future.
They don't, but with quantization and distillation, plus clever use of fast SSD storage (Apple published a paper on exactly this last year, "LLM in a Flash"), you can get a really decent model working on device. People are already doing this with models like OpenHermes and Mistral (granted, those are 7B models, but I could easily see Apple doubling the RAM, applying the optimizations from that paper, and getting 40B models running entirely locally). If the start of the network is good, a 40B model could take care of the vast majority of Siri queries without ever reaching out to a server.
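To make the quantization point concrete, here's a minimal PyTorch sketch (the layer sizes are illustrative stand-ins for one transformer MLP block, not anything Apple has shipped). Dynamic int8 quantization alone stores the Linear weights at a quarter of their fp32 size, and that's before distillation or the flash-offloading trick from the paper:

    import io
    import torch
    import torch.nn as nn

    # Stand-in for one transformer MLP block; a real LLM is mostly
    # stacks of Linear layers like these, so the savings scale up.
    model = nn.Sequential(
        nn.Linear(4096, 11008),
        nn.ReLU(),
        nn.Linear(11008, 4096),
    )

    # Dynamic quantization: weights are stored as int8 and activations
    # are quantized on the fly at inference time.
    quantized = torch.ao.quantization.quantize_dynamic(
        model, {nn.Linear}, dtype=torch.qint8
    )

    def size_mb(m):
        # Serialize to an in-memory buffer to compare footprints.
        buf = io.BytesIO()
        torch.save(m.state_dict(), buf)
        return buf.getbuffer().nbytes / 1e6

    print(f"fp32: {size_mb(model):.1f} MB, int8: {size_mb(quantized):.1f} MB")

Going below int8 (like the 4-bit GGUF builds people run locally) shrinks things further, which is how 7B models already fit in a few GB of phone-class RAM.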
For what it's worth, according to their WWDC keynote, that's basically what they're trying to do.
But every Apple user has assured me that iPhones are so much more secure, and that Apple isn't like mean ol' Google and toooootally doesn't collect all the same data from you.
They will also assure you that Apple totally doesn't collaborate with the CCP and give them full access to all Chinese users' data.
Apple users like to assure people of many things. :)