At least with the more advanced LLMs (and I'd assume the same goes for stuff like image processing and generation), it takes a pretty considerable amount of GPU memory just to load the model at all, and then even more compute to actually spit something out. Some people have enough hardware to run the basics, but most laptops would simply be incapable. And very few people have the resources to get the kind of outputs the more advanced AIs produce.
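For a rough sense of scale, here's a back-of-envelope sketch of how much VRAM just the model weights take at a couple of common precisions. This counts weights only and ignores activations, the KV cache, and framework overhead, so real requirements run higher:

```python
# Rough VRAM estimate for holding model weights in GPU memory.
# Weights only -- activations, KV cache, and framework overhead
# push the real number higher.

def weights_vram_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate GB of VRAM needed just to hold the weights."""
    return params_billions * 1e9 * bytes_per_param / (1024 ** 3)

for name, params in [("7B", 7), ("13B", 13), ("70B", 70)]:
    fp16 = weights_vram_gb(params, 2.0)  # 16-bit weights
    q4 = weights_vram_gb(params, 0.5)    # 4-bit quantized
    print(f"{name}: ~{fp16:.0f} GB at fp16, ~{q4:.0f} GB at 4-bit")
```

Even a 7B model wants ~13 GB at fp16, which is already beyond most laptop GPUs, while a 70B model needs well over 100 GB unless it's aggressively quantized.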
Now, that's not to say local AI shouldn't be an option, or that it's fine for them to bake some remote AI into your proprietary OS that you can't remove without breaking the user license agreement. I'm just saying it's unfortunately harder to implement locally than we both probably wish it was.