So they came up with the AI equivalent of the Linux nice command.
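For anyone unfamiliar with nice: it lets you lower a process's scheduling priority so other work outranks it, which is the analogy being drawn to system prompts outranking user prompts. A minimal Python sketch of the mechanism, using the standard-library `os.nice` wrapper (raising niceness is unprivileged; lowering it requires root):

```python
import os

# os.nice(increment) adjusts this process's niceness and returns the
# new value. Higher niceness = lower scheduling priority, i.e. the
# kernel favors other processes over this one.
before = os.nice(0)   # an increment of 0 just reads the current niceness
after = os.nice(5)    # ask the kernel to deprioritize us by 5
print(f"niceness went from {before} to {after}")
```

The analogy in the thread: instead of all "processes" (prompts) competing equally, one class of instruction now gets scheduling priority over another.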
I guess? I’m surprised that the original model was on equal footing with the user prompts to begin with. Why was the removal of the original training a feature in the first place? It doesn’t make much sense to me to use a specialized model just to discard it.
It sounds like a very dumb oversight in GPT, and it was probably long overdue for a fix.
A dumb oversight, but a useful method for identifying manufactured manipulation. Fixing it is going to make social media even worse than it already is.
Because all of these models are focused on text prediction/QA, the whole idea of “prompts” grew organically out of that functionality as they tried to make it something more useful/powerful. Everything from function calling to agents to this has just been bolted onto the foundation of LLMs.
It’s why this seems more like a patch than an actual iteration of the technology. They aren’t approaching it at the fundamentals.
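The "bolted on" point can be made concrete: whatever role hierarchy an API presents, everything is ultimately serialized into one flat text stream for a next-token predictor. A hedged sketch with made-up role tags (no vendor's actual template):

```python
# Illustrative only: the <|role|> delimiters here are invented, but the
# principle holds for real chat templates -- "system" vs "user" is just
# structure layered onto a single string of tokens.
def flatten(messages):
    """Serialize role-tagged messages into the single text string
    the underlying model actually completes."""
    return "".join(f"<|{m['role']}|>{m['content']}<|end|>" for m in messages)

chat = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize this article."},
]
print(flatten(chat))
```

Which is why "fixing" the hierarchy amounts to training the model to weight one region of that string over another, rather than changing anything fundamental about the architecture.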