Assuming we can get AGI. So far there’s been little proof we’re any closer to getting an AI that can actually apply logic to problems that aren’t popular enough to be spelled out a dozen times in the dataset it’s trained on. Ya know, the whole perfect scores on well-known and respected college tests, but failing to solve slightly altered riddles meant for children? Being literally incapable of learning new concepts is a pretty major pitfall if you ask me.
I’m really sick and tired of this “we just gotta make a machine that can learn and then we can teach it anything” line. It’s nothing new; people have been saying this shit since fucking 1950, when Alan Turing wrote it in a paper. A machine looking at an unholy amount of text and then evaluating, for a new prompt, which word is most likely to follow, IS NOT LEARNING!!! I was sick of this dilemma before LLMs were a thing, but now it’s just mind numbing.
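To be clear about what I mean by “which word is most likely to follow”: the core idea is the same as this toy sketch, just scaled up absurdly. This is obviously not how a real LLM is built (those use neural nets over subword tokens, not bigram counts); it’s only a hypothetical illustration of “predict the next word from statistics of text you’ve already seen”:

from collections import Counter, defaultdict

# Toy "training corpus" -- a stand-in for the unholy amount of text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word tends to follow which (a simple bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely word to follow `word`."""
    counts = following.get(word)
    if not counts:
        return None  # never saw this word, so no idea what comes next
    return counts.most_common(1)[0][0]

print(predict_next("the"))  # -> "cat" (followed "the" twice; "mat"/"fish" once)
print(predict_next("dog"))  # -> None ("dog" never appeared in the corpus)

The point: it can only ever echo patterns already present in the corpus, which is exactly why I don’t call it learning.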
I started programming at such a young age that I don’t even remember how it went. That makes it difficult to teach, since I find it hard to relate to newbies. I’m quite used to just learning by myself and sometimes hitting roads that lead nowhere. In the part I actually remember, I’ve only been learning new paradigms, deepening my understanding of low-level stuff, and mastering my craft. Hardly stuff I can pass along to a newbie.