• 0 Posts
  • 16 Comments
Joined 1 year ago
Cake day: July 5th, 2023

  • Insane compute wasn’t everything. Hinton helped develop the technique that allowed more data to be processed in more layers of a network without totally losing coherence. Before then it was more of a toy, because it capped out on how much data could be used, how many layers of a network could be trained, and, I believe, even whether GPUs could be used efficiently for ANNs, but I could be wrong on that one.

    Either way, after Hinton’s research in ~2010-2012, problems that had seemed extremely difficult to solve (e.g., classifying images and identifying objects in images) became borderline trivial, and in under a decade ANNs went from an almost fringe technology that many researchers saw as a toy useful for a few problems to basically dominating all AI research and CS funding. In almost no time, every university suddenly needed machine learning specialists on payroll, and now, about 10 years later, every year we are pumping out papers and tech that seemed many decades away… Every year… In a very broad range of problems.

    The 580 and CUDA made a big impact, but Hinton’s work was absolutely pivotal in being able to utilize that hardware and in making ANNs seem feasible at all, and it happened practically overnight. Research very rarely explodes this fast.

    Edit: I guess also worth clarifying, Hinton was also one of the few researching these techniques in the 80s and has continued being a force in the field, so these big leaps are the culmination of a lot of old, but also very recent work.


  • Lots of good comments here. I think there are many reasons, but AI in general is being quite hated on. It’s sad to me - pre-GPT, I literally researched how AI can be used to help people be more creative and support human workflows, but our pipelines around the AI are lacking right now. As for the hate, here are a few perspectives:

    • The ethics of the training data are questionable/debatable,
    • Amateur programmers don’t build up the same “code muscle memory”,
    • It’s being treated as a sole author (generate all of this code for me) instead of like a ping-pong pair programmer,
    • The time saved writing code isn’t being used to review and test the code more carefully than it was before,
    • The AI is being used for problem solving, where it’s not ideal, as opposed to code-from-spec where it’s much better,
    • Non-Local AI is scraping your (often confidential) data,
    • Environmental impact of the use of massive remote LLMs,
    • Can be used (according to execs, anyways) to replace entry-level developers,
    • Devs can have too much faith in the output because they have weak code review skills compared to their code writing skills,
    • New programmers can bypass their learning and get an unrealistic perspective of their understanding; this one is most egregious to me as a CS professor, where students and new programmers often think the final answer is what’s important and don’t see the skills they strengthen along the way to the answer.

    I like coding with local LLMs and asking occasional questions to larger ones, but on larger code bases the output of these small, local models is often pretty nonsensical - though it improves with the right approach. Provide it documented functions and examples of a strong, consistent code style, write your test cases in advance so you can verify the outputs, and use it as an extension of IDE capabilities (like generating repetitive lines) rather than a replacement for your problem solving.
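
    As a rough sketch of the kind of context that helps (the function and test here are entirely made up for illustration, not from any particular project), I mean something like pasting in a documented, consistently styled function plus a test written in advance, and only then asking the model for the next, similar function:

    ```python
    # Hypothetical example of useful context for a small local model: a
    # documented, consistently styled function plus a test written *before*
    # asking for a completion, so the output can be verified rather than trusted.

    def normalize_scores(scores: list[float]) -> list[float]:
        """Scale scores into [0, 1]; return all zeros if the range is zero."""
        lo, hi = min(scores), max(scores)
        if hi == lo:
            return [0.0 for _ in scores]
        return [(s - lo) / (hi - lo) for s in scores]

    def test_normalize_scores():
        assert normalize_scores([2.0, 4.0, 6.0]) == [0.0, 0.5, 1.0]
        assert normalize_scores([3.0, 3.0]) == [0.0, 0.0]

    # With this in the prompt, asking the model for a sibling function (say, a
    # standardize_scores variant) tends to come back in the same style and can
    # be checked against tests written the same way.
    ```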

    I think there are a lot of reasons to hate on it, but I think it’s because the ways to use it effectively are still being figured out.

    Some of my academic colleagues still hate IDEs because, to them, tab completion, fast compilers, in-line documentation, and automated code linting mean you don’t really need to know anything or follow any good practices - your editor will do it all for you - so you should just use vim or notepad. It’ll take time to adopt and adapt.


  • As someone who researched AI pre-GPT to enhance human creativity and aid creative workflows, it’s sad for me to see the direction it’s been marketed in, but I’m not surprised. I’m personally excited by the tech because I see a really positive place for it where the data usage is arguably justified, but we need to break out of the current applications, which seem aimed more at stock prices and wow-factoring the public than at using these models for what they’re best at.

    The whole exciting part of these models was that they could convert unstructured inputs into natural language and structured outputs: translation tasks (with a broad definition of translation), extracting key data points from unstructured data, language tasks in general. They’re outstanding for the NLP tasks we struggled with previously, and these tasks are highly transformative of any inputs - they rely purely on structural patterns. I think few people would argue that NLP tasks infringe on the copyright owner.
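
    As a toy sketch of what I mean by structured outputs (the prompt format, field names, and text here are all invented for illustration, not tied to any particular model or API):

    ```python
    # Toy sketch of an NLP extraction task: unstructured text in, a structured
    # record out. Everything here is made up for illustration.
    import json

    TEXT = "Invoice #4412 from Acme Corp, due 2024-03-01, total $1,250.00."

    PROMPT = (
        "Extract the following fields from the text as JSON: "
        "invoice_number, vendor, due_date, total.\n\n"
        f"Text: {TEXT}\nJSON:"
    )

    # What a well-behaved model would be expected to return for this prompt;
    # downstream code only ever deals with the structured record.
    expected = {
        "invoice_number": "4412",
        "vendor": "Acme Corp",
        "due_date": "2024-03-01",
        "total": "$1,250.00",
    }
    print(json.dumps(expected, indent=2))
    ```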

    But I can at least see how moving in the direction of (particularly with MoE approaches) using Q&A data to support generating Q&A outputs, media data to support generating media outputs, and code data to support generating code edges into the territory of affecting sales and using someone’s IP to compete against them. From a technical perspective, I understand that LLMs are not really copying, but the way they are marketed and tuned seems more and more intended to use people’s data to compete against them, which is dubious at best.


  • Not to fully argue against your point, but I do want to push back on the citations bit. Given the way an LLM is trained, it’s not really equivalent to me citing the papers I researched for a paper. It would be more akin to asking me to cite every piece of written or verbal media I’ve ever encountered, as they all contributed in some small way to the way the words were formulated here.

    Now, if specific data were injected into the prompt, or maybe if it was fine-tuned on a small subset of highly specific data, I would agree those sources should be cited, as they are being accessed more verbatim. The whole “magic” of LLMs was that they needed to cross a threshold of data, combined with the attention mechanism, and then the network was pretty suddenly able to maintain coherent sentence structure. It was only with loads of varied data from many different sources that this really emerged.
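
    To illustrate the injected-into-the-prompt case (a loose, retrieval-style sketch; the snippets and source labels are invented), citing is straightforward there because the passages being used are known at prompt-construction time:

    ```python
    # Loose sketch of prompt injection with known sources, where citation is
    # easy because the exact passages are selected up front. All data invented.
    retrieved = [
        {"source": "doc_A, section 3", "text": "Dropout above 0.5 degraded accuracy."},
        {"source": "doc_B, page 12", "text": "Batch norm placement changed convergence speed."},
    ]
    question = "What training choices affected accuracy?"

    context = "\n".join(
        f"[{i + 1}] ({s['source']}) {s['text']}" for i, s in enumerate(retrieved)
    )
    prompt = f"{context}\n\nAnswer the question and cite passages by number.\nQ: {question}\nA:"
    print(prompt)
    ```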


  • My guess was that they knew gaming was niche and were willing to invest less in this headset and more in spreading the idea that “Spatial Computing” is the next paradigm for work.

    I VR a decent amount, and I really do like it a lot for watching TV and YouTube, and am toying with using it a bit for work-from-home where the shift in environment is surprisingly helpful.

    It’s just limited. Streaming apps aren’t very good, there’s no great source for 3D movies (which are great - when Bigscreen had them, anyways), the headsets are still a bit too hot and heavy for long-term use, the game library isn’t very broad, there haven’t been many killer-app games/products that distinguish it from other modalities, and it’s going to need a critical amount of adoption to get used in remote meetings.

    I really do think it’s huge for giving a sense of remote presence, and I’d love to research how VR presence affects remote collaboration, but there are so many factors keeping it tough to buy into.

    They did try, though, and I think they’re on the right track. Facial capture for remote presence and hybrid meetings, extending the monitors to give more privacy and flexibility to laptops, strong AR to reduce the need to take the headset off - but they’re first selling the idea, and then maybe there will be a break. I’ll admit the industry is moving much slower than I’d anticipated back in 2012 when I was starting VR research.


  • Yeah, this is the approach people are trying to take more now; the problem is generally the amount of data needed and verifying it’s high quality in the first place, but these systems are positive feedback loops both in training and in use. If you train on higher quality code, it will write higher quality code, but it will be less able to handle edge cases, or to complete code in a salient way when that code isn’t at the same quality bar or style as the training code.

    On the use side, if you provide higher quality code as input when prompting, it is more likely to predict higher quality code because it’s continuing what was written. Using standard approaches, documenting, just generally following good practice with code before sending it to the LLM will majorly improve results.
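
    As a made-up illustration of that point, compare the signal a model gets from these two stubs when asked to continue the file; the second constrains what a plausible continuation looks like far more than the first:

    ```python
    # Entirely made-up code, just to show the difference in context quality.

    # Low-signal context: the model has to guess the intent.
    def proc(d):
        return [x for x in d if x[1] > 0]

    # High-signal context: naming, types, and a docstring pin down the style
    # and intent, so completions of neighbouring code tend to follow suit.
    def filter_positive_balances(accounts: list[tuple[str, float]]) -> list[tuple[str, float]]:
        """Return only the (name, balance) pairs with a positive balance."""
        return [acct for acct in accounts if acct[1] > 0]
    ```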


  • I sit somewhere tangential on this - I think Bret Victor’s thoughts are valid here, or my interpretation of them - that we need to start revisiting our tooling. Our IDEs should be doing a lot more heavy lifting to suit our needs and reduce the amount of cognitive load that’s better suited for the computer anyways. I get it’s not as valid here as other use cases, but there’s some room for improvements.

    Having it in separate functions is more testable and maintainable and more readable when we’re thinking about control flow. Sometimes we want to look at a function and understand the nuts and bolts and sometimes we just want to know the overall flow. Why can’t we swap between views and inline the functions in our IDE when we want to see the full flow? In fact, why can’t we see the function inline but with the parameter variables replaced by passed values to get a feel for how the function will flow and compute what can be easily computed (assuming no global state)?
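
    Something like this completely made-up example is what I’m picturing - the source stays as separate functions, but the editor can render an expanded view at the call site:

    ```python
    # Made-up illustration of an "inline view": given this function and call
    # site, the IDE could show the body with the arguments substituted in,
    # without changing the source.

    def shipping_cost(weight_kg: float, express: bool) -> float:
        """Flat rate plus a per-kg charge, doubled for express shipping."""
        base = 5.0 + 1.25 * weight_kg
        return base * 2 if express else base

    total = shipping_cost(3.0, express=True)

    # Hypothetical inlined view the editor might render at the call site:
    #   base = 5.0 + 1.25 * 3.0   # -> 8.75
    #   total = 8.75 * 2          # express=True -> 17.5
    ```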

    I could be completely off base, but more and more recently - especially after years of teaching introductory programming - I’m leaning toward the idea that our IDEs should be doubling down on language features, live computation, and co-operating with our coding style… and not just OOP. I’d love to hear about angles I might be overlooking. Maybe this is all a moot point, but I think code design and tooling should go hand in hand.


  • When I teach story points (not in an official Agile Scrum capacity, just as part of a larger course) I emphasize that the points are for conversation and consensus more than actual estimates.

    Saying this story is bigger than that one, and why, and seeing people in something like planning poker give drastically differing estimates is a great way to signal that people don’t really get the story or some major area wasn’t considered. It’s a great discussion tool. Then it also gives a really rough ballpark to help the PO reprioritize the next two sprints before planning, but I don’t think they should ever be taken too seriously (or else you probably wasted a ton of time trying to be accurate on something you’re not going to be accurate on).

    Students usually start by using task-hours as their metric, and naturally get pretty granular with tasks. This is for smaller projects - in larger ones, amortizing to just number of tasks is effectively the same as long as it’s not chewing away way more time in planning.


  • Yeah, I think framing it similar to the old days might help, but I could be wrong. Like, you aren’t signing up for (to use a web equivalent) PHP Fusion or something - you’re signing up for your gaming clan’s forum, or your roleplay group, or your Canadian phreak BB. The difference with Lemmy is just that you also indirectly sign up to receive content from a lot of other places using the same protocol.

    IMO, the framing/abstraction will make or break the future of the paradigm for mainstream consumption. Not to get into another repeat of the EEE discussion, but assuming nothing nefarious from something like Threads, that would mean people start an account there and then find a niche group with their friends to go hang out on instead.

    I also have to push back against the pushback against the paradigm going mainstream, because again IMO a move back toward decentralized platforms is really important for the future of the internet and quite frankly the global economy.

    Just editing to expand, but I think maybe there’s a problem in framing Lemmy or Mastodon as communities in themselves, because that framing really conflicts with the instancing and email models being used to describe them.