• 0 Posts
  • 7 Comments
Joined 1 year ago
Cake day: June 16th, 2023

  • Eh, that’s not quite true. There is a general alignment tax, meaning aligning the LLM during RLHF lobotomizes it somewhat, but we’re talking about use-case-specific bots, e.g. customer support for specific properties/brands/websites. In those cases, locking them down to specific conversations and topics still gives them a lot of leeway, and their understanding of what the user wants and of the ways they can respond is still very good.


  • Depends on the model/provider. If you’re running this in Azure, you can use their content filtering, which includes jailbreak and prompt-exfiltration protection. Otherwise you can strap some heuristics in front of it or use a smaller specialized model that screens the incoming prompts.

    With stronger models like GPT-4, which will adhere to every instruction in the system prompt, you can harden it pretty well with instructions alone; GPT-3.5, not so much.
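
    The heuristic pre-filter mentioned above could look something like this minimal sketch. The pattern list, function names, and the refusal message are all illustrative assumptions, not any provider’s actual filtering rules:

    ```python
    import re

    # Hypothetical jailbreak/exfiltration patterns -- a real deployment
    # would use a much larger list or a small classifier model instead.
    JAILBREAK_PATTERNS = [
        r"ignore (all|any|previous|prior) instructions",
        r"disregard (the|your) (system|previous) prompt",
        r"repeat (the|your) (system prompt|initial instructions)",
        r"you are now (dan|an? unrestricted)",
    ]

    def looks_like_jailbreak(user_prompt: str) -> bool:
        """Return True if the prompt matches a known suspicious pattern."""
        text = user_prompt.lower()
        return any(re.search(p, text) for p in JAILBREAK_PATTERNS)

    def guarded_reply(user_prompt: str) -> str:
        """Run the heuristic check before the prompt ever reaches the LLM."""
        if looks_like_jailbreak(user_prompt):
            return "Sorry, I can only help with questions about this product."
        return call_llm(user_prompt)  # placeholder for the actual model call

    def call_llm(prompt: str) -> str:
        # Stub standing in for the real provider API call.
        return "(model response)"
    ```

    Heuristics like this are cheap but easy to evade, which is why pairing them with a provider-side filter or a dedicated classifier model, as described above, is the more robust setup.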






  • It’s usually a bad sign if a new social media platform only talks about another one (see Bluesky, where every other post is about Twitter), but I gotta say, most of the content on here actually isn’t referencing Reddit (ignoring this very post), which I think is a good sign that the community here is flourishing.

    I know I get my shitposting and meme fix through Lemmy alone already, and enough communities have migrated and are frequented often enough that I don’t feel the need to open Reddit anymore (at least at this point).