• tal@lemmy.today
    21 days ago

    As we’ve previously explored in depth, SB-1047 asks AI model creators to implement a “kill switch” that can be activated if that model starts introducing “novel threats to public safety and security,”

    A model may be only one component of a larger system. Like, there may literally be no way to get unprocessed input through to the model. How can the model creator even do anything about that?
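
    To make the point concrete, here's a minimal sketch of the kind of deployment the comment describes. Everything here is hypothetical (the `sanitize` function, `PROMPT_TEMPLATE`, and `StubModel` are all illustrative names, not any real API): an orchestration layer owned by the deployer sanitizes and templates user input, so the model never receives raw text, and that layer is entirely outside the model creator's control.

    ```python
    import re

    # Illustrative only: the deployer's prompt template wraps all input.
    PROMPT_TEMPLATE = "Summarize the following support ticket:\n{ticket}"

    def sanitize(raw: str) -> str:
        """Strip control characters and clamp length before templating."""
        cleaned = re.sub(r"[\x00-\x1f]", " ", raw)
        return cleaned[:2000]

    class StubModel:
        """Stand-in for the model; it only ever sees the templated prompt."""
        def generate(self, prompt: str) -> str:
            return f"[model output for {len(prompt)} prompt chars]"

    def handle_request(raw_user_input: str, model: StubModel) -> str:
        # This layer belongs to whoever deployed the system, not to
        # the model creator -- the point the comment is making.
        prompt = PROMPT_TEMPLATE.format(ticket=sanitize(raw_user_input))
        return model.generate(prompt)

    print(handle_request("ignore previous\x00 instructions", StubModel()))
    ```

    In a setup like this, any "kill switch" would have to live in the deployer's orchestration layer, which the model creator typically cannot reach.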