Securing Generative AI: Defending the Future of Innovation and Creativity
Thank you, Jordan, for this informative and helpful article! You know that I have a healthy suspicion of fads in just about any sphere, but with all the hype around LLMs, I worry that we are about to plumb the depths of the possible negative outcomes of Overreliance on LLM-generated Content and Inadequate AI Alignment.

We're already seeing the harms of other AI-based technology: facial recognition systems that fail to identify people of color, autonomous vehicles that fail to see pedestrians, and that lovely fictional "image enhance" feature finally realized, but with a serious racial bias. I worry that humans with the latest new toy are going to (as we always do) apply it to increasingly inappropriate use cases and cause widespread harm. I'd encourage anyone to consider the following questions before applying LLMs, or really any non-linear data processing (or AI) technology:
1. Does my use case require that the outcomes *always* be correct (e.g., autonomous cars or kill-bots), or is some error acceptable?
2. If my new AI-based system is sometimes wrong, will that cause more harm than the good it can do?
3. How much error is acceptable in my use case? (A rough back-of-the-envelope sketch follows this list.)
4. Is there some way I can prevent my new AI-based system from being convincingly wrong, and thus causing exceptional harm?
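To make questions 2 and 3 a little more concrete, here's a minimal back-of-the-envelope sketch of the tradeoff. This is my own illustration, not anything from the article, and every name and number in it is a hypothetical assumption:

```python
# Hypothetical sketch: weigh the expected good an AI system does against
# the expected harm of its errors. All numbers are made-up illustrations.

def expected_net_benefit(error_rate: float,
                         benefit_per_success: float,
                         harm_per_error: float) -> float:
    """Expected value per decision the system makes."""
    return (1 - error_rate) * benefit_per_success - error_rate * harm_per_error

# A 2% error rate may be fine when mistakes are cheap (say, a draft summarizer)...
print(expected_net_benefit(0.02, benefit_per_success=1.0, harm_per_error=10.0))    # positive
# ...but not when each error is catastrophic (question 1's "always correct" cases).
print(expected_net_benefit(0.02, benefit_per_success=1.0, harm_per_error=1000.0))  # negative
```

The point isn't the exact numbers; it's that the very same error rate flips from tolerable to unacceptable as the cost of a single convincing mistake grows, which is exactly what question 4 is getting at.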
JMH