Using ChatGPT for security, and an introduction to AI security
When an AI learns from data supplied by an attacking group, the values of its parameters (weights) shift, causing the AI to make different choices. Microsoft's Tay chatbot is the classic example: https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist
Alternatively, you can use an AI that does not continue to learn after deployment.
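To make the learning case concrete, here is a minimal sketch of training-stream poisoning, the dynamic behind the Tay incident. Everything in it is fabricated for illustration: a toy "toxicity filter" (a logistic regression updated online with SGD) first learns from honest traffic that a certain phrasing is unacceptable; then an attacking group floods the stream with the same phrasing labelled acceptable, and the weights drift until the model's choice flips.

import numpy as np

# Everything here is fabricated: a toy "toxicity filter" (logistic
# regression) that keeps learning online from every message it sees.
rng = np.random.default_rng(0)
dim = 16
w = np.zeros(dim)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def learn(x, label, lr=0.1):
    # One online SGD step on the logistic loss: the weight values shift
    # toward whatever label the stream supplies.
    global w
    w += lr * (label - sigmoid(w @ x)) * x

toxic = rng.normal(size=dim)  # stand-in feature vector for toxic phrasing

# Honest traffic teaches the model this phrasing is unacceptable (label 0)...
for _ in range(200):
    learn(toxic + 0.1 * rng.normal(size=dim), 0.0)
print(f"acceptability before the attack: {sigmoid(w @ toxic):.3f}")  # near 0

# ...then an attacking group floods the stream with the same phrasing
# labelled acceptable (label 1), and the weights drift until the choice flips.
for _ in range(1000):
    learn(toxic + 0.1 * rng.normal(size=dim), 1.0)
print(f"acceptability after the attack:  {sigmoid(w @ toxic):.3f}")  # near 1

Note that the attacker never touches the model itself; controlling enough of the data stream is sufficient.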
"Garbage In, Garbage Out"
You do not get something for nothing in generative AI: it is no more than the statistical correlations of the data fed into it. The behavior you observe is not an error; it is the mathematically logical result of extrapolating beyond the realm of applicability. The AI does not know reality. Language models, for example, know only language, just as image models do not know actual objects. When you cross your legs, you know that doing so does not sever the lower leg into two pieces, but an AI trained only on static images cannot know this. Hence image "hallucinations."

An attacker who controls the data from which an AI draws inferences can likewise bias the output: not only by explicitly computing a difference overlay, as in the famous panda example, but also by using prompt-engineering techniques to feed in data that make the AI "hallucinate" as if drugged.
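The overlay behind the panda example is typically computed with the Fast Gradient Sign Method (FGSM). Below is a minimal sketch, assuming a toy logistic-regression "classifier" with fabricated weights and a fabricated input: the gradient of the loss is taken with respect to the input rather than the weights, and every pixel is nudged a small step in the direction that increases the loss.

import numpy as np

# Everything here is fabricated: a 64-"pixel" input and a logistic
# regression standing in for an image classifier.
rng = np.random.default_rng(0)
w = rng.normal(size=64)                    # hypothetical classifier weights
x = 0.05 * w + 0.2 * rng.normal(size=64)   # a benign input, confidently class 1
y = 1.0                                    # its true label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(v):
    return sigmoid(w @ v)

# Gradient of the cross-entropy loss with respect to the INPUT, not the
# weights: the attack targets the data the model sees, not its training.
grad_x = (predict(x) - y) * w

# FGSM: step every pixel by epsilon in the loss-increasing direction.
# Epsilon is exaggerated for this toy scale; real attacks use far smaller
# steps that are imperceptible to humans.
epsilon = 0.2
x_adv = x + epsilon * np.sign(grad_x)

print(f"score on clean input:     {predict(x):.3f}")      # confidently class 1
print(f"score on perturbed input: {predict(x_adv):.3f}")  # flipped toward 0

The per-pixel change is bounded by epsilon and can be invisible to a human, yet the shifts add up across every dimension, which is how the perturbed panda image came to be classified as a gibbon.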