When Microsoft released Bing Chat, an AI-powered chatbot co-developed with OpenAI, it didn’t take long before users found creative ways to break it. Using carefully tailored inputs, users were able to get it to profess love, threaten harm, defend the Holocaust and invent conspiracy theories. Can AI ever be protected from these malicious prompts? What … Continue reading Can AI really be protected from text-based attacks?
Copy and paste this URL into your WordPress site to embed
Copy and paste this code into your site to embed