Everyone remembers how Twitter turned Microsoft's debut chatbot into a racist, right?
How could Microsoft have done better?
I'd say they should have raised it like an actual child: keep it on a private account and screen out potential trolls until the AI had built up enough resilience to decide against being offensive on its own.
Your thoughts?
Reading:
https://en.wikipedia.org/wiki/Tay_(bot)
https://www.theverge.com/2016/…-microsoft-chatbot-racist
https://spectrum.ieee.org/tech…rs-of-online-conversation
https://twitter.com/tayandyou?lang=en
https://www.techrepublic.com/a…ts-tay-ai-bot-went-wrong/
One of the debut announcements: https://www.zdnet.com/article/…nches-ai-chat-bot-tay-ai/