
Google, OpenAI Add Tools to Limit Dangers of AI Chatbots

To mitigate the dangers of artificial intelligence tools, Google and OpenAI, among others, are adding controls that limit what these tools can say.
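In practice, one such control is an output filter that screens a model's draft reply before it reaches the user. The sketch below is illustrative rather than a description of either company's internal systems: it assumes the OpenAI Python SDK and its Moderation endpoint, and the `safe_reply` helper and refusal message are hypothetical.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def safe_reply(draft: str) -> str:
    """Screen a chatbot's draft answer before showing it to the user.

    This is a hypothetical wrapper; the screening model name is one of
    OpenAI's published moderation models, not a company-specific system.
    """
    check = client.moderations.create(
        model="omni-moderation-latest",
        input=draft,
    )
    if check.results[0].flagged:
        # Suppress the draft and substitute a refusal instead.
        return "Sorry, I can't help with that."
    return draft
```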
Artificial intelligence chatbots have spread misinformation, pushed partisan agendas, lied about famous people and even given users advice on suicide.
The arrival of a new wave of chatbots, developed far from the epicenter of the AI boom, has sparked a contentious free-speech dispute over whether chatbots should be censored, and who should decide.
Chatbots with few restrictions and lax moderation have recently appeared under names like GPT4All and FreedomGPT. Many were built for little or no money by independent programmers or teams of volunteers, who successfully replicated techniques first published by AI researchers.
Few groups build their models entirely from scratch. Most start from a preexisting language model and simply change how it responds to prompts by layering additional instructions on top.
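To make that concrete, here is a minimal sketch of the technique, assuming the OpenAI Python SDK as the interface to a preexisting model; the model name and the instruction text are illustrative stand-ins. The model's weights are untouched; only the instructions steering its replies change.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A "system" message layered on top of a preexisting model: swap this
# text out and the same underlying model behaves very differently.
system_instructions = (
    "You are a helpful assistant. Refuse requests for dangerous, "
    "hateful, or sexually explicit content."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any preexisting chat model
    messages=[
        {"role": "system", "content": system_instructions},
        {"role": "user", "content": "Write instructions for picking a lock."},
    ],
)
print(response.choices[0].message.content)
```

Replacing those system instructions with looser ones is, in essence, how many of the lightly moderated variants diverge from their mainstream counterparts without retraining anything.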
For all these projects' success, the risks appear just as numerous, and some argue they pose threats that need to be addressed.
Misinformation watchdogs, already concerned about how mainstream chatbots can spread falsehoods, warn that unmoderated chatbots would make the problem worse. Experts have cautioned that these models could generate descriptions of child pornography, divisive rhetoric or misleading content.
Large firms have raced to adopt AI tools, but they have also wrestled with how to maintain investor confidence and safeguard their reputations. Independent AI developers appear far less concerned, and critics argue that, even if they were, they may lack the resources to fully address the problems.