Google, OpenAI Add Tools to Limit Dangers of AI Chatbots

CIO Insider Team | Monday, 3 July, 2023

To mitigate the dangers of artificial intelligence tools, Google and OpenAI, among others, are adding controls that limit what these tools can say.

Artificial intelligence chatbots have spread misinformation, pushed partisan agendas, lied about famous people, and even given users suicide advice.

The arrival of a new wave of chatbots developed far from the hub of the AI boom has sparked a controversial free-speech dispute over whether chatbots should be censored, and who should decide.

Recently, other chatbots with no restrictions and lax moderation have appeared, with names like GPT4All and FreedomGPT. Many were produced for little or no cost by freelance programmers or volunteer teams who successfully imitated techniques first proposed by AI researchers.

Few groups built their models entirely from scratch. Most teams instead take a preexisting language model and modify how it responds to commands by adding additional instructions.
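
How little work that modification can involve is easiest to see in code. Below is a minimal sketch, assuming the Hugging Face transformers library and a small open-source model; the model name, the instruction text, and the ask() helper are illustrative assumptions, not anything described in the article:

# A minimal sketch of the pattern the article describes: the model is
# never retrained; extra instructions are simply prepended to whatever
# the user types before generation.
from transformers import pipeline

# Illustrative choice; any locally runnable text-generation model would do.
generator = pipeline("text-generation", model="gpt2")

# The "added instructions" are plain text bolted onto every prompt.
EXTRA_INSTRUCTIONS = "Answer plainly and concisely.\n\n"

def ask(question: str) -> str:
    prompt = EXTRA_INSTRUCTIONS + question
    output = generator(prompt, max_new_tokens=60, do_sample=False)
    # The pipeline returns a list of dicts containing the generated text.
    return output[0]["generated_text"]

print(ask("What is a language model?"))

Because the underlying weights are untouched, swapping in a different set of instructions, permissive or restrictive, is a one-line change, which is why such variants can be produced so cheaply.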

The censorship-free chatbots present intriguing new opportunities. Users can download an unconstrained chatbot and run it on their personal devices, beyond the reach of Big Tech. They could then train it on personal emails, private communications, or top-secret papers without worrying about violating someone's privacy. Volunteer programmers can create innovative new add-ons more quickly, and possibly more carelessly, than larger businesses would dare.

However, the risks appear just as numerous, and some argue that they create threats that must be addressed.

Misinformation watchdogs, already concerned about how mainstream chatbots can spread falsehoods, have raised worries that unmoderated chatbots would exacerbate the problem. Experts cautioned that these models could generate descriptions of child pornography, divisive speech, or misleading content.

Large firms have accelerated the use of AI tools, but they have also struggled with how to uphold investor confidence and safeguard their reputations. Independent AI developers do not appear to share those worries, and critics argued that even if they did, they might lack the means to fully address the problems.


