
IBM Drafts Policy to Outline Its Employees' Use of Third-Party Generative AI

At its AI Innovation Day event in Bengaluru on June 20, IBM announced that it is drafting a policy that will outline how its employees use third-party generative artificial intelligence (AI) products such as OpenAI's ChatGPT and Google's Bard.
Speaking on the rise of generative AI and how such tools are used for internal processes, Gaurav Sharma, vice president at IBM India Software Labs, said the company is evaluating the segment and its veracity, "since these tools are built on untrusted sources that can't be used." He added that a policy is "still being framed" around the use of generative AI applications such as ChatGPT.
Vishal Chahal, director of automation at IBM India Software Labs, also confirmed that the company is drafting an internal policy on the use of such technologies.
Although the policy is still being developed, no outright restrictions have been implemented.
IBM is not the first company to consider limiting ChatGPT usage.
Global financial institutions Goldman Sachs, JP Morgan, and Wells Fargo have reportedly banned internal use of ChatGPT over concerns that confidential client and customer information could end up in OpenAI's training data.
The announcement of IBM's policy comes in the wake of a report from the Singapore-based cybersecurity company Group-IB, which found that credentials for over 100,000 ChatGPT accounts had been stolen and sold on underground markets.
Explaining why such internal bans are taking place, Jaya Kishore Reddy, co-founder and chief technology officer at Mumbai-based AI chatbot developer Yellow.ai, said, "There are a lot of chances that generative AI tools can generate misinformation. There is an accuracy problem, and people may even misinterpret the generated information. Further, the data fed into these platforms are used to train and fine-tune responses — this may result in leakage of a company's confidential information."
Bern Elliot, vice-president and analyst at Gartner, said at the time, "It is important to understand that ChatGPT is built without any real corporate privacy governance, which leaves all the data that it collects and is fed without any safeguard."