Why does ChatGPT creator OpenAI’s CTO fear AI can be misused and should be regulated?
OpenAI’s CTO fears AI can be misused. The company has also acknowledged that ChatGPT can give incorrect answers on multiple occasions and that the chatbot might even “produce harmful instructions or biased content”. This is a problem with generative language models in general, not just ChatGPT. According to reports, Mira Murati, CTO at OpenAI, the company that developed ChatGPT and Dall-E, is concerned that AI may be misused and “used by bad actors.” She adds that it is still possible for various stakeholders to get involved and that regulation might be required.
Murati said the company was pleased with the reception, while pointing out that it still faces difficulties with its wildly popular AI chatbot. She stated: “We didn’t expect to feel this much joy after bringing our child into the world. A big neural network that has been trained to predict the next word, ChatGPT is essentially a large conversational model. Its challenges are similar to those of the base large language models in that it may invent information.”
It’s interesting to note that a former Google employee expressed a similar worry about the company’s LaMDA, a competitor to ChatGPT. According to reports, the model occasionally produced stereotypical content that some people might consider racist or sexist.
The call for regulation comes as AI is being incorporated into a growing number of areas of our lives, including healthcare and finance. Both the risks and the potential advantages of AI are clear. By regulating it, we can ensure that AI’s advantages are realised while minimising its negative effects on society.
As we move forward with this potent technology, caution and responsibility are essential, as highlighted by OpenAI’s call for regulation of AI. Governments, business executives, and individuals must collaborate to make sure AI is used for society’s benefit rather than its detriment.
The post ChatGPT Creator OpenAI’s CTO Fears AI Can be Misused appeared first on Analytics Insight.