As OpenAI’s multimodal generative AI model GPT-4o gains traction among users, tech experts have flagged issues related to data privacy and the possibility of cybercrime. Accountability for misuse and hallucinations, intellectual property and copyright disputes, voice cloning, and a lack of transparency in the use of data are among the other potential threats they have highlighted.
This assumes significance as there is currently no dedicated law in the country to deal with the flip side of generative AI models like GPT-4o. The proposed Digital India Act is expected to address such concerns but is yet to be enacted.
Lately, the government has also issued advisories asking generative AI companies to make their platforms public only after testing their models and checking for any bias or discrimination.
“As far as GPT 4o is concerned, this program is now capable of generating the latest unprecedented, unknown cyber security ramifications, infections, malware, and computer contaminants. We will now see more scam calls and messages,” said Pavan Duggal, Supreme Court lawyer and cyber law expert.
“When people tend to believe whatever they see as correct, disinformation, misinformation and mispropaganda are going to be key elements and challenges. Deepfake content is going to be guiding left, right and centre as far as GPT-4o is concerned,” Duggal said, adding that there is a need for a law to regulate artificial intelligence.
GPT-4o supports multiple input modalities, including text, audio, vision, and image, and can generate any combination of text, audio, and image as output. It works like an AI assistant, answering questions based on users’ prompts. On Thursday, OpenAI announced that GPT-4o’s advanced features would be available to users free of cost.
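For readers curious how such mixed inputs reach the model, here is a minimal sketch of assembling a text-plus-image request in the style of OpenAI’s chat-completions message format. The image URL is a placeholder and no actual API call is made; this is an illustrative sketch, not OpenAI’s definitive schema.

```python
# Hedged sketch: structuring a multimodal (text + image) prompt for GPT-4o
# in the chat-completions message style. Nothing is sent over the network;
# this only shows how the request body mixes input modalities.

def build_multimodal_request(text_prompt: str, image_url: str) -> dict:
    """Assemble a GPT-4o request body combining a text prompt and an image."""
    return {
        "model": "gpt-4o",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": text_prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

request = build_multimodal_request(
    "Describe this photo.", "https://example.com/photo.jpg"
)
print(request["model"])  # gpt-4o
```

In an actual application, this request body would be passed to the OpenAI client along with an API key, and the response could itself contain text or audio, which is what makes the model “omni”.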
One of the issues that surfaced soon after the launch of GPT-4o was an intellectual property dispute, when Hollywood actor Scarlett Johansson alleged that one of the model’s voices closely resembled her own, prompting OpenAI to pause that voice.
“If tomorrow, generative AI creates any sort of data, who is accountable for it, and if there is some sort of hallucination, who will be liable? Today, the issues are deepfakes and voice cloning; tomorrow there will be something else. Therefore, we need strict laws to regulate AI,” said Saakshar Duggal, a practicing advocate in the Delhi High Court.
AI hallucination refers to false or fabricated information generated by large language models (LLMs), which power AI chatbots.
According to Saakshar, GPT-4o requires access to users’ cameras for its vision capabilities, which means the model can see users and their surroundings to answer certain prompts, and that could be a privacy threat. He also called for transparency in how generative AI companies use data.
Similarly, the platform’s memory feature stores conversations and uses them to train the models, posing privacy risks, experts said.
Globally, the European Union has enacted an AI Act, and China also has laws regulating generative AI. Besides, New York has taken steps to prevent AI from perpetuating bias or interfering in elections. In India, there is no comparable dedicated law yet.
With regard to national security and data privacy, a policy expert noted that the data collected by global companies is currently stored on servers outside India and could be misused by foreign surveillance agencies.