U.S. AI policy group sues OpenAI, saying its GPT-4 model threatens public safety

An AI policy organization in the United States has accused OpenAI's GPT-4 model of violating consumer protection rules, asking the US Federal Trade Commission to immediately bar OpenAI from launching new GPT models and to conduct an independent evaluation of the company's products.

The group, the Center for AI and Digital Policy (CAIDP), filed a complaint today arguing that the AI text generation tool launched by OpenAI is “biased, deceptive and risky to public safety” and violates consumer protection rules.

A recent high-profile open letter called for a moratorium on large-scale generative AI experiments. CAIDP President Marc Rotenberg is among its signatories, who also include AI researchers and OpenAI co-founder Elon Musk. Like the letter, the complaint calls for slowing the development of generative AI models and for stricter government regulation.

CAIDP pointed to possible threats posed by GPT-4, the generative text model OpenAI announced in mid-March. These include the model's potential to produce malicious code and biased training data that can perpetuate stereotypes or unfairly discriminate by race and gender in hiring, among other harms. The complaint also raises serious privacy concerns with OpenAI's product interface, such as a recently discovered bug that exposed ChatGPT users' chat histories, and possibly payment information, to other users.

OpenAI has publicly acknowledged the potential harms of AI text generation, but CAIDP argues that GPT-4 crosses the line into consumer harm and should trigger regulatory action. The group seeks to hold OpenAI accountable for violating Section 5 of the Federal Trade Commission Act, which prohibits unfair and deceptive trade practices. The complaint alleges that "OpenAI made GPT-4 available to the public for commercial purposes in full knowledge of these risks," and calls the model's confident fabrication of nonexistent facts a form of deception.

In the complaint, CAIDP asks the FTC to halt any further commercial deployment of GPT models and to require an independent evaluation before any future models are launched. It also calls for a publicly accessible reporting tool, similar to the one consumers use to submit fraud complaints, and asks the FTC to clarify its rules for generative AI systems, building on the agency's ongoing, but still relatively informal, research and evaluation of AI tools.

The FTC has previously expressed interest in regulating AI tools, warning that biased AI systems could prompt law enforcement action. At an event with the Justice Department this week, FTC Chair Lina Khan said the agency would look for signs that large incumbent tech companies are trying to crowd out competition.
