The Federal Trade Commission's investigation of ChatGPT may shed light on how consumer protection law applies to artificial intelligence.
The FTC is especially concerned with whether OpenAI, the company behind ChatGPT, has violated consumer protection laws, according to a 20-page civil investigative demand (CID).
The FTC has had its sights set on AI for years, from biased outputs to overstated marketing claims. The ChatGPT probe, however, may yield an unprecedented level of disclosure from AI's poster child, which has so far said little about how ChatGPT is built and maintained. Although the FTC's policy is to conduct investigations nonpublicly, Section 6(f) of the FTC Act allows the Commission to "make public from time to time" portions of the information it obtains, such as where disclosure would serve the public interest.
The majority of the FTC's questions to OpenAI concern how the company collects, sources, and retains data, as well as how it trains ChatGPT and evaluates the accuracy and reliability of its outputs, including an explicit request for more information about OpenAI's process of "reinforcement learning through human feedback." The CID further requests that OpenAI provide a list of all data sources, including websites, third-party services, and data scraping technologies.
The FTC's inquiry centers on two consumer protection concerns: "reputational harm" and "privacy and data security." On the privacy and data security front, the Commission will review OpenAI's data-gathering techniques and data retention policies, including whether the company retains private customer information and how it mitigates prompt injection attacks. Unsurprisingly, the FTC is interested in OpenAI's privacy practices. After all, the Commission has brought numerous privacy and data security enforcement actions in recent years, particularly against internet companies.
Opening an investigation does not always result in significant disclosures. Nonetheless, this is the closest the public has come to peering behind the curtain of ChatGPT. Depending on how the OpenAI probe plays out, the outcome could be an operationalized framework for algorithmic transparency in the United States.
Note: This article was first posted on The Regulatory Review, a website run by the Penn Program on Regulation.