
“Participants suggested that developers may only have a six- to nine-month advantage until others can reproduce their results. It was widely agreed upon that those on the cutting edge should use their position on the frontier to responsibly set norms in the emerging field,” the paper reads. “This further suggests the urgency of using the current time window, during which few actors possess very large language models, to develop appropriate norms and principles for others to follow.”

During a meeting held in October 2020 to consider GPT-3, two questions arose: “What are the technical capabilities and limitations of large language models?” and “What are the societal effects of widespread use of large language models?” Co-authors of the paper described “a sense of urgency to make progress sooner than later in answering these questions.”

OpenAI’s GPT-3, at 175 billion parameters, was the largest known language model until late last year, when Google released a trillion-parameter model. Large language models are trained on vast amounts of text scraped from sites like Reddit and Wikipedia, according to a report in VentureBeat. As a result, they have been found to contain bias against a number of groups, including people with disabilities and women. GPT-3, which is being exclusively licensed to Microsoft, appears particularly biased against Black people and Muslims. Large-scale language models could also perpetuate the spread of disinformation and potentially displace jobs.

“Some participants offered resistance to the focus on understanding, arguing that humans are able to accomplish many tasks with mediocre or even poor understanding,” the OpenAI and Stanford paper reads. Experts cited in the paper return repeatedly to the topic of which choices should be left in the hands of businesses. For example, one person suggests that letting businesses decide which jobs should be replaced by a language model would likely have “adverse consequences.”

The paper recommends stringent action, taken sooner rather than later, to curtail bias before these discriminatory datasets become mainstream.

