The US-based AI startup Anthropic has published a Responsible Scaling Policy (RSP), a series of technical and organisational protocols intended to help it manage the risks of developing increasingly capable AI systems.
The firm believes that the capabilities of AI models can create significant economic and social value alongside increasingly severe risks. Anthropic noted that its RSP concentrates predominantly on catastrophic risks: those where an AI model directly causes large-scale devastation. Such risks could arise from misuse of the models or from models that cause destruction by acting autonomously, contrary to their designers' intent.
The RSP outlines a framework called AI Safety Levels (ASL) for addressing catastrophic risks, modelled loosely on the US government's biosafety level (BSL) standards for handling dangerous biological materials. The core aim is to require safety, security and operational standards appropriate to a model's potential for catastrophic risk, with higher ASL levels demanding increasingly strict demonstrations of safety.
Some key properties of the framework are as follows: