Amazon has joined Microsoft, Meta Platforms, and Alphabet in announcing AI initiatives: the company said its Amazon Web Services (AWS) cloud platform will offer new AI language models.

The product, named Amazon Bedrock, will enable clients to augment their software with text-generating AI systems similar to OpenAI's ChatGPT chatbot. The announcement signals that the largest provider of cloud infrastructure will not cede a fast-growing market to competitors such as Google and Microsoft, which have already begun offering developers access to large language models: AI programs trained on vast quantities of data to generate human-like text in response to user input.

Through its Bedrock generative AI service, Amazon Web Services will provide the following:

  • Access to its own Titan language models.
  • Language models from startups AI21 and Google-backed Anthropic.
  • A model for converting text into images from startup Stability AI.

One Titan model generates text for blog posts, emails, and other documents; the other produces embeddings that can power search and personalization. A sketch of generating text through Bedrock follows.
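
As a minimal sketch, assuming the boto3 "bedrock-runtime" client and a Titan text model ID (both are assumptions; the service was announced in preview, without public identifiers), generating text might look like this:

```python
import json
import boto3

# Assumes AWS credentials are configured and Bedrock is available in the region.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# "amazon.titan-text-express-v1" is an assumed model ID for the Titan text model.
response = client.invoke_model(
    modelId="amazon.titan-text-express-v1",
    body=json.dumps({
        "inputText": "Write a two-sentence product description for a smart kettle.",
        "textGenerationConfig": {"maxTokenCount": 200, "temperature": 0.5},
    }),
    contentType="application/json",
    accept="application/json",
)

# The response body is a stream of JSON; Titan text models return a
# "results" list with the generated text in "outputText".
result = json.loads(response["body"].read())
print(result["results"][0]["outputText"])
```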

Amazon Bedrock

Amazon Bedrock is a new service for building and scaling generative AI applications, which can generate text, images, audio, and synthetic data in response to prompts. Bedrock gives customers straightforward access to foundation models (FMs), the ultra-large ML models that generative AI relies on, from leading AI startups such as AI21, Anthropic, and Stability AI, as well as exclusive access to the Titan family of foundation models developed by AWS. No single model is suited to every task, so Bedrock offers FMs from multiple industry-leading providers, giving AWS customers the flexibility to choose the best model for their particular requirements; the sketch below illustrates that flexibility.
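
Because every model sits behind the same invoke_model call, switching providers is largely a matter of changing the model ID and the request/response format. The Anthropic model ID and prompt format below are assumptions for illustration:

```python
import json
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

# "anthropic.claude-v1" is an assumed model ID; Anthropic's text models
# expect a "Human:/Assistant:" prompt and return a "completion" field.
response = client.invoke_model(
    modelId="anthropic.claude-v1",
    body=json.dumps({
        "prompt": "\n\nHuman: Summarize the benefits of managed foundation models.\n\nAssistant:",
        "max_tokens_to_sample": 300,
    }),
    contentType="application/json",
    accept="application/json",
)

print(json.loads(response["body"].read())["completion"])
```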

AWS Inferentia chips

Ultra-large ML models require enormous computing power to run. AWS positions its Inferentia chips as delivering the highest energy efficiency and lowest cost for running demanding generative AI inference workloads (such as serving models and answering production queries) at scale on AWS.
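
As an illustrative sketch of what targeting these chips involves: the AWS Neuron SDK provides a torch_neuronx package that compiles a PyTorch model ahead of time for Inferentia's NeuronCores. The tiny network below is a stand-in, and this assumes an Inferentia-backed instance with the Neuron SDK installed:

```python
import torch
import torch_neuronx  # AWS Neuron SDK bridge for PyTorch on Inferentia instances

# A tiny stand-in network; in practice this would be a large generative model.
model = torch.nn.Sequential(
    torch.nn.Linear(128, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 10),
).eval()

example_input = torch.randn(1, 128)

# Compile the model ahead of time for the Inferentia NeuronCores.
neuron_model = torch_neuronx.trace(model, example_input)

# The compiled model is invoked like any PyTorch module when serving queries.
with torch.no_grad():
    output = neuron_model(example_input)
print(output.shape)
```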

AWS Trainium chips

Generative AI models must first be trained to produce the desired response, image, or insight. The new Trn1n instances (the server resources where this computation runs, in this case on AWS's custom Trainium chips) add massive networking bandwidth, which is essential for training these models quickly and cost-effectively. As a result, developers will be able to train models faster and more affordably, which should lead to more services powered by generative AI models.
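
As a hedged sketch of how training lands on these chips: AWS Neuron exposes Trainium through PyTorch/XLA, so a training loop targets an XLA device and steps the optimizer through xm.optimizer_step. The model and data below are placeholders, and this assumes a Trn1/Trn1n instance with the Neuron SDK installed:

```python
import torch
import torch_xla.core.xla_model as xm  # PyTorch/XLA, which AWS Neuron uses on Trainium

device = xm.xla_device()  # resolves to a Trainium NeuronCore on a Trn1/Trn1n instance

# Placeholder model and data standing in for a large generative model and corpus.
model = torch.nn.Linear(128, 2).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
inputs = torch.randn(32, 128).to(device)
labels = torch.randint(0, 2, (32,)).to(device)

for step in range(10):
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(inputs), labels)
    loss.backward()
    # Steps the optimizer and triggers execution of the accumulated XLA graph.
    xm.optimizer_step(optimizer)
```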

Amazon CodeWhisperer

Imagine being a software developer with an AI-powered coding companion that helps you write code faster and more easily. That is what Amazon CodeWhisperer does: it uses generative AI under the hood to produce real-time code suggestions based on a developer's comments and preceding code. Individual developers can use Amazon CodeWhisperer for free with no usage limits; paid tiers with features such as additional enterprise security and administrative capabilities are available for professional use.
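
CodeWhisperer runs inside the IDE rather than behind an API, so there is nothing to call programmatically; but as an illustration of the comment-driven flow, a developer might type the prompt comment below and accept a suggestion along these lines (the generated function is hypothetical, not captured output):

```python
import boto3

# Developer-written prompt comment:
# upload a local file to an S3 bucket

# A suggestion CodeWhisperer could plausibly produce (illustrative only):
def upload_file_to_s3(file_path: str, bucket: str, key: str) -> None:
    s3 = boto3.client("s3")
    s3.upload_file(file_path, bucket, key)
```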

Amazon says these products are only the beginning. Long-term thinking is one of the company's core values, and according to Amazon, we are in the earliest phases of a technological revolution that will continue for decades.
