Google opened Bard, its generative AI chatbot, to the public to "improve its systems" and compete with OpenAI's ChatGPT.
Google's limited release of Bard, its ChatGPT competitor, is a crucial step in the company's effort to recoup what many consider lost ground in the emerging race to deploy artificial intelligence. Users can join a waitlist at bard.google.com. Bard will first be available in the United States and the United Kingdom. However, Google says the rollout will be gradual and has not provided a timeline for full public access.
"We are expanding access to Bard in the United States and the United Kingdom, with additional nations to follow; it is an early experiment that enables collaboration with generative AI. Hope Bard inspires more innovation and curiosity and will improve with feedback," tweeted Google CEO Sundar Pichai.
Like OpenAI's ChatGPT and Microsoft's Bing chatbot, Bard presents users with a text box where they can ask questions on any topic. However, given these bots' well-documented tendency to fabricate information, Google emphasizes that Bard is a "complement to search" that users can use to draft text, bounce ideas off of, or simply chat with.
Kate Crawford, a researcher at Microsoft, highlighted a controversial response from Bard. She posted a screenshot of her conversation with Google's AI chatbot in which she asked about its training dataset. According to Bard, its dataset comes from a variety of sources. First, it listed publicly accessible datasets, including text and code drawn from web sources such as Wikipedia, GitHub, and Stack Overflow. Bard then listed Google's internal data, including Google Search, Gmail, and other third-party products and data. If Bard were in fact trained on Gmail data, that would be a severe violation of privacy.
Image Source: Twitter
However, Google has denied the claim and clarified its position on Twitter. Replying to Crawford's tweet, Google wrote, "Bard is an early experiment based on Large Language Models, and it will make mistakes. It is not trained on data from Gmail."
Conclusion
As with ChatGPT and Bing, there is a critical notice under the primary text box telling users that "Bard may display inaccurate or offensive information that does not represent Google's views" - the AI equivalent of "abandon all trust, ye who type here."
A generative AI tool such as Bard or ChatGPT will not always deliver factually accurate information. AI tools frequently fabricate content and present it as fact. Even the companies behind these tools acknowledge as much: the Google Bard website cautions visitors that the AI chatbot may not always provide accurate information, and OpenAI, the maker of ChatGPT, said the same when it introduced the GPT-4 language model, the successor to the GPT-3 models that power ChatGPT.