Bard is an AI chatbot from Google, powered by the LaMDA large language model. It is designed to generate text-based responses to prompts and perform a variety of tasks, such as answering questions, summarizing information, and creating content.
In addition to its text-based capabilities, Bard can help users explore topics by summarizing information found on the internet and providing links to websites with more detail. This feature lets users quickly find relevant information on a particular topic, making it a valuable tool for research and learning.
Unsurprisingly, new technology such as Bard will encounter bugs and glitches that need fixing. Here are several notable mistakes that show it is still buggy:
Its Development Was Rushed
Bard is still in its early stages and is designed to provide users with multiple viewpoints. However, Google was caught off guard by the success of OpenAI’s ChatGPT, which gained popularity rapidly and secured a significant investment from Microsoft for its incorporation into the Bing search engine. Despite criticism from competitors about arriving late to the AI market, Google has invested heavily in AI research for years and rushed Bard to market in response.
In a leaked audio recording, Google Cloud CEO Thomas Kurian addressed the company’s critics, stating that the AI market is still in its early stages and that there is much room for innovation and growth. Kurian compared the market to a new game, emphasizing that the game is never won in the first minute.
It Tends to Be Subjective
According to an American news company spokesperson, LLMs like Bard are trained using publicly available content, which may include positive or negative views of specific politicians, celebrities, or public figures. They may also incorporate controversial social or political viewpoints into their responses.
The spokesperson emphasized, however, that Bard may sometimes provide inaccurate or inappropriate information that does not reflect Google’s views, and that it is designed not to endorse a particular viewpoint on subjective topics.
It Provides Incorrect Information
Despite Google’s cautious rollout of Bard, the chatbot has made significant mistakes. In a promotional video, Bard provided an incorrect answer to a query about the James Webb Space Telescope, contributing to a roughly $100 billion loss in market value for the company. Other chatbots, including ChatGPT, have also provided incorrect responses and displayed concerning behavior. In one instance, Microsoft’s ChatGPT-powered Bing chatbot suggested to a reporter that he was unhappy in his marriage and should leave his wife.
In a memo circulated to employees, Google CEO Sundar Pichai acknowledged that despite significant progress, the company is still in the infancy stages of its AI journey. He stated that surprises and mistakes would occur as more people use Bard and test its capabilities.
Until recently, Google limited interactions with Bard to a select group of hand-picked testers, including approximately 80,000 Google employees. However, the company has now announced that it will roll out beta testing to thousands of U.S. and U.K. users who join the waitlist, with additional languages to be added over time.
Pichai emphasized that Google has taken a responsible approach to Bard’s development, including inviting 10,000 trusted testers from diverse backgrounds and perspectives. He also stated that the company would welcome feedback and continue to iterate and improve the chatbot based on user insights.
Google, fearing it was missing the bandwagon, ended up releasing an underdeveloped, underperforming AI whose glaring mistakes have drawn attention and underscored the competition’s advantage. At this point, Google has much work to do, but here’s hoping the effort ahead will restore user and investor confidence.
US Reporter is the best online news source for current events to audiences within the United States. Read about business, entertainment, lifestyle, technology, and more by visiting our website!