
Google Bard: Achievements and Flaws



By: Ethan Shen


Google Bard is Google’s new AI chatbot, built to compete with OpenAI’s ChatGPT and Microsoft’s Bing chatbot. It is an LLM (large language model), an AI that learns by analyzing text from across the internet. However, misinformation and untrustworthy sources on the internet can sometimes cause the chatbot to make mistakes or make things up. That is why Google Bard is not intended to be used as a search engine.

Bard is mainly intended for casual use, designed to explore the endless possibilities of AI chatbots. It can write blogs, generate ideas, and respond to opinion questions. It also does not give the same answer every time. When asked what the most important event in American history was, Bard at first responded with a list of important events; when asked the same question later, it answered confidently that it was the American Revolution. This means the chatbot cannot be trusted to give a consistent answer, which is another reason not to use it as a search engine.


Google Bard also annotates some responses so you can view their sources. However, an annotation does not necessarily mean the source is reliable. When Bard wrote that the most important event in American history was the American Revolution, it cited a blog, “Pix Style Me”, which is written in both English and Chinese and is decorated with cartoon cats. This is clearly not a trustworthy source; an average blog post carries far less weight than textbooks or peer-reviewed papers.


However, even though it sometimes makes mistakes and cites untrustworthy sources, Google Bard is more cautious than its competitors. Bard refuses to give answers about specific people. Similarly, when asked about medical, legal, or financial matters, it declines to respond, recognizing the risk of giving incorrect information.


In conclusion, Google Bard is an interesting chatbot. It behaves differently than its competitors in how cautious it is about providing incorrect information. I guess we will just have to wait and see how the chatbot evolves.
