Google's AI chatbot Bard makes a factual blunder in its first demonstration.
Google unveiled its AI chatbot Bard, a rival to OpenAI's ChatGPT, on Monday, saying it will "become more freely available to the public in the coming weeks." But the bot isn't off to the best start: experts noted that Bard's very first demo contained a factual error.
Google shared a GIF of Bard answering the question: "What new discoveries from the James Webb Space Telescope can I tell my 9-year-old about?" One of Bard's three bullet points stated that the telescope "took the very first pictures of a planet outside of our own solar system."
Astronomers on Twitter pointed out that this is incorrect: the first image of an exoplanet was taken in 2004, as noted on NASA's website.
"For the record: JWST did not take 'the very first image of a planet outside our solar system,'" astrophysicist Grant Tremblay wrote on Twitter, adding that he was sure Bard would nonetheless be impressive.
Bruce Macintosh, director of the UC Santa Cruz Observatories, also caught the error. "Speaking as someone who imaged an exoplanet 14 years before JWST was launched, it feels like you should find a better example?" he tweeted.
In a follow-up tweet, Tremblay said: "I do love and appreciate that one of the most powerful companies on the planet is using a JWST search to sell their LLM. Awesome! But ChatGPT etc., while spookily impressive, are often *very confidently* wrong. It will be interesting to watch whether LLMs eventually learn to self-correct."
As Tremblay notes, a major problem with AI chatbots like ChatGPT and Bard is their tendency to confidently state incorrect information as fact. These systems frequently "hallucinate," inventing information outright, because they are essentially autocomplete systems.
Rather than querying a database of verified facts, they are trained on huge corpora of text and learn statistical patterns that predict which word most likely follows another in a given sentence. Because they are probabilistic rather than deterministic, one prominent AI professor has dubbed them "bullshit generators."
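To make the "autocomplete" point concrete, here is a deliberately tiny sketch of the idea: a bigram model that predicts each next word purely from how often word pairs appeared in its training text. This is a toy illustration of probabilistic next-word prediction, not how Bard or ChatGPT actually work internally (real LLMs use neural networks over vast corpora), and the corpus here is invented for the example.

```python
import random
from collections import defaultdict

# A made-up miniature "training corpus" for illustration only.
corpus = (
    "the telescope took images of a distant planet "
    "the telescope took spectra of a distant star"
).split()

# Count how often each word follows each other word.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev, rng=random):
    """Sample the next word in proportion to observed frequency."""
    options = counts[prev]
    words = list(options)
    weights = list(options.values())
    return rng.choices(words, weights=weights, k=1)[0]

# Generate a few words: the output looks fluent, but the model has
# no notion of truth -- only of which words tend to follow which.
word, sentence = "the", ["the"]
for _ in range(6):
    word = next_word(word)
    sentence.append(word)
print(" ".join(sentence))
```

Note that after "took" the model picks "images" or "spectra" at random, with no way to check which claim would be true; scaled up enormously, that same gap between fluency and factuality is what produces confident errors like the JWST one.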
Although the internet is already full of false and misleading information, Microsoft's and Google's push to use these models as search engines makes the problem worse: there, the chatbots' answers carry the self-assured authority of a would-be all-knowing machine.
Microsoft, which demoed its new AI-powered Bing search engine yesterday, has tried to preempt these concerns by shifting responsibility onto the user. "Bing is powered by AI, so surprises and mistakes are possible," reads the company's disclaimer, which asks users to check the facts and share feedback so the system can learn and improve.
A Google spokesperson told The Verge: "This highlights the importance of a rigorous testing process, something that we're kicking off this week with our Trusted Tester program. We'll combine external feedback with our own internal testing to make sure Bard's responses meet a high bar for quality, safety, and groundedness in real-world information."