Google Gen AI Fails Aren't Stopping

It was a busy weekend for Google as they raced to contain yet another AI disaster.

Google can’t seem to catch a break as their AI troubles continue. 

This time, it was the new AI Overview feature in Google Search that was causing problems. The feature is supposed to provide generative AI-based answers to search queries on top of your search results.

In theory, this would be an awesome addition to the platform, which is facing heavy competition from the likes of ChatGPT, Anthropic, and Perplexity. But what happens when your product fails in a way that produces wrong, misleading, biased, and, in the worst case, dangerous answers?

You get a disaster on your hands and a product that is telling you:

  • To use nontoxic glue to keep the cheese from sliding off your pizza.

  • To eat one rock a day.

  • That Barack Obama was the first Muslim president.

  • To use black beans as a thermal paste for your computer.

  • That taro can be cooked in either 15-20 minutes or 1.5 hours (you can decide).

  • To smoke 2-3 cigarettes a day while pregnant.

Users on social media had a ball this weekend, sharing the most off-the-wall answers they could coax out of the AI. Exploiting features in this manner is very common; in fact, you can count on the average user to probe every use case and find your vulnerabilities. The issue is that these vulnerabilities need to be caught in testing; you don't want your core user base discovering them for you.

Google's Other Big AI Fails

To make matters worse, this isn’t the first time Google has failed in such a drastic manner. 

Google famously had to rebrand its chatbot, Bard, to Gemini after a disastrous launch. The company accidentally included an erroneous response in the product's promotional video, sending Google shares sliding.

Later, Gemini’s image generator ran into even more problems when it began producing images of diverse groups of people in historically inaccurate settings, including as German soldiers in 1943.

AI has a history of bias, and Google was trying to overcome this issue by introducing a wider diversity of ethnicities into its image generation. However, the company overcorrected, leading to inaccurate and, at times, offensive generated images.

The Big Problem

It looks like Google has a habit of releasing unfinished products early or, even worse, shipping "finished" products that are half-baked and riddled with inaccuracies.

Who can blame them? They have to compete with all of the startups taking over the scene. Nobody wants to fall behind in the AI rat race, especially a conglomerate like Google, which has dominated how we search for decades.

Controversy builds for any company that releases suboptimal products. It comes across as a low-quality effort, one that is not only worse than its startup competitors but also blatantly gives wrong and biased answers. This leads to a loss of trust and credibility for a brand that is slowly losing its grip on its core search business.

Can we trust the product? Right now, no. Google needs to work harder to overcome these challenges and release a product that actually holds up. The pattern of shipping flawed products and then explaining the flaws away is growing old, and it needs to stop before Google destroys the brand it has built so well.
