
The World of Misinformation Around AI

These days we are inundated with headlines about AI, and I fear that they are just as likely to mislead us as they are to educate us. Let’s talk through the three types of headlines I see most often:


  • A company has done something incredible with AI

  • A company has made a terrible mistake with AI

  • A new AI model has made a SHOCKING advance



Doing something incredible

An example of a company doing something incredible is Moderna’s announcement that it is combining HR and IT so that it can smoothly decide which work should be done by humans and which by AI. The company says it has 3,000 custom-tailored ChatGPTs, which is astonishing for a company of 5,000 employees.


The positive lesson to take from this headline is that we should all invest in building our capability to use AI. We need workshops, some specialized AI talent, an AI governance committee, and so on. If this kind of headline encourages companies that have been slow to learn about AI to get started, that’s all to the good.


The problem with this kind of headline is that it vastly overstates the value of today’s AI tools and can create a feeling of panic as companies assume they are far behind. AI does add value today, but in most cases, it won’t make or break the company. The trick is to build the capability to use AI, not to rush into deploying half-baked solutions.


It’s also worth noting that Moderna lost nearly a billion dollars in the first quarter of 2025, and this dramatic AI news may simply be an attempt to distract investors from that dire financial state.


Making a terrible mistake

Rather embarrassingly, the Chicago Sun-Times newspaper used AI to prepare an article of suggested summer reading. The published list featured real authors, but the books didn’t exist. That’s not a good look for a newspaper.


More seriously, Air Canada’s AI chatbot gave a passenger incorrect information about a discount. When the company denied the passenger the discount, the passenger sued and won.


The positive lesson from this case is that you need to know what you are doing with AI. Perhaps these companies read too many headlines about “AI doing something incredible” and thought they should jump in.


The problem with this kind of headline is that it gives too much power to AI naysayers in the company who want to ban all uses of AI. The proper lesson is to build the capability to assess what these wonderful AI tools can do, not to ban their use.


AI’s SHOCKING new advances

One of my favourite AI YouTubers, Wes Roth, is infamous for headlines such as “AI Researchers SHOCKED After OpenAI's New o1 Tried to Escape.” He’s not alone in hyping each step forward in AI progress.


The positive lesson from this kind of headline is that AI is continuing to make important advances. Anyone who thinks they can take a break from learning about AI should see these headlines as a wake-up call. Companies need to lean in to build capability around all aspects of AI.


The problem with this kind of headline is that it is exhausting and overstates the practical importance of these advances. Google recently created tools so that AI-generated video can include voice. Technically, that’s a big step forward, but for the vast majority of organizations, that amazing new ability is of little value.


Summing up

In all cases, we need to dramatically tone down our response to the drama in AI headlines. Yes, some companies are making good use of AI, so we should develop our AI capabilities to seize opportunities. Yes, some companies make big mistakes with AI, so we should develop our AI capabilities to avoid those mistakes. Yes, AI is advancing, so we should continue to develop our AI capabilities rather than thinking we are already far enough along.


AI is incredibly significant; just don’t believe all the positive and negative hype.
