Case Study: Yes, chatbots are still part of your website.

Written By Scott Kubie

Founder and Director of Content Career Accelerator. Indie rock fan. East Coast midwesterner.

The chatbot did it. And the dog ate my homework!

Content people everywhere got to savor a morsel of schadenfreude this weekend thanks to a news story in the Guardian about a chatbot gone wild. I saw it, skimmed it, and popped off on LinkedIn, as is my habit.

I shared my incredulity that big companies – companies that, in my experience, are highly risk-averse, and who put their content people through legal and compliance wringers – are now adding generative AI tools to their websites that invent content and publish it instantly.

Turns out that’s not quite what happened in this particular case … but my incredulity stands, because it is happening.

Having dug into the details, however, I’ve learned that this whole thing is part of an older, longer, and, in 2024, even more embarrassing tradition of enterprise content mismanagement: forking. We’ll get to that in a moment.

First, the story. From the Guardian:

Air Canada ordered to pay customer who was misled by airline’s chatbot

Canada’s largest airline has been ordered to pay compensation after its chatbot gave a customer inaccurate information, misleading him into buying a full-price ticket.

Air Canada came under further criticism for later attempting to distance itself from the error by claiming that the bot was “responsible for its own actions”.

So a support chatbot on Air Canada’s website tells a guy he has 90 days to get a bereavement fare refund, which he understandably believes. That wasn’t actually the policy. Air Canada tells him to stuff it when he comes looking for his money. He has the receipts (screenshots, people!) and sues. Air Canada says it’s not responsible for what a chatbot says. The tribunal thinks that’s ridiculous, Air Canada loses, we all rejoice.

I and many others assumed it must have been a generative AI chatbot. Larry Swanson pointed out some analysis and basic detective work (looking at a calendar!) by Maaike Groenewege that shows it couldn’t have been ChatGPT, but was more likely an NLU (natural language understanding) bot. Not my area of expertise, but at a high level, NLU bots attempt to parse conversational speech and can reply with a pre-programmed facsimile of it. They’re the ones on support websites that keep giving you links to articles that don’t answer your question until you eventually reply AGENT over and over to get to chat with a person.

Which reminds me of a meme I made in 2020…

The distracted boyfriend meme. You’re the guy, your girlfriend is “Making your support content usable”, the hot girl walking by is “Building a chatbot”.

The tribunal wasn’t buying what Air Canada was selling. From the decision:

Air Canada argues it cannot be held liable for information provided by one of its agents, servants, or representatives – including a chatbot. It does not explain why it believes that is the case.

I know why they believe it’s the case: they’re not user-centered. The leaders who offered this defense don’t see the chatbot as part of their “official” website. They have a business-centered perspective on their sites and channels: separate projects, separate products, separate initiatives.

But customers don’t experience internal products, teams, or departments. Customers experience your website, and your brand, as a whole.

In effect, Air Canada suggests the chatbot is a separate legal entity that is responsible for its own actions. This is a remarkable submission.

“This is a remarkable submission” is legalese for “Are you fucking kidding me?”

While a chatbot has an interactive component, it is still just a part of Air Canada’s website. It should be obvious to Air Canada that it is responsible for all the information on its website. It makes no difference whether the information comes from a static page or a chatbot.

Love it. I want to buy these guys a drink! 🍻 I have to repeat my favorite bit:

…it is still just a part of Air Canada’s website.

This should be obvious to Air Canada and every other company, but it often is not. It’s why I give a talk called *What Even Is a Website?* (hire me!), exploring the vast differences in the mental models different stakeholders hold about what feels to everyone like a simple and singular thing: “the website”.

Again from the decision, emphasis mine:

While Air Canada argues Mr. Moffatt could find the correct information on another part of its website, it does not explain why the webpage titled “Bereavement travel” was inherently more trustworthy than its chatbot. It also does not explain why customers should have to double-check information found in one part of its website on another part of its website. There is no reason why Mr. Moffatt should know that one section of Air Canada’s webpage is accurate, and another is not.

Correct. Customers shouldn’t have to double-check information found via your chatbot against your website, just as they shouldn’t have to double-check information in your TV advertisements, in your social media posts, in your interactive kiosks, in your on-site search results, in your support centers.

This is nothing new to content people.

I learned a good word for what happened from Karen McGrane many moons ago, in a cautionary tale she shared in a workshop about adapting web content for mobile experiences: forking.

Forking is when you take one information object – like an airline’s bereavement ticket policy – and fork it into two objects to serve two different platforms. Or three objects for three platforms. Or four, or six, or ten!

Forks are often stored in different databases and managed by different teams. You can already imagine, perhaps have already lived, the problems:

  • What if something changes but only one team gets notified?
  • What if a timely update is required, but one platform can be updated instantly and another requires hands-on work from an engineering team?
  • What if subject matter experts aren’t even notified when their information gets forked, and therefore don’t know to notify anyone when the facts change?

Forking is a short-term fix for a long-term problem. What’s really needed is:

  • a robust content model +
  • a single-sourcing / COPE tech stack +
  • the requisite digital governance to maintain a single source of truth and serve it to multiple platforms with accuracy and consistency.
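To make the contrast concrete, here’s a minimal sketch of the single-sourcing idea in Python. All the names and policy text here are hypothetical, invented for illustration – the point is simply that one canonical record feeds every channel, instead of each platform keeping its own forked copy:

```python
# Minimal sketch of single-sourcing (COPE: Create Once, Publish Everywhere).
# All names and policy text are hypothetical, for illustration only.

# The forked approach: each platform keeps its own copy of the policy.
# Update one and forget the other, and your channels disagree.
website_policy = "Bereavement fare refunds must be requested before travel."
chatbot_policy = "Bereavement fare refunds may be requested within 90 days."

# The single-source approach: one canonical record, many renderers.
POLICIES = {
    "bereavement-refund": "Bereavement fare refunds must be requested before travel.",
}

def render_webpage(policy_id: str) -> str:
    # The website renders the canonical policy as HTML.
    return f"<p>{POLICIES[policy_id]}</p>"

def render_chatbot_reply(policy_id: str) -> str:
    # The chatbot renders the *same* canonical policy conversationally.
    return f"Here's our policy: {POLICIES[policy_id]}"

# Both channels draw from the same source of truth; change the dict
# entry once and every platform updates together.
assert POLICIES["bereavement-refund"] in render_webpage("bereavement-refund")
assert POLICIES["bereavement-refund"] in render_chatbot_reply("bereavement-refund")
```

In a real stack the dictionary would be a headless CMS or structured content store, and the renderers would be templates or API consumers – but the governance principle is the same: one record, many presentations.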

That’s a tall order for some companies – and absolutely doable. But, like cold showers and thinking of baseball, you can reduce the urge to fork with content design. A few years ago, I helped a team at a financial services company do just that.

The team managed intranet content in a knowledge base that on-site phone agents used to answer questions from salespeople in the field. Questions about complex topics like investment vehicles, estates, taxes, and so on. Wrong information could be ruinous for individuals, and a liability for the company.

The team had been doing their best, but much of the extensive web of content had been created organically over time and did not exemplify content design best practices. It was hard and stressful for agents to seek answers inside long, dense articles while someone waited impatiently on the phone.

During my research, we discovered what you might call a “guerrilla forking” solution many agents used to solve their problem. When they found an answer they might need again, they printed it out and taped it up in their cubicle. Copy, paste, print.

Clever! But a massive risk for the company. If you printed an answer on Monday, but the information changed on Tuesday, how would you know? You obviously can’t push updated content onto an already-printed piece of paper.

And that, in a way, is what was happening at Air Canada. The website said one thing, the chatbot (accessed through the website, no less) said another. One brand, one site, two sources of truth.

Thankfully, the content management team I worked with found their way to Confab (RIP) and eventually to me, and I led them to the good word that is content design. We ran top task analysis to identify the answers and articles agents relied on most, and created a plan to prioritize immediate improvements to them using new principles for readability and usability. These principles were then rolled out through the experience as their workload allowed.

Metrics went up, errors went down, and their content became a more trusted source of answers … so much so that they were eventually able to start experimenting with 🥁🥁🥁 chat-based interfaces!

You’ve got to get your house in order first, is what I’m saying.

I don’t know anything about Air Canada or their content operations. It’s possible this incident was an outlier for them. But many other companies, maybe yours, are trying to run with AI before they’ve learned to walk with content operations.

The risk of tripping and slamming face-first into concrete – or a Canadian small-claims court – is only going to grow for companies laying off content experts, UX researchers, technical writers, and others skilled in making sense of complex information spaces.

I expect to see more stories like this in the coming months and years, not fewer. But smart companies can avoid the worst of it by giving a damn about digital governance, content operations, and content design. And by hiring, or not losing, the people who care about that stuff, too.

In the meantime? I’ll be here with my popcorn. 🍿

