Using AI and automation? Know your liability risks


Canadian businesses are increasingly exploring AI and automation to find efficiencies and gain competitive advantages.

No longer confined to the aspirational realm of self-driving cars, these technologies have found applications in virtually every industry. Common uses include:

  • answering customer questions and troubleshooting common issues online
  • designing newer versions of existing products in manufacturing
  • providing predictive maintenance schedules in transportation
  • analyzing medical imaging scans and doing medical triaging in healthcare
  • automating crop harvesting in agriculture.

While it’s exciting to consider all the ways AI and automation can accelerate innovation and improve the way businesses operate, it’s important to know that these technologies may present a liability risk that has not yet been addressed by clear and consistent regulations.

Key AI and automation liability concerns

There are a few important factors companies should consider when assessing the impact of failures, challenges, or errors in AI and automation. They include:

Regulatory ambiguity

Canada and the US don’t yet have consistent regulations relating to AI and autonomous machines, while the EU is further along in developing them. In the absence of clear legislation in Canada and the US, it’s hard to establish negligence or pinpoint the cause of harm, creating significant uncertainty about legal fault when AI-related problems occur. And even with the EU at the forefront of regulation, these global inconsistencies make it difficult to attribute liability when doing business across borders.

Increased difficulty proving due diligence

Traditional legal concepts of “foreseeability” are challenged by AI’s self-learning nature. It’s difficult to argue that a business could have reasonably predicted an AI’s action when the AI’s algorithm is constantly evolving. This makes it harder for businesses to defend themselves against liability claims, as they may struggle to demonstrate they took all reasonable precautions.

Complex attribution of fault

The question of who is liable becomes significantly more complicated. Is it the AI developer, the software provider, the business implementing the system, or the AI itself? This ambiguity can lead to protracted legal battles and increased costs for businesses facing liability claims.

Heightened risk of litigation

The lack of clear legal precedents and the difficulty in assigning fault mean that businesses using AI are more likely to face litigation if a problem occurs. Plaintiffs may have a stronger case, as the unpredictability of AI can make it difficult for businesses to prove they weren’t negligent.

Case study: Air Canada found liable for chatbot’s bad advice

A recent Air Canada liability case demonstrates that attempts to hold an AI responsible for its actions, rather than those who create or deploy it, do not succeed in court.

In a case heard in small claims court, the airline’s chatbot allegedly gave a customer inaccurate advice about the availability of a rebate, and Air Canada then refused to honour the information its own chatbot had provided. Air Canada argued that the online tool was a separate legal entity responsible for its own actions. The court disagreed and ordered the airline to reimburse the customer.

Strategies to manage risks associated with AI and autonomous machines

In the absence of clear regulation, it becomes even more important for companies to apply sound risk management strategies when integrating AI and autonomous machines.

  • Avoid “AI hallucination” by fact checking: AI hallucination refers to instances where an artificial intelligence model generates outputs, such as text, images, or other data, that are factually incorrect. To minimize this issue, train employees on how to use AI systems and build in a feedback loop with human review to fact check (see the sketch after this list).
  • Use AI appropriately: While AI models are useful for tasks like data analysis, pattern recognition, prediction, and automation of repetitive tasks, it’s inadvisable to rely on them for legal, financial, or health advice. Always consult a professional when making decisions that could have a significant impact on the company, employees, and customers.
  • Monitor chatbots and other customer-facing AI: The Air Canada case made it clear: companies are responsible and liable for what AI chatbots communicate to customers. Ensure there’s a process to properly train, monitor, and adjust the AI representatives of your organization.
  • Document AI system design, development, and operation: Prioritize transparency in AI systems to demonstrate that your business is taking steps to mitigate potential harm.
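To make the human-review feedback loop concrete, here is a minimal sketch in Python of what a review gate for customer-facing AI output could look like. Every name in it (Draft, ReviewQueue, generate_reply) is hypothetical and stands in for whatever chatbot platform and review tooling your organization actually uses; treat it as an illustration of the pattern, not a ready-made implementation.

    import logging
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("ai-review")

    @dataclass
    class Draft:
        """An AI-generated answer awaiting human review (hypothetical structure)."""
        question: str
        answer: str
        created: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    class ReviewQueue:
        """Holds AI-drafted answers until a human approves them."""

        def __init__(self) -> None:
            self._pending: list[Draft] = []

        def submit(self, draft: Draft) -> None:
            # Log every draft so there is an audit trail of what the AI produced.
            log.info("Draft queued at %s for question: %s",
                     draft.created.isoformat(), draft.question)
            self._pending.append(draft)

        def approve(self, draft: Draft, reviewer: str) -> str:
            # Record who signed off; only approved answers reach the customer.
            log.info("Approved by %s: %s", reviewer, draft.answer)
            self._pending.remove(draft)
            return draft.answer

    def generate_reply(question: str) -> str:
        # Placeholder for a call to your actual AI model or chatbot service.
        return f"(AI draft) Our policy on '{question}' is ..."

    if __name__ == "__main__":
        queue = ReviewQueue()
        draft = Draft("refund policy", generate_reply("refund policy"))
        queue.submit(draft)
        # A human fact checks the draft against the real policy before it is sent.
        print(queue.approve(draft, reviewer="support-lead"))

Logging both the draft and the approval also supports the documentation point above: it creates a record showing that a human verified what the AI told a customer before it was sent, which is the kind of reasonable precaution a business may need to demonstrate.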

As AI and automation continue to drive innovation and efficiency, it becomes increasingly important to take a proactive and informed approach to managing the inherent liability risks. By focusing on responsible AI deployment and staying abreast of future regulatory developments, you can approach this transformative era with greater confidence and mitigate potential legal and financial exposures.

For more information

Please contact us at gcs.ca@aviva.com

The content in this article is for information purposes only and is not intended to be relied upon as professional or expert advice.

Copyright in the whole and every part of this site belongs to Aviva Canada Inc., unless otherwise indicated, and may not be used, sold, licensed, copied or reproduced in whole or in part in any manner or form or in or on any media to any person without the prior written consent of Aviva Canada Inc.