The use of AI in clinical practice will continue to grow as increasingly sophisticated systems are deployed in healthcare settings. With this comes an increased risk of patients coming to harm as a result of failures in AI. Faithfulness hallucination is one example of a risk that simply did not exist in the pre-AI world. Hallucination in the context of clinical note summaries in particular (whereby the AI model generates summary content that is incorrect or inaccurate when compared with the source medical notes) is an area of risk with the potential to lead to serious consequences. Of course, how an AI device has reached a decision may not be transparent (known as the ‘black box problem’), and this is itself an issue, particularly when considering where responsibility for harm might lie. Choosing whether to pursue a claim for clinical negligence or product liability (or both) poses yet further questions for a patient who may have come to harm. For healthcare providers and insurers, there will be questions of how to respond to such claims and how blame should be apportioned.