The insurance industry has been swift to adopt artificial intelligence (“AI”). According to consulting firm McKinsey & Company, 76% of insurers surveyed have already begun using generative AI in their day-to-day operations. [1] This adoption spans the different facets of insurers’ work cycles, including claims, underwriting, legal, and risk management. Policyholders and their attorneys must remain aware of the potential pitfalls of AI implementation, particularly as it pertains to claims management.
While significant progress has been made in understanding and implementing AI, serious ethical considerations remain:
- The “Black Box” Problem: A major issue with AI is that it is often impossible to understand how a system reached a particular decision. This opacity obscures the decision-making process and creates opportunities for bias and privacy violations. [2] As a result, insurers may be unable to explain to policyholders why a claim determination was made. The National Association of Insurance Commissioners addressed this issue in its 2023 model bulletin, “Use of Artificial Intelligence Systems by Insurers.” As the Commissioners found, “AI . . . can present unique risks to consumers, including the potential for inaccuracy, unfair discrimination, data vulnerability, and lack of transparency and explainability.” [3] The bulletin further “emphasize[s] the importance of the fairness and ethical use of AI; accountability; compliance with state laws and regulations; transparency; and a safe, secure, fair, and robust system.” [4]
- Hallucinations: AI models can also produce inaccurate or misleading information, a phenomenon commonly known as “hallucinations.” Because hallucinated output can appear entirely plausible, an insurer that relies on a model unable to explain its reasoning risks issuing flawed claim determinations.
These concerns are not merely theoretical. Recent reporting and pending lawsuits reveal that major commercial insurers face scrutiny for using algorithmic tools to deny claims with little to no individualized review. [5] These developments underscore the real-world consequences when AI systems are deployed by insurers without adequate safeguards.
As insurers increasingly incorporate artificial intelligence and automated tools into claims handling, they remain obligated to conduct prompt claims investigations and to ensure that generative AI models do not misrepresent the scope of coverage available to a policyholder or the pertinent facts giving rise to a claim. Technology may assist in identifying patterns, flagging potential issues, or organizing large volumes of data, but the obligation to conduct a reasonable investigation ultimately rests with the insurer. Indeed, most states have adopted a version of the Model Unfair Claims Settlement Practices Act (National Association of Insurance Commissioners, Unfair Claims Settlement Practices Act, Model Law No. 900 (“NAIC”)), and insurers are generally obligated to place the interests of their policyholders equal to or ahead of their own. The use of AI, the potential lack of human oversight, and the technology’s susceptibility to bias may therefore conflict with insurers’ statutory duties. Under the NAIC model legislation, unfair claims practices include, among other things, “[f]ailing to adopt and implement reasonable standards for the prompt investigation and settlement of claims arising under its policies,” “[f]ailing to acknowledge and act reasonably promptly upon communications with respect to claims,” and “[r]efusing to pay claims without conducting a reasonable investigation based upon all available information.” Id.
This raises a critical question: Are insurers maintaining meaningful “human in the loop” oversight? Are trained professionals genuinely reviewing AI-generated recommendations before claim decisions are finalized, or is the technology effectively making decisions autonomously, without regard for the well-worn rules that govern claims handling? Reliance on automated tools cannot substitute for a thoughtful and transparent evaluation of the claim itself or for the insurer’s responsibility to look for coverage. Just as courts have questioned lawyers who improperly rely on AI-generated case citations that turn out to be hallucinations, courts and regulators may question whether an insurer that cannot explain the basis for a claim decision, because that decision was generated by an opaque algorithm or automated scoring system, truly satisfied its duty to investigate and had a reasonable basis for whatever determination it ultimately made.
These issues will only grow in importance as AI tools become more embedded in the claims process. While insurers often describe these systems as efficiency-enhancing technologies that can expedite payments to policyholders, their use raises important questions about transparency, accountability, and the extent to which automated decision-making can meaningfully support, rather than replace, the investigative responsibilities central to claims handling. Understanding how these technologies are deployed in practice is therefore critical to evaluating whether they align with insurers’ existing legal obligations, and policyholders should strongly consider seeking discovery regarding insurers’ use of AI tools when litigating wrongful claim denials.
[1] Nick Milinkovich et al., The Future of AI for the Insurance Industry, McKinsey (July 15, 2025), https://www.mckinsey.com/industries/financial-services/our-insights/the-future-of-ai-in-the-insurance-industry.
[2] Matthew Kosinski, What Is Black Box AI and How Does It Work?, IBM (Oct. 29, 2024), https://www.ibm.com/think/topics/black-box-ai.
[3] Nat’l Ass’n of Ins. Comm’rs, NAIC Model Bulletin: Use of Artificial Intelligence Systems by Insurers (Dec. 4, 2023), https://content.naic.org/sites/default/files/cmte-h-big-data-artificial-intelligence-wg-ai-model-bulletin.pdf.pdf.
[4] Id.
[5] Michelle M. Mello et al., The AI Arms Race in Health Insurance Utilization Review: Promises of Efficiency and Risks of Supercharged Flaws, 45 Health Affs. 1 (2026); Lawrence J. Bracken II et al., Discovery into Insurer’s Use of AI to Deny Claims Allowed by Court, Nat’l L. Rev. (Mar. 23, 2026), https://natlawreview.com/article/court-allows-discovery-insurers-use-ai-deny-claims; Josh Recamara, Insurers Expand AI Use – Report, Ins. Bus. (July 16, 2025), https://www.insurancebusinessmag.com/us/news/technology/insurers-expand-ai-use–report-542713.aspx.