AI-Related Regulations in Insurance and How to Comply
March 25, 2026 · 12 min read


Artificial intelligence is reshaping the insurance sector. Underwriting, claims processing, and customer service are increasingly supported by systems that can analyze data faster and more consistently than manual processes. As adoption expands, lawmakers and regulators are increasing scrutiny of how insurers deploy AI. Expectations now focus on fairness, transparency, accountability, and consumer protection. 

For insurance executives, AI compliance is no longer a back-office task. It must be built into the design and governance of every model, turning regulatory requirements into a catalyst for stronger operations and deeper consumer trust. 

National Landscape: Complexity Across Jurisdictions


The regulatory landscape for AI in insurance continues to evolve. National frameworks and federal oversight bodies have established foundational guidance that influences how insurers design, deploy, and monitor AI systems. These frameworks emphasize governance, transparency, documentation, and risk management. Federal agencies have also clarified that existing laws apply to AI-enabled systems.

The Federal Trade Commission, Consumer Financial Protection Bureau, Department of Justice, and Equal Employment Opportunity Commission have issued joint statements confirming that AI must comply with laws governing discrimination, civil rights, unfair or deceptive practices, and consumer protection.  

National Association of Insurance Commissioners (NAIC)

The National Association of Insurance Commissioners (NAIC) issued the Model Bulletin titled Use of Artificial Intelligence Systems by Insurers. The bulletin outlines expectations for responsible AI governance within insurance organizations. Key principles include fairness, ethical use of AI, accountability, transparency, strong governance, clear documentation, and alignment with state and federal laws. 

The bulletin also emphasizes that insurers remain responsible for decisions supported by AI systems. Organizations must maintain oversight of vendor solutions, document model behavior, test systems for bias and accuracy, and implement human review where appropriate. Ongoing monitoring is expected to identify model drift, unintended outcomes, and potential discriminatory impacts. 

National Institute of Standards and Technology (NIST)

The National Institute of Standards and Technology (NIST) provides guidance that supports responsible AI development and evaluation. Publications such as NIST SP 1270, Towards a Standard for Identifying and Managing Bias in Artificial Intelligence, offer practical frameworks for testing, documentation, and risk management that insurers can incorporate into governance programs. 

 

State-Level Spotlight: How Regulations Are Evolving


While national frameworks provide direction, many enforceable requirements are emerging at the state level. States are introducing laws, regulations, and bulletins that address how AI can be used in underwriting, claims handling, utilization review, and other insurance decisions. Although requirements vary, most state initiatives emphasize fairness testing, human oversight, transparency, and documentation. 

Many states have adopted the NAIC Model Bulletin or issued similar guidance. Others have introduced their own legislation or regulatory directives that govern the use of algorithms and predictive models. Even in states that have not formally adopted the bulletin, regulators often expect insurers to operate in alignment with its principles.

The following states have formally adopted the NAIC framework and expectations: 

Alaska, Arkansas, Connecticut, Delaware, the District of Columbia, Illinois, Iowa, Kentucky, Maryland, Massachusetts, Michigan, Nebraska, Nevada, New Hampshire, North Carolina, Oklahoma, Pennsylvania, Rhode Island, Vermont, Virginia, Washington, West Virginia, and Wisconsin.

Other states have introduced their own AI-specific guidance and regulatory requirements for insurers: 

California, Colorado, New York, Texas, and Washington.

These frameworks address areas such as governance, transparency, fairness testing, and oversight of algorithmic decision systems. 


 

Arizona: Health Coverage Decision Oversight


Arizona enacted legislation restricting the use of AI in health insurance decision making. The law limits the ability of automated systems to independently deny or delay medical care coverage. AI tools may assist with analysis or recommendations, but a licensed medical professional must review and approve the final coverage determination.

Key priorities for carriers:

  • Ensure AI systems do not independently deny or delay medical coverage decisions.

  • Require licensed medical professionals to review AI-supported recommendations before final determinations are made.

  • Maintain documentation showing that human review occurred for coverage decisions involving automated tools. 

 

California: Data Rights and Decision Transparency


California continues to set the pace in data and AI regulation. The California Consumer Privacy Act (CCPA) and its expansion under the California Privacy Rights Act (CPRA) empower consumers to know when automated systems are used in areas like pricing or coverage. Insurers must be prepared to disclose this use clearly and defend the reasoning behind such decisions.

In parallel, the California Department of Insurance has issued guidance requiring that AI tools used to review and approve care or treatment decisions (known as utilization management) must not introduce discriminatory outcomes. Regulators have also reminded insurers that AI cannot replace licensed professional judgment – misuse could trigger liability under consumer protection and civil rights laws.

Future legislation, such as AB 682, is expected to push for even more transparency, including reporting obligations tied to claim denials influenced by AI.

Key priorities for carriers:

  • Keep human review in the loop for medical and claims decisions.
  • Test and document AI systems for bias, accuracy, and audit readiness.
  • Build processes to meet CPRA disclosure timelines. 

 

Colorado: AI Oversight for P&C and E&S Underwriting Models


Under a Colorado law passed in 2021, the Insurance Commissioner was required to adopt rules obligating insurers to demonstrate that algorithms and predictive models (for example, underwriting systems) do not unfairly discriminate based on protected-class characteristics. The Colorado Division of Insurance's regulations can therefore reach underwriting and risk-assessment models, including those used in P&C or E&S lines.

Several additional states, including Illinois, Maryland, and Vermont, have adopted versions of the NAIC Model Bulletin. The result is a patchwork environment where requirements vary but share common themes: governance, fairness testing, and transparency.

For carriers writing business in multiple states, the challenge is harmonization: building frameworks that can flex to state-specific rules while maintaining consistent governance across the enterprise.

Key priorities for carriers:

  • Conduct fairness testing on predictive models used in underwriting and pricing decisions.

  • Maintain documentation demonstrating that algorithms do not produce unfairly discriminatory outcomes.

  • Prepare governance and testing records that may be reviewed during regulatory examinations.


 

Florida: Human Oversight of Claims Decisions

Florida enacted legislation reinforcing human oversight in claims handling decisions. The law requires that a qualified human professional review and make the final determination when AI or automated tools are used to evaluate insurance claims.

The legislation also prohibits insurers from relying solely on AI systems to deny or adjust a claim. Regulators may review documentation of AI use as part of market conduct examinations.

Key priorities for carriers:

  • Ensure claim denials are made by qualified human professionals.

  • Prohibit automated systems from serving as the sole basis for denying or adjusting claims.

  • Maintain documentation demonstrating that human review occurred.

  • Prepare to provide records related to AI use during regulatory examinations.

 



Maryland: Utilization Review Safeguards

Maryland enacted legislation governing the use of artificial intelligence tools within health insurance utilization review processes. AI systems may assist with clinical analysis or data review, but licensed clinicians must make final decisions regarding treatment approvals or denials.

The law reinforces that automated technologies must support clinical judgment rather than replace it.

Key priorities for carriers:

  • Use AI tools to support clinical review rather than independently determine coverage outcomes.

  • Require licensed clinicians to approve final utilization review decisions.

  • Maintain documentation demonstrating that clinical professionals evaluated AI-supported recommendations. 

 


 

Nebraska: Coverage Determination Review Requirements

Nebraska enacted legislation limiting the use of artificial intelligence systems in health insurance coverage determinations. AI tools may assist with analysis or recommendations, but they cannot independently make coverage decisions.

Final determinations must involve qualified professionals who review the available information and approve the outcome.

Key priorities for carriers:

  • Ensure AI tools do not independently determine coverage outcomes.

  • Require qualified professionals to review and approve final coverage decisions.

  • Maintain documentation demonstrating human oversight of AI-supported determinations. 

 

New York: Mandatory Audits and Reporting


New York’s regulators have established some of the most detailed requirements to date. Under the NYDFS Circular Letter No. 7 (2024), insurers must demonstrate that AI models are actuarially sound, regularly audited for bias, and used with appropriate transparency.

Carriers must provide consumers with timely written explanations when AI influences adverse underwriting outcomes. Boards and senior management are expected to directly oversee compliance, reinforcing that AI governance is not just a technical matter but an enterprise-wide responsibility.

Key priorities for carriers:

  • Form a governance committee reporting to the board on AI use.
  • Incorporate statistical fairness metrics into ongoing monitoring.
  • Maintain audit files that include bias testing results, actuarial justifications, and standardized consumer notices. 

 

Texas: Broad Governance and Insurance-Specific Measures


Texas has adopted a dual approach. The Texas Responsible AI Governance Act (TRAIGA), effective 2026, applies across industries and focuses on transparency, consent, and accountability. At the same time, SB 815 – currently under review – targets insurance directly by requiring disclosure when AI contributes to claims or coverage determinations and granting regulators the ability to audit such use.

Key priorities for carriers:

  • Introduce disclosure checkpoints within claims and underwriting workflows.
  • Prepare internal audit packages in case of regulator reviews.
  • Update consumer consent processes within digital policy platforms. 

 

Washington: Early Oversight and Advisory Guidance


Washington has embraced the NAIC Model Bulletin on AI, requiring insurers to adopt written governance frameworks, test for fairness, and establish controls around third-party vendors.

In 2025, the state formed an AI Advisory Board to study insurer practices and recommend additional safeguards. While advisory in nature, its findings are likely to influence future mandates, particularly in areas like training data transparency and disclosure of AI-generated content.

Key priorities for carriers:

  • Align programs with NAIC’s standards on governance and documentation.
  • Track Advisory Board updates to anticipate new obligations.
  • Prepare for disclosure requirements if generative AI tools are used in claims or customer interactions. 

Global Standards for Responsible AI  


Alongside state and national rules, international standards add another layer of guidance. These global frameworks help define what responsible, compliant AI looks like – especially for companies operating across jurisdictions.

ISO/IEC JTC 1/SC 42

This is the international standards subcommittee for AI under ISO/IEC. It is developing foundational AI standards covering data quality, trustworthiness, the AI lifecycle, robustness, and bias assessment.

European Data Protection Board (EDPB)

The EDPB has published a Bias Evaluation document that provides guidance on performance, fairness and transparency for systems that handle personal data.

Artificial Intelligence Act of the European Union

The Artificial Intelligence Act of the European Union, which entered into force in August 2024, establishes risk-based obligations for AI systems, with the strictest requirements applying to high-risk systems. These obligations address fairness, transparency, accuracy, and conformity assessment. 

 

Building and Maintaining AI Compliance


As regulations evolve, compliance can’t be treated as an afterthought – it must be built into the foundation of every AI initiative. For insurers, this means implementing governance frameworks, transparency measures, and ongoing monitoring that keep pace with both business objectives and regulatory demands. Below are key focus areas to help ensure your AI systems remain compliant, explainable, and trusted.

1. Governance and Oversight

Board Accountability: Establish a board-level AI governance committee that reviews model approvals, risk assessments, and audit results.

Cross-Functional Involvement: Include compliance, actuarial, data science, legal, and operations leaders in governance reviews.

Policy Framework: Develop written AI policies that define acceptable use, escalation procedures, and periodic review cycles.

2. Risk Management and Bias Testing

Fairness Audits: Apply statistical fairness metrics (e.g., disparate impact ratio, equalized odds) at onboarding and on a recurring basis.

Model Drift Monitoring: Deploy automated alerts to flag when a model’s performance or fairness metrics deviate from baseline.

Challenger Models: Run alternate models in parallel to test assumptions and validate decision-making consistency.
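As an illustration of the first point, the "four-fifths" (80%) rule is a common screening heuristic for disparate impact. The sketch below is a minimal, hypothetical example in plain Python; the group data, the 0.8 threshold, and the flagging logic are illustrative assumptions, not a prescribed regulatory test.

```python
# Illustrative sketch: disparate impact ratio with a four-fifths screening
# threshold. Group outcomes and the 0.8 cutoff are example assumptions,
# not requirements of any specific state or federal rule.

def approval_rate(decisions):
    """Fraction of favorable (True) outcomes in a list of decisions."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group, reference_group):
    """Ratio of the group's approval rate to the reference group's."""
    return approval_rate(group) / approval_rate(reference_group)

# Hypothetical underwriting outcomes (True = approved).
protected = [True, False, True, False, False, True, False, False]   # 3/8 approved
reference = [True, True, False, True, True, False, True, True]      # 6/8 approved

ratio = disparate_impact_ratio(protected, reference)
print(f"Disparate impact ratio: {ratio:.2f}")  # prints 0.50
if ratio < 0.8:
    print("Below the four-fifths threshold; flag for fairness review")
```

In practice this check would run over large decision populations and alongside other metrics (such as equalized odds), with results retained in the audit file described below.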

3. Transparency and Consumer Rights

Clear Disclosures: Provide consumers with understandable explanations when AI contributes to adverse decisions, within regulatory deadlines (e.g., NYDFS’s 15-day requirement).

Adverse Action Workflows: Automate notice generation, ensuring explanations reference both data sources and model logic.

Explainability Tools: Use interpretable AI methods to provide audit-ready rationales without exposing proprietary code.
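To make the adverse-action workflow concrete, here is a minimal sketch of automated notice generation. The factor names, reason-text mapping, and notice wording are all hypothetical; actual notices must satisfy the specific content and timing rules of each jurisdiction.

```python
# Illustrative sketch of automated adverse-action notice generation.
# The factor codes, REASON_TEXT mapping, and template wording are
# hypothetical placeholders, not language from any regulation.

REASON_TEXT = {
    "claims_history": "Number of prior claims in the last 5 years",
    "credit_factor": "Insurance credit-based score band",
    "territory": "Rated territory loss experience",
}

def adverse_action_notice(applicant_id, top_factors):
    """Render a plain-language notice from the model's top negative factors."""
    lines = [
        f"Notice of adverse underwriting decision (applicant {applicant_id})",
        "Principal factors contributing to this decision:",
    ]
    for rank, factor in enumerate(top_factors, start=1):
        lines.append(f"  {rank}. {REASON_TEXT.get(factor, factor)}")
    return "\n".join(lines)

notice = adverse_action_notice("A-1024", ["claims_history", "territory"])
print(notice)
```

Keeping the reason-code mapping in one reviewed table, rather than scattered across systems, makes it easier to show regulators that notices consistently reference both data sources and model logic.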

4. Vendor and Third-Party Oversight

Contractual Safeguards: Require vendors to agree to audit rights, data security standards, and compliance obligations.

Validation of Vendor Models: Demand transparency around training data sources and fairness testing.

Third-Party Certifications: Prioritize vendors who maintain SOC 2 Type II, ISO 27001, or equivalent security certifications.

5. Training and Accountability

Role-Specific Training: Underwriters, adjusters, and claims handlers should understand how AI informs their workflows and what disclosure obligations apply.

Compliance Playbooks for Staff: Create role-based guidance so employees know when to escalate AI-related concerns.

Culture of Accountability: Encourage staff to challenge model outputs if results appear inconsistent with underwriting guidelines or claims best practices.  

6. Documentation and Reporting

Model Cards and Fact Sheets: Maintain standardized documentation for each AI system, capturing inputs, intended use, limitations, and fairness metrics.

Version Control and Change Logs: Track updates, retraining events, and recalibrations to maintain audit trails.

Annual Transparency Reports: Summarize governance activities, fairness audit outcomes, and consumer disclosure statistics for regulators and internal stakeholders.
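A model card can be as simple as a structured record that travels with the model. The sketch below serializes a minimal card to JSON for an audit file; the field names and example values are assumptions loosely modeled on common model-card practice, not a mandated schema.

```python
# Illustrative sketch: a minimal model-card record serialized to JSON.
# Field names and values are hypothetical examples.
import json
from dataclasses import dataclass, field, asdict

@dataclass
class ModelCard:
    name: str
    version: str
    intended_use: str
    inputs: list
    limitations: str
    fairness_metrics: dict = field(default_factory=dict)

card = ModelCard(
    name="auto-underwriting-risk-score",
    version="2.3.1",
    intended_use="Decision support for personal auto underwriting; human review required",
    inputs=["vehicle_class", "territory", "claims_history"],
    limitations="Not validated for commercial auto lines",
    fairness_metrics={"disparate_impact_ratio": 0.91},
)

print(json.dumps(asdict(card), indent=2))
```

Storing cards as structured data rather than free-form documents makes version control and change logging straightforward: each retraining event commits a new card revision to the same repository.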

7. Continuous Improvement and Audit Readiness

Regular Internal Audits: Conduct quarterly reviews of AI systems, workflows, and vendor compliance.

Regulator Simulations: Run “mock exams” to ensure readiness for NYDFS, California DOI, or other state audits.

Feedback Loops: Use human oversight to feed lessons learned from disclosures, consumer complaints, and regulator inquiries back into model improvement.

 

The direction of regulation is unmistakable. Insurers will soon be required to demonstrate that every AI system is transparent, explainable, and accountable. Waiting for enforcement is not a strategy; it is an exposure.

Those who act now to embed governance, documentation, and bias testing into their operations will do more than stay ahead of regulators. They will set the standard for responsible innovation, earn the trust of policyholders and distribution partners, and secure a lasting competitive edge.

AI is redefining the business of insurance. The carriers that embrace compliance not as a constraint, but as a foundation for credibility and leadership, will be the ones that shape the future of the industry. 

 

Learn more about getting the most from AI and where humans add the greatest value by downloading our infographic on AI vs. Human: Who Handles What in Insurance?
