AI-Related Regulations in Insurance and How to Comply
September 16, 2025 · 6 min read


Artificial intelligence (AI) is rapidly reshaping the insurance sector. Underwriting, claims processing, and customer service are now deeply influenced by algorithms that can parse data faster than human teams. But with innovation comes scrutiny. Regulators in multiple states are tightening expectations around how insurers deploy AI, focusing on fairness, transparency, and consumer protections.

For insurance executives, AI compliance is no longer a back-office task. It must be built into the design and governance of every model, turning regulatory requirements into a catalyst for stronger operations and deeper consumer trust. 


State-Level Spotlight: How Regulations Are Evolving

While federal regulators have begun issuing guidance on the responsible use of AI, most of the concrete requirements for insurers are emerging at the state level. These regulations vary widely in scope and enforcement, but they share common objectives (many adopted from the NAIC Model Bulletin): ensuring fairness, preventing discrimination, and maintaining consumer trust.

For carriers, the challenge is navigating this patchwork of rules while building compliance frameworks that are flexible enough to adapt across jurisdictions. Below is a look at four states (California, New York, Washington, and Texas) where regulatory expectations are already shaping insurer obligations.

 


California: Data Rights and Decision Transparency

California continues to set the pace in data and AI regulation. The California Consumer Privacy Act (CCPA) and its expansion under the California Privacy Rights Act (CPRA) empower consumers to know when automated systems are used in areas like pricing or coverage. Insurers must be prepared to disclose this use clearly and defend the reasoning behind such decisions.

In parallel, the California Department of Insurance has issued guidance requiring that AI tools used to review and approve care or treatment decisions (known as utilization management) must not introduce discriminatory outcomes. Regulators have also reminded insurers that AI cannot replace licensed professional judgment – misuse could trigger liability under consumer protection and civil rights laws.

Future legislation, such as AB 682, is expected to push for even more transparency, including reporting obligations tied to claim denials influenced by AI.

Key priorities for carriers:

  • Keep human review in the loop for medical and claims decisions.
  • Test and document AI systems for bias, accuracy, and audit readiness.
  • Build processes to meet CPRA disclosure timelines. 

 


New York: Mandatory Audits and Reporting 

New York’s regulators have established some of the most detailed requirements to date. Under the NYDFS Circular Letter No. 7 (2024), insurers must demonstrate that AI models are actuarially sound, regularly audited for bias, and used with appropriate transparency.

Carriers must provide consumers with timely written explanations when AI influences adverse underwriting outcomes. Boards and senior management are expected to directly oversee compliance, reinforcing that AI governance is not just a technical matter but an enterprise-wide responsibility.

Key priorities for carriers:

  • Form a governance committee reporting to the board on AI use.
  • Incorporate statistical fairness metrics into ongoing monitoring.
  • Maintain audit files that include bias testing results, actuarial justifications, and standardized consumer notices. 

 


Washington: Early Oversight and Advisory Guidance 

Washington has embraced the NAIC Model Bulletin on AI, requiring insurers to adopt written governance frameworks, test for fairness, and establish controls around third-party vendors.

In 2025, the state formed an AI Advisory Board to study insurer practices and recommend additional safeguards. While advisory in nature, its findings are likely to influence future mandates, particularly in areas like training data transparency and disclosure of AI-generated content.

Key priorities for carriers:

  • Align programs with NAIC’s standards on governance and documentation.
  • Track Advisory Board updates to anticipate new obligations.
  • Prepare for disclosure requirements if generative AI tools are used in claims or customer interactions. 


Texas: Broad Governance and Insurance-Specific Measures 

Texas has adopted a dual approach. The Texas Responsible AI Governance Act (TRAIGA), effective 2026, applies across industries and focuses on transparency, consent, and accountability. At the same time, SB 815 – currently under review – targets insurance directly by requiring disclosure when AI contributes to claims or coverage determinations and granting regulators the ability to audit such use.

Key priorities for carriers:

  • Introduce disclosure checkpoints within claims and underwriting workflows.
  • Prepare internal audit packages in case of regulator reviews.
  • Update consumer consent processes within digital policy platforms. 

 


National Landscape: Complexity Across Jurisdictions

Several additional states, including Illinois, Maryland, and Vermont, have adopted versions of the NAIC Model Bulletin. The result is a patchwork environment where requirements vary but share common themes: governance, fairness testing, and transparency.

For carriers writing business in multiple states, the challenge is harmonization: building frameworks that can flex to state-specific rules while maintaining consistent governance across the enterprise.

Tip: Explore the interactive state map for an overview of evolving requirements.

 


Building and Maintaining AI Compliance 

As regulations evolve, compliance can’t be treated as an afterthought – it must be built into the foundation of every AI initiative. For insurers, this means implementing governance frameworks, transparency measures, and ongoing monitoring that keep pace with both business objectives and regulatory demands. Below are key focus areas to help ensure your AI systems remain compliant, explainable, and trusted.

1. Governance and Oversight

Board Accountability: Establish a board-level AI governance committee that reviews model approvals, risk assessments, and audit results.

Cross-Functional Involvement: Include compliance, actuarial, data science, legal, and operations leaders in governance reviews.

Policy Framework: Develop written AI policies that define acceptable use, escalation procedures, and periodic review cycles.

2. Risk Management and Bias Testing

Fairness Audits: Apply statistical fairness metrics (e.g., disparate impact ratio, equalized odds) at onboarding and on a recurring basis; a minimal calculation sketch follows below.

Model Drift Monitoring: Deploy automated alerts to flag when a model’s performance or fairness metrics deviate from baseline.

Challenger Models: Run alternate models in parallel to test assumptions and validate decision-making consistency.
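
To make the fairness-audit and drift-monitoring items above concrete, here is a minimal Python sketch. It computes a disparate impact ratio for a binary approve/deny outcome and flags drift from a documented baseline. The column names, the hypothetical age-band grouping, the 0.80 benchmark, and the drift tolerance are illustrative assumptions, not values prescribed by any regulation discussed here; equalized odds would follow the same pattern, comparing error rates rather than approval rates.

```python
import pandas as pd

def disparate_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str,
                           protected: str, reference: str) -> float:
    """Ratio of favorable-outcome rates: protected group vs. reference group.
    A common (illustrative) benchmark flags ratios below 0.80."""
    rate_protected = df.loc[df[group_col] == protected, outcome_col].mean()
    rate_reference = df.loc[df[group_col] == reference, outcome_col].mean()
    return rate_protected / rate_reference

def drift_alert(current_metric: float, baseline_metric: float,
                tolerance: float = 0.05) -> bool:
    """Flag when a monitored metric (accuracy, fairness ratio, etc.)
    moves beyond an agreed tolerance from its documented baseline."""
    return abs(current_metric - baseline_metric) > tolerance

# Illustrative usage with hypothetical decision data:
decisions = pd.DataFrame({
    "age_band": ["65+", "65+", "under_65", "under_65", "under_65"],
    "approved": [1, 0, 1, 1, 1],  # 1 = favorable outcome
})
dir_value = disparate_impact_ratio(decisions, "age_band", "approved",
                                   protected="65+", reference="under_65")
if dir_value < 0.80 or drift_alert(dir_value, baseline_metric=0.95):
    print(f"Review required: disparate impact ratio = {dir_value:.2f}")
```

In practice, results like these would feed the audit files and monitoring baselines described elsewhere in this section.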

3. Transparency and Consumer Rights

Clear Disclosures: Provide consumers with understandable explanations when AI contributes to adverse decisions, within regulatory deadlines (e.g., NYDFS’s 15-day requirement).

Adverse Action Workflows: Automate notice generation, ensuring explanations reference both data sources and model logic.

Explainability Tools: Use interpretable AI methods to provide audit-ready rationales without exposing proprietary code.
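
As one way an automated adverse-action workflow might package an explanation that references both data sources and model-driven reasons, here is a minimal Python sketch. The notice fields, the example reason wording, and the 15-day response window are illustrative assumptions layered on the disclosure expectations described above, not a regulator-approved template.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class AdverseActionNotice:
    """Illustrative consumer notice record; fields and the 15-day window
    are assumptions, not the exact text any regulator prescribes."""
    claim_id: str
    decision: str
    top_factors: list[str]   # plain-language reasons tied to model inputs
    data_sources: list[str]  # where the underlying data came from
    issued: date

    def render(self) -> str:
        reasons = "\n".join(f"  - {f}" for f in self.top_factors)
        return (
            f"Claim {self.claim_id}: {self.decision}\n"
            f"Principal factors in this decision:\n{reasons}\n"
            f"Data sources reviewed: {', '.join(self.data_sources)}\n"
            f"Issued: {self.issued.isoformat()} "
            f"(response window closes {(self.issued + timedelta(days=15)).isoformat()})"
        )

# Hypothetical example:
notice = AdverseActionNotice(
    claim_id="CLM-001",
    decision="Coverage request denied",
    top_factors=["Prior water-damage claim within 24 months",
                 "Roof age exceeds underwriting guideline"],
    data_sources=["Policy application", "Claims history report"],
    issued=date(2025, 9, 16),
)
print(notice.render())
```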

4. Vendor and Third-Party Oversight

Contractual Safeguards: Require vendors to agree to audit rights, data security standards, and compliance obligations.

Validation of Vendor Models: Demand transparency around training data sources and fairness testing.

Third-Party Certifications: Prioritize vendors who maintain SOC 2 Type II, ISO 27001, or equivalent security certifications.

5. Training and Accountability

Role-Specific Training: Underwriters, adjusters, and claims handlers should understand how AI informs their workflows and what disclosure obligations apply.

Compliance Playbooks for Staff: Create role-based guidance so employees know when to escalate AI-related concerns.

Culture of Accountability: Encourage staff to challenge model outputs if results appear inconsistent with underwriting guidelines or claims best practices.  

6. Documentation and Reporting

Model Cards and Fact Sheets: Maintain standardized documentation for each AI system, capturing inputs, intended use, limitations, and fairness metrics; a minimal example record follows below.

Version Control and Change Logs: Track updates, retraining events, and recalibrations to maintain audit trails.

Annual Transparency Reports: Summarize governance activities, fairness audit outcomes, and consumer disclosure statistics for regulators and internal stakeholders.
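
For the documentation items above, a model card can be as simple as a structured record kept under version control. The sketch below is a minimal Python example; the field names and the retraining entry are illustrative assumptions, not a mandated schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelCard:
    """Minimal model-card record built from the documentation items above."""
    name: str
    version: str
    intended_use: str
    inputs: list[str]
    limitations: list[str]
    fairness_metrics: dict[str, float]
    change_log: list[str] = field(default_factory=list)

    def record_retraining(self, note: str, new_version: str) -> None:
        """Append an audit-trail entry whenever the model is retrained or recalibrated."""
        self.change_log.append(f"{date.today().isoformat()} v{new_version}: {note}")
        self.version = new_version

# Hypothetical example:
card = ModelCard(
    name="homeowners-claims-triage",
    version="1.2.0",
    intended_use="Prioritize claims for adjuster review; never final denial",
    inputs=["loss description", "policy tenure", "prior claim count"],
    limitations=["Not validated for commercial lines"],
    fairness_metrics={"disparate_impact_ratio": 0.93},
)
card.record_retraining("Quarterly retrain on 2025-Q3 claims data", "1.3.0")
```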

7. Continuous Improvement and Audit Readiness

Regular Internal Audits: Conduct quarterly reviews of AI systems, workflows, and vendor compliance.

Regulator Simulations: Run “mock exams” to ensure readiness for NYDFS, California DOI, or other state audits.

Feedback Loops: Use human oversight to channel lessons learned from disclosures, consumer complaints, and regulator inquiries back into model improvements.

 

The direction of regulation is unmistakable. Insurers will soon be required to demonstrate that every AI system is transparent, explainable, and accountable. Waiting for enforcement is not a strategy; it is an exposure.

Those who act now to embed governance, documentation, and bias testing into their operations will do more than stay ahead of regulators. They will set the standard for responsible innovation, earn the trust of policyholders and distribution partners, and secure a lasting competitive edge.

AI is redefining the business of insurance. The carriers that embrace compliance not as a constraint, but as a foundation for credibility and leadership, will be the ones that shape the future of the industry. 

 

Learn more about getting the most from AI and where humans add the greatest value by downloading our infographic on AI vs. Human: Who Handles What in Insurance?
