AM Best Rating Upgrades: Building an Insurer Financials Dataset for Risk Teams


2026-02-28
10 min read


Stop guessing at counterparty strength — build a longitudinal insurer financials dataset that blends AM Best ratings, statutory filings and claims to power reinsurance and risk decisions

Risk teams, reinsurers and broker analytics squads consistently hit the same roadblocks: scattered rating changes from AM Best, statutory numbers in inconsistent formats across states, and claims data locked in vendors or legacy systems. The result is slow, fragile models and missed reinsurance opportunities. This guide shows how to aggregate these signals into a robust, versioned insurer financials dataset optimized for risk modeling and reinsurance decisioning in 2026.

Why combine AM Best, statutory filings and claims into one longitudinal dataset (and why now)

AM Best ratings — especially Financial Strength Ratings (FSR) and issuer credit ratings — are a primary market signal of insurer creditworthiness. But ratings are lagging, high-level indicators. To make operational reinsurance decisions you need a timeline: how ratings move, how statutory capital & reserves evolved, and how claims are developing at the same time.

Demand for this integrated approach is accelerating in 2026 because:

  • Real-time counterparty monitoring is now expected by front offices and cedents; underwriting desks want near-real-time alerts for rating actions and reserve deterioration.
  • Regulatory digitization and API-first vendor services have reduced friction to ingest filings and claims into pipelines (NAIC modernization and state DOI efforts continue to push structured reporting).
  • Climate and catastrophe volatility requires tighter correlation of claims outcomes to balance sheet signals when setting reinsurance layers and pricing.

Example: AM Best upgraded Michigan Millers Mutual Insurance Company to an FSR of A+ in January 2026, citing balance sheet strength and reinsurance pooling with Western National. That upgrade mattered because the pooling arrangement and assigned reinsurance affiliation code directly changed the company’s capital support profile — a signal your reinsurance team needs in a timeline together with statutory capital and claims trends. (Source: Insurance Journal)

Core data sources and identifiers

To build a usable longitudinal dataset, collect and normalize these sources:

  • AM Best ratings feed — FSR, Long-Term ICR, outlooks, rating action timestamps, and reinsurance affiliation codes (e.g., “p” for pool). Note licensing — AM Best is paywalled and contractual.
  • Statutory filings (NAIC & state DOIs) — annual/statutory statements, Exhibit of Premiums and Losses, Schedule P (loss development), RBC components, and statutory surplus.
  • Claims data — ceded and direct claims by AY/CY, loss development triangles, large loss registries (vendor sources such as Verisk / ISO and state bulletins). Internal claims systems feed severity & frequency.
  • Reinsurance treaties & affiliations — pooling agreements, quota share/catXL contracts, ceding percentages, collateral arrangements.
  • Corporate actions and news — M&A, regulatory approvals, conservatorships, and rating press releases (e.g., Michigan Millers joining Western National pool).
  • Entity identifiers — NAIC company code, Legal Entity Identifier (LEI), FEIN, and AM Best entity IDs. These are critical for deterministic joins.

Provenance & licensing checklist

  • Confirm data licensing for AM Best and vendor claims datasets.
  • Record ingestion timestamps and original document hashes for audit.
  • Keep raw filings (PDFs/CSV/XBRL) archived alongside the normalized tables.
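The checklist above can be automated with a small helper that hashes each raw filing at ingestion time. A minimal sketch; the record layout and field names are illustrative, not a standard:

```python
import hashlib
from datetime import datetime, timezone

def provenance_record(raw_bytes: bytes, source_url: str) -> dict:
    """Audit record for a raw filing: content hash plus ingestion timestamp.

    Store this alongside the archived raw file (PDF/CSV/XBRL) so every
    normalized row can be traced back to an immutable original.
    """
    return {
        "source_url": source_url,
        "sha256": hashlib.sha256(raw_bytes).hexdigest(),
        "size_bytes": len(raw_bytes),
        "ingested_at": datetime.now(timezone.utc).isoformat(),
    }

rec = provenance_record(b"<xbrl>...</xbrl>", "https://doi.example/filing.xbrl")
```

Writing the hash at ingestion (rather than later) means a re-download that differs from the archived original is immediately detectable.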

Designing the longitudinal data model

Your dataset should model time-first observations for each entity. Key tables:

  • entities — persistent company metadata (NAIC_code, LEI, legal_name, parent_group, domicile).
  • ratings — timestamped AM Best entries (fsr, long_term_icr, outlook, affiliation_code, source_url).
  • statutory_snapshots — balance sheet and key ratios by reporting_period (statutory_surplus, rbc, adj_assets, ceded_reinsurance_asset).
  • claims_triangles — AY/CY indexed cumulative losses and paid losses to enable development analysis.
  • reinsurance_positions — treaty-level cessions, reinsurer counterparty IDs, collateral percent, effective_dates.
  • corporate_events — M&A, pool admissions, regulatory approvals (with links to press release & approval documents).

Example CREATE TABLE snippets (conceptual):

CREATE TABLE ratings (
  entity_id STRING,
  observed_at TIMESTAMP,
  fsr STRING,
  fsr_rank INT,           -- numeric encoding of fsr (higher = stronger) for ordering
  long_term_icr STRING,
  outlook STRING,
  affiliation_code STRING,
  source_url STRING,
  raw_payload JSON,
  PRIMARY KEY (entity_id, observed_at)
);

CREATE TABLE statutory_snapshots (
  entity_id STRING,
  report_date DATE,
  statutory_surplus NUMERIC,
  total_admitted_assets NUMERIC,
  rbc_ratio NUMERIC,
  incurred_loss NUMERIC,
  earned_premium NUMERIC,
  ceded_reinsurance_asset NUMERIC,
  raw_filing_url STRING,
  PRIMARY KEY (entity_id, report_date)
);

ETL architecture: ingest, normalize, version

Follow a disciplined, auditable ETL pattern:

  1. Ingest raw files from the AM Best API, NAIC bulk downloads, state DOI portals, vendor claims S3 buckets and internal claims DBs. Use connectors with retry and backoff. Record the ETag and ETL run id.
  2. Parse & normalize into canonical schemas. Map currency units, accounting basis (statutory vs GAAP), and NAIC line codes. Keep raw payload for repro.
  3. Entity resolution using NAIC codes, LEI, and fuzzy name matching. Persist resolved entity_id and confidence scores.
  4. Versioning — treat every ingestion as immutable; store change history (SCD Type 2) or append-only event stream. Ratings must be stored with effective and publish timestamps to handle retroactive rating adjustments.
  5. Quality checks — value ranges, triangle column monotonicity, reserve roll-forwards and cross-checks against RBC and surplus. Flag anomalies for analyst review.
  6. Catalog & lineage — integrate with a data catalog (e.g., DataHub, Amundsen) and capture field-level lineage and licensing metadata.
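Step 3 (entity resolution) can be sketched as a deterministic join on NAIC code and LEI with a fuzzy-name fallback. A minimal stdlib sketch; the 0.85 threshold and the field names are assumptions to tune against labeled pairs:

```python
from difflib import SequenceMatcher

def resolve_entity(record: dict, entities: list) -> tuple:
    """Return (entity_id, confidence) for an incoming record, or (None, score).

    Deterministic identifiers win outright; fuzzy name matching is a
    fallback whose score should be persisted for analyst review.
    """
    # 1. Deterministic joins on NAIC company code, then LEI.
    for key in ("naic_code", "lei"):
        val = record.get(key)
        if val:
            for e in entities:
                if e.get(key) == val:
                    return e["entity_id"], 1.0
    # 2. Fuzzy legal-name fallback with an explicit confidence score.
    best, score = None, 0.0
    for e in entities:
        s = SequenceMatcher(None, record["legal_name"].lower(),
                            e["legal_name"].lower()).ratio()
        if s > score:
            best, score = e, s
    if best and score >= 0.85:  # assumed threshold; tune on labeled pairs
        return best["entity_id"], score
    return None, score

entities = [
    {"entity_id": "ent_001", "naic_code": "12345", "lei": None,
     "legal_name": "Michigan Millers Mutual Insurance Company"},
]
match = resolve_entity({"naic_code": "12345", "legal_name": "Michigan Millers"},
                       entities)
```

Persisting the score (not just the resolved id) is what lets you audit low-confidence matches later, as step 3 requires.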

Tools and patterns commonly used in 2026:

  • Orchestration: Apache Airflow or Dagster for scheduled refreshes and backfills.
  • Transformation: dbt for tests & transformations; SQL-first models for auditability.
  • Storage: Columnar warehouses (Snowflake, BigQuery) or Delta Lake for time travel and ACID.
  • Streaming: Kafka / Pulsar for claims event streams and intraday rating alerts.

Feature engineering: signals that matter for reinsurance and risk models

Build features that capture both level and momentum. Examples:

  • Rating features: current FSR, days since last upgrade/downgrade, count of actions in 12 months, probability-of-transition proxies (two-year downgrade risk).
  • Capital & leverage: statutory_surplus/total_admitted_assets, RBC ratio, net_written_premiums / surplus (leverage).
  • Claims dynamics: 12m rolling paid & incurred, paid/severity percentiles, reserve development ratios (1yr, 5yr), large loss frequency.
  • Ceded exposure: top-10 reinsurer concentration, collateralization percent, treaty structure flags (quota-share vs XL).
  • Event & affiliation: pool membership flag, reinsurance_affiliation_code, and effective date (Michigan Millers example — membership changed support profile).
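The rating-momentum features above can be computed directly from the ratings table's action history. A stdlib sketch, with the (date, direction) tuple layout as an assumption:

```python
from datetime import date

def rating_features(actions: list, as_of: date) -> dict:
    """Level-and-momentum features from a list of (action_date, direction)
    tuples, direction in {'upgrade', 'downgrade', 'affirm'}."""
    # Actions in the trailing 12 months (inclusive of as_of).
    recent = [(d, kind) for d, kind in actions
              if 0 <= (as_of - d).days < 365]
    # Most recent actual rating move (upgrades/downgrades, not affirmations).
    moves = [d for d, kind in actions
             if kind in ("upgrade", "downgrade") and d <= as_of]
    last_move = max(moves) if moves else None
    return {
        "actions_12m": len(recent),
        "days_since_last_move": (as_of - last_move).days if last_move else None,
        "recent_downgrade": any(kind == "downgrade" for _, kind in recent),
    }

# Michigan Millers example: upgraded 2026-01-16, observed 2026-02-28.
feats = rating_features([(date(2026, 1, 16), "upgrade")], date(2026, 2, 28))
```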

Sample SQL: compute a 12-month rolling loss ratio and flag any rating upgrade within the prior 90 days. This assumes monthly-grain snapshots and a numeric fsr_rank encoding of FSR (comparing rating strings such as 'A+' and 'A' lexicographically is not meaningful):

WITH loss_12m AS (
  SELECT entity_id, report_date,
    SUM(incurred_loss) OVER (PARTITION BY entity_id ORDER BY report_date
      ROWS BETWEEN 11 PRECEDING AND CURRENT ROW) AS incurred_12m,
    SUM(earned_premium) OVER (PARTITION BY entity_id ORDER BY report_date
      ROWS BETWEEN 11 PRECEDING AND CURRENT ROW) AS premium_12m
  FROM statutory_snapshots
),
rating_moves AS (
  SELECT entity_id, observed_at, fsr_rank,
    LAG(fsr_rank) OVER (PARTITION BY entity_id ORDER BY observed_at) AS prev_fsr_rank
  FROM ratings
)
SELECT l.entity_id, l.report_date,
  l.incurred_12m / NULLIF(l.premium_12m, 0) AS lr_12m,
  EXISTS (
    SELECT 1 FROM rating_moves m
    WHERE m.entity_id = l.entity_id
      AND m.fsr_rank > m.prev_fsr_rank  -- an actual upgrade, not any action
      AND m.observed_at BETWEEN l.report_date - INTERVAL '90 DAY' AND l.report_date
  ) AS recent_upgrade
FROM loss_12m l;

Using the dataset in models and decision workflows

Once normalized and feature-engineered, your longitudinal dataset can power:

  • Counterparty scoring — Bayesian or gradient-boosted models that combine rating transitions and reserve development to estimate default / impairment risk over a 1–5 year horizon.
  • Reinsurance pricing — layer selection logic where ceded expected loss and counterparty capital certainty affect premium loadings and collateral requirements.
  • Portfolio rebalancing — automated alerts when concentration to a reinsurer or group exceeds thresholds relative to their FSR or recent rating outlook.
  • Stress testing — scenario runs that simulate a rating downgrade of a pooled member and calculate capital shortfall across treaties.
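The stress-testing item can be prototyped as a simple haircut scenario before building full treaty-level simulation. A toy sketch; the 50% recovery haircut and the treaty record layout are purely illustrative:

```python
def pool_downgrade_shortfall(treaties: list, downgraded_ids: set,
                             haircut: float = 0.5) -> float:
    """Scenario: given a set of downgraded (e.g. pooled) reinsurers, apply a
    recovery haircut to their ceded reserves and sum the capital shortfall
    across treaties. A real run would vary the haircut by rating notch
    and net out collateral."""
    return sum(t["ceded_reserves"] * haircut
               for t in treaties if t["reinsurer_id"] in downgraded_ids)

treaties = [
    {"reinsurer_id": "pool_member_a", "ceded_reserves": 100.0},
    {"reinsurer_id": "other_re", "ceded_reserves": 50.0},
]
shortfall = pool_downgrade_shortfall(treaties, {"pool_member_a"})
```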

Example: How Michigan Millers’ A+ upgrade affects reinsurance decisions

Context: On 2026-01-16 AM Best upgraded Michigan Millers to A+, revising the outlook to stable and assigning a “p” reinsurance affiliation code after the company joined Western National’s pooling agreement. For an in-force portfolio ceded to Michigan Millers this implies:

  • Reduced counterparty loading and possibly lower collateral requirement because the pool provides evident capital support.
  • Immediate model signal to re-evaluate treaty pricing and capacity limits for any retroactive or upcoming renewals.
  • Need to update the entity’s affiliation flag in reinsurance_positions and recompute concentration metrics across the Western National group.
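Recomputing concentration after a pool admission means rolling exposures up to the ultimate group before taking shares. A sketch; the group_map and position layout are illustrative:

```python
def group_concentration(positions: list, group_map: dict) -> dict:
    """Share of total ceded premium per ultimate group.

    Pool members roll up to their group, so a pool admission (as in the
    Michigan Millers example) immediately changes concentration metrics
    even though no treaty moved.
    """
    totals: dict = {}
    for p in positions:
        group = group_map.get(p["reinsurer_id"], p["reinsurer_id"])
        totals[group] = totals.get(group, 0.0) + p["ceded_premium"]
    grand = sum(totals.values())
    return {g: v / grand for g, v in totals.items()}

positions = [
    {"reinsurer_id": "michigan_millers", "ceded_premium": 40.0},
    {"reinsurer_id": "western_national", "ceded_premium": 40.0},
    {"reinsurer_id": "other_re", "ceded_premium": 20.0},
]
# After the pool admission both carriers map to one group.
shares = group_concentration(positions,
                             {"michigan_millers": "wn_group",
                              "western_national": "wn_group"})
```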

Validation, governance and model risk

Strong governance prevents erroneous actions on bad data:

  • Data checks: reconcile statutory surplus to RBC numerators, ensure loss triangle monotonicity, and verify that rating event timestamps match press releases.
  • Explainability: log features and model inputs for every reinsurance quote and allocation decision to meet model governance requirements.
  • Access controls and PII: claims data often contains personal data; implement role-based access and anonymization where required.
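The triangle-monotonicity check mentioned above is straightforward to automate. A minimal sketch assuming cumulative triangles keyed by accident year:

```python
def triangle_violations(triangle: dict) -> list:
    """Return (accident_year, dev_index) pairs where a cumulative loss
    triangle decreases along the development axis.

    Cumulative paid/incurred columns should be non-decreasing; a decrease
    usually indicates restated data or a parsing error, and should go to
    analyst review rather than be silently corrected.
    """
    violations = []
    for ay, row in triangle.items():
        for i in range(1, len(row)):
            if row[i] < row[i - 1]:
                violations.append((ay, i))
    return violations

bad = triangle_violations({2023: [100.0, 150.0, 160.0],
                           2024: [120.0, 110.0]})
```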

Operational patterns: alerts, dashboards and playbooks

Translate dataset outputs into short decision loops:

  • Alert rule: trigger a “reinsurance review” if (a) the FSR is downgraded one notch, or (b) the 12m rolling LR increases by 30% and reserve development exceeds 10% over the last two quarters.
  • Dashboard KPIs: top 20 counterparty exposures by ceded premium and collateral, rolling loss ratios, rating-change heatmap.
  • Playbook: For downgrade alerts, auto-generate a packet — latest statutory snapshot, last 3 years of claims triangles, active treaties and ceded concentrations — to support trading desk negotiations.
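The alert rule above reduces to a small predicate; a sketch, with thresholds taken from the rule as written:

```python
def reinsurance_review_needed(fsr_notch_change: int,
                              lr_12m_now: float,
                              lr_12m_prior: float,
                              reserve_dev_2q: float) -> bool:
    """Trigger a reinsurance review on (a) a one-notch (or worse) FSR
    downgrade, or (b) a 30% rise in the 12m rolling loss ratio combined
    with >10% adverse reserve development over the last two quarters."""
    downgraded = fsr_notch_change <= -1
    lr_spike = (lr_12m_prior > 0
                and (lr_12m_now - lr_12m_prior) / lr_12m_prior >= 0.30)
    adverse_dev = reserve_dev_2q > 0.10
    return downgraded or (lr_spike and adverse_dev)
```

Keeping the rule as a pure function makes it trivial to unit-test and to log alongside each alert for governance.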

Trends shaping 2026

Key developments through late 2025 and early 2026 affecting how you build and use insurer datasets:

  • Structured regulatory reporting is increasing. Expect more API delivery of filings and structured formats, shortening ingestion cycles.
  • AI-assisted analysis is mainstream: LLMs and explainable AI help synthesize filings and highlight unusual reserve commentary, but always pair AI outputs with numeric checks.
  • Counterparty risk becomes dynamic: rating agencies and regulators are both adding more granular disclosure expectations for reinsurance credit risk; modelers must incorporate affiliation networks and collateral flows.
  • Data mesh adoption inside large insurers means data products (ratings-product, claims-product) will be reused across teams — design your dataset as a product with SLAs.

30/60/90 day practical implementation plan

Start small, iterate fast. A pragmatic roadmap:

  1. Day 0–30: Ingest AM Best rating history and one year of statutory snapshots for a pilot set (10–25 entities). Build entity resolution and minimal catalog entries.
  2. Day 30–60: Add claims triangles and reinsurance positions for pilot entities. Implement basic quality checks and a sample counterparty dashboard.
  3. Day 60–90: Build feature store views, integrate into a scoring model for downgrade probability, and operationalize a single alert playbook tied to trade desk actions.

Code examples — minimal ingestion and validation (Python)

Illustrative snippet: fetch a rating entry from a mock AM Best endpoint and upsert into Snowflake (pseudocode).

import requests
from datetime import datetime

# Hypothetical endpoint and auth; the real AM Best feed is contractual.
resp = requests.get(
    'https://api.ambest.example/v1/ratings?company=Michigan+Millers',
    headers={'Authorization': 'Bearer ...'},
    timeout=30,
)
resp.raise_for_status()  # fail loudly rather than ingest an error page
data = resp.json()

rating_row = {
    'entity_id': 'naic_12345',
    'observed_at': datetime.fromisoformat(data['published_at']),
    'fsr': data['fsr'],
    'long_term_icr': data['long_term_icr'],
    'affiliation_code': data.get('affiliation_code'),  # optional field
    'raw_payload': data,  # keep the raw payload for reproducibility
}

# Upsert to the warehouse via your connector, keyed on entity + observation time
warehouse.upsert('ratings', rating_row, keys=['entity_id', 'observed_at'])

Final checklist before go-live

  • Legal sign-off on AM Best and vendor claims licenses.
  • Entity mapping coverage at >95% for pilot entities.
  • Automated reconciliation tests: statutory surplus vs prior filings.
  • Operational alert with assigned owner and SLA for triage.

“An integrated, time-first insurer financials dataset turns high-level ratings into actionable, auditable signals for reinsurance strategy.”

Call to action

If your team is designing a counterparty scoring engine, reinsurance pricing workflow, or automated alerting for rating actions, start with a focused pilot: ingest AM Best ratings and 12 months of statutory snapshots for your top 25 counterparties, add claims triangles, and run the 30/60/90 plan above.

Need a jump-start? We can provide a sample pipeline, prebuilt schema and dbt models for AM Best + NAIC ingestion, or a trial API feed for pilot testing. Contact our data engineering team to request the pilot package and a 1:1 technical walkthrough.

Reference: AM Best upgrade of Michigan Millers Mutual (Insurance Journal, 2026-01-16) — https://www.insurancejournal.com/news/midwest/2026/01/16/854699.htm
