
The AI Lending Reckoning

By Robert Goodyear · January 7, 2025 · 10 min read

CFPB Circular 2022-03 contains one sentence that should terrify every fintech founder who shipped a black-box credit model: "ECOA and Regulation B do not permit creditors to use complex algorithms when doing so means they cannot provide the specific and accurate reasons for adverse actions."

The Bureau chose precise language. Not "should endeavor to provide" or "make reasonable efforts." The phrase "do not permit" forecloses the interpretive latitude that fintechs have exploited for years. The startups claiming to be "cloud-native AI-powered lending platforms" are operating on borrowed time, and the enforcement actions have already begun.

The Explainability Problem

Sarah applies for a loan. Gets denied. Under ECOA, she's entitled to know why. The adverse action notice must contain "the specific reasons for the adverse action taken."

The lender uses a gradient-boosted tree ensemble with 1,400 input variables. The model says no. The compliance team generates SHAP values to produce "reasons." They send Sarah a letter: "insufficient credit history."

One problem: Sarah has a 750 FICO score and twelve years of credit history. The actual driver was a combination of her employer's ZIP code, the time of day she applied, and a proxy variable correlated with race that the model learned from training data. The SHAP explanation was a post-hoc approximation. It identified variables that correlate with the decision, but it did not identify the actual reasons for the decision.
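For concreteness, here is a minimal sketch of what that pipeline typically looks like: a post-hoc explainer run over a black-box model, with the most negative attributions mapped to notice language. The synthetic data, feature names, and reason-code mapping are hypothetical, not drawn from any actual lender.

```python
# Minimal sketch: post-hoc SHAP attributions mapped to adverse action reason codes.
# Features, data, and the reason-code mapping are illustrative only.
import numpy as np
import pandas as pd
import shap
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
features = ["months_since_oldest_tradeline", "employer_zip_risk_index",
            "application_hour_of_day", "revolving_utilization"]
X = pd.DataFrame(rng.normal(size=(500, len(features))), columns=features)
y = (X.sum(axis=1) + rng.normal(size=500) > 0).astype(int)

model = XGBClassifier(n_estimators=50, max_depth=3).fit(X, y)
explainer = shap.TreeExplainer(model)

REASON_CODES = {  # hypothetical mapping, not any lender's actual notice language
    "months_since_oldest_tradeline": "Insufficient credit history",
    "employer_zip_risk_index": "Unable to verify employment",
    "application_hour_of_day": "Incomplete application information",
    "revolving_utilization": "Proportion of balances to credit limits too high",
}

def adverse_action_reasons(applicant: pd.DataFrame, top_n: int = 2) -> list[str]:
    """Pick the features with the most negative SHAP contributions.

    These are post-hoc approximations of the model's score,
    not the model's actual decision logic."""
    contribs = explainer.shap_values(applicant)[0]
    worst = np.argsort(contribs)[:top_n]          # most negative contributions first
    return [REASON_CODES[applicant.columns[i]] for i in worst]

print(adverse_action_reasons(X.iloc[[0]]))
```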

CFPB's footnote 6 in the same circular addresses this directly: post-hoc explanation methods approximate the model, and creditors "must still be able to validate the accuracy of those approximations, which may not be possible with less interpretable models."

Academic research confirms the gap between post-hoc explanation and actual decision logic. Bussmann et al. (Frontiers in AI, 2021) found "no correlation between the feature rankings produced by" the SHAP and LIME frameworks, meaning the features one method identifies as important differ from those identified by the other method applied to the same model. They cannot both be accurate, so at least one is wrong, and an explainability regime that produces different answers depending on which technique you select satisfies auditors without satisfying the law.
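That disagreement is easy to observe directly. The sketch below reuses the model, X, features, and explainer from the earlier example and assumes the lime and scipy packages are available; it compares the importance rankings the two methods assign to the same prediction, where a rank correlation near 1 would indicate agreement.

```python
# Minimal sketch: compare the feature rankings SHAP and LIME assign to the
# same prediction from the same model (continues the synthetic setup above).
from lime.lime_tabular import LimeTabularExplainer
from scipy.stats import spearmanr

lime_explainer = LimeTabularExplainer(
    X.values, feature_names=features, mode="classification")

def predict_fn(arr):
    # LIME perturbs rows as a bare numpy array; restore column names for the model.
    return model.predict_proba(pd.DataFrame(arr, columns=features))

row = X.iloc[0]
lime_exp = lime_explainer.explain_instance(
    row.values, predict_fn, num_features=len(features))
lime_weights = dict(lime_exp.as_map()[1])          # {feature_index: weight}
lime_rank = [lime_weights.get(i, 0.0) for i in range(len(features))]

shap_rank = explainer.shap_values(X.iloc[[0]])[0]  # per-feature attributions

rho, _ = spearmanr(np.abs(shap_rank), np.abs(lime_rank))
print(f"Rank agreement between SHAP and LIME importances: {rho:.2f}")
```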

The Massachusetts Precedent

In July 2025, the Massachusetts Attorney General settled with a student loan lender for $2.5 million. The allegations: AI underwriting models caused disparate impact against Black, Hispanic, and non-citizen applicants. The company used "Cohort Default Rate" variables that disproportionately penalized minority applicants. They failed to test models for disparate impact. Their adverse action notices did not reflect the true denial reasons.

The consent order requires: written AI governance policies, an Algorithmic Oversight Team with a designated chairperson, annual fair lending testing of all AI underwriting models, detailed model inventories, and use of interpretable models enabling accurate adverse action notices.

That last requirement warrants attention. The AG did not require "better explanations" of black-box models or more sophisticated post-hoc approximation techniques. The order requires interpretable models, full stop. The distinction matters because it forecloses the approach most AI lending companies have taken: build opaque models, bolt on explanation layers, hope regulators accept the output.
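Most of the order's other obligations are organizational, but the model inventory requirement is concrete enough to sketch. Below is a minimal example of what such a record might capture; the field names are hypothetical, chosen to track the consent order's categories rather than any prescribed schema.

```python
# Minimal sketch of a model inventory record; field names are hypothetical,
# chosen to cover the categories the consent order requires (governance,
# fair lending testing cadence, interpretability, adverse action mapping).
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelInventoryRecord:
    model_id: str
    purpose: str                                 # e.g. "student loan underwriting"
    owner: str                                   # accountable business owner
    interpretable: bool                          # glass-box vs. post-hoc-explained
    input_variables: list[str]
    adverse_action_reason_map: dict[str, str]    # feature -> notice language
    last_fair_lending_test: date
    fair_lending_test_cadence_days: int = 365    # annual testing per the order
    validation_reports: list[str] = field(default_factory=list)

registry: dict[str, ModelInventoryRecord] = {}
```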

This case establishes a template that state AGs are now coordinating around. With the legal theory demonstrated, every company running unexplainable credit models operates under enforcement risk that did not exist eighteen months ago.

The Upstart Impasse

The poster child for AI lending spent years under voluntary independent monitorship. NAACP Legal Defense Fund participated. Relman Colfax conducted the review. The March 2024 Final Monitor Report found statistically and practically significant approval disparities for Black applicants.

The monitor recommended adopting a Less Discriminatory Alternative model. The company rejected the recommendation, arguing that alternative models performing within "uncertainty intervals" would "unacceptably compromise the accuracy of its models."

The resulting deadlock illuminates the fundamental tension in AI lending. The monitor warned that without proper LDA methodology, "even an elaborate model testing protocol risks simply becoming window-dressing."

The company originally held CFPB's first-ever No-Action Letter but requested termination in June 2022 to avoid CFPB review requirements for model changes. The protection they sought is gone, the disparities persist, and the methodology dispute remains unresolved.

This pattern will repeat across the industry. Companies build models optimized for accuracy on data that encodes historical bias, and post-hoc explainability techniques cannot reliably surface that bias. When testing reveals disparate impact, the company faces a choice: accept a less accurate but fairer model, or defend the disparity. Most will discover that "accuracy" measured on historically biased data is itself a biased metric.
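To show what that choice looks like mechanically, here is a minimal sketch of a less-discriminatory-alternative search: among candidate models performing within a tolerance of the champion, prefer the one with the smallest approval disparity. It assumes scikit-learn-style fitted classifiers; the adverse impact ratio metric and the tolerance are illustrative choices, not the monitor's or any regulator's prescribed methodology.

```python
# Minimal sketch of a less-discriminatory-alternative (LDA) search.
# `candidates` is assumed to be a list of fitted scikit-learn-style classifiers;
# `protected` is a 0/1 array marking membership in the protected group.
import numpy as np

def adverse_impact_ratio(approved: np.ndarray, protected: np.ndarray) -> float:
    """Approval rate of the protected group divided by the control group's rate."""
    prot_rate = approved[protected == 1].mean()
    ctrl_rate = approved[protected == 0].mean()
    return prot_rate / ctrl_rate

def select_lda(candidates, X, y, protected, accuracy_tolerance=0.005):
    champion_acc = max(m.score(X, y) for m in candidates)
    viable = [m for m in candidates
              if m.score(X, y) >= champion_acc - accuracy_tolerance]
    # Among models performing within tolerance, prefer the least disparate one.
    return max(viable, key=lambda m: adverse_impact_ratio(m.predict(X), protected))
```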

What "AI-Powered" Actually Means

Walk through any fintech marketing deck and count the "AI-powered" claims. Then ask a simple question: powered by AI to do what, exactly?

The pattern is consistent across the sector, with companies claiming "AI-powered lending" falling into predictable categories:

Workflow automation marketed as AI lending. Document OCR, application routing, and exception handling are valuable operational capabilities, but they are not credit decisioning. The model that says yes or no runs on traditional scorecards or third-party providers the company does not control.

Loan marketplaces with matching algorithms. These platforms connect borrowers to lenders based on eligibility criteria, making the "AI" a recommendation engine that provides useful infrastructure but does not perform underwriting.

Aggregated experience claims. When a company touts "a zillion years of combined credit expertise," it typically means a large team averaging a few years each. The individual credentials may well be real, but summed experience does not constitute a model governance program.

None of this is credit decisioning AI in the regulatory sense, because the actual decisioning—the model that produces the yes or no—either runs on traditional scorecards, runs on third-party models the company does not control, or runs on models they cannot explain to regulators.

The regulatory burden falls on the model while the marketing emphasizes everything except the model. Companies raise Series A rounds to fund marketing and sales rather than Model Risk Management functions, and the gap between capability claims and regulatory infrastructure will surface during examination.

The OCC Position

OCC Bulletin 2011-12 has governed bank model risk management since 2011. The August 2021 Comptroller's Handbook update explicitly addresses AI: "Banks should tailor the level of explainability to the model's use. AI models used in credit underwriting would be subject to relatively high standards for documentation and validation to adequately demonstrate that the model is fair and operating as intended."

"Relatively high standards." The OCC has conducted thematic reviews of large banks and found some classified AI/ML models as "low risk" with insufficient governance and "limited information on efforts to evaluate bias and fair lending issues."

The examiners are paying attention, and the companies claiming AI-powered credit decisioning will face examination. Those examinations will require documentation of model methodology, validation reports, fair lending testing, and governance procedures.

Early-stage companies with modest funding rounds do not have this infrastructure; the capital they raised went to marketing and sales rather than Model Risk Management functions.

The EU Compliance Cliff

The EU AI Act's obligations for high-risk systems take effect on August 2, 2026, and credit scoring is classified as high-risk under Annex III. Requirements include conformity assessments, risk management systems, technical documentation, human oversight, and transparency sufficient for deployers to meet their own compliance obligations.

Penalties reach 3% of worldwide annual turnover, and extraterritorial application means any AI system whose output is used in the EU falls within scope. American fintechs with European customers or European investors will face compliance obligations they have not planned for. The Act requires systems designed to "allow deployers to implement human oversight," and black-box models with bolted-on SHAP explanations do not satisfy that requirement.

The companies that cannot explain their models to American regulators will not be able to demonstrate compliance to European regulators, which creates material market access risk for any company with cross-border ambitions.

The Glass-Box Alternative

Cynthia Rudin's 2019 paper in Nature Machine Intelligence, "Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead," has accumulated 7,600 citations.

Her core argument is that the accuracy-interpretability tradeoff is largely a myth: interpretable models (scoring systems, decision trees, rule lists) often match black-box alternatives on structured data, and the belief that complexity yields accuracy is empirically unsupported in many domains. More importantly, trying to explain black boxes "is likely to perpetuate bad practice and can potentially cause great harm to society." The explanation itself becomes a liability when it satisfies an auditor while failing to identify the actual decision drivers, enabling continued harm under a veneer of compliance.

The implication for credit decisioning is direct: models designed for interpretability from inception do not require explanation layers because the decision logic is the model. Adverse action notices can accurately reflect decision factors because the decision factors are known.
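Here is a minimal sketch of that glass-box approach: a small logistic scorecard in which adverse action reasons fall directly out of the model's own terms (coefficient times standardized value), with no separate explanation layer. The features, synthetic data, and notice language are illustrative.

```python
# Minimal sketch of a glass-box scorecard: reasons come from the model's
# own coefficients, not from a post-hoc explainer. Illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

FEATURES = ["years_of_credit_history", "payment_to_income_ratio",
            "recent_delinquencies", "revolving_utilization"]
NOTICE_TEXT = {  # hypothetical notice language
    "years_of_credit_history": "Length of credit history",
    "payment_to_income_ratio": "Income insufficient for amount of credit requested",
    "recent_delinquencies": "Delinquent past or present credit obligations",
    "revolving_utilization": "Proportion of balances to credit limits is too high",
}

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, len(FEATURES)))
y = (X @ np.array([0.8, -1.2, -1.5, -0.9]) + rng.normal(size=1000) > 0).astype(int)

scaler = StandardScaler().fit(X)
model = LogisticRegression(max_iter=1000).fit(scaler.transform(X), y)

def decide(applicant: np.ndarray, threshold: float = 0.5, top_n: int = 2):
    z = scaler.transform(applicant.reshape(1, -1))[0]
    contributions = model.coef_[0] * z            # the model's actual terms
    approve = model.predict_proba(z.reshape(1, -1))[0, 1] >= threshold
    reasons = [] if approve else [
        NOTICE_TEXT[FEATURES[i]] for i in np.argsort(contributions)[:top_n]]
    return approve, reasons

print(decide(X[0]))
```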

Building interpretable models requires discipline. It means accepting constraints, and it means feature engineering and domain expertise matter more than raw model complexity. You cannot throw 1,400 variables at a gradient-boosted ensemble and expect explainability. Some companies have made this investment; most have not. Those that invested in interpretable architecture will face lower regulatory risk, while those that wrapped black boxes in SHAP values face increasing scrutiny.

The Infrastructure Gap

The logic is straightforward: AI in credit lending is not going away, and regulators are requiring that models be explainable, validated, fair, and governed. Meeting those requirements at scale is an infrastructure problem that most AI lending companies have not solved.

Building OCC 2011-12 compliant model governance requires documented methodology, independent validation, ongoing monitoring, sensitivity analysis, and board oversight. Building ECOA-compliant adverse action generation requires models that can identify actual decision drivers rather than post-hoc approximations. Building fair lending compliance requires pre-deployment testing, LDA analysis, and ongoing monitoring for disparate impact. The companies that built growth engines without this infrastructure are discovering its absence through consent decrees and enforcement actions.
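Of those components, ongoing monitoring is the one most often missing. Here is a minimal sketch of a periodic fair-lending check that flags any period where approval disparity crosses a review threshold; the adverse impact ratio metric and the 0.80 trigger (an echo of the four-fifths rule) are illustrative choices, not a regulatory prescription.

```python
# Minimal sketch of an ongoing fair-lending monitoring check: recompute the
# approval disparity on each period's decisions and flag when it falls below
# a review threshold. Metric and threshold are illustrative.
import numpy as np

def monitor_period(approved: np.ndarray, protected: np.ndarray,
                   review_threshold: float = 0.80) -> dict:
    prot_rate = approved[protected == 1].mean()
    ctrl_rate = approved[protected == 0].mean()
    air = prot_rate / ctrl_rate
    return {
        "adverse_impact_ratio": round(air, 3),
        "needs_review": air < review_threshold,
    }

# Example: a month where the protected-group approval rate lags gets flagged.
decisions = np.array([1, 1, 0, 1, 0, 0, 1, 1, 0, 0])
group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
print(monitor_period(decisions, group))
```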

Infrastructure providers that enable compliant AI deployment operate under different economics than AI lending platforms because they carry no credit risk and face no lending regulations directly. They provide the documentation, validation, and governance capabilities that enable others to deploy models responsibly. The question facing the industry is whether AI in lending expands through compliant infrastructure or through repeated cycles of aggressive deployment followed by regulatory correction.

The Demographic Forcing Function

Cerulli Associates projects $124 trillion transferring from Boomers to younger generations through 2048, and those younger investors allocate 31% to alternative assets versus 6% for older generations. That allocation pattern creates demand for lending against assets that traditional systems cannot value, and the demand will intensify as the transfer accelerates.

The regulatory framework for responsible AI lending is now established through CFPB guidance, state AG enforcement actions, OCC examination standards, and the approaching EU AI Act requirements. The infrastructure to connect compliant models to this demand—purpose-built systems that can value alternative assets, generate accurate adverse action notices, satisfy model risk management requirements, and demonstrate fair lending compliance—remains underdeveloped. The companies that invested in interpretable architecture will capture the opportunity while the companies that wrapped black boxes in SHAP values face increasing scrutiny and enforcement risk.


Model risk management in alternative asset lending requires adherence to OCC 2011-12, ECOA adverse action requirements under CFPB Circular 2022-03, fair lending testing including LDA analysis, and emerging EU AI Act compliance for institutions with European exposure.

Having said that, this is practitioner advice, not financial or legal advice. You know what to do: find a professional and pay them to put their license at risk to make sure you're protected.

Robert Goodyear
Founder/CEO

Robert Goodyear is the founder of Aaim, a financial technology company providing alternative asset infrastructure to financial institutions.
