Marcus Johnson had a 680 credit score, a steady job (eleven years as a school bus driver), and $8,000 in savings. He applied to refinance his student loans through an online lender in 2023. Denied. No explanation beyond a vague “credit profile does not meet our criteria.” What Marcus didn’t know, and couldn’t have known, was that the AI loan approval process used by that lender factored in a variable called the “cohort default rate” (CDR): the average repayment behavior of other graduates from his college. His school, a historically Black institution where a large share of students borrow to attend, had a high CDR.
Marcus was being punished for where he went to school, not for anything he did. This July, the Massachusetts Attorney General’s Office settled with a student loan company called Earnest Operations for exactly this practice — $2.5 million, with Black and Hispanic borrowers found to have been disproportionately denied.
Most people assume the AI loan approval process is fairer than a human loan officer. On paper, that seems reasonable. Algorithms don’t notice your accent. They can’t see your face across the desk. They don’t hold quiet prejudices about your neighborhood the way a 1960s banker did when redlining was policy. But the story is far more complicated — and far more personal — than that simple narrative suggests. The AI model doesn’t need to see your race. It just needs your zip code, your college name, your email provider, or whether you pay your phone bill before or after your rent. Those data points do the work of discrimination while the system maintains clean hands.
There are three things happening simultaneously in AI-driven lending right now, and they pull in different directions. AI is genuinely expanding credit access for people who were invisible to traditional banks. It is also, in documented cases, automating the same discrimination it was supposed to eliminate. And the regulatory guardrails that should referee this situation are, depending on your state, either tightening or collapsing. You need to understand all three before you apply for your next loan, and before you trust that your approval or denial was ever just about your finances.
The Real Promise of the AI Loan Approval Process
The optimists have a real point. The traditional FICO-based system locked out roughly 45 million Americans who are either credit-underserved or have no credit file at all. These aren’t people who’ve been irresponsible — they’re often gig workers, recent immigrants, young adults, or people who simply paid cash for everything. A system that only counts credit cards and car loans tells you almost nothing about a person who’s paid their rent on time for a decade.
AI changes that by reading different signals. Fintech lenders like Upstart and Petal analyze rent payment history, utility bills, and cash flow patterns. Platforms like OnDeck and Kabbage have included e-commerce and transportation data in small business models. Research across nine million applications at a major bank found that adopting an AI credit scoring model increased approval rates for underserved populations while actually reducing default rates for everyone — meaning the bank took on more customers and fewer losses at the same time. That’s not a talking point. That’s a peer-reviewed result.
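To make “reading different signals” concrete, here is a minimal sketch of cash-flow feature extraction. Everything in it (the Txn type, the feature names, the twelve-month assumption) is invented for illustration and is not any particular lender’s actual pipeline:

```python
# Illustrative only: turning raw bank transactions into the kind of
# cash-flow features that alternative-data underwriting relies on.
from dataclasses import dataclass
from datetime import date

@dataclass
class Txn:
    day: date
    amount: float   # positive = inflow, negative = outflow
    category: str   # e.g. "rent", "utilities", "payroll"

def cash_flow_features(txns: list[Txn]) -> dict[str, float]:
    # Assumes exactly twelve months of transaction history.
    inflows = [t.amount for t in txns if t.amount > 0]
    rent = [t for t in txns if t.category == "rent"]
    return {
        "avg_monthly_inflow": sum(inflows) / 12,
        "rent_payments_made": float(len(rent)),  # vs. 12 expected
        "rent_paid_by_the_5th": sum(t.day.day <= 5 for t in rent) / max(len(rent), 1),
    }

# A thin-file borrower who pays rent on the 1st of every month scores
# well here even with no credit card or car loan history at all.
sample = [Txn(date(2024, m, 1), -1200.0, "rent") for m in range(1, 13)]
sample += [Txn(date(2024, m, 15), 2500.0, "payroll") for m in range(1, 13)]
print(cash_flow_features(sample))
```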
The trouble starts when you ask what data those AI models are actually using — and whether anyone has checked it for bias.
The 3 Things That Actually Determine Whether You Get Approved
AI Can Read Signals Your Bank Never Could — and That’s a Double-Edged Sword
When an AI model says it’s evaluating your “creditworthiness,” what it’s actually doing is pattern-matching your profile against millions of past borrowers to estimate how likely you are to default. The model doesn’t know you. It knows correlations. And correlations, when trained on decades of American lending data, carry a lot of baggage.
Researchers at Brookings identified a particularly uncomfortable example: whether your email address contains your own name is a statistically significant predictor of whether someone repays a loan. That sounds benign. But economists Marianne Bertrand and Sendhil Mullainathan have shown that African Americans with names strongly associated with their race face substantial discrimination in hiring — and by extension, a name-based email address could function as a racial proxy inside a lending model without anyone programming it that way. The machine found the shortcut. It just didn’t understand what the shortcut was doing.
This is the core problem with the AI loan approval process: the model doesn’t know it’s discriminating. It’s optimizing for accuracy — minimizing the rate of bad loans — and it will use whatever variable best predicts repayment, regardless of whether that variable is a stand-in for race, gender, or income bracket. One study found that women are 15% less likely to be approved for a consumer loan than men with identical credit profiles. Not because the algorithm was told to favor men — but because men dominate the historical training data, and the model learned to “tune” more heavily to patterns associated with the majority group.
The algorithm didn’t set out to treat you unfairly. It learned to, from data that already did.
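To see how that happens mechanically, here is a minimal, self-contained sketch on synthetic data. The model below never receives race as an input, yet its approval rates split sharply along group lines, because a seemingly neutral feature (called zip_risk here, standing in for a zip code or college name) correlates with group membership. All numbers and feature names are invented for illustration:

```python
# Synthetic demonstration of proxy discrimination; no real lending
# model works exactly like this.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 20_000

# Protected attribute: 0 = group A, 1 = group B. Never shown to the model.
group = rng.integers(0, 2, n)

# "Neutral" feature that correlates with group membership, the way a
# zip code can in a residentially segregated market.
zip_risk = rng.normal(loc=group * 1.0, scale=1.0)

# A genuinely individual signal, independent of group.
income = rng.normal(loc=0.0, scale=1.0, size=n)

# Historical defaults driven partly by income and partly by structural
# conditions that track zip_risk: the training labels carry the bias.
logit = 0.8 * zip_risk - 1.2 * income - 1.5
default = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Train on features only; race appears nowhere in X.
X = np.column_stack([zip_risk, income])
model = LogisticRegression().fit(X, default)

# Approve anyone whose predicted default probability is under 20%.
approved = model.predict_proba(X)[:, 1] < 0.20
for g in (0, 1):
    print(f"group {g}: approval rate = {approved[group == g].mean():.1%}")
# The two rates diverge sharply even though 'group' was never a feature:
# zip_risk did the work of discrimination.
```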

Proxy Discrimination Is the New Redlining — and It’s Harder to Prove
Here’s what makes this particularly hard to fight: the variables that drive discriminatory outcomes in AI models are often things no one would flag as suspicious. Your zip code. The college you attended. The device you use to apply. Whether you use a Mac or a PC correlates with both race and loan repayment behavior — a fact confirmed in academic research cited by Brookings. That doesn’t mean it’s fair to use it. It means the machine found a proxy that carries discriminatory weight while looking perfectly neutral.
The Wells Fargo case is the clearest recent example. An algorithm designed to assess creditworthiness was found to give higher risk scores to Black and Latino applicants compared to white applicants with virtually identical financial backgrounds, resulting in higher denial rates and worse loan terms for equally qualified borrowers. And because the algorithm’s inner logic is proprietary — protected as a trade secret — you can’t subpoena the code the way you could depose a biased loan officer.
The Massachusetts Earnest settlement laid out the anatomy of this problem clearly. The company’s student loan refinancing model automatically penalized applicants whose undergraduate institution had a high cohort default rate — the average default rate for all students from that school. A borrower with pristine personal credit was denied or given worse terms because of the aggregate behavior of students who attended their college, possibly years or decades earlier. The AG’s office didn’t have to prove intent. The discriminatory outcome was enough. But it took years of investigation, a state-level enforcement action, and $2.5 million in fines to surface a bias that affected who-knows-how-many borrowers before anyone noticed.
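As a toy illustration of those mechanics (Earnest’s actual model is proprietary, so the weights and cutoff below are invented), here is how a cohort-level penalty can flip the decision for two borrowers with identical personal credit:

```python
# Hypothetical scoring sketch: a school-level penalty overriding a
# strong individual profile. Weights and cutoff are made up.
def loan_decision(personal_score: float, cohort_default_rate: float,
                  cdr_weight: float = 400.0, cutoff: float = 650.0) -> str:
    """Blend an individual credit score with a cohort-level penalty."""
    adjusted = personal_score - cdr_weight * cohort_default_rate
    return "approve" if adjusted >= cutoff else "deny"

# Two borrowers, both with pristine 720 personal scores:
print(loan_decision(720, cohort_default_rate=0.03))  # low-CDR school  -> approve
print(loan_decision(720, cohort_default_rate=0.20))  # high-CDR school -> deny
```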
Most AI loan biases never get that kind of scrutiny. The CFPB — the federal agency best positioned to do this work nationally — had its operations shut down in February 2025. State attorneys general are filling some of that gap, but patchwork enforcement is not a system.

The Watchdogs Are Disappearing Right as the Technology Gets Stronger
There’s a timeline problem that doesn’t get enough attention. AI lending models are growing more sophisticated and more widely adopted at exactly the moment when federal consumer financial protection is the weakest it has been in decades. In October 2024, the CFPB fined Apple $25 million and Goldman Sachs $45 million for Apple Card failures linked to algorithmic transparency problems, and that was under an administration that still believed in enforcement. Today, the enforcement posture has fundamentally shifted.
At the same time, the laws that govern lending discrimination were written for a world that barely resembles this one. The Equal Credit Opportunity Act was passed in 1974. The Fair Housing Act in 1968. These statutes were designed to combat a problem that was nearly the opposite of today’s challenge: too little standardized data, and loan officers with too much unchecked discretion to deny people who “didn’t look creditworthy.” Now there’s too much data, and the discrimination happens inside models that no one outside the company can fully read.
The EU AI Act, which classified AI systems used in credit underwriting as high-risk and subject to mandatory bias audits, began phasing in during 2024 and extends through 2026. Colorado already requires insurers to test AI for discriminatory outcomes. New York City mandates annual bias audits for automated employment decision tools. These state and international frameworks are real — but they don’t cover most American borrowers, and they’re being assembled faster than they can be enforced.
What this means for you is that the AI loan approval process used on your application may never have been tested for disparate impact. The company may not have checked whether their model approves white applicants at a different rate than equally qualified Black applicants. They may not even know. And right now, in most states, there’s no law that says they have to find out.
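For reference, the most basic disparate impact screen is simple arithmetic. Here is a minimal sketch of the four-fifths (80%) rule, a common first-pass threshold in fairness audits; real fair lending analysis goes much further, but this is the kind of check many lenders never run:

```python
# Four-fifths (80%) rule screen: flag any group whose approval rate
# falls below 80% of the highest-approving group's rate.
def adverse_impact_ratios(approvals: dict[str, tuple[int, int]]) -> dict[str, float]:
    """approvals maps group name -> (approved_count, applicant_count)."""
    rates = {g: a / n for g, (a, n) in approvals.items()}
    benchmark = max(rates.values())
    return {g: r / benchmark for g, r in rates.items()}

# Hypothetical application counts, invented for illustration:
ratios = adverse_impact_ratios({
    "group_1": (800, 1000),  # 80% approval
    "group_2": (560, 1000),  # 56% approval
    "group_3": (600, 1000),  # 60% approval
})
for g, r in ratios.items():
    print(f"{g}: ratio {r:.2f} -> {'potential disparate impact' if r < 0.80 else 'ok'}")
# group_2 (0.70) and group_3 (0.75) fall below 0.80 and warrant review.
```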

What the AI Loan Approval Process Is Actually Telling You About Your Money
These three dynamics — AI as a tool for genuine inclusion, AI as a machine for laundering old biases, and a regulatory environment that can’t keep pace — aren’t separate stories. They’re the same story told from different vantage points.
The honest picture is this: the AI loan approval process is neither the savior nor the villain of financial fairness. It’s a mirror. A 2024 Urban Institute analysis found that Black and Brown borrowers were more than twice as likely to be denied a mortgage as white borrowers — and those numbers predate the AI revolution. AI didn’t create that disparity. But when a model is trained on historical data that reflects decades of discriminatory lending, it doesn’t just inherit those patterns. It institutionalizes them, scales them, and removes the human friction that sometimes stopped a biased decision from going through.
There’s a legitimate counter-argument worth taking seriously. A 2022 NYU study found that lending automation increased PPP loans to Black businesses by 12.1 percentage points. Alternative data genuinely helps people who have strong repayment habits but thin credit files — gig workers, immigrants, people recovering from medical debt. When AI is built to expand the definition of creditworthy rather than just replicate past approvals faster, it can actually close gaps that traditional banking never could.
The problem isn’t AI in lending. The problem is AI in lending without transparency, testing, or accountability.
The power dynamics here are clear. Banks and lenders hold the models. Regulators — at the federal level, at least — have stepped back. The burden of catching discrimination has shifted to state AGs, consumer advocates, and in many cases, individual borrowers who fight back without knowing why they were denied in the first place. Borrowers have the least access to information about the systems that control their financial lives. That’s not a technical issue. That’s a structural one.
| Story | What It Really Means | Who It Affects Most | What To Do Now |
|---|---|---|---|
| MA AG settles Earnest AI bias case for $2.5M | Proxy discrimination via “college CDR” is actionable even without proof of intent | Black and Hispanic student loan borrowers | Request written reasons for any denial; ask specifically about model variables |
| CFPB shut down Feb 2025 | Federal consumer financial oversight has materially weakened | All borrowers, especially those in states without strong AG enforcement | Know your state AG’s fair lending complaint process |
| 45 million Americans locked out of traditional credit | AI alternative data can genuinely expand access — if the model is built fairly | Gig workers, immigrants, thin-file borrowers | Look for lenders using alternative data (rent, utility, cash flow) if FICO is your weak point |
| Wells Fargo algorithm gave higher risk scores to Black/Latino applicants | AI bias can be systemic and invisible; without audits, no one knows | Anyone denied by a major institutional lender | If denied, check public HMDA denial-rate data for mortgage lenders; file a fair lending complaint with your state AG |
Final Thoughts
You deserve to know why you were approved or denied for a loan. Not a form letter. Not “your profile doesn’t meet our criteria.” The actual variables — what the model weighted, what thresholds it used, and whether those thresholds produce different outcomes for different groups of people. The CFPB has said plainly that there are no technology exemptions to federal consumer financial protection laws. That principle is right. Enforcing it is another matter.
The AI loan approval process will keep getting more powerful. The models will get faster, pull more data points, and make decisions in seconds that used to take a loan officer days. Whether that makes lending fairer or just faster at being unfair depends almost entirely on whether companies are required to check — and right now, many of them aren’t. Keep records of every application you submit. Request an adverse action notice whenever you’re denied. And if you suspect the outcome wasn’t about your finances, it might not have been.

