The New York Times’ recent exposé, “Tangent Line,” didn’t just dissect financial models—it laid bare a hidden architecture in applied mathematics where fairness is a casualty, not a principle. At first glance, the report appeared as a standard takedown of algorithmic bias: machine learning systems trained on skewed data produce skewed outcomes. But deeper scrutiny reveals a more insidious truth—math itself, in practice, is not neutral. It is shaped by choices, omissions, and incentives that tilt the very geometry of decision-making.

Geometry of Bias: The Tangent as a Cloaked Filter

To understand the report’s power, consider the tangent line—a concept familiar from calculus, where a line touches a curve at a single point without crossing it. In data science, this metaphor extends far beyond geometry. Tangent lines here represent the asymptotic behavior of models: the point of closest approximation between predicted outcomes and reality. But the truth, as the NYT subtly shows, is that these models rarely converge smoothly. Instead, they hinge on thresholds—often arbitrary—where marginal shifts in input data trigger disproportionate changes in output. This is not random error. It’s structured drift, engineered—or at least reinforced—by design.

  • In high-frequency trading, for instance, a 0.05% deviation in market data can pivot a model from profitable to loss-making. The tangent at that margin defines the edge of viability—but only for those with the speed and capital to react. The rest? They’re systematically excluded, not by code that says “discrimination,” but by a silence in the algorithm’s geometry.
  • Credit scoring systems illustrate this further. A single late payment, a $2 fluctuation in income, or a neighborhood ZIP code can alter the tangent slope of credit eligibility. The model doesn’t penalize behavior—it penalizes exposure. The math appears objective, but its slope is set by historical inequities embedded in training data, then frozen into inflection points that appear immutable.
  • This isn’t just about flawed data. It’s about the calculus of power. The NYT’s investigation reveals how mathematical precision becomes a tool of exclusion when the underlying functions are optimized not for equity, but for profit and stability—defined by those in control.
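The knife-edge behavior the bullets describe can be made concrete with a toy logistic scorer. Everything here is hypothetical: the weights, the $50,000 boundary, and the 0.5 cutoff are invented to show how a $2 swing in reported income can flip an eligibility decision; no real scoring system is being reproduced.

```python
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical weights -- purely illustrative, not from any real scorer.
W_INCOME = 0.001   # per dollar of annual income
W_LATE = -0.8      # per recorded late payment
BIAS = -50.0       # places the decision boundary near $50,000 of income

CUTOFF = 0.5       # eligibility threshold

def eligible(income: float, late_payments: int) -> bool:
    z = W_INCOME * income + W_LATE * late_payments + BIAS
    return sigmoid(z) >= CUTOFF

# A $2 swing in reported income flips the outcome at the boundary.
print(eligible(50_001, 0))  # True
print(eligible(49_999, 0))  # False
```

Near the boundary the sigmoid's slope is at its steepest, so tiny input changes produce the largest possible swing in the score: the model is most sensitive exactly where the stakes are highest.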

Real-World Tangents: When Math Fails the Margin

In 2022, a major fintech platform used a predictive model to determine small business loan approvals. The model’s tangent at the risk threshold was set at 72% creditworthiness—a number derived from a dataset skewed toward established firms in affluent areas. When a startup in a low-income district pushed just beyond 71.9%, the model flipped: approval vanished. The math was precise, but the context was rigged. The tangent line didn’t reject the business—it exposed the boundary of a system that penalizes proximity to potential.
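That hard cutoff can be sketched in a few lines. Only the 72% figure comes from the example above; the decision rule itself is an assumption, since the actual model is not published.

```python
RISK_THRESHOLD = 0.72  # the risk threshold cited in the example

def loan_decision(creditworthiness: float) -> str:
    # Hard cutoff: a 0.001 shortfall is treated exactly like a 0.5 shortfall.
    # The discontinuity, not the score, does the excluding.
    return "approved" if creditworthiness >= RISK_THRESHOLD else "denied"

print(loan_decision(0.720))  # approved
print(loan_decision(0.719))  # denied
```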

Similarly, in urban mobility algorithms, ride-sharing platforms adjust surge pricing via dynamic tangents that respond to demand density. A surge threshold of $1.50 per mile may seem mathematically rational, but it creates a geometric divide: drivers near high-demand zones thrive, while those in underserved areas see far fewer opportunities—because the model’s tangent slopes upward only where profitability aligns with geography, not need.
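One way to picture that geometric divide is a piecewise fare function: flat at a base rate until demand density crosses a surge trigger, then climbing with demand. The base rate, trigger density, and slope below are assumptions; only the $1.50 figure comes from the text.

```python
BASE_RATE = 1.00      # $/mile below surge -- assumed
SURGE_RATE = 1.50     # $/mile at the surge threshold, per the text
SURGE_DENSITY = 40.0  # requests per sq. km that trigger surge -- hypothetical
SURGE_SLOPE = 0.02    # extra $/mile per unit of excess demand -- hypothetical

def price_per_mile(demand_density: float) -> float:
    """Flat below the trigger; the fare's slope is nonzero only where demand is high."""
    if demand_density < SURGE_DENSITY:
        return BASE_RATE
    return SURGE_RATE + SURGE_SLOPE * (demand_density - SURGE_DENSITY)

print(price_per_mile(10.0))  # 1.0 -- underserved zone: no upside, ever
print(price_per_mile(90.0))  # 2.5 -- high-demand zone: fare climbs with density
```

The function's derivative is zero everywhere a driver in a low-demand area works, which is the "slopes upward only where profitability aligns with geography" point in miniature.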

The Hidden Mechanics: Who Draws the Tangent?

The NYT doesn’t name names, but the architecture speaks volumes. The tangent line is never neutral—it’s drawn by modelers whose incentives, training, and data sources shape its slope. This isn’t a flaw. It’s the logic of systems designed to minimize risk, not maximize justice. In machine learning, the “optimal tangent” is whatever maximizes prediction accuracy within training constraints—constraints often derived from biased historical patterns. The math is rigorous, but the reference frame is compromised.

Consider the concept of “regularization”—a technique used to prevent overfitting. It penalizes complexity, smoothing out noise. But when “noise” is data poverty—sparse, inconsistent, or systematically underrepresented—regularization doesn’t correct it. Instead, it flattens the curve, suppressing outliers that might represent high-risk but high-potential cases. The tangent becomes smoother, less responsive, and less fair. The model sacrifices nuance for stability, and in doing so, rigs the line against those it was meant to serve.
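The flattening effect is easy to demonstrate with ridge (L2) regression, the textbook form of regularization. The data below are synthetic—a small, sparsely sampled group with a genuinely steep trend—so this is a sketch of the mechanism, not a claim about any production model.

```python
import numpy as np

def ridge_fit(X: np.ndarray, y: np.ndarray, lam: float) -> np.ndarray:
    # Closed-form ridge: w = (X^T X + lam*I)^{-1} X^T y.
    # Larger lam penalizes coefficient size, pulling the fitted slope toward zero.
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(8, 1))          # only 8 observations: "data poverty"
y = 3.0 * X[:, 0] + rng.normal(0, 0.1, 8)   # the group's true trend is steep: 3.0

w_plain = ridge_fit(X, y, lam=0.0)    # ordinary least squares
w_smooth = ridge_fit(X, y, lam=10.0)  # heavy regularization

print(w_plain[0])   # close to the true slope of 3
print(w_smooth[0])  # shrunk toward 0: the tangent flattened
```

With plentiful data the penalty barely matters; with eight points it dominates, and the model reports a far gentler slope than the group actually exhibits—stability purchased at the expense of the sparse case.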

Why This Matters: The Geometry of Opportunity

The NYT’s “Tangent Line” is not just a metaphor—it’s a warning. When mathematical precision is decoupled from social context, the result is a world where the “correct” answer isn’t discovered; it’s engineered. The tangent becomes a boundary between inclusion and exclusion, drawn not by laws or ethics, but by derivatives and decision boundaries.

This raises a sobering question: can math ever be neutral when its frameworks are built on data that carries the weight of history? The answer, in practice, is no. The geometry of risk, the slope of prediction, the inflection point of approval—these are not abstract concepts. They are political, economic, and deeply human. And in the hands of those who control the models, they become instruments of subtle, systemic exclusion.

The real tangent line cuts through our assumptions. It shows that fairness cannot be an afterthought in mathematics. It must be embedded in its foundation—before the model draws its first slope, before the data sharpens its edge. Until then, the system will continue to privilege precision over justice, and the margin will always favor those who already stand closest to the line.