Algorithmic Discrimination: Is the Apple Card Sexist?


Two people with the same income, the same holdings, and similar credit scores apply for an Apple Card. One is a man, the other a woman. The woman receives a credit limit one-twentieth the size of the man’s. Why? No one can say.

David Heinemeier Hansson, software engineer and creator of Ruby on Rails, applied for an Apple Card at the same time as his wife. They have joint accounts, so they show the same holdings and the same income, and she actually has a slightly better credit score. Still, Apple thought he was 20 times more reliable than she was.

He’s not alone. Steve Wozniak founded a little company with a friend in a garage, and that little company grew into Apple. Despite the couple’s shared accounts, Apple and Goldman Sachs decided to give him a credit limit ten times that of his wife.

When these stories hit Twitter, many more men and women spoke up with the same experience.

What’s going on? Why does the Apple Card reinforce sexism? If anyone knows, they’re not talking, and there’s no other way to peek inside the black box that controls people’s finances and reinforces sexism. This is the danger of the black box algorithm, and it could even endanger lives.

Goldman Sachs’ Card

Goldman Sachs certainly isn’t new to the financial world, but this is their first consumer credit card. Along with Apple’s software, it’s perhaps the most comprehensive, technology-driven credit card ever. But how does it work? Apple’s financial partner for the Apple Card, Goldman Sachs, uses a black box algorithm to set spending limits and interest rates. “Black box” means we don’t know what it’s doing; in fact, we don’t even know what data it uses. The Apple Card, though it seems high-end, has reportedly been easy for people to get, even those with low credit scores, and that’s partly thanks to this system. However, the same system is treating women as less reliable than men, all other things being equal.

Reinforcing Sexism, Without Realizing It

Goldman Sachs defends their algorithm, stating that they never even ask about a person’s gender when setting a credit limit. But that’s not the whole story. They likely collect more than enough information to infer a person’s gender; in many cases, gender can be guessed from a name alone. Credit issuers don’t stop there, either. They can look at how you spend your money and where you spend it. These algorithms do the work that humans previously did, and those humans carried their own biases, often including sexism. When those reviewers saw purchases that looked like things a woman might buy, they decided she was less trustworthy. Goldman Sachs may have unintentionally baked those sexist ideals into their algorithms by training them on the decisions those biased humans made. Add large sums of money to the equation, and that sexism is multiplied.
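To make the proxy problem concrete, here’s a minimal, hypothetical sketch. The data is synthetic, the feature names are invented, and this is not a description of Goldman Sachs’ actual system; it only shows how a model trained on biased historical decisions can reproduce a gender gap even though gender is never a feature.

```python
# Hypothetical illustration: gender is never a feature, yet the model still
# learns the historical bias through a proxy variable. All data is synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 10_000
gender = rng.integers(0, 2, size=n)            # 0 = man, 1 = woman (never shown to the model)
income = rng.normal(80_000, 20_000, size=n)

# Proxy: monthly spend in merchant categories that historically skew female.
proxy_spend = np.where(gender == 1,
                       rng.normal(300, 50, size=n),
                       rng.normal(50, 20, size=n))

# Historical limits set by biased humans: women got roughly $5,000 less,
# all else being equal.
historical_limit = 0.2 * income - 5_000 * gender + rng.normal(0, 1_000, size=n)

# Train only on income and the proxy -- gender is "never asked".
X = np.column_stack([income, proxy_spend])
model = LinearRegression().fit(X, historical_limit)

# Two applicants identical except for the proxy feature.
applicants = np.array([[80_000.0, 50.0],    # "male-typical" spending
                       [80_000.0, 300.0]])  # "female-typical" spending
print(model.predict(applicants))  # second limit comes out thousands of dollars lower
```

Even though the gender column is never used, the model learns a large negative weight on the proxy spending feature, and the “female-typical” applicant ends up with a limit thousands of dollars lower.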

Investigation and Solutions

Last month, New York’s Department of Financial Services began an investigation into the Apple Card and Goldman Sachs. The fact is, this kind of blatant discrimination is against the law. Goldman Sachs is trying to hide behind their algorithm, but anyone who works on these algorithms knows that you can, and should, test for biases. If you’re using historical data, or even current data from people, you’re going to have to weed out biases tied to racism, sexism, homophobia, xenophobia, antisemitism, Islamophobia, and more. You’re going to have to look at the results for these users and analyze why the system may have discriminated against them. If a data point reinforces sexist ideals without improving the accuracy of your algorithm, it’s a data point that exists for sexist reasons alone, and it must be thrown out.
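Here’s an equally hypothetical sketch of that kind of test: retrain the model without a suspect feature and compare both the accuracy and the gap between groups. The data and feature names below are invented purely for illustration.

```python
# Hypothetical ablation test: drop a suspect feature, retrain, and compare
# accuracy and the male/female gap in predictions. Synthetic data throughout.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, size=n)                    # 0 = man, 1 = woman
income = rng.normal(80_000, 20_000, size=n)
suspect = group + rng.normal(0, 0.1, size=n)          # proxy tied to gender, not to risk
# Historical limits carry human bias: women were given about $4,000 less.
limit = 0.2 * income - 4_000 * group + rng.normal(0, 1_000, size=n)

def gap(preds, g):
    """Average predicted limit for men minus the average for women."""
    return preds[g == 0].mean() - preds[g == 1].mean()

train, test = train_test_split(np.arange(n), test_size=0.3, random_state=0)
for label, X in [("with suspect feature   ", np.column_stack([income, suspect])),
                 ("without suspect feature", income.reshape(-1, 1))]:
    model = LinearRegression().fit(X[train], limit[train])
    preds = model.predict(X[test])
    print(f"{label}: R^2 = {model.score(X[test], limit[test]):.3f}, "
          f"gap = ${gap(preds, group[test]):,.0f}")
```

If the only “accuracy” a feature buys is a better fit to biased historical decisions, while it widens the gap between groups, it’s encoding prejudice rather than signal, and it should be dropped.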

This algorithm could decide how much money a mother can get for a month to feed herself or her children. It could make a huge difference in a person’s life. Algorithms with such a profound impact on our daily lives cannot be black boxes. They cannot be machines with hidden parts, doing unknown things with our data. We must be able to peek under the hood. The public needs to be able to audit the machines that can fundamentally alter the economy and our lives.


Humans create data, fed into man-made algorithms, which detect patterns to return to people.

On top of open-sourcing large parts of AI, we need regulations that enforce “good” AI. Ethical AI isn’t a pie-in-the-sky dream. If companies analyze the AI they create and use only data points that reduce, rather than reinforce, bias without sacrificing accuracy, then we can use AI to make humanity less prejudiced. However, many companies, like Apple and Goldman Sachs, don’t question the nature of humanity. They don’t question the outwardly sexist and racist past that defines these algorithms. They don’t question why some people may be disadvantaged by their creations. That’s dangerous. That’s dystopian. As the humans who control this world, we can’t give in to machines that don’t support our vision for a better humanity. If we want better AI, we have to teach it not to be racist, sexist, or discriminatory in any other way. Since companies won’t do this on their own, we must legislate it. If we don’t regulate AI now, one day, for someone, it may be too late.

