David Heinemeier Hansson, software engineer and creator of Ruby on Rails, applied for an Apple Card at the same time as his wife. They have joint accounts, so they show the same holdings and the same income, and she actually has a slightly better credit score. Still, Apple's algorithm gave him a credit limit 20 times higher than hers.
He’s not alone. Steve Wozniak co-founded a little company with a friend in a garage; that little company grew into Apple. Despite sharing all of his accounts with his wife, Wozniak was given a credit limit ten times hers by Apple and Goldman Sachs.
After Hansson took to Twitter, many more men and women spoke up with the same experience.
What’s going on? Why does the Apple Card reinforce sexism? If anyone knows, they aren’t talking, and there is no other way to peek inside the black box that controls people’s access to credit. This is the danger of the black-box algorithm, and it could even endanger lives.
The @AppleCard is such a fucking sexist program. My wife and I filed joint tax returns, live in a community-property state, and have been married for a long time. Yet Apple’s black box algorithm thinks I deserve 20x the credit limit she does. No appeals work.
— DHH (@dhh) November 7, 2019
Goldman Sachs’ Card
Goldman Sachs is certainly no newcomer to the financial world, but this is its first consumer credit card. Paired with Apple’s software, it is perhaps the most comprehensive, technology-driven credit card ever made. But how does it work? Goldman Sachs, Apple’s financial partner for the Apple Card, uses a black-box algorithm to set spending limits and interest rates. “Black box” means we don’t know what it’s doing; we don’t even know what data it uses. The Apple Card, despite its premium image, has reportedly been easy to get, even for people with low credit scores, thanks in part to this system. Yet the same system treats women as less reliable than men, all other things being equal.
Reinforcing Sexism, Without Realizing It
Goldman Sachs defends its algorithm by pointing out that it never asks for an applicant’s gender when setting a credit limit. But omitting gender doesn’t make the algorithm gender-blind. The data it does collect likely contains more than enough information to infer gender; in many cases, a name alone is enough. And credit card issuers don’t stop there: they can look at how you spend your money and where you spend it. These algorithms do work that humans previously did. Those humans carried their own biases, often including sexism. When they saw purchases they associated with women, they judged the applicant less trustworthy. By training its models on those earlier decisions, Goldman Sachs may have unintentionally baked the same sexism into its algorithm. Add large sums of money to the equation, and that sexism is multiplied.
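This mechanism, bias surviving through proxy variables even when the protected attribute is dropped, is easy to demonstrate. Here is a minimal, entirely hypothetical sketch (the numbers, the "spending category" proxy, and the limit-setting rule are invented for illustration; nothing here reflects Goldman Sachs' actual model or data). Gender never appears in the "training" data, yet the historical disparity re-emerges because a correlated feature carries it:

```python
import random

random.seed(0)

def make_history(n=1000):
    """Simulate biased historical credit decisions.

    The stored records contain only (category, income, limit).
    Gender is used to generate them but is never stored."""
    history = []
    for _ in range(n):
        gender = random.choice(["m", "f"])
        income = random.gauss(80_000, 10_000)
        # Proxy feature: category "A" chosen mostly by men, "B" mostly by women.
        if gender == "m":
            category = "A" if random.random() < 0.9 else "B"
        else:
            category = "B" if random.random() < 0.9 else "A"
        # Biased human decision: same income, far lower limit for women.
        limit = income * (0.30 if gender == "m" else 0.03)
        history.append((category, income, limit))
    return history

def fit(history):
    """Learn the average limit-to-income ratio per proxy category."""
    ratios = {}
    for cat, income, limit in history:
        ratios.setdefault(cat, []).append(limit / income)
    return {cat: sum(r) / len(r) for cat, r in ratios.items()}

model = fit(make_history())

# Two new applicants with identical income; only the proxy category differs.
limit_a = model["A"] * 80_000
limit_b = model["B"] * 80_000
print(limit_a > limit_b)  # prints True: the bias survived without gender
```

The model never sees gender, yet applicant A is offered several times the limit of applicant B on identical finances, purely because the category feature is correlated with who made which purchases in the biased historical data.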
Investigation and Solutions
This algorithm could decide how much credit a mother has each month to feed herself and her children. It could make a huge difference in a person’s life. Algorithms with such a profound impact on our daily lives cannot be black boxes. They cannot be machines with hidden parts, doing unknown things with our data. We must be able to peek under the hood. The public needs to be able to audit the machines that can fundamentally alter the economy and our lives.
On top of open-sourcing large parts of these systems, we need regulations that enforce ethical AI. Ethical AI isn’t a pie-in-the-sky dream. If companies audited the models they build and kept only the data points that reduce bias rather than reinforce it, without sacrificing accuracy, AI could make our institutions less prejudiced than the humans it replaces. But many companies, Apple and Goldman Sachs among them, don’t question the data they inherit. They don’t ask whether our openly sexist and racist past is defining their algorithms, or why some people end up disadvantaged by their creations. That’s dangerous. That’s dystopian. We can’t hand control to machines that don’t share our vision for a better humanity. If we want better AI, we have to teach it not to be racist, sexist, or otherwise discriminatory. Since companies won’t do this on their own, we must legislate it. If we don’t regulate AI now, one day, for someone, it may be too late.
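What could such an audit look like in practice? One simple sketch is a demographic-parity check in the spirit of the "four-fifths rule" from US employment-discrimination guidance: no group's approval rate should fall below 80% of the highest group's rate. The groups, outcomes, and threshold below are illustrative assumptions, not anything Apple or Goldman Sachs is known to compute:

```python
def approval_rates(decisions):
    """decisions: list of (group, approved) pairs.
    Returns the approval rate for each group."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def passes_four_fifths(decisions, threshold=0.8):
    """True if the lowest group's rate is at least `threshold`
    times the highest group's rate."""
    rates = approval_rates(decisions)
    return min(rates.values()) >= threshold * max(rates.values())

# Hypothetical outcomes: men approved 90% of the time, women 40%.
sample = ([("m", True)] * 90 + [("m", False)] * 10
          + [("f", True)] * 40 + [("f", False)] * 60)
print(passes_four_fifths(sample))  # prints False: the disparity fails the rule
```

A check like this is deliberately crude: it says nothing about why the disparity exists, only that one exists, which is exactly the kind of red flag a regulator or an internal audit should be required to chase down before a model reaches the public.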
Sources:
- CNBC
- Luke Dormehl, Cult of Mac
- Andy Meek, BGR
- Taylor Telford, Washington Post
- Neil Vigdor, New York Times
- James Vincent, The Verge