A Wall Street regulator is opening a probe into Goldman Sachs Group Inc.’s credit card practices after a viral tweet from a tech entrepreneur alleged gender discrimination in the new Apple Card’s algorithms when determining credit limits.
A series of posts from David Heinemeier Hansson starting Thursday railed against the Apple Card for giving him 20 times the credit limit that his wife got. The tweets, many of which contain profanity, immediately gained traction online, even attracting comment from Apple co-founder Steve Wozniak.
Hansson didn’t disclose any specific income-related information for either of them but said they filed joint tax returns and that his wife has a better credit score than he does.
The whole Twitter thread by David Heinemeier Hansson is an exercise in documenting inflexible bureaucracy and an unshakable belief in a black-box algorithm that nobody even seems to understand. Bias in algorithms is a real problem, and it will only become a bigger problem as they become more and more important in every aspect of our society.
What are their respective credit scores (how much other debt, late payments, bankruptcies, other credit lines)? It isn’t just about the income stream.
E.g.:
Husband is a welder with a high income, never had student loan debt, and pays things off on time, meaning his one credit card and his car loan.
Wife has $100k in student debt, was late on payments, accrued interest and penalties, has a $50k high-interest credit line with high balances, and a similar income.
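To put rough numbers on that example, here is a minimal sketch of a toy limit formula (the weights and figures are entirely made up, not anything Goldman actually uses); the point is only that equal incomes can still produce very different limits:

```python
# Hypothetical, simplified credit-limit heuristic. Real underwriting models
# are far more complex; this only shows that income is one input among many.
def toy_credit_limit(income, existing_debt, late_payments, utilization):
    """Return a rough limit from a handful of made-up weights."""
    base = income * 0.2                   # start from a fraction of income
    base -= existing_debt * 0.05          # penalise outstanding debt
    base *= 0.9 ** late_payments          # each late payment cuts the limit
    base *= 1.0 - 0.5 * utilization       # high balances elsewhere hurt too
    return max(round(base, -2), 500)      # floor at a minimal limit

# Same household income, very different credit profiles:
husband = toy_credit_limit(income=120_000, existing_debt=15_000,
                           late_payments=0, utilization=0.10)
wife = toy_credit_limit(income=120_000, existing_debt=150_000,
                        late_payments=4, utilization=0.80)
print(husband, wife)  # roughly 22100 vs 6500, despite identical incomes
```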
Nope.
https://dhh.dk/2019/about-the-apple-card.html
I’m sorry but DHH is kind of an idiot. There may be something to this, but if there is he isn’t the one to find it.
The irony of calling it discrimination that women get a lower credit limit on a card with an uncompetitive interest rate is lost on him.
Maybe Jamie Heinemeier Hansson (DHH’s wife) is better placed to make the case?
https://dhh.dk/2019/about-the-apple-card.html
“Open the Apple Pay doors, HAL.”
“The algorithm’s sorry, Dave. The algorithm’s afraid the algorithm can’t do that.”
https://www.youtube.com/watch?v=dSIKBliboIo
You took precautions, but the algorithm could see your fingers move.
My wife experiences structural sexism like this all the time. We’ll both have our names registered with a utility, but they’ll only talk to me, even to make changes concerning money coming directly from her account. When querying details of a mortgage application she’s been told in a patronising tone “perhaps we could talk to your husband about this?”. She uses a different surname from mine, but some utilities will insist she uses my surname. For some utilities where she’s the only account holder, they’ll send her letters addressed to Mr…
I’ve never experienced any of these things in reverse.
So, sadly this story doesn’t surprise me. There is some provision in the GDPR to avoid the IT’S JUST THE ALGORITHM defence (article 22: https://gdpr-info.eu/art-22-gdpr/ ), but I don’t know whether it applies in the case of credit applications, which often seem to fall under their own unique legislation.
The GDPR definitely has _something_ to say about this, yes – Article 22 seems relevant. This probably falls under 22.2.a, but the provisions in 22.3 still apply – which means you should be able to complain to a human about this. (Gender is not one of the protected classes of information in 9.1, so 22.4 does not trigger.)
Recital 60 is also interesting – it seems like you should be able to get enough information about the automated process to understand why it decided as it did, and to judge its fairness.
The most directly relevant thing is probably recital 71, though – it specifically deals with how nobody should have important decisions about them made solely by algorithm, and that you should have a right to have a human both re-evaluate the decision and explain why the algorithm decided as it did.
I’ll just quote the relevant article and recitals here – sorry for the massive-huge post:
(If you want the full text of the GDPR, the authoritative source is EUR-Lex: https://eur-lex.europa.eu/eli/reg/2016/679/oj )
## Article 22:
Automated individual decision-making, including profiling
1. The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.
2. Paragraph 1 shall not apply if the decision:
(a) is necessary for entering into, or performance of, a contract between the data subject and a data controller;
(b) is authorised by Union or Member State law to which the controller is subject and which also lays down suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests; or
(c) is based on the data subject’s explicit consent.
3. In the cases referred to in points (a) and (c) of paragraph 2, the data controller shall implement suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests, at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision.
4. Decisions referred to in paragraph 2 shall not be based on special categories of personal data referred to in Article 9(1), unless point (a) or (g) of Article 9(2) applies and suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests are in place.
## Recital 60
The principles of fair and transparent processing require that the data subject be informed of the existence of the processing operation and its purposes. The controller should provide the data subject with any further information necessary to ensure fair and transparent processing taking into account the specific circumstances and context in which the personal data are processed. Furthermore, the data subject should be informed of the existence of profiling and the consequences of such profiling. Where the personal data are collected from the data subject, the data subject should also be informed whether he or she is obliged to provide the personal data and of the consequences, where he or she does not provide such data. That information may be provided in combination with standardised icons in order to give in an easily visible, intelligible and clearly legible manner, a meaningful overview of the intended processing. Where the icons are presented electronically, they should be machine-readable.
## Recital 71
The data subject should have the right not to be subject to a decision, which may include a measure, evaluating personal aspects relating to him or her which is based solely on automated processing and which produces legal effects concerning him or her or similarly significantly affects him or her, such as automatic refusal of an online credit application or e-recruiting practices without any human intervention. Such processing includes ‘profiling’ that consists of any form of automated processing of personal data evaluating the personal aspects relating to a natural person, in particular to analyse or predict aspects concerning the data subject’s performance at work, economic situation, health, personal preferences or interests, reliability or behaviour, location or movements, where it produces legal effects concerning him or her or similarly significantly affects him or her. However, decision-making based on such processing, including profiling, should be allowed where expressly authorised by Union or Member State law to which the controller is subject, including for fraud and tax-evasion monitoring and prevention purposes conducted in accordance with the regulations, standards and recommendations of Union institutions or national oversight bodies and to ensure the security and reliability of a service provided by the controller, or necessary for the entering or performance of a contract between the data subject and a controller, or when the data subject has given his or her explicit consent. In any case, such processing should be subject to suitable safeguards, which should include specific information to the data subject and the right to obtain human intervention, to express his or her point of view, to obtain an explanation of the decision reached after such assessment and to challenge the decision. Such measure should not concern a child.
In order to ensure fair and transparent processing in respect of the data subject, taking into account the specific circumstances and context in which the personal data are processed, the controller should use appropriate mathematical or statistical procedures for the profiling, implement technical and organisational measures appropriate to ensure, in particular, that factors which result in inaccuracies in personal data are corrected and the risk of errors is minimised, secure personal data in a manner that takes account of the potential risks involved for the interests and rights of the data subject and that prevents, inter alia, discriminatory effects on natural persons on the basis of racial or ethnic origin, political opinion, religion or beliefs, trade union membership, genetic or health status or sexual orientation, or that result in measures having such an effect. Automated decision-making and profiling based on special categories of personal data should be allowed only under specific conditions.
dnebdal,
I agree; I cannot fathom why an algorithm of any composition would be allowed to break the law. The organizations (whether it’s Apple or the bank) need to be held accountable. They don’t get a pass.
Consider that we humans are running incredibly complex “black box” algorithms too. We may not even fully understand our own algorithms, and yet if we break the law, we’re still held accountable. Excusing discriminatory behavior on the grounds that a computer algorithm made the decision has absolutely no merit in my mind. This needs to be punished sooner rather than later, otherwise it will quickly become more rampant.
While I fully agree with you, it still raises some difficult questions. For example, how do you know it’s breaking the law? Credit algorithms are designed to be intrinsically discriminatory. If, for the sake of argument, men are statistically less credit-worthy than women, is it okay to discriminate against men? If, for the sake of argument, people who live in apartments are statistically less credit-worthy than those who live in houses, is it okay to discriminate against apartment-dwellers? What if men are statistically more likely to live in apartments? (Feel free to switch around the cases if it helps.)
I would argue the answer is ‘no’ in all cases. But these are precisely the sort of generalisations credit raters, insurance companies and banks currently make.
If there’s one thing that comes out of our current infatuation with ML, I hope it’s an appreciation that there’s no such thing as a generalisation that isn’t discriminatory.
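To make the proxy problem concrete, here is a minimal sketch on synthetic data (the feature names and correlations are assumptions for illustration only): a model that never sees gender can still reproduce a gender gap when a feature it does see, such as housing type, happens to correlate with gender.

```python
# Minimal sketch of "proxy discrimination": the model never sees gender,
# but a correlated feature (apartment vs. house) carries it anyway.
# All data is synthetic and the correlations are assumed for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
gender = rng.integers(0, 2, n)  # 0 = men, 1 = women (synthetic labels)

# Assume men are more likely to live in apartments in this fake population.
apartment = (rng.random(n) < np.where(gender == 0, 0.7, 0.3)).astype(int)
income = rng.normal(60, 15, n)  # income in thousands

# Defaults here depend on apartment status, not on gender directly.
default = (rng.random(n) < 0.10 + 0.10 * apartment).astype(int)

# Train on income and housing only; gender is never an input.
X = np.column_stack([income, apartment])
model = LogisticRegression().fit(X, default)
risk = model.predict_proba(X)[:, 1]

# The predicted risk still differs by gender, purely through the proxy.
print("mean predicted risk, men:  ", round(risk[gender == 0].mean(), 3))
print("mean predicted risk, women:", round(risk[gender == 1].mean(), 3))
```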
flypig,
Well, computer algorithms are very good at optimizing for things like profit, but this brings up two points. First, an algorithm designed/trained to optimize profit may end up discriminating in order to maximize profit even if it wasn’t designed to discriminate. Second, if the algorithm is trained without any legal constraints in its objective, it can’t possibly be expected to follow the law other than by sheer coincidence.
Should businesses be allowed to discriminate against groups that are less likely to be profitable? If so, then a lot of pre-existing social injustices are going to end up being baked into these AIs.
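One rough way to audit for that, regardless of what the model was trained on, is to check its decisions rather than its inputs. Here is a minimal sketch of a disparate-impact style check (synthetic decisions; the 0.8 threshold is only the common “four-fifths” rule of thumb, not a statute):

```python
# Rough disparate-impact ("four-fifths rule") check on approval decisions,
# grouped by a protected attribute that the model itself never used.
# Synthetic decisions; the 0.8 threshold is a common rule of thumb.
import numpy as np

def disparate_impact_ratio(approved, group):
    """Ratio of the lowest group approval rate to the highest."""
    rates = [approved[group == g].mean() for g in np.unique(group)]
    return min(rates) / max(rates)

rng = np.random.default_rng(1)
n = 5_000
group = rng.integers(0, 2, n)  # e.g. 0 = men, 1 = women
# Pretend these are the algorithm's approval decisions:
approved = (rng.random(n) < np.where(group == 0, 0.60, 0.42)).astype(int)

ratio = disparate_impact_ratio(approved, group)
print(f"disparate impact ratio: {ratio:.2f}")
print("flag for review" if ratio < 0.8 else "within the usual rule of thumb")
```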
Human error… A human googled him, saw that he was reckless and full of himself, and decided to make a killing on him forgetting to pay his bill. They googled his wife too and saw she had way more sense, so why give her free money? Algorithms are Gaussian; they won’t produce extreme results like that. Most humans are too, but not all.
And this is why a hardware and software company should never try to become a bank. Practically anyone could have told them this would only be a P.R. disaster from day 1 until the day they stop the program. It’s also a reason to trust old banks over newcomers like Square, which have shown zero interest in protecting business owners from customer scams such as fraudulent chargeback schemes.
Old banks? Trust? 1. Old banks still exist? 2. You really trust banks?