Also, it's always amusing to arrive at an article about data collection and be greeted by this: https://i.imgur.com/d4Z4sdd.png
I had thought Credit Karma and similar services were at least adding transparency to the industry, but I was recently in for a rude awakening when trying to get preapproval for our first mortgage. It turns out, the whole idea of a credit score is kind of a lie. Your credit report can be pulled by creditors, and creditors can interpret it in different ways as they see fit. Services like Credit Karma are just making up their own score based on the report, which can be drastically different from what the creditor decides it is. And credit pulled for different uses somehow ends up with different scores; e.g., for a credit card approval your score will often show up higher than for a mortgage. You can check your credit report for free, as required by law, but nobody is required to tell you your credit score. You can also basically just pay to remove many things from your credit report.
It's insane, and the fact that this fairly old and low-tech institution has not been regulated successfully or made fair to the average person bodes very poorly, IMO, for regulation of privacy and fairness in future technology.
There is no “the credit score”. Any lender is free to use whatever scoring algorithm they like. And they do. Some lenders may choose to use certain scores for certain things, but underwriting has no obligation to use one specific credit score.
Therefore there is no reason for people to care about their credit score, whether it comes from FICO or Credit Karma or whoever.
The only thing you should do is make sure the information on the credit report is accurate. And what is your source for claiming you can pay to get things removed from your credit report? It doesn't even make sense, as the report just shows the status of your lines of credit. You can't just make a line of credit disappear, as no credit reporting agency or financial institution is going to commit fraud for any amount of money an average person might offer.
This is the big lie. That "identity theft" should be the problem of the person who was impersonated. Pass laws that heavily fine entities that give false information to credit bureaus and fine credit bureaus who give out false information. "Identity theft" would no longer be something that people would worry about.
Wow, I did not know that the surveillance bureaus are exempt from libel. What an incredible law against the interests of the people. That explains why I've never heard about class action suits against them. I'll have to read up on it. Thanks for the info.
That's literally not what fraud is.
>In law, fraud is intentional deception to secure unfair or unlawful gain, or to deprive a victim of a legal right
If some guy walks into a bank and claims he's you and the bank believes it, the bank isn't gaining anything. If anything, they lost money. Wrongly reporting the default to the CRAs doesn't benefit them either; it's not like the CRAs compensate the banks based on how many default reports they send in. Finally, it's missing the "intentional" part. Lax security practices are negligence at best.
Edit: mindslight mentions in this thread that: "The 'Fair' Credit Reporting Act explicitly immunizes the surveillance bureaus against the tort of libel". Amazing.
They would have just sold a loan.
>If anything, they lost money.
Can't they sell the defaulted loan to collections?
>Lax security practices are negligence at best.
Systemic, known, and long-standing negligence speaks to intention.
This approach is mainly used by third-party debt buyers, like Encore/Midland/Cavalry. And it doesn't apply to public judgments, if they decide to pursue that route. Thankfully, I have managed to avoid that in my credit recovery journey.
First-party (original creditor) will laugh or tell you it isn't possible if you ask for a PFD settlement, but in my research for "making the case" I found documented instances of it happening as early as 2010, before these policies started to become more widespread.
It is definitely possible to do a "pay for delete" to have negative info removed from a credit report in exchange for payment.
I don’t see what the problem is if the owner of the debt agrees to erase the debt.
Renting an apartment is credit; I am not sure why you view it as something other than credit?
The owner of the property is loaning (credit) the use of their property to you for X amount of time in exchange for N amount of dollars, payable over monthly installments
How is that not credit?
A better example is employers using it for hiring choices, which does happen as well, but using an apartment as an example of bad uses of credit is, I think, misguided.
>> It turns out, the whole idea of a credit score is kind of a lie. Your credit report can be pulled by creditors, and creditors can interpret it in different ways as they see fit.
Yes, the individual or organization that is loaning you a large amount of money can choose how they use the credit report they obtain for you. Again, I am not sure why this is a revelation or a bad thing.
There are also many different and competing credit scores; no person has "a credit score". There are at least 5, if not more, credit scores out there, and different institutions will use them in different ways, FICO being the most common but not the only one.
>You can check your credit report for free, as required by law, but nobody is required to tell you your credit score. You can also basically just pay to remove many things from your credit report.
Yea, I believe these institutions should also have to release your personal score with your annual free report; Congress should fix that omission in the law.
>The owner of the property is loaning (credit) the use of their property to you for X amount of time in exchange for N amount of dollars, payable over monthly installments
Not really. In most places you have to pay first and last month's rent, which means you're paying for the service before it's rendered. Therefore they're not extending credit, as you have already paid for the service in advance.
1. I would like you to define "most places", as for the first 35 years of my life I was a renter, and in that time I only ever had to pay "first and last" in one instance. Most of the time it was the first month's rent and a security deposit (which was often something small like $100 or $200), with no "last month's rent", so around here it was not "most places".
2. Even if you use that as a metric, that is more like a down payment than "services paid in advance", unless you are on some kind of month-to-month with no lease. Every lease I have ever signed shows the TOTAL of all payments, which are paid in 12 installments just like a loan; if in month 6 you just move, well, you still owe the other 6 payments (less any down payment, aka the last month you prepaid).
The owner is absolutely extending you credit for the use of the property. If you sign a lease for an apartment at $1,500 a month for 12 months, you are agreeing to pay $18,000 to the owner for the use of the property, and you have both agreed to pay that over 12 equal payments. In your hypothetical, the owner has asked for a $3,000 down payment on that loan, and in exchange has adjusted the terms to 10 equal payments.
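To make the numbers explicit, here's a tiny back-of-the-envelope sketch (purely illustrative, using the same hypothetical $1,500/12-month figures as above):

    # Toy illustration of a lease viewed as an installment obligation.
    # Figures are just the hypothetical from this comment, not real terms.
    monthly_rent = 1_500
    term_months = 12
    total_obligation = monthly_rent * term_months   # $18,000 owed over the lease
    down_payment = 3_000                            # "first and last" paid up front
    remaining = (total_obligation - down_payment) / monthly_rent
    print(total_obligation, remaining)              # 18000 10.0 -> ten installments left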
As far as banks being able to make their own credit decisions, that's all fine and good, but then they pretend like that's not what's happening. If I ask a bank why they denied my loan, they won't tell me about the specific activities I've done that prove me uncreditworthy (sometimes I can press to ask what they can see on my report and if they're feeling nice they may tell me, but they don't have to). The fact that there's no party you can ask to just evaluate ahead of time what the result will be, or to see what I can do to get myself above the threshold, so the individual is always at a disadvantage of information asymmetry, is what makes the system bogus. And then on top of that, just the act of seeing if you qualify for the thing lowers your credit score further!
If I've been burned by this as a highly paid tech worker with just a couple of mistakes in my autopay settings in the past, I can't even imagine how big of a problem this is for people who have had actual hardships.
You have a fundamental misunderstanding of how a lease works; you are equating it to something like a prepaid phone bill, and that is not at all what a lease is.
You are obligated to pay the full amount of the lease; if you up and move in the middle, you still owe the landlord the full amount.
The property owner runs a credit check for the same reason a lender would: to judge whether you are a responsible person who will repay this obligation under the terms of the agreement. It is far closer to a loan than you seem to want to give it credit for.
Further, the use of credit scores and other background checks only becomes more important the harder it becomes to evict bad tenants.
Something where "there is an ongoing equal trade of values between both parties" would be terminable by either party the second that value proposition changes. That is not the case with rental property, where the interaction is governed not only by the terms of the lease but by layers of federal and local laws.
>If I ask a bank why they denied my loan, they won't tell me about the specific activities I've done that prove me uncreditworthy
If you scroll to my other comments, you'll see I advocate for changing that; I am a big advocate of personal data ownership and believe any person should have the right, at any time, to request all data any company has collected about them.
>> (sometimes I can press to ask what they can see on my report and if they’re feeling nice they may tell me, but they don’t have to).
You have the right to get an annual credit report from every credit agency every year; that is what they would see.
>The fact that there's no party you can ask to just evaluate ahead of time what the result will be, or to see what I can do to get myself above the threshold, so the individual is always at a disadvantage of information asymmetry, is what makes the system bogus.
Yes and no; nothing in life is a 100% guarantee, but there are several ways you can get a good, fact-based analysis of your general creditworthiness. Can it predict whether a given institution will grant you a loan? No, but it can predict whether you have a good chance of some institution giving you a loan.
The lower your general score, obviously, the less reliable these tools will be. If you have a FICO of 810 plus provable long-term income, then chances are anyone will loan to you; if you have a FICO of 620, it becomes more of a crap shoot and will be based on many other factors than just your credit score. Similarly, if you have a high FICO score but unreliable income (self-employed), it also becomes more of a crap shoot.
>>If I've been burned by this as a highly paid tech worker with just a couple of mistakes in my autopay settings in the past
I hear stories like this often, but this is not my personal experience. Not saying it can't happen, but companies I do business with do not insta-report you if your autopay fails...
You have to be 60+ days overdue before it shows up on the credit report... and with all the modern alerting and other tools, I fail to believe that a "simple auto-pay mistake" is what caused one to become 60+ days delinquent on a payment.
The FinTech industry has since grown to the point that being unbanked doesn't need to be a thing anymore. It's easy to pick up a prepaid debit card which you can receive ACH payments on.
There were a few similar programs a few years ago when federal grants were made available. Iirc, they were modeled on the stuff built to identify people vulnerable to becoming extremist terrorist types. Insurance companies have databases of way more lifestyle and other behavior data than people realize. (Everything from sports, politics, to porn and gambling habits — anything for sale) I’m pretty sure Georgia mashed that against Medicaid claim data to build the model.
It’s a reason why we all need to watch “pre-existing condition” debates closely. If insurers know that 45 year old divorced father of 3 who moves every 3 months is a smoker, gambler and drinker, they don’t want to write a policy — the guy is a trainwreck with no support system.
I suspect many think differently here, but I don't think the problem lies in the knowledge; it lies in how people with power are currently using the knowledge. Restricting the knowledge is simply our current best mitigation given current conventions. Another world might use that knowledge to better support you and others who have seen hard times, restore what sounds like a lack of justice, and help you find an environment where you could thrive, and perhaps even that boss of yours. [edit: so that they could thrive but also have reduced negative impact]
In other words, rather than focusing on mitigating risk a sufficiently high quality system could help us maximize our lives. Unfortunately, the probability of something so pro-social being the outcome seems low.
Denied a loan, denied a job, kicked off a platform: in none of these situations is the company required to justify its actions or be transparent about the policies and processes it used to reach that conclusion.
This black box leaves people feeling powerless and out of control because they are.
One way to combat that is stronger data ownership laws, and the ability for people to get ALL information a company has collected about them.
So for example if you are denied a home loan, you should be able to request every single scrap of info that loan company collected about you (including any and all credit scores) they used to make that determination
While that's the biggest problem right now, transparency by itself isn't useful. I have transparency into my bills but I don't have the ability to change them. Negotiating power is being taken away from consumers. When's the last time you've seen anything which didn't have some form of liability limitation clause? When's the last time you've been able to negotiate that liability limitation?
>So for example if you are denied a home loan, you should be able to request every single scrap of info that loan company collected about you (including any and all credit scores) they used to make that determination
The reason why that doesn't happen is because they're almost certainly discriminating in ways they shouldn't be.
It's only a tiny leap to incorporate a similar magic number some company comes up with. Actually due to competition, if it correlates to risk, all companies will literally be forced to use the magic numbers or go into an adverse selection death spiral.
Unless there is regulation against it, like in CA. Not all regulation is bad.
E.g. if a young adult gets classified as "disorderly, drunk, unsuitable for reproduction, suitable only for low-skill work" based on their history of college partying, and then consequently denied work and social opportunities (as everyone doing background checks sees that summary), the prediction essentially becomes a sentence.
(The third season of Westworld, despite bad writing and even worse gunfights, was very good at bringing this point up.)
Loan risk algorithms will favor people "similar to" those who have paid back loans before, a sample group biased towards people that banks have already loaned to before. As a result, a lot of the factors are biased towards "from a white upper-middle-class suburban background."
And recidivism estimators, which are used as jail sentencing guidelines in some places.
Screening algorithms for job resumes, and college applications.
Algorithms send police to where crimes are reported. Crimes are reported because the police are there to witness them. The area gets designated a high-crime area. Regular people are arrested more often because regular activity is suspicious in a high-crime area, affecting their future prospects. The higher arrest rate is used to justify this.
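A toy way to see that loop (invented numbers, two areas with identical underlying crime, only the initial patrol split differs):

    # Toy feedback loop: patrols follow reports, reports follow patrols.
    # Both areas have the same true incident rate; the initial skew in
    # patrols never washes out, and the report gap then "justifies" it.
    true_crime = [100, 100]      # identical underlying incidents per period
    patrol_share = [0.6, 0.4]    # slight initial skew toward area 0

    for period in range(5):
        # Incidents only get recorded where police are present to record them.
        reported = [true_crime[i] * patrol_share[i] for i in range(2)]
        total = sum(reported)
        # Next period's patrols go where crime "was reported".
        patrol_share = [r / total for r in reported]
        print(period, [round(r) for r in reported], [round(p, 2) for p in patrol_share])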
It's a continuous spectrum rather than a single point. But if I were to pick a single "point" where it became a self-fulfilling prophecy? 1994, due to the widespread passage of three-strikes laws.
To expand on this: adverse selection is where a consumer has hidden information about the cost they can inflict on a provider. Usually this is talked about in terms of insurance (especially health insurance), but risk is risk and so the principles are the same.
My recollection of what economists predict is that there are two stable equilibria that achieve Pareto efficiency. The first is that discrimination (in the general yes/no sense) is completely forbidden and risks are totally pooled. The second is that complete discrimination is possible without limitation.
The worst outcomes are all found in attempted compromises. Not only do you have the costs of whatever tradeoff you chose between pooling and discrimination being less efficient than the Pareto points, but you also introduce a great deal of dead weight due to complex regulation and oversight, plus efforts made to evade regulation and oversight. Collectively we are worse off, even if individuals think otherwise.
I don't think allowing total discrimination is a viable option in this day and age. Which means banning it wholesale and encouraging the formation of universal risk pools.
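For anyone who hasn't seen the death-spiral mechanic spelled out, here is a toy numeric sketch (made-up expected losses, one pooled premium, each person buys only when the premium is no worse than their own expected loss):

    # Toy adverse-selection spiral: the insurer prices at the pool average,
    # the lowest-risk members drop out, and the pool unravels.
    # All numbers are invented for illustration.
    pool = [100, 200, 300, 400, 500]   # each member's expected loss

    while pool:
        premium = sum(pool) / len(pool)                        # priced at pool average
        stayers = [loss for loss in pool if loss >= premium]   # low risks leave
        print(f"premium={premium:.0f}, members left={len(stayers)}")
        if stayers == pool:                                    # pool stabilized
            break
        pool = stayers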
Meta amusement: https://i.imgur.com/E5bFeb3.png
Many private companies surveilling everyone without anyone's knowledge is terrible. Companies using these scores are really only targeting a group of people that allow the collection to happen.
My car insurance company would sure love to put a tracker in my car but there is no way in hell that is going to happen. I'm sure my driving style would qualify me for higher rates, yet I've never caused an accident or made a claim where I was at fault.
Plus, why single out insurance companies and ban them from creepily tracking you wherever you go, and not all the other companies who do that with your cell phone?
Also, to be pedantic, they don't want to put trackers in the car; that is expensive. They want to use your cell phone like all the other apps tracking you, or they want the car manufacturers to let them in on their data, since most modern cars have the ability to track and broadcast location.
>made a claim where I was at fault.
That's a big caveat. There are plenty of accidents that are truly not one party's fault, but according to law, even being judged 49% at fault is still "not at fault". And even in cases of 0% fault as determined by the insurance adjusters, there's a big chance one still had contributing factors.
Also, there's a big chance not-at-fault claims will be taken into account when pricing you and will raise your prices. Just right??
On the other hand, not-at-fault accidents do correlate with higher risk, and it's easy to see why. For example, people who get rear ended are not at fault. But following the car ahead too closely leads to needing to brake harder, increasing the chance of being rear ended. Following too closely also increases the chance of rear ending the vehicle in front.
I have no driving record and I've caused no accidents. If I put a tracking device in my car today my rates would go up because I accelerate quickly and brake hard when conditions allow. That is not right.
You did not do that, so they raise your rates.
Edit: not that you were at fault or did anything wrong.
My point is that the insurance companies strongest incentive is to pay out as little as possible. This includes payments made where their client was not at fault. If it were up to them, everyone would pay all their premiums on time and never drive.
To quote wikipedia:
If the likelihood of an insured event is so high, or the cost of the event so large, that the resulting premium is large relative to the amount of protection offered, then it is not likely that the insurance will be purchased, even if on offer. Furthermore, as the accounting profession formally recognizes in financial accounting standards, the premium cannot be so large that there is not a reasonable chance of a significant loss to the insurer. If there is no such chance of loss, then the transaction may have the form of insurance, but not the substance (see the U.S. Financial Accounting Standards Board pronouncement number 113: "Accounting and Reporting for Reinsurance of Short-Duration and Long-Duration Contracts").
If statistically other drivers who do that are more likely to, on average, result in a loss for the insurer, they are right to raise your rates. "Past Performance is no guarantee of future results"
With enough data, maybe it would support your assertion that you are, in fact, a very safe driver, by realizing that you only drive fast at certain times, in certain locations that are statistically lower risk.
This argument could as well be applied regarding certain protected characteristics of the driver (such as race, gender, sexual orientation, etc).
All it takes is for you to misjudge the conditions one time (are you claiming to be infallible?), or for another road user to do something you were unable to anticipate, and suddenly this driving habit of yours does contribute to a higher risk of an incident that results in a claim. (Even if the incident still isn't considered to be your fault; your driving style reduces safety margins for everyone.)
Giving the insurance company more ways to judge me only enables them to charge me more, without my actual level of risk changing at all.
This is illegal in the UK and EU, fortunately. A real victory for gender equality, even if it means women paying more.
If you take your cell phone in the car with you, they are already tracking you.
It's already happening now, unfortunately. All your tweets (posted & liked) etc. are analyzed and flagged for bad language, drug/alcohol mentions, bigotry, etc., and a report is generated for HR.
The company in question is used by Sterling and HireRight, so this isn't some unknown/niche company.
The tweet thread under the link is well worth seeing.
Taken from the company in question's site (https://fama.io/product/):
> What Fama finds...
> Our machine learning technology has flagged hundreds of thousands of instances of misogyny, bigotry, racism, violence and criminal behavior in publicly available online content.
If you go and compare what the product actually seems to do - tracking down your Twitter account and grepping every tweet you interacted with against a list of "thoughtcrime" words (like "hell" or "ass") - you can almost feel the next AI winter coming. How many more bullshit companies calling their fake, broken, and trivial technology "AI" or "Machine Learning" will it take until the whole field of ML gets derailed by bad reputation? At least, as far as I know the history, the last AI winter involved companies trying but failing at AI. This time around, they're not even trying.
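For a sense of how little "machine learning" this needs, the observable behavior is roughly what a few lines of keyword matching would give you (a hypothetical sketch of the behavior described above, not the vendor's actual code):

    # Hypothetical sketch: "flag" tweets by bare keyword matching, ignoring
    # context entirely. Illustrates the described behavior, nothing more.
    import re

    FLAG_WORDS = {"hell", "ass", "drunk"}   # illustrative "thoughtcrime" list

    def flag_tweet(text):
        tokens = re.findall(r"[a-z']+", text.lower())
        return FLAG_WORDS & set(tokens)

    print(flag_tweet("What the hell is a classified ad?"))   # {'hell'}
    print(flag_tweet("Never been better."))                  # set()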
Fun fact: when I first moved to the USA, I would practice emoting in the mirror so Americans could read my face and wouldn't penalize me during interactions. Now people back home say I "grin like an American".
> can’t say anything in public
Ever noticed how Gen Z is switching back to private chatrooms and message groups? Public stuff is all polished and curated.
They don’t get that you could have 99.99999% of all possible information on a person AND that the missing 0.00001% could change everything you know about them.
It is true that some corporations are so big that they influence lawmakers just as ultra wealthy people always have. Indeed, if corporations were able to do the same things governments did and with the same authority, this process of influencing the government would be unnecessary.
I had drunkenly called HSBC and ended up ranting at the person on the phone a few times (lost cards etc) so I think this was a rating of how well behaved I was towards their staff.
The manager I was speaking to changed his tone fairly quickly after that screen came up!
So this is nothing new, but I guess the scale and opportunities for data points are new.
Although, having said that, a lot of marketing-based scoring data is woefully inaccurate. I was once responsible for distributing a data set to company clients which covered interests and personal info for the entire UK population.
Not only was most of the data about my dad wrong, the things that were accurate were years out of date.
My information didn't exist. A colleague's email was completely wrong. Indicated he liked going on holiday but had never been outside of the UK.
So these scores might be useless anyway.
edit: got rid of "even if they exist".
That by itself is scary enough, but the much more likely case is the one where this system is rating you based on wrong information, and given Finagle's law..
Especially concerning when the justice system starts using this info; you really, really don't want to be the false positive.
In a low margin business a single bad customer can easily cost more than what is earned from ten good customers. In that situation, an oracle that rejects eight good customers per rejected bad customer would already be good enough.
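The arithmetic behind that claim, with made-up round numbers (say a good customer nets $10 and a bad one costs $100):

    # Back-of-the-envelope check of the "lopsided oracle is still worth it" claim.
    profit_per_good = 10    # margin earned from each good customer
    loss_per_bad = 100      # cost inflicted by a single bad customer

    # The oracle rejects 1 bad customer at the price of also rejecting 8 good ones.
    avoided_loss = 1 * loss_per_bad        # 100 saved
    foregone_profit = 8 * profit_per_good  # 80 given up
    print(avoided_loss - foregone_profit)  # +20: still ahead despite the false positives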
If anything, they seem to be doubling down on this. From my discussions on the subject with people "in marketing", they are really convinced that not only is the data "accurate enough", but that collecting and exploiting it is actually for the client's benefit.
I wonder how one would go about quantifying the usefulness of all this tracking, because I doubt it's cheap. The best I could get was "we're able to see changes in sales which correlate to marketing campaigns". I can totally buy that, and it seems a somewhat more advanced answer than the more common "if it didn't work they wouldn't be doing it".
What I wonder though is how they would go about quantifying how much better a tracking-based campaign worked than a "traditional" one would have.
How would they know?
But there's no process to appeal and the data might be laughably wrong. Outlawing cross-organizational aggregation might be a reasonable middle ground. It gives a further advantage to giants like Amazon, but all the problems compound at interorganizational integration.
"2. In addition to the information referred to in paragraph 1, the controller shall provide the data subject with the following information necessary to ensure fair and transparent processing in respect of the data subject:
(g) the existence of automated decision-making, including profiling, referred to in Article 22(1) and (4) and, at least in those cases, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject."
But that requires that the company even cares about following the GDPR correctly, or that the Data Protection Authority of a given EU country cares about enforcing it. I know of one small but popular enough data broker that's been operating for years in the EU and flouting the GDPR without any punishment so far. Not sure if this is the place to advertise it. But I suppose somebody whose GDPR rights were violated has to lodge a complaint first.
And we could just dramatically limit how much of this can even be used for housing and employment decisions, and crank up the penalties for misuse.
What I would like to see is a rule that if a company collects data for more than in-house use (so it doesn't apply to a company that simply has records of its customers), they must make a reasonable effort to notify you and allow you to examine the data and challenge anything you believe to be incorrect.
I don't think so. It's only been around for about 15 years or so. Internet infrastructure wasn't as advanced then, but neither were there as many people who would've used it.
The only reason I can see for the state of things is that the credit bureaus wanted to nickel-and-dime people for credit monitoring, and because they could water down the regulations, they did.
Another thing that was sorely needed was to make the domain a .gov or something other than a .com to prevent so many imposters.
And I just now read on Wikipedia a list of companies excluded from the requirements, like ChexSystems.
You only have to disprove that if the applicant can first show the existence of bias, which, with a secret proprietary algorithm whose inputs and outputs they have no access to and whose existence they may not even know of, is pretty hard to do.
They wouldn't out the company because hurting the company for no private gain doesn't help them, and they wouldn't sue the company because they’d be a beneficiary, not an injured party, and so would have no damages to claim.
And also because both acts would destroy their future employability in the field.
Or better yet: "Why don't employees commit career suicide for relatively minor offences that don't affect them personally and inflict no direct harm?"
Why would they? HR is not your friend.
There's a lot of use in applying insurance analysis techniques to filter out bad hires much better than conventional practices. Instead of interviews, vetting, tests, headhunters, you can adopt the latest in data analysis to cut costs and improve your KPIs across the board.
Predict an employee's productivity by analyzing online browsing behavior. Fixed qualities like attention span and drift; escapism and procrastination; pleasure-seeking vs productive, curious, prosocial browsing habits. How do these overlap in a typical workday? Are you focused or dispersed? Does your attention cycle? Do you complete tasks to the end, what happens when you get stuck on a problem?
This doesn't even get into friendship networks, purchasing behavior, or public displays of attitudes.
Can you game the system? Probably not. These days that requires you to play a losing game of tweaking your personality down to the smallest meticulous detail, essentially going against the grain of your natural flow, ordering all aspects of your identity to suit an opaque and almost certainly fault-laden system.
People flip out about the Chinese social score, without realizing that it was pioneered in the US private-sector.
Say no to mandatory medical intervention.
Written from my Android 2.2
Or am I being naïve for even entertaining such a thought?
The question to ask yourself is 'what can reasonably be found out about me?' You can be found on the smallest trace of evidence, but is it reasonable to expect someone to do that? You're not exactly a government spy. Just take obvious steps like not using your real name and you should be fine.
* They prefer 84 years of credit history to put you in a lower risk category. Having 2 lines of credit paid every month versus 1 would make a huge difference in this regard.
* Oil company credit cards are seen as very high risk, and even having 1 is seen as a negative.
* Department store credit cards are also viewed negatively.
The weightings and mixing of variables is the secret sauce...but reading through it all made me very happy with my decision to turn down an offer from one of the major reporting agencies.
 PDF Warning
I skimmed the article and there's little mention of phone data collection. There's one mention of insurers using phones to collect driving data, but that's it. I was expecting some vast surveillance network using your phone to score you.
If you spend your whole life being anonymous, be prepared to be treated as a high risk consumer who cannot participate in certain things, because of a low social score.
Granted, this is the tragedy of the commons. By acting in one's own interest (getting a good score), people poison the well for everyone: everyone must work and self-censor to keep a good score.
I've been wondering if all this data is being used for stock market manipulation or timing the market. What exactly are those quants using for inputs?
While they're at it, perhaps they can change some US senators' bank balances just for fun.
The authors have a site which includes their letters to the FTC, etc which provide greater detail and references: https://www.representconsumers.org/surveillance-scoring/
It’s not a bad one though, to be fair
Now I get ads for whiskey in the mail. I wonder how much that reduced my future employment prospects.