Don’t Let Computers Get Above Their Pay Grade

[Image: robot dressed as an executive]

Many organizations today use computer-based algorithms, sometimes enhanced with Artificial Intelligence (AI), to make (or help make) decisions. Many of these decisions have lifelong impacts on us and our families. Algorithms do three good things for their users and for the people they affect. First and most important, they greatly reduce the possibility of decisions influenced by typical prejudices. Second, they impose uniformity on the appetite for risk, which naturally varies from one evaluator to another. Third, they are more efficient, disposing of easy cases and highlighting concerns in the more difficult ones.

But algorithms are still human creations, with all the flaws that implies. They offer the appearance of objectivity but cannot take into account every factor a decision deserves. Further, ensuring a lack of bias is very difficult; it can creep in despite the best of intentions.

The nub of the problem is the algorithm’s attempt to appear objective by reducing squishy reality to numbers. The challenge is that when high-impact decisions are less than clear-cut, computers have serious limitations. They need human help. We humans have advantages that even the most sophisticated software will be hard put to replicate. You could call it the Human Edge, which we describe below. The following four situations show where it needs to be called into play:

  • In finance, decisions on small business loans affect our careers. Decisions on home mortgages affect the school systems where we can send our kids. Entrepreneurs can and do max out their easily obtained credit cards to start businesses, but then have to pay interest rates that hark back to the inflationary 1970s. (Auto loans are less consequential. If the algorithm thinks you can’t afford the payments on that Mercedes, a VW will still get you there!)
  • In criminal justice, decisions occur at several stages. The accused may be required to post bail to stay out of jail (and perhaps keep their jobs) before trial. The amount of bail is supposed to be based on flight risk and public safety. If you don’t have the cash, a bail bondsman posts it for a fee, typically 10%, which you forfeit even if you’re found innocent or the charge is dropped. If you are convicted, it’s up to the judge to sentence you, based on your history and demonstrations of remorse. However, her discretion may be limited by minimum-sentence laws: an algorithm without a computer. Once you are eligible for parole, the decision is supposed to be based on an assessment of rehabilitation. At each stage, the deciding factors are hard to measure.
  • In hiring, people with non-traditional career paths can be too easily and inappropriately rejected. For example, a professional firm may want to hire only people very likely to prove promotable. One criterion might be a promotion within the previous five years. But where does that leave a well-qualified candidate who took a couple of years off with a new baby? That’s unintentional bias against women.
  • In admissions to higher education, so-called aptitude tests have proved to be better measures of family affluence than of aptitude. High school grades are not handed out consistently from school to school. Meanwhile, subjective and unquantifiable factors like motivation and character are easy to overlook. (The Covid-19 pandemic has accelerated the move away from reliance on these tests.)

Grading subjective factors on a one-to-ten scale and weighting the factors can produce a result that appears objective. Even when individual factors are objective (like income on a loan application), their weighting is still subjective. Unfortunately, many people give numbers more respect than they deserve. Two decimal places of “accuracy” don’t make them any less subjective.
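
To see why, consider a minimal sketch of such a scoring scheme. The factor names, grades, and weights below are hypothetical illustrations, not anything drawn from a real lending model:

```python
# A toy illustration of the scoring scheme described above. The factor
# names, grades, and weights are hypothetical, chosen only to show how
# much the "objective" result depends on subjective weighting.

def weighted_score(grades: dict, weights: dict) -> float:
    """Combine 1-to-10 grades into one score using the given weights."""
    total = sum(weights.values())
    return sum(grades[factor] * w for factor, w in weights.items()) / total

# The same subjective grades for one applicant...
grades = {"credit_history": 7, "income_stability": 6, "character": 8}

# ...under two equally defensible weightings of the factors.
weights_a = {"credit_history": 0.5, "income_stability": 0.3, "character": 0.2}
weights_b = {"credit_history": 0.2, "income_stability": 0.3, "character": 0.5}

print(f"Weighting A: {weighted_score(grades, weights_a):.2f}")  # 6.90
print(f"Weighting B: {weighted_score(grades, weights_b):.2f}")  # 7.20
```

Same grades, two defensible weightings, two “precise” answers. The decimal places are an artifact of the arithmetic, not evidence of objectivity.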

Managing the process means setting boundaries. Algorithms can take care of the easy cases. More difficult ones then get attention from skilled people who bring not just their subject matter expertise and experience but their “human edge”. They have abilities like:

  • Nuanced judgment based on circumstances and context. They can differentiate between situations that are the same only on the surface or see the common elements in seemingly different situations.
  • Emotional intelligence and empathy.
  • An instinct for recognizing when they’re being hoodwinked or manipulated, not to mention lied to.
  • An eye for anomalies, i.e., data points that just don’t seem to fit together. (Machines can learn to get better at this.)
  • Plain old common sense applied when an algorithm produces strange or incongruous results.
  • Intuition, imagination, and creativity.
  • A sense of fairness, decency, and the golden rule: the essence of ethics. They can see when an algorithm would violate that sense based on data a human could recognize as stray, incorrect, or irrelevant.
  • Being accountable for results without the defense that “the algorithm made me do it.”

These are the kinds of traits you would hope to see in your banker, your potential new boss, your college’s Dean of Admissions, or (not needed, one hopes) your trial judge or parole board.

We use the word “edge” in three senses: a boundary, an advantage, and sharpness. We suggest these human edge traits will not be duplicated in software for a very long time, if ever.

How might the human edge work in practice? Suppose that for a type of bank loan, the algorithm generates the probability of default. The loan product might be acceptably profitable at a 3% default rate. If the algorithm puts an applicant’s probability at 2% or 4%, the decision is easy: yes or no, respectively. But what about 3.2% or 2.8%? This is where the value of a human gets real. What is it about the borrowers that might suggest giving the loan to the 3.2% applicant and denying it to the 2.8% one? Being right would mean avoiding both a real loss and an opportunity loss.
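
In code, such a boundary might look like the sketch below. The 3% break-even rate comes from the example above; the width of the gray zone around it is a hypothetical assumption that a real lender would have to choose for itself:

```python
# A sketch of the gray-zone boundary described above. The 3% break-even
# rate comes from the example in the text; the width of the gray zone
# (plus or minus 0.4 percentage points) is an assumption for illustration.

BREAK_EVEN = 0.030   # default probability at which the product breaks even
GRAY_ZONE = 0.004    # close calls within this band go to a human

def triage(default_probability: float) -> str:
    """Decide automatically when the call is easy; escalate when it's close."""
    if default_probability <= BREAK_EVEN - GRAY_ZONE:
        return "approve"          # the easy yes, e.g. 2%
    if default_probability >= BREAK_EVEN + GRAY_ZONE:
        return "deny"             # the easy no, e.g. 4%
    return "refer to a human"     # e.g. 2.8% or 3.2%: the human edge

for p in (0.020, 0.028, 0.032, 0.040):
    print(f"{p:.1%} -> {triage(p)}")
```

The point of the sketch is the structure, not the numbers: the algorithm disposes of the clear cases, and everything inside the band gets a person’s attention.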

In these cases, the human edge can create value on both sides of a transaction and sometimes for society as a whole. Yes, mistakes will happen, but not for lack of due diligence.

Decisions that algorithms are allowed to make (the 2% and 4% cases in the banking example) should not go unquestioned any more than those made by people. Periodic review, at least on a sample basis, is needed. Algorithms need fine-tuning to improve their performance. They also need to be checked for evidence of unintentional bias or adverse impacts on particular groups. This is particularly true if machine learning (ML) is involved. Guardrails need to be established to recognize when an algorithm is acting in unexpected ways, so a human can examine its behavior for reasonableness.
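
What might such a periodic review look like? Here is a minimal sketch, assuming decisions are kept in a simple log. The field names and the “four-fifths” disparity threshold (a common rule of thumb in US employment law) are assumptions for illustration, not a complete fairness audit:

```python
# A sketch of the periodic review described above, assuming a simple
# decision log. Field names ("group", "outcome") and the four-fifths
# threshold are illustrative assumptions, not a complete fairness audit.

import random

def sample_for_review(decisions: list, rate: float = 0.05, seed: int = 1) -> list:
    """Pull a random sample of automated decisions for human re-checking."""
    rng = random.Random(seed)
    return [d for d in decisions if rng.random() < rate]

def approval_rates(decisions: list) -> dict:
    """Approval rate per group, from a log of {'group': ..., 'outcome': ...}."""
    totals, approved = {}, {}
    for d in decisions:
        g = d["group"]
        totals[g] = totals.get(g, 0) + 1
        approved[g] = approved.get(g, 0) + (d["outcome"] == "approve")
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_flags(rates: dict, threshold: float = 0.8) -> list:
    """Flag groups approved at less than 80% of the best-treated group's rate."""
    best = max(rates.values())
    return [g for g, r in rates.items() if r < threshold * best]
```

Flagged groups would not prove bias by themselves; they would tell the humans where to look.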

In short, the computer’s pay grade is sufficient for fairly easy decisions or decisions with limited consequences for people’s lives. But when decisions are consequential and close, they need to be bucked up to a higher pay grade.

In summary, computers and people need to complement one another, each doing what it does best. Boundaries need to be set and respected to make decisions that balance multiple objectives, including fairness and human decency.

Addendum

The perils of poorly understood and insufficiently vetted algorithms made international news when one such algorithm was used to generate predicted marks for British students bound for university. The computer-generated marks were used as surrogates for the normal A-Level exams, which had been cancelled due to COVID-19. For many students these ‘virtual’ test scores led to dramatic downgrades and to the rejection of their university applications. The article below cites widespread use of algorithms by the British government in situations where they have lifelong consequences.
See “Algorithms can drive inequality. Just look at Britain’s school exam chaos”

About The Prometheus Endeavor
Our mission is to apply our knowledge and management experience to further the IT and Digital Endeavors of society, its institutions, and businesses. The Prometheus Endeavor does not do consulting or represent vendors. For over 30 years, members have advised and managed some of the most successful deployments of IT.

Note: This post is in part adapted from my 2019 article entitled “Robots, Algorithms, Ethics, and the Human Edge” originally published in the Cutter Business Technology Journal.

4 Comments

  1. Richard Hardin

    Great piece, Paul. I wonder if you have additional thoughts on recent AI advances in the hiring/screening world, such as Pymetrics, where they claim and attempt to have very high-powered algorithms countervailing the factors you discuss above. Interested in your thoughts on where “all this” is heading.

  2. Anonymous

    Algorithms will surely get better over time, but there will always be close calls on matters of serious importance to the applicant’s life, so I see a continuing role for people in these situations, just as I believe there will always be a role for dermatologists and radiologists before a patient is put under the knife. The idea of such a decision made without human review should give us chills.

  3. Paul Clermont

    Algorithms will surely get better, but there will always be close calls. When those are about matters that will have serious impact on a person’s life, there’s a role for expert review by a human. This is similar to the role of dermatologists and radiologists. Would we really want to go under the knife without a human expert reviewing and concurring with the computer’s recommendation? The idea should send chills through us!
