Dec 2, 2022

Hiring, Fast and Slow

It’s no secret that when picking stocks, most people, including professionals, can barely keep up with a monkey. What’s less known, though, is that human beings in general are poor forecasters, and it impacts us in many ways beyond stocks. Further, there is little evidence that people can be trained to be better forecasters, and in fact, we struggle to even acknowledge this serious shortcoming.

One such place where good forecasting could be very helpful is in hiring. Hiring is largely a forecasting exercise: it is highly subjective, and its results are generally poor. But I have learned some ways to improve this process in spite of our forecasting limitations, primarily by removing our instincts from it. Let me explain.

I’ve worked in information technology all of my professional life, and have spent a significant portion of this time hiring technical people, primarily software engineers. Early in my career, I learned how differently a person may perform on the job compared with the impression I had from their interview. I soon realized that job applicants are presenting their best side and will work hard to conceal their shortcomings.

I also learned that almost anyone can look good on a resume or a reference. One of the first hiring tips I received was to read through a candidate’s resume but to then set it aside, giving little value to its content. It’s not that people lie but rather that facts are fungible. The same is true with references. They are as good as candidates at projecting an overly optimistic picture.

I also noticed that several different people interviewing the same candidate can have widely different opinions, and all sides can be wildly wrong. And their opinions can change for no apparent reason.

Not surprisingly, for many years our hiring success wasn’t much better than that of the monkeys picking stocks.

I did learn several other things, though. I learned that the best indicator of future performance is past performance, and the best approach is to try to figure out what a candidate has done in the past. Not what they have wanted to do, not some spongy analysis of how amazing they have been, but what specific actions they have actually taken.

I learned to avoid discussions of dreams and goals. I learned that even if people could evaluate themselves well, you’re probably not going to get a reliable answer. I rarely asked “what-if” questions; they are another form of self-evaluation that is of little help in judging past performance.

I learned to collect as many facts as possible based primarily on my questions, not their rehearsed summaries of their activities. I focused on specific examples, drilling into the smallest details. It is harder for a candidate to embellish their record once they are asked to describe a specific interaction, such as a certain topic discussed with a peer, or the smallest details of a program they wrote.

I learned that high GPAs, awards and fancy colleges may not mean much. They may, but it depends. Finally, I learned that one of the best sources of reliable information, even for a candidate with professional experience, is a careful review of their college transcripts.

But the best help I ever got was from the book “Thinking, Fast and Slow” by Daniel Kahneman, one of the more fascinating books I’ve ever read. It describes the two modes of thought people work with. One is “fast,” instinctive and emotional, and the other is “slow,” more deliberative and more logical.

His lifetime of research strongly suggests that people have too much confidence in human judgment, specifically with forecasting, probably because forecasts tend to use your fast brain. The problem with your fast brain is that, although it’s great at dealing with a current situation such as an immediate danger, it is not very good at looking years ahead.

In an especially interesting section, he goes through his research on both stock picking and hiring. Both tasks require an ability to forecast the future. He highlights the overwhelming evidence that professional stock pickers have awful track records doing what they’re purportedly paid to do. And the same is often true for hiring, even when done by professionals. The results can be just as bad.

This is seemingly true of any forecasting. Most forecasts are just slightly better than guesses. Economists, with their track record of horrible financial forecasts, are a great example. And even when presented with this information, we continue to feel and act as if each of our specific predictions was valid. Our emotional minds are so confident in our views that we are mostly unable to hear evidence to the contrary.

Mr. Kahneman provides a psychological argument for why this is true. Probably for efficiency, the human mind has a strong “halo effect” which “inclines us to match our view of all qualities of a person to our judgment of one particularly significant one.”

For example, if we think a baseball pitcher is handsome and athletic, we are likely to rate him better at throwing the ball, too. The halo effect tends to have us hiring people based on factors largely unrelated to success, such as personality and physical appearance, and then extending those impressions to other important factors with little regard for the facts.

I vividly remember the glowing review an employee gave after interviewing a candidate. The evaluation was heavily focused on the candidate’s charming personality, humor and the fact that, right during the interview, the candidate fixed a problem the interviewer had with their PC. But most software engineers, and many amateur techies, probably could have fixed the same problem. It had questionable relevance to the position but a notable impact on the interviewer.

Kahneman’s evidence suggests that a hiring formula can be developed for a specific position, industry or company, and this formula will outperform the professionals. His theory is that if you remove much of the broad judgment interviewers make, and instead focus on a handful of clear indicators that the slow brain sees as predictive of success, you can significantly improve your hiring. The trick is to push aside the instinctive and emotional fast brain normally used in hiring.

I was intrigued by his argument. Using this information and some of my own experiences, I radically changed the way we hired, with great results.

Instead of trying to intuit what a candidate would do at a job, I started basing our hiring decisions primarily on a simple formula using specific quantifiable data, data that seemed predictive of good employees. Here is how I did it.

From a large pool of current and former employees, my managers and I ranked each employee and divided them into quartiles, top to bottom. We based the rankings on several measures that we were able to quantify.

Then, based on this ranking, I took the top 25% of our current and past employees and compared them with the bottom 25%, looking for notable differences. I looked only at differences in what we knew about them before they were hired, and did not consider items we learned later but could not reasonably have known when evaluating them as candidates.

My idea was to see if there was any information we could use in future hiring that might help us select people that would perform more like our top 25% and less like our bottom 25%.
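For anyone curious how such a comparison might look mechanically, here is a minimal sketch in Python. The records, field names (math_sci_gpa, had_calculus, avg_tenure_yrs) and values are hypothetical illustrations, not our actual data or indicators; the point is simply the mechanics of contrasting the pre-hire facts of the top quartile with those of the bottom quartile.

```python
# Minimal sketch: compare pre-hire facts between top and bottom performers.
# All records, field names and values here are hypothetical illustrations.
from statistics import mean

# Each record holds only facts knowable BEFORE the person was hired,
# plus the performance quartile (1 = top, 4 = bottom) assigned later.
employees = [
    {"quartile": 1, "math_sci_gpa": 3.8, "had_calculus": True,  "avg_tenure_yrs": 3.5},
    {"quartile": 1, "math_sci_gpa": 3.4, "had_calculus": True,  "avg_tenure_yrs": 4.0},
    {"quartile": 4, "math_sci_gpa": 2.6, "had_calculus": False, "avg_tenure_yrs": 1.2},
    {"quartile": 4, "math_sci_gpa": 2.9, "had_calculus": False, "avg_tenure_yrs": 1.8},
]

top = [e for e in employees if e["quartile"] == 1]
bottom = [e for e in employees if e["quartile"] == 4]

# Numeric fields: compare group averages, looking for a notable gap.
for field in ("math_sci_gpa", "avg_tenure_yrs"):
    print(f"{field}: top={mean(e[field] for e in top):.2f}, "
          f"bottom={mean(e[field] for e in bottom):.2f}")

# Boolean fields: compare the fraction of each group with the attribute.
def rate(group, field):
    return sum(e[field] for e in group) / len(group)

print(f"had_calculus: top={rate(top, 'had_calculus'):.0%}, "
      f"bottom={rate(bottom, 'had_calculus'):.0%}")
```

Any field with a consistent gap between the two groups is a candidate indicator; anything with no gap gets dropped, no matter how important it feels.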

The results were astounding! Please remember that these results are for software engineers working in our organization. Your results may be much different for your needs.

We were able to find significant differences in success based on their level of education, the specific school they attended, their degree and the GPA they received. This was true of both entry-level and experienced candidates. Amazingly, we were even able to find a strong correlation with their letter grades in a couple of classes normally taken as part of standard software engineering training. It was also notable that we found no discernible difference between a bachelor’s and a master’s degree.

We found several significant indicators for job retention, including their past positions and where the jobs were. We found correlations with their previous experience, too, specifically with what basic responsibilities they had in each position and how long they stayed at that position.

Although I considered many factors, I was able to reduce this information to 4-5 key indicators that summarized a candidate. These indicators fit on the back of an envelope.

This is one of the other secrets of formulaic hiring and forecasting: don’t use excessive analysis and detail. There are probably fewer than a half dozen discernible indicators that will get you the same or better results. Further, too much detail and too much latitude open the process up to letting your intuition abscond with your hiring, rather than a few clear facts.

I didn’t try to understand why a certain indicator predicted certain behaviors but rather just accepted the facts. For example, one highly rated college produced strong graduates on paper, but on the job they had a negative correlation with successful hires. It’s counterintuitive, but we accepted it. Anything that lets your fast brain back into the process is not good.

The process worked very well. We still had misses, and surely we passed on some good people. But confidence in hiring improved significantly.

I was now able to quickly filter out candidates. Once a candidate passed our key screening, we then did several interviews, letting various people take a try at deciphering the candidate, primarily along our key indicators.

We did our best to ignore dress, communication, culture fit, commitment, strategic value and motivation, all words often bandied about as critical to an organization’s success. But our analysis suggested otherwise.

There is one serious problem with this approach, though. People, especially professionals and managers, will normally discard any thought of formulaic hiring, responding with hostility. Any notion that you can mostly ignore your intuition when making hiring decisions, and instead rely primarily on a handful of facts, is seen as nonsense.

Apparently, it upsets people to think that their assumed ability to read people is largely an illusion. After several tries over a long period of time, and in spite of an obvious improvement in our hiring success, I gave up trying to explain our methodology.

Based on Kahneman’s book, we also eschewed onsite testing. It’s paraded around as a must-do but there’s little evidence that a quick test tells you much about one’s ability to work well in a position.

To repeat, he instead highly recommends that you look at what the individual has already done in the field, and that you not try to assess those abilities by testing, intuition or any other means that has shown little ability to forecast how an individual will perform in your organization. You do not want to look into their eyes and find the impression you’ve already decided on, normally based on other, often irrelevant, factors.

My suggestion is similar to Mr. Kahneman’s. However you do it, identify a few key facts about the candidate’s past performance that should be good indicators of success and can normally be elicited as part of your screening process. These can be based on your past hiring experience, as mine were, or on your own ideas of what seem to be reasonable factors in assessing an applicant for a position. Finally, don’t overcomplicate your process.

Have each interviewer rate the candidate 1-5 on each indicator, add up the results and hire the candidate with the highest score. You’ll make mistakes but a lot fewer of them.
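As a sketch of that arithmetic (the indicator names, interviewers and ratings below are all made up for illustration; the real list should be whatever your own analysis produced):

```python
# Sketch of the final scoring step: each interviewer rates the candidate
# 1-5 on each key indicator, and the summed total decides the hire.
# Indicator names and ratings are hypothetical illustrations.

def total_score(ratings_by_interviewer):
    """Sum every interviewer's 1-5 rating on every indicator."""
    return sum(
        rating
        for ratings in ratings_by_interviewer
        for rating in ratings.values()
    )

candidates = {
    "candidate_a": [  # one dict of ratings per interviewer
        {"math_sci_gpa": 4, "calculus_grade": 5, "prior_tenure": 3, "prior_duties": 4},
        {"math_sci_gpa": 5, "calculus_grade": 4, "prior_tenure": 3, "prior_duties": 3},
    ],
    "candidate_b": [
        {"math_sci_gpa": 3, "calculus_grade": 2, "prior_tenure": 4, "prior_duties": 4},
        {"math_sci_gpa": 3, "calculus_grade": 3, "prior_tenure": 5, "prior_duties": 3},
    ],
}

scores = {name: total_score(ratings) for name, ratings in candidates.items()}
print(scores)                       # {'candidate_a': 31, 'candidate_b': 27}
print(max(scores, key=scores.get))  # hire the highest total
```

No weights, no adjustments, no overriding the total because someone “just felt” differently about a candidate; that feeling is the fast brain trying to get back in.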

4 comments:

  1. What classes did you find had the highest correlation? Do you have any idea if there was an underlying causation, and if so, what it might be?

    1. Matt - The short answer is that there was a very strong correlation between successful hires and the highest grade they received in any of their college calculus classes. That is, almost 100% of successful hires had received at least a B in at least one of their calculus classes. It was common for low performers to have not taken calculus at all, and every one of our top employees had taken upper-level calculus. Further, the GPA I used with candidates was the GPA of only their math and science classes, which was a better indication of success than their overall GPA.

    2. Interesting. Were there any programming classes in which you found such a correlation, or 'just' math? I'm not surprised that the problem solving involved in mathematics around calculus and beyond had a correlation, but if I had been left to guess, I would have expected a core programming class to correlate more strongly with software development.

  2. Great post - very informative!
