Recruit better than the Israeli army
2021-02-15
One of the most striking stories Daniel Kahneman tells is about his recruitment work in the Israeli army. As a young man doing his national service, he was assigned to the Army’s Psychology Branch, where they would evaluate candidates for officer training.
Using a technique developed by the British Army in World War II, they ran a test called the “leaderless group challenge” on an obstacle field. The task
was to get all participants and a log over a wall without either the log or
any participant touching the wall.
While the group worked it out, Kahneman and his colleagues would summarise their impressions of each soldier’s leadership abilities. Those who scored well went on to officer training; the others were left behind.
As he says of their decision making: “The task was not difficult, because we felt we had already seen each soldier’s leadership skills. Some of the men had looked like strong leaders, others had seemed like wimps or arrogant fools, others mediocre but not hopeless. Quite a few looked so weak that we ruled them out as candidates for officer rank. We were quite willing to declare: ‘This one will never make it,’ ‘That fellow is mediocre but should do okay,’ or ‘He will be a star.’”
Every few months they would get feedback from the commanders at the officer training school, allowing them to compare their forecasts with how the cadets were actually doing.
The story was always the same: their ability to predict who would make a good officer was negligible. Their forecasts were only slightly better than blind guesses. Their elaborate leaderless group challenge did not indicate whether a candidate would become a good officer.
Although they were downcast that the evidence showed they could not usefully predict who would do well at officer training, this did not change how they selected the next candidates. New recruits arrived, they followed the same process, and they got the same haphazard results months later.
This was remarkable.
The very strong evidence that their selection process was flawed should have shaken their confidence, and yet they continued as before. As Kahneman says, “We knew as a general fact that our predictions were little better than random guesses, but we continued to feel and act as if each of our specific predictions was valid.”
The link to recruitment
This experience of knowing the process is flawed and continuing as before
reminds me of the recruitment process in most organisations.
The evidence is strong (see the New York Times article “The Utter Uselessness of Job Interviews”) that a series of interviews is a poor method for determining whether somebody will perform well in an organisation. The combination of our own biases and candidates’ ability to sniff out what we want to hear, let alone that 80% of people lie in interviews, means that an interview falls far short of any scientific rigour. And yet interviews remain the most prominent recruitment method.
I was speaking recently to a friend who runs a recruitment company. She says it is remarkable how reliably they can coach a candidate to say exactly what will make the interviewer see them favourably. Stock phrases such as “excited about the role” and “a platform for my ambitions” are guaranteed to elicit favourable responses from the person making the decision, and often lead to the candidate being hired.
When deciding to hire someone, particularly a senior person, it often feels like we are taking a leap of faith. We always hope for a favourable outcome, but the results are mixed. Like Kahneman’s military assessors, we know that our process gives somewhat random results, and yet we keep doing it.
This has fascinated me for many years. When I was CTO of 20twenty in 2003, we were hiring a whole new technology team very quickly to meet the demands of our relaunch with Standard Chartered. Our HR head insisted on a process that involved collating input from multiple people. I thought it was cumbersome and long-winded but agreed to follow it. Well, mostly.
Fast forward a year, and the two hiring mistakes among our 40 new hires were the two people for whom I had overridden the process and made a decision based on a couple of interviews. It was a major lesson for me.
Years later, as I learned how difficult it is for humans to make complex decisions, I realised the genius of the hiring approach our HR head had imposed. I have refined it over the years to remove the hope from the process and to introduce much more scientific rigour. Clients who have used data effectively in their recruitment processes have had much more predictable results.
Data Rich Recruitment
The process is laid out step by step below. There are two important principles to keep in mind throughout.
1. Understand the short-term / long-term trade-off
Identify how much pain you will experience if the person you hire for this
role doesn’t work out. This could feel like planning for divorce as you are
getting married, but it helps motivate you to do slightly more work upfront to avoid the pain later.
Describe the pain to yourself in terms of the time, money, momentum, morale, credibility and personal disappointment involved if you get it wrong. This is the price of getting it wrong.
2. Use a scientific approach
This means that all data is collected and collated independently. It is
important to involve multiple people in the recruitment process. The effort of
involving these people is wasted if we bias their views on the candidate. This
often happens when we chat to each other throughout the process to make sure
we are 'all on the same page'.
This well-intentioned action skews the individual views that each person has
on the candidate. We end up with a "groupthink" outcome, frequently
influenced by the most powerful person or the individual with the strongest or
best-articulated views.
The first step is to list the criteria against which we will measure the
candidate. This may sound obvious but often we hire with the briefest of
descriptions for the role. “We’re looking for a Chief Revenue Officer” or “We
need to hire a good, experienced salesperson” is not enough. Likewise, writing a job description that reads like War and Peace is not useful either.
Research shows that a few well-defined criteria work best.
If we are hiring a Marketing Director then the criteria may be:
- At least five years directing marketing for a company with revenues of more than $1,000,000
- Experience leading a team of at least ten people
- Specific first-hand knowledge of digital marketing
- A track record of delivering qualified leads that convert to sales
- The ability to be a valuable member of the executive team
Three to eight criteria work well. Write them down in a way that makes it easy for you and others to evaluate whether there is evidence that a candidate meets each criterion.
Now that we have the criteria, we can encourage candidates to apply for the role. This is likely to be on a recruitment site or LinkedIn, or perhaps through a headhunter or recruitment company.
Depending on how many people you are likely to attract, it is worth thinking about how you can use one of your criteria as a simple gatekeeping mechanism to filter in only those people who are committed to and eligible for the role.
We want to weed out the tyre kickers and those not able to do the role. In our
example of a marketing director, we could ask them to write 300 words
summarising their digital marketing capability or perhaps provide a two-minute
video highlighting their three key lessons for leading a marketing team.
Either of these hurdles takes only a little effort for a professional who is well qualified for the role. For anyone committed to getting the role, it is a minor piece of extra work, well worth doing to demonstrate their capabilities and differentiate themselves.
We need to be able to scan through these submissions quickly, evaluating who we want to shortlist for the role. We may need to do more than
one round of this screening depending on how many candidates apply. You can
also use one of the measures described in the next step to screen your
candidates.
Next, we design the measures we will use to evaluate these criteria. A typical list looks as follows:
- CV/LinkedIn Profile
- Psychometrics
- Audition
- Presentation
- Interview
- Reference checks
- Video introduction
- Essay / position paper
Some measures are role-specific. For example, the audition could be replaced with a code test for a software developer, while a project manager candidate might be asked to create a project plan for a fictitious or real event.
You might also have specific measures that you have created for your company.
Some organisations do a culture fit interview separate from other interviews.
For each of these measures, we use a scale of 1 to 4 to prevent hedging our bets with a middle option. We score it as follows:
1. No evidence
2. Some evidence
3. Good evidence
4. Lots of evidence
Dropping this into a spreadsheet gives us an easy way of scoring each item. To be thorough, you could create space for a comment against each score, listing the specific evidence you discovered.
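To make this concrete, here is a minimal sketch of how one evaluator’s score sheet could be laid out; the measures, scores and comments are entirely hypothetical:

```python
# A hypothetical score sheet for one evaluator and one candidate.
# The 1-4 evidence scale deliberately has no neutral middle option.
EVIDENCE_SCALE = {
    1: "No evidence",
    2: "Some evidence",
    3: "Good evidence",
    4: "Lots of evidence",
}

# One entry per measure, with room for a comment recording the
# specific evidence behind each score.
score_sheet = {
    "CV/LinkedIn profile": {"score": 3, "comment": "Led a 12-person team"},
    "Audition":            {"score": 4, "comment": "Strong grasp of digital channels"},
    "Interview":           {"score": 2, "comment": "Vague on lead conversion numbers"},
}

for measure, entry in score_sheet.items():
    label = EVIDENCE_SCALE[entry["score"]]
    print(f"{measure}: {entry['score']} ({label}) - {entry['comment']}")
```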
Next, we pull it all together by adding in the specific people who can do each
of the evaluations. Ideally, we have multiple people, meaning multiple data
points for each criterion.
There may be some items, such as psychometric tests, that need a specialist, making multiple data points impossible for those measures.
Your grid now maps each criterion to its measures and to the people doing each evaluation.
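As a purely illustrative sketch (the criteria, measures and evaluator names are all made up), the grid could be represented like this:

```python
# A hypothetical grid for the Marketing Director example: each criterion
# is assessed through one or more measures, and each measure has named
# evaluators, giving multiple independent data points per criterion.
grid = {
    "5+ years directing marketing": {
        "CV/LinkedIn profile": ["Alice", "Ben"],
        "Reference checks":    ["Alice"],
    },
    "First-hand digital marketing knowledge": {
        "Essay / position paper": ["Alice", "Ben", "Chris"],
        "Interview":              ["Ben", "Chris"],
    },
    # A specialised measure may have only one qualified evaluator.
    "Verbal and numerical reasoning": {
        "Psychometrics": ["External assessor"],
    },
}

for criterion, measures in grid.items():
    data_points = sum(len(people) for people in measures.values())
    print(f"{criterion}: {data_points} independent data points")
```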
What about our intuition? What do we do with the gut feeling that someone will or won’t work out? We add it in as a criterion, and each person includes their rating on the same four-point scale. A suggestion from the research is to ask evaluators to close their eyes and imagine the candidate in the role: if they can see the candidate being very successful, they score a four, down to a one if they see them failing.
Another variation is to weight some criteria more than others. A psychometric
test can give insight into personality and specific mental competencies such
as verbal reasoning and numerical literacy. If these are very important
requirements for the role, you may weight these items higher than others.
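If you do weight criteria, the arithmetic is a straightforward weighted sum. A minimal sketch, with entirely hypothetical weights and averaged ratings:

```python
# Hypothetical weighted scoring: weights reflect how critical each
# criterion is for the role; each score is the average of the 1-4
# evidence ratings collected for that criterion.
weights = {
    "Verbal reasoning":   3.0,  # critical for this role, weighted higher
    "Numerical literacy": 3.0,
    "Team leadership":    2.0,
    "Digital marketing":  1.0,
}

scores = {
    "Verbal reasoning":   3.5,
    "Numerical literacy": 3.0,
    "Team leadership":    2.5,
    "Digital marketing":  4.0,
}

weighted_total = sum(weights[c] * scores[c] for c in weights)
max_possible = sum(w * 4 for w in weights.values())  # 4 is the top rating
print(f"Weighted score: {weighted_total:.1f} / {max_possible:.1f}")
```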
You can refine the list above to only those items you believe are critical to the role. Typically, a few well-thought-out criteria work much better than a full-on belt-and-braces approach that is overcomplicated.
The most important aspect of the whole process is that everyone does their scoring alone. It might sound overly rigid not to speak to anyone else about the candidate, but consider what happens if you do.
The moment we talk to each other about a candidate, we bias each other. If the other person says they love the candidate, this will affect what you think of them.
Conversely, if they believe the person will have real challenges with the
role, then your opinion is also going to be affected. Whether you agree or
disagree, it is almost impossible to remain neutral when you hear the view of
someone else.
If you think you are immune to bias, then consider these eight examples of cognitive bias that affect your decision making.
Save the conversations for after all the results have been collated.
Each person submits their scores to an independent person who is not part of the evaluation process: an assistant, a project manager or a junior member of your people team. All they need to be able to do is operate a spreadsheet. They do all the adding up and only release the scores once everyone’s are in.
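The collation itself is simple spreadsheet work; the one rule that matters is that nothing is released until every score is in. A sketch of that collate-then-release step, with hypothetical evaluators and totals:

```python
# Hypothetical collation: the independent person gathers each evaluator's
# scores and only produces a ranking once everyone has submitted.
expected_evaluators = {"Alice", "Ben", "Chris"}

# submissions[evaluator][candidate] -> that evaluator's total 1-4 ratings
submissions = {
    "Alice": {"Candidate A": 22, "Candidate B": 18},
    "Ben":   {"Candidate A": 19, "Candidate B": 21},
    "Chris": {"Candidate A": 24, "Candidate B": 17},
}

def release_ranking(submissions, expected):
    missing = expected - submissions.keys()
    if missing:
        # Scores stay sealed until everyone has submitted, so nobody's
        # view can bias anyone else's.
        raise RuntimeError(f"Waiting on: {', '.join(sorted(missing))}")
    totals = {}
    for scores in submissions.values():
        for candidate, score in scores.items():
            totals[candidate] = totals.get(candidate, 0) + score
    # Highest combined score first.
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

for candidate, total in release_ranking(submissions, expected_evaluators):
    print(f"{candidate}: {total}")
```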
When complete, you will have a ranking of your best candidates. I've found
that the people who follow this process are significantly more confident in
making recruitment decisions.
Commonly, more than one candidate is eligible at the end of the process. It may seem obvious to pick the person who scores best in your grid, but this is not always the case: sometimes the process surfaces elements that convince you to choose the #2 or #3 candidate, who also more than meets your minimum requirements.
Learning from Kahneman
Kahneman’s recruitment experiences in the Israeli army helped him to identify what he describes as his first cognitive illusion: the illusion of validity, the belief that an approach will give a valid result despite conclusive evidence to the contrary.
Kahneman's circumspect approach to his own expertise gives an insight into
what drove his and Amos Tversky's decades-long quest to understand and improve
our decision-making ability.
When recruiting, we are making a complex decision, weighing up many factors to predict how well a person will fit a role. Anyone who believes they are good at recruiting could learn from Kahneman’s experience. To avoid overestimating our abilities, we need to collect the data and make sure we are not suffering from the illusion of validity. By using data to track our predictions against results, we can revisit what we believed would happen with our recruits and hone the process to incorporate what we learn.