Interviews are not a great way to find out how good someone will actually be at the job. They can also make it easy for your unconscious biases to have too much influence. This has led some to suggest that interviews are broken and we need to find other ways; but it is possible to do interviews in a much fairer and more informative way.
Let’s say you want someone who is good at setting vision and direction for a large team. A question like “How would you set vision and direction for a large team?” is only going to give you hypothetical answers.
All you will learn is how they talk about doing it, how much they’ve read around the topic of vision-setting, and also how much what they say chimes with what you believe. But we know that planning to do something a certain way and actually doing it that way are two very different things. They might say all the right things, but then not be able to follow through.
A better question would be “Tell us about a time that you set vision for a team”, because that will get them to talk about their actual past behaviour, which is a much better indicator of their future behaviour. It also gives you an opportunity to ask follow-up questions to find out more about their approach, like “How did you do it?”, “What was the result?”, “What didn’t go so well?” None of those questions are hypothetical, so you’re not asking the candidate to make something up that you want to hear; you are asking them what they did, and how they thought about it.
Note that it doesn’t have to be the exact same thing, because chances are this job will be a step up in one way or another. You need to work out what the important part is, and ask about that. So my question above was about setting vision for a team, not for a large team, because the important thing I wanted to know about was vision-setting.
Interviewing like this is something the civil service does very well. When advertising a job, the hiring manager works out what competencies the job will involve.
Let’s say you are hiring a tech lead. What should this person be able to do? You might come up with things like: set technical direction, communicate effectively with other teams, communicate effectively with the business, and unblock team members.
You may come up with a very long list. Identify the four or five that are essential to the role, and then devise questions around those skills. “Can you give us an example of a time that you unblocked a team member?” for example. Or, “Tell us about a time that there was a breakdown in communication between engineers and the business. What did you do to improve the situation?”
Spending time thinking about exactly what you want from the candidate is time-consuming, but nowhere near as time-consuming as hiring the wrong person.
As well as being relevant to the role and asking about past performance, you need to make sure the questions can be answered in a way that you can judge objectively. For example, with the vision-setting question, you’d be looking for things like whether they’d considered a diverse audience, whether and how they measured the impact, etc.
If you ask a vague question like “Have you worked in an Agile environment? What did you find positive and negative about it?”, what criteria can you judge their answer on, other than whether that sounds like the kind of thing you’d agree with?
Once you’ve identified the most important areas, write out the questions and ask all candidates the same ones. This may feel unnatural, as you will be following a script, but it’s the only way to make sure that you aren’t just making biased assumptions and are actually comparing the information each candidate gives you against the information from the others.
I recommend you come up with around five or six questions. Once you’ve allowed time for introducing the panel and explaining the interview, plus 5-10 minutes at the end for answering their questions, six is the most you can reasonably cover in an hour.
Some people do go into a lot of detail, so be prepared to cut them off if necessary so that there is time for every question – you want to give them the best chance possible to demonstrate the full range of skills.
You want to know as much as possible about the extent and effectiveness of what they did, so you can understand whether they have the skills you are looking for.
A useful model to think of is context/action/result (the mnemonic is CAR). If they don’t tell you all of that as part of their answer, you can prompt them.
For example, take the question about unblocking a team member. Context: Why was the team member blocked? Action: What did you do to unblock them? Result: What was the outcome?
Prompting like this helps them make sure you have the information you need. Remember, this is about giving them the opportunity to provide the most useful information to help you decide whether they are right for the role.
(For candidates: I’ve got a bit more advice about how to structure your answers and think of examples in this blog post for the Government Digital Service, or read this excellent Twitter thread by Beth Fraser.)
To make the most of the process, you need to give a score for each question as soon as you can – ideally immediately after the interview, while the discussion is still fresh in your mind. This helps you focus on how they performed against each of the important skills, rather than being influenced by what you thought of them personally, or your general sense of whether they’d be good (which will be informed by your unconscious biases).
This also helps address the primacy/recency effect, where you will think more highly of them if they answered the first and last question well, or if they were the first or last candidate.
At GDS we used a scale of 0-3.
0. They demonstrated no evidence of this skill.
(e.g. if the question is about Perl, they cannot write it at all.)
1. They showed that, with support, they can master this skill.
(They made a few mistakes, but they'll get there.)
2. They definitely demonstrated this skill.
(They can start writing Perl on day one.)
3. They exceeded the skill level required for this role.
(They are Larry Wall*.)
To avoid biases, you need at least two perspectives and ideally three, so the interviewers should differ from each other: for example, they could be from different disciplines, different levels of seniority (e.g. a junior and a senior), different ethnic backgrounds, and so on.
Each interviewer should make notes during the interview, to allow comparison and feedback later, and then score each question independently before discussing as a panel, to guard against the scores being influenced by what the others think.
Interviews are not meant to be a stressful test or to contain trick questions. You want to get the right person for the job, not to outwit anyone. So the whole process should be clear and transparent to the candidate.
In the job advert, spell out the skills you are looking for. When you invite candidates to interview, let them know that they can prepare by thinking about examples of times they’ve demonstrated those skills, and perhaps tell them about the context/action/result model.
This means that people will be set up to do their best and you’ll really be able to give them a chance to shine. It also means that you can give unsuccessful candidates clear feedback on what areas to work on.
If you want to read more about this, here is an article about why interviews are useless. But, in an almost throwaway comment at the end about what can be done instead, it says: “One option is to structure interviews so that all candidates receive the same questions, a procedure that has been shown to make interviews more reliable and modestly more predictive of job success. Alternatively, you can use interviews to test job-related skills, rather than idly chatting or asking personal questions.”
How to do exactly that is what I’ve described here.