How to Remove Bias from Tech Interviews

In a competitive job market where tech skills are in short supply, organisations are under increasing pressure to build diverse, high-performing teams. However, unconscious biases frequently creep into traditional, unstructured interview processes, resulting in hiring decisions based on subjective factors rather than genuine job qualifications. This is where structured interviews shine as a powerful tool for promoting fairness and objectivity. Taking a data-driven approach not only helps organisations identify great talent but also supports a commitment to building an inclusive and equitable workplace.

While structured interviews do not completely eliminate bias, they significantly reduce the influence of unconscious biases compared to unstructured, conversational interviews. The result is a fairer, more consistent, and legally defensible hiring process focused on identifying the best candidates.

Asking the same predetermined, job-relevant questions in the same order ensures consistency and prevents interviewers from asking subjective or potentially biased questions based on a candidate’s characteristics.

Equally important is standardising the scoring or decision criteria. Asking a pre-written question does not guarantee that the interviewer’s judgement of the answer will be free of unconscious bias. Therefore, it’s critical to also give interviewers a guide on what to look for in candidates’ answers, to minimise scoring discrepancies across interviewers and ensure a fair and reliable evaluation process.

How unconscious bias affects judgement

One example of how unconscious bias could affect the judgement of a candidate’s answer to a structured interview question with no scoring guide is the “Halo Effect”. The Halo Effect is a type of cognitive bias where one positive trait or characteristic of a person influences the overall perception of them, even on unrelated traits.

For instance, if a candidate gives an excellent answer to one of the initial structured questions, the interviewer may be subconsciously influenced by that positive impression. As a result, they may rate subsequent answers from that candidate more favourably, even if the responses are average or do not fully address the question.

Another example is “name bias”. Suppose a candidate has a name associated with a particular ethnicity or background that aligns with the interviewer’s unconscious preferences. In that case, the interviewer may inadvertently score that candidate’s responses higher than others’, regardless of the actual answer quality.

Without a well-defined scoring rubric that outlines specific criteria for evaluating responses objectively, interviewers are more susceptible to letting unconscious biases like the halo effect or name bias influence their assessments of candidates’ answers.

Clear scoring guides help mitigate these biases by providing a structured framework for evaluating responses based solely on their content and alignment with the desired competencies or qualifications for the role.

How to write structured interview questions

1. Define the job requirements

  • Identify and list the key skills, knowledge, and abilities required for the role. This will form the basis for your interview questions.
  • Categorise the requirements into different competencies or areas, such as technical skills, problem-solving abilities, communication skills, leadership qualities, etc. (One way to record these is sketched below.)
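
As a minimal sketch of how a team might record these requirements (the Competency structure and the example competencies below are hypothetical illustrations, not a prescribed format):

```python
from dataclasses import dataclass


@dataclass
class Competency:
    """A skill area the interview must assess."""
    name: str
    description: str      # what proficiency looks like for this role
    weight: float = 1.0   # optional weighting when totalling scores


# Hypothetical example for a senior engineer role
ROLE_COMPETENCIES = [
    Competency("Technical skills", "Designs and ships production services"),
    Competency("Problem solving", "Breaks ambiguous problems into clear steps"),
    Competency("Communication", "Explains trade-offs to non-engineers"),
    Competency("Leadership", "Mentors and unblocks teammates", weight=0.5),
]
```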

2. Develop questions for each competency

  • For each competency, create specific questions that will allow you to assess a candidate’s proficiency in that area. Use a mix of question types:
    • Behavioural questions: Ask candidates to describe past experiences that demonstrate the desired competency, e.g., “Tell me about a time when you had to resolve a conflict within your team.”
    • Situational questions: Present hypothetical scenarios and ask how the candidate would respond, e.g., “How would you handle a missed deadline on a critical project?”
    • Knowledge-based questions: Test the candidate’s technical knowledge or expertise relevant to the role.
  • Ensure questions are clear, concise, and open-ended to allow for detailed responses.
  • Avoid leading questions or those with obvious right/wrong answers. (A sketch of a simple question bank follows this list.)
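
Building on the competency sketch above, the question bank itself can be kept as data so every candidate gets the same questions in the same order. The QuestionType labels and the third example question are illustrative assumptions:

```python
from dataclasses import dataclass
from enum import Enum


class QuestionType(Enum):
    BEHAVIOURAL = "behavioural"   # past experience
    SITUATIONAL = "situational"   # hypothetical scenario
    KNOWLEDGE = "knowledge"       # technical expertise


@dataclass
class Question:
    competency: str
    qtype: QuestionType
    text: str


QUESTION_BANK = [
    Question("Communication", QuestionType.BEHAVIOURAL,
             "Tell me about a time when you had to resolve a conflict within your team."),
    Question("Problem solving", QuestionType.SITUATIONAL,
             "How would you handle a missed deadline on a critical project?"),
    Question("Technical skills", QuestionType.KNOWLEDGE,
             "Walk me through how you would design a rate limiter."),
]

# Every candidate sees the same questions in the same order:
for q in QUESTION_BANK:
    print(f"[{q.competency} / {q.qtype.value}] {q.text}")
```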


How to design a rating or scoring system 

Define rating or scoring criteria

I. For each interview question, determine the ideal response that would demonstrate the required competency or skill. This becomes the benchmark for the highest score. 

II. Develop a rating scale, typically ranging from 1 to 5 or 1 to 7, where 1 represents a poor or unacceptable response, and the highest number represents an excellent or ideal response.

III. Clearly define what constitutes each rating on the scale for that specific question. For example, a score of 5 could mean “Response demonstrates complete mastery of the competency.” A sketch of such a scale follows.
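A minimal sketch of a 1–5 scale with a descriptor attached to each rating; the descriptor wording here is a hypothetical example to be tailored per question:

```python
# A 1-5 scale with an explicit descriptor for each rating.
# Descriptors are hypothetical examples; tailor them per question.
RATING_SCALE = {
    1: "Does not address the question or shows no evidence of the competency",
    2: "Partial answer; significant gaps in the competency",
    3: "Adequate answer; meets the basic expectations of the role",
    4: "Strong answer; exceeds expectations on most criteria",
    5: "Response demonstrates complete mastery of the competency",
}


def describe(score: int) -> str:
    """Look up what a given score is supposed to mean."""
    if score not in RATING_SCALE:
        raise ValueError(f"Score must be between 1 and {max(RATING_SCALE)}")
    return RATING_SCALE[score]
```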

Create a guide

This is the part most frequently left out or done badly. And by badly, we mean over-engineered. So our advice is to keep it simple! If it’s too wordy or complex, interviewers simply won’t use it. Usability is key here.

  • Document the scoring criteria for each question in a scoring guide or rubric. This ensures consistency across all interviewers evaluating candidates for the same role.
  • The scoring guide should include the question, the desired competency being evaluated, a description of the ideal response (highest score), and descriptions for each rating on the scale.
  • For behavioural questions, the scoring guide can provide examples of potential responses that would qualify for each rating level. (A sketch of combining rubric scores across interviewers follows this list.)
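
To show how a rubric-based process might combine ratings, here is a hedged sketch that averages each competency’s scores across interviewers so no single rater’s impression dominates. The interviewer names and the question-to-competency mapping are made up for illustration:

```python
from collections import defaultdict
from statistics import mean

# scores[interviewer][question_id] = rating on the agreed scale (hypothetical)
scores = {
    "interviewer_a": {"q1": 4, "q2": 3},
    "interviewer_b": {"q1": 5, "q2": 3},
}

# Which competency each question assesses (hypothetical mapping)
question_competency = {"q1": "Communication", "q2": "Problem solving"}


def competency_averages(scores, question_competency):
    """Average each competency's ratings across all interviewers."""
    by_competency = defaultdict(list)
    for ratings in scores.values():
        for qid, rating in ratings.items():
            by_competency[question_competency[qid]].append(rating)
    return {comp: round(mean(vals), 2) for comp, vals in by_competency.items()}


print(competency_averages(scores, question_competency))
# e.g. {'Communication': 4.5, 'Problem solving': 3.0}
```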

While no hiring process is perfect, research consistently shows that structured interviews with effective scoring rubrics significantly reduce the influence of biases compared to unstructured interviews.

Improving objectivity reduces the influence of subjective impressions or personal biases when evaluating candidates. Another benefit is the improved reliability and validity of the interview process in predicting future job performance. That’s not a small benefit!
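
As a loose illustration of what “reliability” means here: once rubric scores are recorded as numbers, you can sanity-check whether two interviewers apply the scale consistently, for example with a simple correlation. The ratings below are invented, and statistics.correlation requires Python 3.10+:

```python
from statistics import correlation

# Ratings from two interviewers on the same six questions (hypothetical)
rater_a = [4, 3, 5, 2, 4, 3]
rater_b = [4, 2, 5, 3, 4, 3]

# A quick inter-rater reliability check: high correlation suggests the
# rubric is being applied consistently across interviewers.
print(f"Inter-rater correlation: {correlation(rater_a, rater_b):.2f}")
```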

Candidate experience can also be positively impacted by an improved ability to provide structured feedback, replacing the extremely beige and overused fallback “we had someone who more closely matched the role”.

And one more, less often considered benefit is protection against legal challenges. A scoring rubric demonstrates a structured, job-related interview based on objective criteria.

Organisations that are accredited by Project F receive a range of resources to support People & Culture and Technology teams with DEI challenges such as this one. Join the Project F movement and let’s work together to close the gender gap in technology! Get in touch: hello@projectf.com.au
