Written by Dr. Becca Ciancanelli, Director of Inclusive Teaching Practices, and Dr. Stephen Riley, Director of Assessment
Implicit bias can be sneaky. The term refers to the unconscious bias that shapes the assumptions we make about students based on social identity (Imazeki, 2021). These assumptions can be invisible to us, especially in course-level assessment.
For example, when grading papers, professors might be influenced by the student’s perspective on a topic and therefore focus less on the quality of the argument (Steinke & Fitch, 2017). Confirmation bias might lead a professor to give a lower grade to a student they perceive as disengaged, or who has not performed well on prior assessments (Malouff et al., 2013). Courses that rely on a few high-stakes assessments, like midterm exams, might block certain students from successfully demonstrating mastery of the content. Taking proactive steps to bring awareness to the assessment structure and grading process will reduce the likelihood of bias in assessment.
Bias in assessment can show up in many ways. One prominent place it can appear is in how we structure our courses around summative assessments.
Summative assessments are the assignments we design to determine whether students have mastered the main learning outcomes of our courses. In many cases, these assignments take the form of a research paper or a standardized test. While both approaches to assessment have merits, they carry some implicit biases as well.
For example, the research paper can disproportionately affect English Language Learners (ELLs) or students with ADHD, who may find the connection between research and writing especially difficult. Standardized tests often favor students with strong recall skills and well-developed strategies for coping with test anxiety, while disproportionately affecting students with processing disorders who may not be able to easily move information from short-term to long-term memory.
In such cases, it is important to recognize that Student Learning Outcomes (SLOs) are especially helpful in making explicit what information and skills our courses intend to assess. When we can articulate those outcomes clearly, we may be able to mitigate some biases in assessment by moving toward flexible assessments, as proposed by Universal Design for Learning (UDL). With flexible assessment, students are given options for how they might demonstrate mastery of a course’s SLOs in a summative assessment while still maintaining rigor and integrity. For example, if our summative assessment has been a final research paper, we could offer students the opportunity to propose a final project instead, graded with a single rubric that includes the key information and skills aligned with the SLOs of the course being assessed.
There are many excellent examples of how this has been done in different disciplines and the results include increased student engagement and creativity while removing barriers and biases that hinder student learning (Edwards, 2020).
Another way bias may present is through our grading of student work. Researchers have documented a number of grading biases. For example, J. M. Malouff has written extensively about biases such as the ‘Halo’ effect, where a professor grades students based on an overall impression of each student rather than the actual submitted work (Malouff et al., 2013). Others have pointed out effects such as the ‘Anchor’ effect, where all students are graded against the work of one superior or creative student, or the ‘Logical Fallacy’ effect, where students are assessed on criteria tangential to the learning outcomes. It is easy to see how these and similar effects could disproportionately affect marginalized and vulnerable students in our courses.
There are ways to mitigate these biases, such as turning on anonymous grading in Canvas and having students submit assignments with no identifying information. Another approach is using a clear and accessible rubric, which you can set up in Canvas. For example, using the AAC&U’s VALUE Rubric for Intercultural Knowledge and Competence for grading would ensure students understand the clear guidelines of what is being assessed in an assignment with this outcome. Such a rubric would also give clear, actionable steps for meeting distinct levels of success based on a nationally normed set of criteria. Finally, working with others to check the consistency of our grading, known as interrater reliability, can help ensure we are grading assignments with both rigor and equality. This could be achieved by meeting with other members of your team to review scoring practices against the same rubric on the same assignment.
Investigating your language regarding assessment, on your syllabus and in class, will help to interrupt bias. Many students struggle to understand expectations regarding assessment. Being transparent about how your assessment design aligns with your learning outcomes, as well as signaling effective study strategies for mastering your disciplinary content, can reduce barriers to learning for many students. Consider a culturally responsive approach to assessment, such as contract-based grading, which creates a collaborative classroom environment where students can benefit from working together and ultimately prepares students for their future workplaces (Jack & Sathy, 2021; Stephens et al., 2012). Also, encouraging students as they prepare for an assessment can reduce stereotype threat (Learning for Justice, n.d.).
Please explore the “Inclusive Assessment” Module to review key definitions, DEI approaches to assessment, and a suggested syllabus statement.
Bringing self-awareness to implicit bias about student performance, which we all have, is a strong step towards creating inclusive environments where all students can thrive.
- Stachowiak, B. (Producer and Host). (2021, October 14). Implicit Bias in Our Teaching with Jennifer Imazeki [Audio podcast]. Teaching in Higher Ed. https://teachinginhighered.com/podcast/implicit-bias-in-our-teaching/
- Steinke, P., & Fitch, P. (2017). Minimizing Bias When Assessing Student Work. Research and Practice in Assessment, 12(Winter), 87–95.
- Malouff, J. M., Emmerton, A. J., & Schutte, N. S. (2013). The Risk of Halo Bias as a Reason to Keep Students Anonymous During Grading. Teaching of Psychology, 40(3), 233–237.
- Jack, J., & Sathy, V. (2021, September 24). It’s Time to Cancel the Word ‘Rigor’. The Chronicle of Higher Education. https://www.chronicle.com/article/its-time-to-cancel-the-word-rigor
- Stephens, N. M., et al. (2012). Unseen Disadvantage: How American Universities’ Focus on Independence Undermines the Academic Performance of First-Generation College Students. Journal of Personality and Social Psychology, 102(6), 1178–1197.
- Learning for Justice. (n.d.). How Stereotypes Undermine Test Scores. Retrieved September 12, 2022, from https://www.learningforjustice.org/professional-development/how-stereotypes-undermine-test-scores