Mikel Villamañe, Ainhoa Alvarez, Mikel Larrañaga
This poster relates to the thread “A culture of learners”, more specifically to supporting teachers and students in their assessment processes. Assessment is often the key element used to decide whether the implemented actions and techniques are effective, as it allows measuring teaching and learning outcomes (Dunn et al., 2011) and analyzing how to improve them. However, for assessment to be a reliable measure, fair marking that truly reflects student performance must be guaranteed. The first step towards this fairness is the standardization of the assessment criteria (Chan, 2001), which can be achieved, for example, through the use of rubrics. Defining good rubrics is a complex task that can be supported by e-assessment tools (Villamañe et al., 2016).

Even when assessment criteria have been established, objectivity is not always assured. Systematic patterns in evaluation behavior can significantly influence the final grade (Engelhard & Wang, 2015). These behaviors, called rater effects, can occur unconsciously, due to the different personal perceptions and tendencies of the raters, or deliberately, to raise or lower a particular student’s score. The data gathered during an evaluation process often cover many students, each with several works, and each work scored by different raters, so analyzing those data to detect rater effects is not trivial. Therefore, it is important to provide software that automates some aspects of rater monitoring (Wolfe, 2014), for example by computing statistics for individual raters and automatically detecting scoring patterns. Such software can also support the process of gathering and analyzing information (Ras et al., 2015), helping to make sound decisions and to improve both the assessment process itself and the quality of the teaching-learning process (Rodríguez-Conde et al., 2016).
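As a minimal illustration of the kind of automated check such software can perform (a sketch of one possible statistic, not AdESMuS’s actual method), the following Python snippet computes a simple severity/leniency score per rater from hypothetical (work, rater, score) records and flags raters whose average deviation from each work’s mean score stands out; the data, statistic, and threshold are illustrative assumptions.

```python
from collections import defaultdict
from statistics import mean, stdev

# Hypothetical input: one record per (work, rater) pair.
scores = [
    ("work1", "raterA", 7.0), ("work1", "raterB", 8.0), ("work1", "raterC", 8.5),
    ("work2", "raterA", 5.0), ("work2", "raterB", 6.5), ("work2", "raterC", 6.0),
    ("work3", "raterA", 6.0), ("work3", "raterB", 7.5), ("work3", "raterC", 7.0),
]

# Mean score each work received from all of its raters.
by_work = defaultdict(list)
for work, rater, score in scores:
    by_work[work].append(score)
work_mean = {work: mean(vals) for work, vals in by_work.items()}

# Severity/leniency: a rater's average deviation from the per-work mean.
# Negative values suggest a severe (harsh) rater, positive a lenient one.
deviations = defaultdict(list)
for work, rater, score in scores:
    deviations[rater].append(score - work_mean[work])
severity = {rater: mean(devs) for rater, devs in deviations.items()}

# Flag raters whose severity stands out from the group (illustrative threshold:
# more than one standard deviation of the severity values).
values = list(severity.values())
cutoff = stdev(values) if len(values) > 1 else 0.0
for rater, s in sorted(severity.items()):
    flag = "  <-- possible rater effect" if abs(s) > cutoff else ""
    print(f"{rater}: mean deviation {s:+.2f}{flag}")
```

In practice, this kind of numeric flagging only points at candidates for review; tools such as AdESMuS complement it with visualizations so that evaluators can inspect the flagged scores in context before drawing conclusions.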
This poster presents satisfactory experiences using AdESMuS (Villamañe et al., 2015) and its visualization capabilities to analyze assessment processes in order to identify different rater effects and controversial evaluations. The audience will be encouraged to reflect on their own assessment processes and on the usefulness of visualization techniques to identify rater effects and biased evaluations.
This work is supported by the UPV/EHU (EHUA 16/22) and by the Office of the Vice-Chancellor for Innovation, Social Engagement and Cultural Action of the UPV/EHU through the SAE-HELAZ (HBT-Adituak 2018-19/6).