Assessment shapes much of the learning that students undertake and has a significant impact on the overall student experience. Yet it remains an area of concern for many students; for example, it is still one of the lower-scoring areas of the National Student Survey (NSS). It is essential that assessments are designed to give students confidence that they are fair, understandable, and objective: ensuring validity, transparency, and reliability in assessment is a step towards this.
- Validity in assessment signifies that the assessment (task or instrument) allows students to demonstrate attainment of the learning outcomes addressed by the assessment. In short, a valid assessment evaluates what it is supposed to.
- Transparency signifies that the requirements of the assessment and expectations for grade bandings are made explicit to the students.
- Reliability in assessment signifies that assessments and marking criteria are designed to ensure that evaluation of the relevant learning outcomes is consistent from student to student, marker to marker, and year to year. That is, different markers would award similar marks for the same piece of work.
When seeking to ensure that assessments are valid, transparent, and reliable, the following principles are useful:
- Constructive alignment - constructive alignment (Biggs and Tang 2011) emphasises the alignment of intended learning outcomes (ILOs), assessment task(s), and teaching/learning activities. In particular, it makes explicit use of the verbs in the ILOs both to design teaching/learning activities and to specify what the student should demonstrate in the assessment task(s).
- Explicit - The requirements of the assessment task(s) should be clearly stated. The criteria used to evaluate the attainment of each learning outcome should be made explicit; and each element of the assessment task(s) should be mapped to the learning outcomes addressed.
- Accessible - Assessments should be written using language that is understandable by and meaningful to students, taking into account students’ backgrounds and prior learning. The Universal Design for Learning framework (CAST 2023) is an invaluable resource here.
Examples of good practice to promote validity, transparency, and reliability include:
- Continuous reflection - It is essential that assessments are continuously reviewed, evaluated and revised to ensure relevance and currency. Feedback and discussions with students, staff, and other key stakeholders, such as Industrial Advisory Boards, are an essential part of promoting validity, transparency and reliability in assessment.
- Timely discussion - Assessments should be introduced to students at an early stage in the module. Students should be given an opportunity to seek clarification on the requirements of assessments and examples should be used to explain how attainment of criteria - and thus learning outcomes - can be demonstrated.
- Rubrics - The use of rubrics and/or assessment guidelines promotes both transparency and reliability (Jonsson and Svingby 2007).
- Standardisation - Module teams should meet at an early stage to discuss assessments prior to release to students, agreeing expectations of the students and interpretation/clarification of criteria; teams should also be in regular contact and discussion when marking assessments.
Office for Students (OfS): Considerations
The requirement for assessment to be effective, valid, and reliable is made explicit in the OfS's General Ongoing Conditions of Registration under condition B4.2:
B4.2 The provider must ensure that:
a. students are assessed effectively.
b. each assessment is valid and reliable.
c. academic regulations are designed to ensure that relevant awards are credible.
Biggs, J., and Tang, C. (2011). Teaching for Quality Learning at University. Maidenhead, UK: Open University Press.
Black, K. (2019). Perspectives on: Authentic Learning with Large Student Groups. Enquiring into the ‘Management Enquiry’. Chartered Association of Business Schools. Available online: https://charteredabs.org/publications_cats/perspectives-on/
CAST (2023). About Universal Design for Learning. Available online: https://www.cast.org/impact/universal-design-for-learning-udl
Jonsson, A., and Svingby, G. (2007). The use of scoring rubrics: Reliability, validity and educational consequences. Educational Research Review, 2(2):130-144. https://doi.org/10.1016/j.edurev.2007.05.002
Office for Students (n.d.). Quality and Standards Conditions. Available online: https://www.officeforstudents.org.uk/media/084f719f-5344-4717-a71b-a7ea00b9f53f/quality-and-standards-conditions.pdf
QAA (n.d.). Assessment Design Attributes for use Across the Higher Education Sector. QAA Collaborative Enhancement Project. Available online: https://www.qaa.ac.uk/membership/collaborative-enhancement-projects/assessment/developing-a-set-of-inclusive-assessment-design-attributes-for-use-across-the-he-sector
Swaffield, S. (2011). Getting to the heart of authentic Assessment for Learning. Assessment in Education: Principles, Policy & Practice, 18(4):433-449. Available online: https://www.tandfonline.com/doi/abs/10.1080/0969594X.2011.582838?journalCode=caie20