Building Bridges to Link Global and National Learning Assessments by Silvia Montoya
Silvia Montoya (@montoya_sil) is the Director of the UNESCO Institute for Statistics.
A new paper from the UNESCO Institute for Statistics (UIS), prepared for the Fourth Meeting of the Global Alliance to Monitor Learning (GAML) in Madrid, outlines an approach to linking the global monitoring of learning to national and cross-national assessments.
The UNESCO Institute for Statistics (UIS) is working to develop reporting scales to help governments monitor student learning in mathematics and reading over time and make good use of the resulting data to shape policy – essential stepping stones to the achievement of Sustainable Development Goal 4 (SDG 4): a quality education for all. The aim is to make the greatest possible use of existing national and cross-national assessments to produce internationally comparable data.
The Global Alliance to Monitor Learning (GAML) and the Australian Council for Educational Research’s Centre for Global Education Monitoring (ACER-GEM) are working to develop the UIS Reporting Scales (UIS-RS) that would achieve these goals. As outlined in a new paper, SDG 4 Reporting: Linking to the UIS Reporting Scale through Social Moderation, prepared for the Fourth Meeting of the Global Alliance to Monitor Learning, good progress has been made, with draft learning progressions now being reviewed and a proposal for the validation of the scales on the table. However, it is important to note that these reporting scales represent a long-term effort.
The urgent need to define minimum proficiency levels
The most pressing need is to help countries define ‘minimum proficiency levels’ in literacy and numeracy – a prerequisite for reporting against Indicator 4.1.1: the percentage of children and youth achieving a minimum level of competency in literacy and numeracy at three points in time and by sex: (a) in grades 2/3; (b) at the end of primary; and (c) at the end of lower secondary. We also need, as a matter of urgency, a reporting metric and a mechanism to link it to existing assessments.
The paper, prepared by Management Systems International (MSI), sets out a proposed approach to meet these immediate needs. It presents five steps to construct a ‘UIS Proficiency Metric’ (UIS-PM) for Indicator 4.1.1 and to link it to national and cross-national assessments:
- Content standards: What students are expected to learn in reading and mathematics at the three levels of education defined in Indicator 4.1.1 – grades 2/3, end of primary and end of lower secondary.
- Policy descriptors: What students are expected to be able to do, described in generic terms, without reference to specific content.
- Performance standards: What students are expected to be able to do in terms of content – the knowledge, skills and abilities they should demonstrate.
- Proficiency scale map(s): How the proficiency scales (i.e. performance levels) of various national and cross-national assessments align with the UIS proficiency metric.
- Socially moderated performance standards: The score students should obtain on their national or cross-national assessments to be classified as reaching the ‘desired’ performance level for SDG reporting (illustrated in the sketch after this list).
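To make the last two steps concrete, the sketch below shows how a proficiency scale map and a socially moderated performance standard could together translate scores on a national assessment into the categories used for Indicator 4.1.1 reporting. The level names, cut scores and student scores are entirely hypothetical and are not taken from the UIS-PM or any real assessment.

```python
# Illustrative only: hypothetical level names, cut scores and student scores,
# not actual UIS-PM values or real assessment data.

# Proficiency scale map: how the performance levels of one (hypothetical)
# national assessment line up with the UIS proficiency metric.
PROFICIENCY_SCALE_MAP = {
    "Below Basic": "Below Minimum Proficiency",
    "Basic":       "Minimum Proficiency",
    "Proficient":  "Exceeds Minimum Proficiency",
    "Advanced":    "Exceeds Minimum Proficiency",
}

# Socially moderated performance standards: the (assumed) score a student
# must reach on this assessment to be classified at each national level.
CUT_SCORES = {"Basic": 400, "Proficient": 500, "Advanced": 600}


def classify_for_sdg_reporting(score: float) -> str:
    """Return the UIS-PM category implied by a student's assessment score."""
    level = "Below Basic"
    for national_level, cut in sorted(CUT_SCORES.items(), key=lambda kv: kv[1]):
        if score >= cut:
            level = national_level
    return PROFICIENCY_SCALE_MAP[level]


# Share of students reaching minimum proficiency (the Indicator 4.1.1 figure).
scores = [350, 420, 510, 610, 390]  # hypothetical student scores
meeting = sum(
    classify_for_sdg_reporting(s) != "Below Minimum Proficiency" for s in scores
)
print(f"{meeting / len(scores):.0%} at or above minimum proficiency")
```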
The paper emphasises the critical importance of social moderation (or policy linking) of performance standards, setting out a two-stage process.
Stage 1 envisages evaluating the alignment of Performance-Level Descriptors (PLDs). This would assess how closely the PLDs of each assessment align with the UIS proficiency metric. A group of country representatives and subject-matter experts would use a three-point scale (‘no or limited match’, ‘mostly matched’ and ‘fully matched’) to rate the degree of alignment. The performance levels rated ‘mostly matched’ or ‘fully matched’ against the UIS proficiency metric would then be used to report against Indicator 4.1.1, as sketched below. Those that do not yet align would go through the second stage.
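A minimal sketch of the Stage 1 decision rule, assuming invented performance levels and panel ratings: levels rated ‘mostly matched’ or ‘fully matched’ feed directly into Indicator 4.1.1 reporting, while the rest are passed on to Stage 2.

```python
# Illustrative only: the performance levels and ratings are invented;
# the actual GAML/UIS alignment exercise may record results differently.

# Panel ratings, on the Stage 1 three-point scale, of how well each
# performance level's descriptor matches the UIS proficiency metric.
ALIGNMENT_RATINGS = {
    "Level 1": "no or limited match",
    "Level 2": "mostly matched",
    "Level 3": "fully matched",
}

USABLE = {"mostly matched", "fully matched"}

usable_levels = [lvl for lvl, r in ALIGNMENT_RATINGS.items() if r in USABLE]
needs_stage_2 = [lvl for lvl, r in ALIGNMENT_RATINGS.items() if r not in USABLE]

print("Report against Indicator 4.1.1 using:", usable_levels)
print("Proceed to Stage 2 standard setting for:", needs_stage_2)
```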
Stage 2 envisages setting socially moderated performance standards for assessments, using a standard-setting method to link them with the UIS proficiency metric. Workshops would be convened in which groups of country representatives and subject-matter experts provide individual and independent judgements about each item on the test to set their initial cut scores, based on their understanding of the PLDs and their experience with the student populations. These judgements would be aggregated to estimate the panel-recommended cut scores, before the implications are analysed, as sketched below. Assessments with cut scores deemed to partially meet, meet or exceed minimum proficiency levels would then be used for reporting against Indicator 4.1.1. In other words, students classified as meeting or exceeding the minimum proficiency levels of the UIS proficiency metric demonstrate the required knowledge and skills assessed by the national and cross-national assessments.
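As an illustration of how such independent judgements might be aggregated, the sketch below uses a modified Angoff-style calculation with invented panel data; the paper does not necessarily prescribe this particular method, so treat it purely as an example of the general idea of rolling individual item judgements up into a panel-recommended cut score.

```python
# Illustrative only: a modified Angoff-style aggregation with invented data,
# shown to make the idea of a panel-recommended cut score concrete.
from statistics import median

# Each row: one panellist's judged probability that a student who just meets
# the minimum proficiency level would answer each test item correctly.
PANEL_JUDGEMENTS = [
    [0.8, 0.6, 0.4, 0.7, 0.5],  # panellist A
    [0.7, 0.5, 0.5, 0.6, 0.6],  # panellist B
    [0.9, 0.6, 0.3, 0.8, 0.4],  # panellist C
]

# A panellist's cut score is the expected number of items a minimally
# proficient student would answer correctly, i.e. the sum of their judgements.
individual_cut_scores = [sum(judgements) for judgements in PANEL_JUDGEMENTS]

# The panel-recommended cut score aggregates the individual cut scores
# (the median is robust to outlying panellists).
panel_cut_score = median(individual_cut_scores)

print("Individual cut scores:", individual_cut_scores)
print("Panel-recommended cut score:", round(panel_cut_score, 2))
```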
We look forward to working closely with countries and our technical partners on this new approach.
Disclaimer: NORRAG’s blog offers a space for dialogue about issues, research and opinion on education and development. The views and factual claims made in NORRAG posts are the responsibility of their authors and are not necessarily representative of NORRAG’s opinion, policy or activities.