The University of East Anglia’s (UEA’s) HEFCE-funded learning gain pilot project was carried out in partnership with City College Norwich. The project evaluated three approaches to measuring learning gain, and this is a brief roundup of the findings.
Student marks

One of our work packages focused on student marks and how they represent learning gain.
In terms of learning gain expressed as the change in average marks, we found variation in student attainment between discipline areas at UEA, with aggregated average marks differing by up to 5 per cent between the first and final year of undergraduate study.
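As an illustration of this measure, the sketch below computes learning gain as the change in aggregated average marks between the first and final year. The discipline names and marks are entirely hypothetical, not project data:

```python
# Minimal sketch (hypothetical data): learning gain expressed as the change
# in aggregated average marks between the first and final year of study.

def average_mark_gain(first_year_avg: float, final_year_avg: float) -> float:
    """Change in aggregated average mark, in percentage points."""
    return final_year_avg - first_year_avg

# Hypothetical aggregated averages per discipline: (first year, final year).
marks = {
    "Economics": (58.0, 63.0),
    "Chemistry": (61.0, 62.5),
    "Biology": (60.0, 64.0),
}

for discipline, (first, final) in marks.items():
    print(f"{discipline}: {average_mark_gain(first, final):+.1f}")
```

A positive value indicates that the cohort's aggregated average mark rose over the degree; the spread of these values across disciplines is what varied by up to 5 per cent in our data.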
Through interviews, we explored the possible reasons behind these differences among cohorts. The findings highlight the following differences in marking cultures:
- While a generic marking scale is applied across the university, several academics have developed more specialised subject-based marking rubrics.
- The nature of the subject gives rise to different marking profiles: mathematical subjects tend to produce a bimodal distribution of marks, while marks in essay-based subjects tend to be more tightly clustered.
- There is an acceptance of an element of subjectivity within the marking process in some subjects, especially when there are only small differences (for example 2 per cent) in marks awarded.
- The nature of the assessment design varies from course to course, with some students having to produce different numbers of assessments for modules of the same credit size.
- Opportunities to discuss marking and assessment approaches between schools are limited.
Self-efficacy and student confidence
Our second approach focused on self-efficacy, a term that refers to students' confidence in their own abilities. The results of this work showed that learning gain is positively associated with confidence gain. When students learn from each other in the classroom, their confidence in tackling similar problems in the future also increases.
Analysis of the self-efficacy data generated in the project indicates that peer instruction combined with self-assessment is positively associated with learning gain at both student and class level. Differences in teaching practice meant that we trialled two slightly different approaches. While the bulk of the evidence was gathered in the School of Economics, the School of Pharmacy trialled self-efficacy assessments through an alternative teaching method, embedding self-assessment as an implicit element of formative assessment. Under both scenarios we found that students develop good self-assessment skills, provided that teaching follows an active learning approach.
Concept inventories

Thirdly, we used concept inventories, administered once at the beginning of a module and once at the end, to measure the difference. Concept inventories measure a student's understanding of subject knowledge in a way that does not rely on memorising facts. Our results indicated that students who performed worse in the initial assessment exhibited greater absolute improvements in their conceptual understanding than those who performed better initially.
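The pre/post comparison can be sketched as follows, with hypothetical scores. Alongside the absolute gain, the sketch also computes the normalised (Hake) gain, a common companion measure in the concept-inventory literature that expresses improvement as a fraction of the headroom available above the pre-test score:

```python
# Hypothetical pre/post concept-inventory scores (per cent correct); student
# labels and numbers are illustrative only, not project data.

def gains(pre: float, post: float) -> tuple[float, float]:
    """Return (absolute gain, normalised gain).

    The normalised (Hake) gain is the fraction of the available headroom
    (the distance from the pre-test score to 100) that was achieved.
    """
    return post - pre, (post - pre) / (100 - pre)

students = {
    "A": (30, 55),  # low initial score, large absolute gain
    "B": (50, 65),
    "C": (75, 82),  # high initial score, smaller absolute gain
}

for name, (pre, post) in students.items():
    absolute, normalised = gains(pre, post)
    print(f"Student {name}: absolute gain {absolute}, normalised {normalised:.2f}")
```

In this toy data, the lowest initial scorer (A) shows the largest absolute gain, mirroring the pattern we observed; the normalised gain helps distinguish that pattern from a simple ceiling effect.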
Two types of inventory were used across three subject areas. In the School of Chemistry, the aim was to compare the various proposed measures of learning gain. Students sat a ‘Bonding Concepts Inventory’ developed by the Lowery-Bretz group in the US at the beginning and end of the teaching period. In the School of Pharmacy the Bonding Concepts Inventory was again employed. Here the match between the module learning objectives and what the concept inventory tested was weak, and no learning gain was observed; perhaps the life sciences instrument would have been more appropriate for this cohort. In the School of Biological Sciences, selected questions from a ‘Life Sciences Concept Inventory’ were employed. Here student engagement was a particular problem, and the number of students who participated in the second sitting was disappointingly low.
Our project has come to an end, and we continue to disseminate our findings at conferences and in journals. For UEA this work has also helped inform our understanding of assessment and our approach to marking and feedback. At a time when teaching and learning in higher education are undergoing rapid change, projects of this sort, which look closely at student learning, are especially valuable.
We are continuing to work with HEFCE and other higher education providers to develop the sector’s understanding of this complex issue.