Learning Gain – the attempt to measure the different ways in which students benefit from their learning experience – is now a core part of the Government’s plans for higher education. A focus on student outcomes is signalled in the new Higher Education White Paper and learning gain is a key strand of inquiry in the Teaching Excellence Framework consultation.

Happily, these plans are complemented by HEFCE's work exploring learning gain through 13 funded projects, which will run over three years and involve 70 institutions.

These projects use grades, student self-reported surveys, standardised tests and qualitative methods to measure students’ ‘distance travelled’ during their time in higher education. They combine existing data and collect new information. They also draw on new techniques such as learner analytics and student journey mapping.

And they are trialling a variety of affective measures: surveys of students' self-efficacy, well-being, disposition and confidence. Behavioural measures are also being trialled, including student engagement, involvement in placement and work-based learning, skills self-assessments, learning patterns and learner analytics.

Sound easy?

This might seem straightforward, but, in practice, there are considerable challenges and many of the metrics can take a long time to measure.

Research has shown that there is disciplinary bias in standardised tests and surveys. There are multiple entry and exit measures, some of which are not easily or directly comparable. Motivating students to take part in tests outside their degree courses, and tracking them over time, can also be a struggle. And questions arise about the reliability of student self-reporting.

What is more, given the metrics currently available, satisfaction and research reputation are all too often used as the main markers of success.

And the problems do not stop there. For learning gain you need to know where any given student started. You need to account for the huge variety of entry qualifications held by home, European Union and international students. This covers everything from BTECs and A-levels to portfolios and performances.

Some argue that we can clearly show this gain in terms of grades and degree classification, students’ primary exit measures. But even this is not as simple as it might seem, given that over 70 per cent of students finish with at least a 2:1, which means it is hard for students to differentiate themselves to employers, and for employers to distinguish between them.

Focusing on the individual

A number of the projects have addressed this last point directly by trialling Grade Point Average to provide finer granularity.

But a key feature of all the projects is that they go beyond grades alone. Additional measures look at the cognitive skills students develop, both within their subject area and more generally. A range of employability measures is also being explored, including students' career readiness, adaptability and sustainability.

Several projects also link satisfaction and student engagement survey data, which can inform ongoing practices. Are those learning the most the happiest? Or are students not asking much in return for not being asked to do much?

The advantage of the 13 funded projects is that they focus on the learning experience of individual students. They highlight good subject and institutional practices, and illuminate how students’ attitudes and behaviours impact on their educational experiences.

They also highlight the variety of outcomes that different types of students achieve across different subjects, institutions and modes of study.

The projects can be roughly divided into two approaches.

Telescope projects

These projects rely primarily on existing data, often connecting information from multiple sources. This can include UCAS, Higher Education Statistics Agency and Destinations of Leavers from Higher Education data, as well as how often students visit the library or see their personal tutor. They are collecting enormous amounts of data but, as with staring into the night sky, the hard part is making sense of it all. However, linking large datasets can help account for the influence of disciplines and of entry and exit measures, and help identify the key factors that lead to student success.

Microscope projects

Although they link with existing datasets, these projects focus on collecting new data, often tracking individual students for the duration of their courses. By exploring individual students' experiences in depth, they map self-reported attitudes and behaviour against outcomes data. Some projects have students filling in different tests and surveys throughout their educational experience – which already highlights that students do not have a linear learning journey.

No silver bullet

Will this nuanced approach produce a silver bullet, a single measure to rank all students? We already know that it will not.

Still, we will have robust data on what students learn, how they learn and how well it prepares them for employment and continued study. This information will be available across different courses and at different types of institutions catering to different student groups.

And we do not have to wait three years to get wiser. The projects are having an impact already. Career readiness data, for example, is being collected across a range of institutions, giving them information on where to target advice, guidance and support.

Using multiple approaches, tools and techniques to measure learning gain, the pilot projects are mapping out the student learning journey. Like zooming in and out on Google Maps, learning gain research will give us detailed information on students' learning throughout their courses, as well as a better sense of where students start, where they end up and how institutions can support them to succeed.