HEFCE has commissioned research to develop a more systematic and robust evidence base on the impact of the National Collaborative Outreach Programme (NCOP).
The impact evaluation is being led by CFE Research in collaboration with the Behavioural Insights Team, Professor Jenny Roberts (Department of Economics, the University of Sheffield), and Dr Shqiponja Telhaj (Centre for Economic Performance, the London School of Economics). The evaluation team has a wide variety of experience in research relating to higher education and widening participation.
CFE Research specialises in social research and recently worked with HEFCE on projects exploring methods to improve reporting and documentation on the impact of Student Opportunity funding. This work supported the development of an annual data return for higher education institutions to show the impact of their widening participation funding.
CFE will build on this and other research evidence, and work with the Behavioural Insights Team (BIT) and a number of the NCOP consortia to design and implement a range of new and experimental methods.
Randomised controlled trials
Rigorous testing is at the heart of our approach. Randomised controlled trials (RCTs) are commonly used in medicine and are considered the gold standard of evaluation. They provide evidence on whether a policy is actually working – for example, in a famous early trial, BIT demonstrated that adding a line to tax reminder letters explaining that most people in the recipient's local area had already paid their taxes considerably increased compliance compared with a standard tax demand.
BIT has now run more than 300 RCTs across a diverse range of sectors, including education and skills. It will bring this expertise to the NCOP evaluation by working with a number of consortia to design and implement RCTs in their target wards.
Planning, prepping, testing
The impact evaluation is currently at the planning phase, which involves close consultation with consortia to discuss the range of different experimental methodologies available. The preparatory and testing phase will design an evaluation and sampling framework and explore the feasibility of conducting RCTs.
RCTs are valuable because, if well designed, they can isolate the causal mechanism from contextual or individual factors. This is achieved by randomly assigning some people into a ‘treatment’ group and the rest into a ‘control’ group.
In medicine, those in the treatment group would be given the drug which is being tested and the control group would not. This enables a comparison between the two groups to see whether the treatment has worked or not. In the NCOP the ‘treatment’ might be participation in a summer school or mentoring programme.
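The assign-randomly-then-compare logic described above can be sketched in a few lines of Python. This is an illustrative toy, not part of the NCOP evaluation itself: the function names are invented, and the outcome data are made up purely to show the mechanics of estimating an effect.

```python
import random
import statistics

def assign_groups(participants, seed=42):
    """Randomly split participants into 'treatment' and 'control' groups.

    Random assignment means the two groups should differ, on average,
    only in whether they received the intervention.
    """
    rng = random.Random(seed)          # fixed seed for reproducibility
    shuffled = participants[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

def estimate_effect(treatment_outcomes, control_outcomes):
    """Difference in mean outcomes between the two groups.

    With random assignment, this difference estimates the causal effect
    of the intervention (subject to sampling error).
    """
    return statistics.mean(treatment_outcomes) - statistics.mean(control_outcomes)

# Hypothetical example: outcome 1 = applied to university, 0 = did not.
students = [f"student_{i}" for i in range(100)]
treatment, control = assign_groups(students)

# In a real trial these outcomes would come from follow-up data;
# the values below are invented for illustration only.
treatment_outcomes = [1, 1, 0, 1, 1]
control_outcomes = [1, 0, 0, 1, 0]
print(round(estimate_effect(treatment_outcomes, control_outcomes), 2))
```

In practice the analysis would also involve a statistical test of whether the observed difference is larger than chance alone could explain, which is why sample size and the sampling framework mentioned above matter so much.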
Although they are one of the best ways of generating concrete evidence, there are a number of issues to take into consideration when running RCTs. For example, it is important to consider whether it is appropriate to use a control group.
In some instances this may raise ethical concerns, particularly where consistent, strong evidence already exists that the intervention is effective. However, it’s important to remember that some established social and medical programmes have been proven ineffective when eventually evaluated via RCT.
The ‘Scared Straight’ programme in the US provides a good example. As part of this popular initiative, juvenile delinquents and children at risk of becoming delinquent take part in prison visits. While the intention is to deter young people from criminal behaviour, RCT evaluation has shown that participants are actually more likely to offend after taking part.
In another example, an RCT was used to evaluate the effectiveness of providing teenagers with infant simulators (dolls that mimic the behaviour of real babies) to discourage teenage pregnancy. The trial results showed that those who were given simulators were more likely to get pregnant before they were 20.
Even in widening participation, there can be surprises
BIT has run a number of RCTs in this area, and the results haven’t always been what was expected. In some cases, RCTs have proven the value of low-cost, high-impact activities, and in others they have shown that things that should have worked haven’t.
For example, in 2014 BIT collaborated with the Somerset Challenge, a collection of secondary schools in Somerset working together to improve outcomes for young people. Together they ran a study to test three interventions:
- providing young people with information about the costs and benefits of attending university
- providing the same information to their parents
- giving students a short talk from a former student from their area who went to university.
This trial had some surprising results. Firstly, the talk significantly increased students’ interest in university and their likelihood of applying. Further analysis revealed that this was driven by the belief that attending university would result in better friends and a more interesting life.
However, providing parents with information cards had no effect on students’ interest in attending university, and giving the same information to students actually made them less interested in attending.
This project is a good example of where something we might expect to raise aspirations actually had a negative impact, and highlights the importance of evaluating widening participation activities to ensure they are having the desired effect.
For more information about this research and how you can get involved, contact project manager Dr Sarah Tazzyman, email NCOP@cfe.org.uk, tel 0116 229 3300.