Methodological issues and challenges

Self-reported data have well-known limitations, the most obvious being a self-interest bias that drives scores up because of individuals’ tendency to overstate their abilities. To counteract this, a suite of studies would need to be devised to enable cross-validation: the employer studies and the learner interviews would, in part, need to confirm the results obtained from the broad student surveys. A further issue with self-reported data arises when the aim is to measure change over a period of time.

A proposed study aimed to quantify change over time, especially the change between the start and the end of a placement experience, project work, or competence observations towards apprenticeship standards. One approach to ascertaining change over time is the repeated-measures design, in which a measurement is taken at the start of an experience and the same measurement is taken again after the experience. The problem is that such self-appraisals are affected by response-shift bias: the experience itself changes the respondent’s frame of reference, so the two ratings are not made on the same internal scale. One of two outcomes may occur:

1. Prior to the experience, a respondent may perceive that they are particularly poor at a particular skill. Following the experience they realise they were more competent than their original assessment, rendering the initial rating invalid.
2. Alternatively, a respondent may initially perceive their skills as highly competent but, following the experience, realise that the earlier rating was inaccurately high.

To counteract this, the project adopted a retrospective approach to the question of change over time, creating a proxy-longitudinal study in which students at the end of a placement experience rate themselves “now”, “at the start of the placement”, and “at the start of their studies”, giving three time points (three repeated measures) that are not subject to response-shift bias.

A final observation about the use of self-reported data in studies that compare two groups concerns the well-known and well-validated Dunning-Kruger effect (Kruger & Dunning, 1999; Simons, 2013), which challenges the validity of inferences made in comparative studies. The Dunning-Kruger effect highlights the problem of not knowing what you don’t know.
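The proxy-longitudinal design described above can be sketched as a simple analysis: because all three ratings are collected in one post-placement survey, every rating shares the same frame of reference, and the differences between time points can be compared directly. The data and scale below are hypothetical illustrations, not figures from the project.

```python
# Sketch of the proxy-longitudinal (retrospective) design: each student,
# surveyed once at the END of placement, rates the same skill at three
# retrospective time points on a hypothetical 1-5 self-assessment scale.
from statistics import mean

# Each record: (rating "at start of studies",
#               rating "at start of placement",
#               rating "now") — illustrative values only.
responses = [
    (2, 3, 4),
    (1, 2, 4),
    (3, 3, 5),
    (2, 4, 5),
]

study_start = mean(r[0] for r in responses)      # 2.0
placement_start = mean(r[1] for r in responses)  # 3.0
now = mean(r[2] for r in responses)              # 4.5

# All three ratings were made with the same post-experience frame of
# reference, so these differences are not confounded by response-shift bias.
gain_over_studies = now - study_start        # change since start of studies
gain_over_placement = now - placement_start  # change over the placement

print(gain_over_studies, gain_over_placement)
```

A conventional pre/post design would instead subtract a rating collected before the experience, mixing genuine change with the respondent’s shift in frame of reference.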
In this project, students with no prior WIL placement experience tended to over-estimate their employment-readiness abilities. Previous studies have detected and counteracted this effect by analysing data by reference to placement quality, not just its presence or absence. Students with no prior placement experience rated themselves higher than those with a prior low-quality placement, and about the same as those with a prior sub-median-quality placement experience.
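The quality-based analysis above amounts to grouping self-ratings by prior-placement quality rather than by a binary placement flag. A minimal sketch, with hypothetical group labels and values chosen only to mirror the pattern described:

```python
# Sketch: compare mean self-ratings across prior-placement-quality groups
# to surface Dunning-Kruger-style over-estimation. A binary "had placement
# yes/no" split would hide the pattern; grouping by quality exposes it.
# All group names and numbers are hypothetical.
from statistics import mean

ratings = {
    "no_placement": [4.1, 4.3, 4.0],
    "low_quality_placement": [3.4, 3.6, 3.5],
    "high_quality_placement": [4.4, 4.6, 4.5],
}

group_means = {group: round(mean(vals), 2) for group, vals in ratings.items()}

# Consistent with "not knowing what you don't know": students with no
# placement rate themselves above the low-quality-placement group.
print(group_means)
```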
