‘How do we assess’

Introduction

Assessment is a critical endeavour with implications for students, universities, industry and the wider community. The measurement of student learning, however, presents many challenges, particularly in the context of cooperative education, work-integrated learning, work-based learning, service learning and other models of learning through participation (LTP).


Finding appropriate assessment strategies is a significant factor in ensuring the sustainability of experience-based education. Davidge-Johnston (2007) observes, however, that using traditional assessment models can be problematic because it is “difficult to validly measure learning in one learning model with tools designed for a completely different model” (p. 140). Many traditional methods do not address or adequately measure the new kinds of learning that this type of education seeks to engender, such as the so-called soft skills, graduate capabilities/attributes or personal development and transformation. These aspects of learning do not fit neatly into “proscribed and specific learning outcomes” (Hodges, 2008, p. 11).


Designing an assessment strategy for LTP typically involves the following steps:

  1. Determine the aspect(s) of learning to be assessed (e.g., application of theory to practice and discipline-specific soft skills) and what kind of evidence of learning can be used.
  2. Decide what students need to achieve and be clear about what will be measured, taking into account any accreditation or certification requirements.
  3. Agree on who is involved in the assessment process and clarify each stakeholder’s role, whether they contribute to only some aspects of assessment (e.g., a host supervisor involved in formative assessment only) or to all of it (e.g., the academic supervisor).
  4. Provide support and training for anyone involved in assessment, as stakeholders may be unfamiliar with the aspect of learning, the situation of learning and/or some of the methods used for assessment.
  5. Consider the situation/context of learning, which will vary between students, and ensure the assessment package is flexible and realistic enough to account for these variations while remaining equitable to all students.

Hodges et al. (2004) caution, however, that there are dangers in concentrating heavily on performance measurement and reliability, which can “lead assessment designers to focus on more tangible and identifiable technical skills and competencies at the expense of more difficult-to-measure soft generic skills and competencies” (p. 53). Other authors warn that such a narrow focus can overlook valuable professional skills such as “tacit knowing, intuition and artistry” (Zegwaard et al., 2003) or the “poorly defined but essential elements of the graduate attributes” (Hungerford, Gilbert, Kellett, McLaren, Molan, & Washington-King, 2010, p. 199). These skills fall within the “wicked competencies” defined by Knight (2007), who also places graduate attributes and complex achievements in this group. He further contends that a competency such as creativity or critical thinking “cannot be precisely defined, takes on different shapes in different contexts and is likely to keep on developing” (p. 1).
