Artificial intelligence: Implications for assessment practices

The implications of artificial intelligence (AI) for assessment practices have, in many respects, been more thoroughly considered than other, broader issues. However, if AI use is to be policed in schools, the necessary resourcing and personnel must come with it, IEU-QNT Research Officer Dr Adele Schmidt writes.

Both universities and schools have been aware for some time that students can use AI and other digital tools to ‘cheat’. Universities often have sophisticated detection systems in place.

The detection of cheating is taken so seriously in universities that several institutions have resourced their faculties with personnel who work with students on the use of AI in assessment pieces.

Schools do not have the capacity to do this without additional funding.

If AI use is to be policed in schools, this level of resourcing and personnel must be delivered, rather than expecting schools to ‘value add’ under current resourcing and staffing models.

Schools deal with younger students and often take a more pastoral approach when cheating is detected, with the goal of educating the student on why cheating is unethical, rather than simply punishing them for having cheated.

Options for educators

Lodge et al [2] identify six options for educators seeking to manage AI and its impact on assessment:

  • ignore
  • ban
  • invigilate
  • embrace
  • design around, and
  • rethink.

Given that ignoring and banning are unlikely to produce solutions, and that invigilation requires considerable investment of resources, educators have little option but to embrace, design around and rethink.

One rethink option open to teachers in schools is to shift to more process-based assessment models, such as oral examinations [3], where there is less emphasis on the artefact and more on the learning underlying it.

Such changes are resource-intensive and have the potential to overwhelm teachers with even more assessment and moderation work (for example, individual assessment interviews with a class of 25 senior students).

Alleviating assessment workload

It is possible that AI could help ameliorate assessment-related workload.

Some authors have suggested that integrating AI into learning environments can yield complex, multi-dimensional models that summarise an individual’s learning status across subject areas, enabling more precise instructional diagnosis [1].

The solution for designing and implementing effective 21st century assessment paradigms is likely to lie somewhere between process-focused and AI-mediated assessment.

Either way, any sustainable integration of AI into the teaching, learning and assessment process requires recognition that traditional assessment paradigms are:

  1. Onerous for educators to design and implement.
  2. Limited to discrete snapshots of performance rather than nuanced views of learning.
  3. Often uniform, failing to adapt to the knowledge, skills and backgrounds of participants.
  4. Inauthentic, in that they adhere to the culture of schooling rather than the cultures schooling is designed to prepare students to enter.
  5. Potentially antiquated, in that they assess skills that humans now use machines to perform [4].

In proposing options for assessment tasks that incorporate AI use, Swiecki et al [4] suggest that automated assessment construction, AI-assisted peer assessment and the deployment of writing analytics all have potential. Rolling these out at the scale demanded by the entire schooling system, however, is problematic.

Not only is it inadvisable for schools to undermine relationship-oriented human learning, but where AI platforms are owned by commercial entities, questions of data sovereignty arise.

This creates tensions around the health, safety and wellbeing of schools, teachers and students.


References

1. Kay, J., et al., Enhancing learning by Open Learner Model (OLM) driven data design. Computers and Education: Artificial Intelligence, 2022. 3: p. 100069.

2. Lodge, J.M., S. Howard, and J. Broadbent. Assessment redesign for generative AI: A taxonomy of options and their viability. 2023 [cited 2023 25 July]; Available from: https://www.linkedin.com/pulse/assessment-redesign-generative-ai-taxonomy-options-viability-lodge.

3. Pearce, J. and N. Chiavaroli, Rethinking assessment in response to generative artificial intelligence. 2023, Australian Council for Educational Research: Melbourne.

4. Swiecki, Z., et al., Assessment in the age of artificial intelligence. Computers and Education: Artificial Intelligence, 2022. 3: p. 100075.