In light of the recent hysteria around ChatGPT, education professionals may well groan at having to read yet another piece about AI and education, writes Monash University Professor Neil Selwyn.
However, AI is not a topic educators can afford to tune out from. Indeed, there are plenty of people urging us to surrender to the hype, concede that we have now entered the ‘AI age’, and agree that teachers and students simply need to make the best of the AI being handed down to us.
Indeed, one of the main reasons that ongoing debates around AI have become so boring and repetitive is this air of inescapability. Regardless of how optimistic or pessimistic the conversations around AI are, the underlying presumption is that ‘there is no alternative’.
In contrast, organisations affiliated with Education International (the global federation of teachers’ unions) remain suspicious of being told to put up and shut up. After all, there are many powerful voices working hard to keep us passively resigned to the changes currently being ushered in under the aegis of ‘AI’ – not least the likes of Google, OpenAI and the OECD, who stand to gain most from this technology.
Rather than give in to these vested interests, the education community needs to step up and work out ways of pushing back against the current received wisdoms around AI and education.
So where to start with thinking against the forms of AI currently being so relentlessly sold to us? This article presents a range of persuasive critiques of AI that are beginning to emerge from those who stand to lose most (and gain least) from this technology – Black, disabled and queer populations, those in the global south, Indigenous communities, eco-activists, anti-fascists, and other marginalised, disadvantaged and ‘subaltern’ groups.
Any educator concerned about the future of AI and education can therefore take heart at this growing counter-commentary. Here, then, are a few alternate perspectives on what AI is, and what AI might be.
Ways of thinking differently about AI
Some of the most powerful critiques of AI are coming from traditionally minoritised groups – not least Black critics calling out racially related misuses of the technology across the US and beyond. These range from well-publicised cases of facial recognition driving racist policing practices, through to systematic racial discrimination perpetuated by algorithms deployed to allocate welfare payments, college admissions, and mortgage loans.
Condemnation is growing around the double-edged nature of such AI-driven discriminations. Not only are these AI technologies being initially trained on datasets that reflect historical biases and discriminations against Black populations, but they are then being deployed in institutions and settings that are structurally racist.
All of this results in what Ruha Benjamin (2019) terms ‘engineered inequality’ – i.e. the tendency for AI technologies to produce inevitably oppressive and disadvantaging outcomes “given their design in a society structured by interlocking forms of domination” (Benjamin 2019, p47).
Similar concerns are raised by critiques of AI within disabled and queer communities. As scholar-activists such as Ashley Shew argue, there is a distinct air of ‘technoableism’ to the ways in which AI is currently being developed. Features such as eye-tracking, voice recognition and gait analysis all work against people who do not conform to expected physical features and/or ways of thinking and acting.
Shew points to a distinct lack of interest amongst AI developers in designing their products around disabled people’s experiences with technology and disability. At best, AI is developed to somehow ‘assist’ disabled people to fit better into able-bodied and neuro-typical contexts – framing disability as an individual problem that AI can somehow help overcome.
Such perspectives on AI should certainly make educators think twice about any claims that AI is a force for making education fairer. Indeed, it is highly unlikely that AI systems implemented in already unequal education contexts will somehow lead to radically different empowering or emancipatory outcomes for minoritised students and staff. Instead, it is most likely that even the most well-intentioned AI will amplify and intensify existing discriminatory tendencies and outcomes.