With AI becoming more commonplace in our everyday lives, arguments for teaching children skills that go beyond the capabilities of AI are becoming more frequent, with many concerned that autonomous machines will likely replace human counterparts in most future professions.
Dr Lucinda McKnight, a Senior Lecturer in Education at Deakin University, Australia, has examined this story in greater detail.
Sputnik: Could you tell us more about your research and the impact AI is having on writing?
Lucinda McKnight: My research involves working with teachers, working in schools, interviewing teachers in schools, and talking to them about how they teach writing, how they plan for writing, and the kinds of activities that they do, and this is in secondary schools in Victoria, which is a state in Australia. I've found in my recent research with schools that teachers are feeling very constrained in their teaching of writing, that our national testing regime means that teachers feel pressure to teach writing in very formulaic ways. Students have to write essays of five paragraphs and no more, with an introduction, three body paragraphs, and a conclusion. They're not allowed to have more than three ideas, and each of those paragraphs has to be in a certain order, with a topic sentence at the beginning, written to a formula. So, even when teachers want to do more creative things, when the assessment values this kind of formulaic approach, it's very hard for them. Meanwhile, I'm also aware, because I'm a writer myself, that in the world out there AI (artificial intelligence) is having this huge impact on how writing is happening, and I'm interested in my research in the gap between the way the world is moving there, and the way school writing is becoming ever more traditional, and more narrow.
Sputnik: What practices must children and future generations adopt, in their writing and grammar, to ensure that they aren't replaced by these AI-based applications?
Lucinda McKnight: Obviously people do have to master the basics, the students need to know how to write and how to spell, and all those sorts of things. However, that's not really enough. The problem with limiting education to those things is that, as we know when we use predictive text on our phones and with emails now, machines know how to write really quite well. So, we need to do more, and the sorts of things that we need to be doing in the teaching of English, and the teaching of writing, are the things that the Institute for the Future has been calling for, for the last decade. We need to be thinking about social intelligence, which is the ability to connect to other people in really deep and meaningful ways, so that's creative writing and personal writing, and we need to do novel and adaptive thinking. So, that's really creative writing, and it's breaking the rules, it's not just knowing them, but breaking them and reinventing things in different ways. Then there's cross-cultural sensitivity and ability, and that's about insights into diversity. So, that's reading and writing in all sorts of different ways with influences from all sorts of different cultures. There's transdisciplinarity, so that's thinking about writing across disciplines, and just for example with that, that might be working with people who write algorithms themselves and thinking about how those algorithms for creating writing work, and what uses they can be put to. There's virtual collaboration, so teaching students to collaborate, they should be working together in virtual documents, Google Docs, or whatever, and learning how to do that. And there's having a design mindset, perhaps above all else, and that's about being able to write with a purpose for a real audience that you know, that you understand, and that you want to influence, because that's what computers can't do.
Sputnik: What challenges must developers behind this technology overcome, so that their AI products are safe, beneficial, and also fit for purpose in the future?
Lucinda McKnight: I think, based on my reading of the literature in this area, the research literature and the industry literature, one of the main things that developers need to do is be really careful about how they're training their artificial intelligence, because these robot writers learn by reading or processing enormous data sets - vast amounts of data. So, for example, it might be everything that's ever been written on Reddit, which is a sort of posting site, or it might be a huge database of writing in a library or a museum or something. But if the data set that the AI is trained on is racist, or prejudiced, or biased in some way, then the kind of writer that's produced, the bot that is created by that learning, is going to mirror those same prejudices. So, I think it's really important that the developers are very aware of the risks there. I think safety is a big thing for the future to think about.
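The mechanism McKnight describes - a model reproducing whatever is in its training data - can be illustrated with a deliberately tiny sketch. This is not the technology she discusses (real robot writers are large neural language models); it is a toy bigram text generator, with a made-up three-sentence corpus, showing that such a generator can only ever emit word sequences present in its data, so any bias in the corpus surfaces in the output:

```python
import random
from collections import defaultdict

# Toy corpus standing in for a training data set (invented for
# illustration); whatever patterns these sentences contain are the
# only patterns the generator below can reproduce.
corpus = [
    "the robot writes clear prose",
    "the robot writes biased prose",
    "the robot mirrors its training data",
]

# Build a bigram table: each word maps to the list of words that
# were observed following it in the corpus.
bigrams = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for current, following in zip(words, words[1:]):
        bigrams[current].append(following)

def generate(start, length=5, seed=0):
    """Generate text by repeatedly sampling an observed follower word."""
    rng = random.Random(seed)  # seeded for reproducibility
    out = [start]
    for _ in range(length - 1):
        followers = bigrams.get(out[-1])
        if not followers:  # dead end: no word ever followed this one
            break
        out.append(rng.choice(followers))
    return " ".join(out)

print(generate("the"))
```

Every word the sketch emits comes straight from the corpus; swap in skewed sentences and the output skews identically, which is the training-data risk described above, in miniature.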
The views and opinions expressed in the article do not necessarily reflect those of Sputnik.