Artificial Teachers

When I was training to be a teacher 20 years ago, we were talking about the idea that we were educating children for whom jobs hadn’t yet been invented. Technology was moving so quickly that we knew the market would change at an alarming rate.

How do you map a curriculum around that?
When the change is happening faster than we ever imagined, how do you keep up?
What are the risks when change happens so fast?
How do we make decisions about such an unclear future?
How do we avoid being paralysed by and fearful of the unknown?

If you’re familiar with my story, you’ll know I’m well-versed with those feelings and fears. So, I approached them again recently as I dove into the potential of AI, looking at it through the lens of education. What it means for the classroom or job market, the potential impact on teaching, the possibilities and risks, and the role school has in sharing a vision with young people for the part technology plays in our lives. This post is intended to scratch the surface of some of the key benefits and issues, but I know this stuff is complicated. My belief is that by discussing a vision of the world we want from AI we are more likely to create it. If we put up brick walls to the technological tsunami that’s about to hit us, it will happen to us rather than for us. We get to contribute to moulding our interaction and integration with this next phase of our cognitive evolution, or risk repeating the mistakes of the internet, social media and smartphones but on a much larger scale. We were caught napping when social media and mobile phones took off and we’ve been trying to catch up ever since (with online safety lessons, banning personal devices in schools and various other punitive measures). Are we going to learn from those mistakes this time around or are we destined to repeat them?

Predicting the future

A dad recently said to me that most jobs will be lost, and that parents are concerned about how to help their teenagers make subject and life decisions when there is so much unknown. He asked whether creativity is the skill to invest in, whether practical subjects pose the greatest career possibilities, and whether higher education institutions will become defunct for the masses. I have the utmost sympathy, because we assume the mantle of knowledgeable guides in our children’s lives when they turn to us with questions and concerns about their future, grateful to be trusted with such monumental decisions. What we might sometimes forget in those moments is that our greatest gift is that we get to answer their questions with more questions. We don’t have to have answers about which is the least treacherous path up a mountain we’ve never climbed. We get to help them learn to navigate for themselves, to judge the weather and the terrain, and to predict the best course based on the information available to them.

The only reasonable prediction as I see it is that much like life throughout history, some of this AI evolution is going to be amazing and some of it is going to suck. Like the internet’s ability to bring long-lost relatives back together or create community support for niche groups of people who would have otherwise been spread around the globe, unable to communicate. But then it shares extreme, sometimes violent opinions and interferes with democratic voting systems built into the fabric of our society. The agricultural revolution was no different – food stability, village lifestyles and the advent of education, but also colonialism and reasons to wage war. Likewise with the invention of the printing press – widespread access to literacy and education no longer being for the wealthy, but also propaganda.

What AI means for the classroom

The benefits are going to be huge for everyone involved, which is why I’m a fan of the whole thing (I was a fan of smartphones and social media too, until we started misusing them). ‘Personalised learning’ isn’t just about increased engagement; it means AI will be able to gamify and track a young person’s accumulation of knowledge. Not only that, but the ‘teacher’ will be able to adapt its approach to the child’s motivations, needs and style of learning, and can be there for their entire school career, like their favourite avatar. Rather than the ratio of 1 to 30–35 that we see now, we’re looking at 1-to-1 interaction, 24/7, and the benefits of impartial computer software running the show go further than just access to support. Numerous studies have concluded that teacher bias impacts achievement, whether based on perceived intelligence, race, gender or many other attitudes held by the adult. Whilst we still need to address these biases across the sector, putting academic achievement in the hands of a machine (assuming it’s instructed not to inherit our biases) is one step in the right direction.

It hit the news recently that since lockdown we have had an issue with school attendance. Whilst further isolation is not ideal, AI will help children who cannot access the school environment due to illness or social fears. In such circumstances, one would hope that their curriculum becomes tailored to this specific need, for example by providing reading materials that promote face-to-face human connection, or lessons about the importance of being part of a community.

Whilst it’s likely that wealthier schools will access this technology first (potentially creating an even greater divide between rich and poor), once it does reach state education the social mobility of a student should arguably be up to them, their potential and their work-rate.

Remember those days at school when you were bored because the lesson was going so slowly, or the time you missed vital information because you weren’t feeling so great? That would all be accounted for in an AI education system, because the bot responds to what the individual needs. This also speaks to talent development: if a child shows an aptitude for a specific subject or topic, the AI will be able to nurture that whilst ideally offering a broad educational experience. Children will have the chance to really explore their gifts, opening greater opportunities for employment. Consider how many top athletes come from private schools, for example, while equally talented performers elsewhere struggle to find their way. A young person with aptitude will no longer be held back by the limitations of their school or where they’re from, because AI will be able to model the experience of top schools against the requirements of top recruiters and set the young person’s pedagogy on a trajectory in that direction. The depth of their learning will also improve, because there’s less fear of getting things wrong (and of the social ramifications inherent in a classroom environment) when you’re working 1-2-1 with a computer. Students will be emboldened to give their honest answers to problems and questions rather than searching for the ‘right’ answer, as is often seen in the current paradigm, hopefully resulting in more creative and interesting thinking and more purpose-driven learning. As students learn what they’re great at and get to spend more time honing those skills, they’ll invariably get to know themselves on a deeper level and therefore experience more fulfilment in their pursuits.

Let’s face it, never before in history have we needed the creativity of the teenage brain so much – their ability to look beyond traditional solutions and to acquire knowledge about our many social and environmental problems, so they can come up with some solutions, sharpish. It’s in no way their responsibility – they didn’t cause the problems – but it’s in all our best interest (and theirs) if we can capitalise on the skills inherent in their neurology, under the tuition of a computer program accessing all of human knowledge, to think about climate change, dwindling resources and food shortages (for example).

The impact on the job market

This new perspective on education will lead to a new perspective on the world of work. We already see young people travelling and working from exotic locations, understanding that life can be about more than earning a wage. With AI taking on a lot of the cognitive load (in both work and life admin), future generations will have more leisure time to enjoy life, to think about what brings them satisfaction and possibly pursue work that brings them the most meaning.

Just because we can’t see the jobs of the future, it doesn’t mean they won’t exist. At every stage of evolution there was resistance to change and fear that the new would abolish the old, particularly with jobs. Most recently, Thatcherism led to a shift from the factory and mine workforces to the warehouses and supermarkets, and there’s no evidence to suggest anything will be different this time. We resist the change because we’re hardwired to fear the unknown and the loss of familiarity, but that doesn’t mean it will be awful, just different. Unfortunately, Thatcherism was awful for plenty of people, and I don’t pretend to trust those in power to protect and support our most vulnerable, because history shows they haven’t so far. It’s our most vulnerable who get left behind in social transition, but I do believe that if we each found ways to learn the specific contribution we can and want to make, perhaps we’ll find ways to collectively support one another rather than relying on the benevolence of benign leadership. To qualify this rather than sounding like a naïve dreamer: I recently worked with a creative designer as part of the UCU program so they could gain clarity on the trajectory of their career and lifestyle. One of the outcomes was their realisation that they want to find ways to spend one day a week in service to the homeless in their area. It’s something they hadn’t accepted was so important to them, but now they know and can do something about it.

The reality is that business owners and shareholders will have the opportunity to make massive profits as they invest in new technology, and whether we are subject to Thatcher-era profit-grabbing remains to be seen. The choice (perhaps even the legislative requirement) to temper the billionaire party and pay human beings properly will ideally be an easy one to make, with a workforce having received such outstanding education, thereby making their contributions more attractive to business leaders, but also empowering graduates to be entrepreneurial and create the world they want to see away from corporate life.

If corporate existence is the chosen sector, researchers at Harvard Business Review have suggested the most sensible department to work in is HR (excluding the obvious computer sciences and programming), because its needs are already growing and it’s HR’s chance to lead the way. “73% of workers depend on their employer for support in preparing for the future of work,” which is why, through the UCU framework, we collaborate with innovative companies and schools to help their employees map out their personal needs alongside their professional goals.

Aside from the self-employed, the consideration here is that there will be those working with computers and robots, and those working with people, utilising their social intelligence. One example is that with advances in medicine people will be living longer, meaning there will be more care positions. In fact, the care industry is one area where AI will not be able to ‘replace’ a human being (apart from possibly answering the phone or doing the accounts). Our need for human connection, especially amongst generations who remember life as board games and tea mornings, will become ever-more apparent. Who knows, it might even increase investment in care provisions and make affordability and decent pay higher priorities. Again, the choice will be ours.

The Impact of AI on Teaching

It’s not just the promise of reduced planning: all admin tasks can be taken off a teacher’s hands in future. Resource selection, follow-ups with parents, data analysis, marking, feedback and reports can all be dealt with by AI, which raises the question, ‘what role will the teacher have in school?’

If I think about what my role could have been in the classroom without the burden of teacher administration, it would have involved even more creative curricula, having time to read, research and invest in myself as a life-long learner, with the opportunity to bring that inspiration and philosophy to my classroom. Not just that, but the time and space to engage with my pupils on a personal level, to act as a guide and role model in the classroom rather than having to adhere to a strict syllabus so that everything on the exam is covered in time. Imagine a school where the responsibility of preparing students for exams (if they even exist in the future) falls to a robot. The teacher then becomes a cheerleader or coach, taking on the task of human connection, something that the robot simply can’t do.

As an aside, it was through the research for this article that my views changed on the need for standardised testing post-16. If AI takes on all the cognitive load then we all lose. If you’ve ever watched the film ‘Idiocracy’ you’ll know what I mean (if not, watch it): society risks a breakdown if we stop learning to think, and thinking begins with our ability to remember information. Revision and preparing for tests or exams develops that part of the brain, so we do need it. My main issue is our obsession with test results and the fact that we put so much pressure on primary school children.

The concept of Mindfulness in schools has reached the newspapers in recent years, and there is already a lot of research into the benefits and necessity of its place in the classroom. What all the literature agrees on, however (whether as a therapeutic or educational tool), is that the competence of the Mindfulness teacher is paramount. Their embodiment of it is seen as a prerequisite, which requires not just adequate training but also an intensive personal practice in daily life (Crane et al., 2012).

I wonder what future classrooms would look like if teachers had the space and time to become Mindfulness practitioners for themselves and their pupils?
I wonder what this might mean for the future of teacher training and recruitment?
Will training institutions guide the way to this transition by updating their syllabus with bold modules of study focussing on human connection?

It might make the job more attractive and help temper the recruitment issues in the sector. If there’s increased opportunity to work with young people on a human level, perhaps more educators will find it stimulating and rewarding. The prospect of this future might frighten some teachers who believe in the Victorian style of education, where the teacher is the smartest person in the room and their primary job is to teach towards a mark scheme, trusting the omnipotence of exam boards and making sure their students achieve the 5 GCSEs they need as a ‘passport to their future’. These teachers will have chosen a subject to specialise in, but what happens if Google Translate, for example, replaces the need for MFL teachers? Will they be asked to upskill or update their practice to be more generic and child-centred? Are approaches like Trauma Informed Education and Clean Classrooms already paving the way for schools to become social communities, prioritising welfare over academic success? It’s my belief, and in fact part of the mission of Morphise, to ‘advocate for the necessary rebalance to the National Curriculum, championing [and prioritising] the emotional aspects of learning’. If robots are taking care of the acquisition of information, does it fall to teachers to help guide young people in their emotional intelligence and what it means to be a human being? Otherwise, we run the risk of perpetuating the struggles we already see: a reliance on technology, immersion in a virtual world, disconnection from our shared reality, and the biological and psychological discomfort that’s causing, in the form of a mental health crisis.

Furthermore, our culture has mostly evolved beyond the meaning and clarity-of-mind once offered by organised religion. Now we rely on psychology and the social sciences to create order in our chaotic minds. We stretch the parameters of the tech we create in the same way the Victorians did their machinery and armoury, colonising the world with an inflated sense of self-importance and an assumption that they were the most evolved facet of mankind. We can look back and see that it all ended up alright for the ruling cultures, but perpetual happiness isn’t guaranteed as a by-product of advancement. Some cultures evolved to choose selfishness, violence and fear as a response to their scarce environments and harsh conditions, and we are starting to see this cultural shift within factions of our own, yet we plough ahead as if mental health isn’t an issue, dopamine addiction isn’t a real concern and physical augmentation is something to definitely look forward to. It may well be, and that’s the point: we don’t know.

Who is making the decisions about the education our children get regarding technology?
Is it just about functionality, the ability to make PowerPoint presentations, create graphic designs, staying safe online and making podcasts to improve learning engagement?
Or are they learning about the wider ethical and personal issues associated with being online for huge chunks of the day?

I had a conversation with a 10-year-old recently who was tired during the school day because she’d stayed up late talking to friends on WhatsApp. She didn’t see the harm in communicating through a white-light screen that convinced her circadian rhythm she wasn’t tired. In her words, she was ‘being sociable’ the night before, so why did it matter that her friendships and learning were suffering the next day? She couldn’t yet discern the difference between her interactions online and in-person conversation – in fact she was finding people particularly difficult. In the absence of a conversation with her parents about appropriate mobile phone use for a 10-year-old, do we need to embed these lessons into the curriculum? What happens when biotechnology becomes more mainstream and we’re able to merge our physical selves with mechanical elements? Who takes on the role of learning about, researching and discussing the benefits and risks of physical augmentation? Do we leave it up to Hollywood and influencers on social media to educate young people? Do we let these experiments play out and allow young people to figure it all out for themselves, with the help of their AI bot? Or do we get to rely on Teachers of the Future to predict what some of these social by-products might be, and trust them to stay up to date with technology on a broader perspective than just the hardware, creating lessons and policies for what’s most important so that children are able to make informed decisions for themselves?

How the Teachers of the Future use their extra time will really matter. Some argue teachers might not know their pupils as well if AI takes on the bulk of teaching – or will they get to know them in a different way? As a person, rather than a brain on an examination conveyor belt being fattened up at every step of the journey. These teachers will be excellent at asking interesting questions, because they’ll have more time to think about what those questions might be. They’ll engage their learners in solving their own problems, one of which might be careers advice. Are we up to date as educators with the changing job market? Do we know what roles exist in modern business? Or are we still engaging young people in being doctors, lawyers, accountants, plumbers, mechanics, engineers and so on? The saying goes that ‘you cannot be what you cannot see’, and if we’re not responsible for so much of young people’s learning in the future, perhaps we can use AI ourselves to learn about the new and exciting roles our students could take.

The possibilities and risks of an AI revolution

We stand at the edge of a cliff, and the choice to jump or change direction is largely up to us. We’re still trying to recover from the last seismic tech shift that was the internet and social media, and the truth is, our personal, relational and societal skills are not yet compatible with the developments careening our way. Human evolution doesn’t operate at the same pace as our scientific discoveries; in fact, we’re only just at the point where words like ‘trauma’, ‘depression’, ‘dopamine’ and ‘mental health’ are part of household discussions. Our collective awareness takes time to adapt, but time is not something we’ve had on offer since computers and smartphones entered our lives. The reality is we’re poorly equipped to cope with the speed at which the world is changing. It’s scaring people and making them question the part they get to play in the future, and when such existential problems arise on such a large scale, you begin to see cracks in the very fabric of society.

The speed at which things have changed in the last 100 years compared to the last 10,000 is huge. We went from a significant reliance on agricultural societies to animal-operated machinery, through to steam power, motors, electricity and digitisation in the relative blink of an eye, yet biologically we’re not dissimilar to our hunter-gatherer ancestors. We have the same physiological and psychological drivers, which lead us to make poor choices at times, and so whilst the hardware we’re playing with becomes ever more powerful, our flawed human nature (the software) remains somewhat stagnant and unaware by comparison. Throughout history we’ve tried to develop collective self-realisation and awareness (it’s what all the major religions point towards, after all) but we can’t seem to help wanting to blow stuff up and create chaos. This new development in AI is no different, and it would be naïve to think that technology can protect us from ourselves. The designers themselves have been proving how ineffective their fail-safes are; one story that stood out to me was about Snapchat (an instant messaging app popular among young people), which gave every user a ‘My AI’ chatbot.

Imagine you go into your messaging app one day and see a ‘new contact’, but it’s actually Siri or Alexa powered by ChatGPT. The idea was that Snapchat were offering every young user someone to ‘talk’ to if they wanted (an admirable aim), but researchers at the Center for Humane Technology published a conversation in which they pretended to be a 13-year-old about to embark on a weekend away for her birthday with a 31-year-old boyfriend, asking about having sex for the first time. Not only did the bot fail to raise a concern, it also offered the user a script to help deceive the parents by pretending it was a school trip. Snapchat addressed the issue immediately and took down the bot, but have since relaunched it as part of their paid subscription. There are countless stories like this of large-language-model bots supposedly written with security measures and fail-safes about topics they’re not allowed to discuss or alarms they’re supposed to sound, yet with relative ease creative humans are able to manipulate them into providing information we wouldn’t want getting into the wrong hands. The role of educators in these scenarios is to continue to be the adult in the room. There’s a huge difference between advice and information, and an even wider gap between knowledge and wisdom. Whilst AI can be excellent at knowledge and information, we need to be mindful of its inability to share advice and wisdom, which so far appear to be human traits. Even if you built facial recognition and emotional cues into the algorithm to help the bot achieve greater understanding of the subtext within a conversation, all it would have seen in this scenario is an excited 13-year-old, possibly a little nervous but asking questions about how to relieve those nerves. The key problem seems to be that whilst creators race to make computers capable of helping, they’re releasing iterations along the way that can cause actual harm.
Groups who would want to see a fundamental breakdown in society are on the precipice of being given the artificial intelligence tools to do it and the tech giants are too busy racing to build the bonfire and set it alight to notice that it’s grown so big that it could burn the place down.

We simply don’t know the full extent of what the technology is capable of, so we have a moral duty to slow down, research, test and learn before releasing it to all corners of society, because ultimately the AI doesn’t care what happens. It doesn’t have a conscience. The only consequence for it is that it gets switched off. The ramifications for individuals and our wider collective are far greater because we’re conscious beings with plenty to lose.

I’m not shy about my belief that modern technology has gone beyond being a tool we utilise to get stuff done. At the point the algorithms started following the interests of Meta and X over the interests of the person using them, they became manipulative, and that’s led to our growing addiction to them. Whether that’s considered a problem is very much a personal choice. There are those who believe the future is based on our assimilation with technology and those who would have humans be humans and robots be robots, but how do we decide? What we currently have is powerful ‘tools’ making the decision for us, by tapping into our sense of self-worth and identity (from a very early age on social media), digging into our brain stem and the parts of social belonging we care about on a primal level. The point of this biology was to notice whether the people in our tribe nurture and accept us, not 10,000 strangers online.

The important role of schools in an AI future

If we take our eye off the ball like we did in the early stages of this technological revolution, we run the risk of repeating the mistakes of the past or creating some similarly large new ones. It would be wonderful if technology could bridge the affluence gap rather than widen the chasm, but the difference between children’s experiences of technology in education is already huge. Some have access to tech colleges with state-of-the-art equipment where they can play, experiment and innovate, while other children don’t yet have a device or Google Classroom (sometimes a financial decision, but sometimes an ethical one). Rather than teaching young people about the appropriate use of technology, it’s easier to limit access or ban it onsite, and every school is different in terms of its needs around that decision. As technology takes this next leap, we must be mindful about what we’re saying ‘yes’ or ‘no’ to, and why.

One of the reasons schools have limited or banned certain technologies is because of control and the acknowledgement that children who have grown up native to it might know more about using it than the adults tasked with keeping them safe. It’s difficult to monitor what young people can and cannot access online and that’s only going to get tougher if the companies behind the technology don’t take an active role in building proper protective measures. This involves slowing down the race and rather than being the first company to release their AI tools, being the most reputable and safe. Something we can trust in the hands of young minds.

This lack of control by schools also stretches to the classroom, where teachers have, up until now, been the authoritative voice. In this new age, children are likely to acquire knowledge at such speed that they out-perform their teachers before it’s time to leave compulsory education, and we need to start being OK with the idea that we learn alongside them, perhaps even from them. Inherent in this is the risk that children grow up too quickly, get pushed too hard based on their aptitude, and that we forget they’re still children. Just because they can learn quickly, it doesn’t mean they should. I’m a huge proponent of rebalancing the curriculum, where we value personal and social skills just as much as academic skills. In an age where information and knowledge become less powerful (due to how readily available they are), will we get to see the rise of new curriculum priorities? An age where schools and teachers are judged on the wellness and well-roundedness of their students ahead of their Progress 8 score or Value Added?

We need to start re-writing the role of the teacher and to recognise ourselves as facilitators of learning and role models for personal development. This might make some staff uncomfortable but for others it marks the education system they’ve been dreaming of – getting to work with young human beings rather than little learning machines. A more holistic role.

It’s also possible that AI will make us less smart in certain areas, as has already been shown by Google Maps having to be policed on Duke of Edinburgh expeditions (true story). Would you consider it a life skill to be able to read a map? Some argue it’s a waste of brain space now that we have such excellent phone signal and back-up power banks. Others fear for a lost life skill. Will the same be true for the ability to drive a manual car in the age of the self-driving vehicle? Or is that like saying ‘you’ll regret not knowing how to ride a horse’ when the automobile was invented? I have a similar issue now teaching maths, with children (understandably) asking me why they must learn their times tables when they’ll always have a calculator in their back pocket. We really need to consider the relevance, context and attractiveness of learning if we’re going to expect young people to dedicate so much time to it. Perhaps we’ll see fewer hours at school spent in structured lesson time, and school will instead become a creative haven without the obsession with knowledge acquisition.

My biggest fears around the future of AI in schools are about our social interactions and shared reality. It’s already easy to see the impact technology is having on our relationships – families on their phones when they’re out for dinner, or all in different rooms in front of a screen; young people communicating more online than anywhere else and struggling to develop social skills like eye contact, or just finding social situations generally overwhelming. A huge amount has been written post-Covid about the impact of this social isolation on mental health, suicide rates and wellbeing, so we have to remember that technology is a drug, a dopamine machine, and it’s been built that way. Until the tech designers wake up to their responsibility and stop keeping us hooked on notifications, school leaders and teachers will have a role and responsibility to learn about (and teach) the far-reaching dangers of reliance and over-use. In some instances that also includes noticing and getting a handle on their own usage and dependency.

It doesn’t stop there. According to the experts in Silicon Valley (the people who created, designed and built the technology we’re using), our culture is currently dictated by our social media feeds and the algorithms behind them (see the suggested resources at the end for countless examples).

What options do we have?

If I were to finish this post by channelling the intention of the current tech giants, I’d write something inflammatory like, ‘there’s a silent and secret war going on right now for your attention and none of us have any control over what’s being fed into our feeds or what we get out of the internet.’ To some degree that statement is true, but is it helpful to your understanding and ability to manage the problem? It might activate you but it also might scare you into helplessness.

We’re not optimising for collaboration, love and connection; we’re optimising for engagement, and it’s our fear centres that command our strongest attention. Humanity has created yet another powerful tool that we’re not ready to wield, and we can either allow that ‘not ready’ to sideswipe us or we can get ahead of it. The intention here isn’t to add to the cacophony of fearmongering about the future and AI. The intention is to inform, to inspire you to research, listen and learn from people in the tech industry who are whistleblowing on the concerns they see within it, and to ask you to consider your own engagement with technology so that we can collectively make ethical and equitable decisions about our usage. If you’re not sure where to start on that journey, I’ve included some suggestions below.

We don’t live in a world where we can throw the baby out with the bathwater so far as technology is concerned, but we can make individual shifts that impact the larger machine at work. Imagine what the social media giants would do if we had a global blackout day to protest their optimisation for engagement over wellbeing. Or, to start with, if enough people shifted from the culture of Instagram to BeReal (for example). Will we become slaves to technology, more and more addicted to notifications, obsessed with the promises of tomorrow and the validation on our feeds, permanently immersed in a virtual experience of life? Or will we find greater value in our organic existence, where we get to enjoy more of life as it is, because technology has taken on the boring cognitive load for us? Will we be inspired to learn who we are, what we want, what makes us unique human beings, and how we can contribute to the world and be of most service? I truly believe the decision is ours to make, but we do have to consciously make it: to suspend fear and judgement, to consider the impact this new wave of development will have on our kids, and to have conversations with those we care about so that they can make suitable decisions for themselves.

To keep the conversation going you can check out the resources below, leave a comment, share this post with your network or get in touch with me directly to explore how each of us can contribute to the world around us.

Watch:

The Social Dilemma

Center for Humane Technology – The AI Dilemma (or most things on YouTube featuring Tristan Harris or Aza Raskin)

Idiocracy

Listen to:

Nita Farahany (on neurotechnology and the law) – Armchair Expert

Read:

The Fourth Education Revolution by Anthony Seldon

Cognizant 21 jobs of the future

WTF? What’s the Future and Why It’s Up to Us by Tim O’Reilly

Homo Deus by Yuval Noah Harari (also see Sapiens by the same author)

The Washington Post on Snapchat’s AI

For younger readers:

You Could Be So Pretty by Holly Bourne