Forget homework cheating: How young innovators are using AI for social good

In an age where AI has the potential to transform every element of our modern lives, teaching youth how to understand and interact with AI is more important than ever.
Youth participate in Digital Moment’s Social Innovation Lab in Montreal in 2023. (Digital Moment/Facebook)

This story was produced thanks to a partnership between Future of Good and RBC. RBC is proud to support a broad range of community initiatives through donations, community investments and employee volunteer activities. See how at rbc.com/peopleandplanet. See FOG’s editorial ethics and standards here. 

While working at Digital Moment, Linh Kim felt deeply inspired by how marginalized youth were making innovative use of technology for their own needs.

One project she remembers came from a young girl who developed the idea for an AI virtual assistant for kids who struggle to balance school and family responsibilities. The AI took the persona of a little girl who could manage and organize homework and school tasks, easing the pressure of having to keep track of everything themselves.

Kim said it spoke to a real experience and need for low-income youth who struggle to stay afloat.

Many youth within the social innovation lab at Digital Moment, an organization dedicated to teaching youth digital skills and ethical AI, are marginalized and from lower-income backgrounds.

As the intersection of technology and social impact continues to generate conversation, looking at it through the lens of youth can help us understand new and necessary approaches to AI.

How young leaders are tackling social issues connected to AI is also crucial to the future of responsible development in the technology field.

“I think there’s so much fear that young people are just going to [use AI to] cheat on their homework, which we really need to move past that and start thinking about the way we educate and mentor and coach young people through what is the biggest revolutionary change in the industrial revolution,” said Indra Kubicek, CEO of Digital Moment.

Organizations like Digital Moment, Actua, Animikii, and RBC Borealis’ Let’s SOLVE It all focus on education, helping young people better grasp how to use technology for good and learn about the potential consequences.

Tech-savvy doesn’t mean digital-savvy

There’s a misconception that because young people have grown up with technology and use it with ease, they’re digitally savvy in every way, said Kubicek.

“They’re vulnerable people by the sheer sense of being young; youth is a vulnerable population that we need to protect because they haven’t had the ability to develop some of those really important critical thinking skills yet.”

Her organization’s top priority is ensuring that young Canadians can be active digital citizens and have the tools and skills to thrive in an increasingly digital future.

When Digital Moment ran its social innovation lab for the first time a few years ago, one of the prominent issues around AI was how facial recognition technology failed to recognize darker-skinned faces as reliably as lighter-skinned ones.

This creates a risk within police departments and the broader criminal justice system, as people with darker skin are at a higher risk of being misidentified.

Understanding this inherent bias in the AI system is integral to how the organization educates vulnerable and marginalized youth about the topic.

Conversations like these with the program’s youth give them space to consider how these issues could impact their own lives.

When Kim worked as a program coordinator at Digital Moment, she did grassroots work directly connecting with young people, their siblings and their families to raise awareness about AI, what it’s capable of and what it isn’t.

“A lot of people kind of expect ChatGPT to be their best friend that they can get life advice from,” said Kim.

“So being the educator or the connector between all these big tech [companies] and everyday people means pointing out the limitations and the potential of AI,” said Kim.

But the goal, she said, wasn’t to tell youth what’s right and wrong. Instead, they focused on challenging youth to think about bigger social issues and how they related to technological advances.

They would have deep discussions about issues like climate change, social inequity, and food insecurity and contextualize them within the digital age.

A core question in these conversations was: how does AI affect different populations across the world?

Advancing technology for good is something that happens to one youth at a time because that one youth will have a ripple effect on their social circle, said Kim.

“Our mission was not to come up with the solutions. It’s to tap into that source of empathy and potential of idea-making so that [youth] can improve their own understanding of the topic and communicate with others in their community about the topic,” said Kim.

Leading and developing their own AI projects for good

Like Digital Moment’s social innovation lab, RBC Borealis’ Let’s SOLVE It allows youth to develop technology projects for social impact through mentorship and hands-on education.

“There is a lot of appetite in Canada from young people to learn more about AI; our universities are among the best in the world in the field of AI. However, the field exploded so rapidly that we didn’t really have time to catch up properly to accommodate everyone’s needs,” said Dr. Eirene Seiradaki, Director of Research Partnerships at RBC Borealis.

“So it turns out we have amazing programs on the graduate level that teach several topics within the AI field, but at the moment, there is no major undergraduate degree in Canada in AI, and there are very few in the U.S. as well.”

The Let’s SOLVE It program started three years ago and was designed for undergraduate students in Canadian universities to participate in an incubator-type program with an AI project. Seiradaki said many students worked on projects related to physical and mental health.

These included projects looking at needs at hospital ERs, optimizing workflow, and managing wait times.

A few projects also tackled the issue of helping local fire departments track wildfires.

“We felt that we needed to help with that — help more underrepresented student groups get involved in AI, whether they are women, female-identified, visible minorities, people from remote communities, or recent immigrants,” said Seiradaki.

Kubicek has also seen many youth interested in developing AI to help improve mental health.

“We have these young people come through the lab, and they might create an idea around an app, and they want to have it for young people, so maybe mental health, or using AI in order to support them, which is a great concept. But that’s when you start to think in practice: are you protecting people’s data?”

When developing these ideas, youth need to ask: whose information do they have? Are they following ethical standards by not training AI on people’s mental health data without their knowledge?

“Going through a hands-on project enables them to think about those kinds of second-layer implications or consequences,” said Kubicek.

The responsibility and ethical values discussion

AI starts with the data it’s fed. So, for responsible AI to be created, it has to be made with responsible, representative, and ethically sourced data.

Kim explained that to create responsible AI, partnerships need to be created with the people who can make it happen. There also needs to be communication with executives and leaders across sectors about the importance of having responsible, representative, fair, and accurate data that they use to feed their system.

Seiradaki said that inclusivity in many forms is key in AI development teams. That means having people from different social and cultural backgrounds and teams that consist of ethicists, philosophers, and psychologists in addition to technologists.

In a learning setting, teaching about responsible AI can translate into conversations about how algorithms work. Ask them: when you search for something once and it then shows up everywhere, across all of your social media, why is that?

Kubicek said that talking about preference bubbles, how we get stuck in them, and how they shape our algorithms, our newsfeeds, and our understanding of world events is a crucial part of teaching youth about ethical AI.

Ethics in technology also means involving those populations that have historically been furthest from the conversation.

When it comes to education, if just one population is overwhelmingly going into the STEM field, then the STEM field is shaped by only their expertise and experiences, not those of others, according to Kim.

“Breaking down barriers to science to as many youth as possible is absolutely crucial in creating a strong, inclusive and representative workforce that is able to create something that is useful for everybody. Because AIs can affect everybody. Same thing as climate change. It doesn’t just affect one country or another. It affects everybody,” said Kim.

“It is true that when people think of AI, they either think of this existential threat or this great solution to everything. But actually, it’s not — it’s just a tool, so it’s a street that goes both ways,” said Kim.

“It affects us, but we affect it in the way it’s developed and how it’s shaping society, we have a role to play in that as well.” 
