December 3, 2024

Why It’s Time to Close the AI Knowledge Divide

It’s crucial to dissolve the myth of all-encompassing helpfulness and usefulness

The first iPad came out in 2010, and within a few years, the market was flooded with affordable Android tablets. On these devices, Skype and FaceTime made the Jetsons-era idea of a videophone a reality, but I noticed a glaring issue. I wanted to purchase one for my grandmother so she could better keep in touch with friends and family, especially those overseas. But, at the time, despite the many options on the market, I couldn’t find a device that simplified things enough for her. The complicated sequence of taps and inputs required to operate one was a barrier. Although online shopping allowed for one-click checkouts, there was no convenient equivalent for connecting with loved ones.

In the past, my grandmother would grow anxious whenever I paid her a visit. After a few hours, she would gently hint that I prepare to head home. If grey clouds gathered overhead, signalling an impending storm, she wouldn’t hesitate to intervene, making sure I wouldn’t be left waiting at the bus stop during a downpour. Who could predict when the next bus would arrive? I would check my smartphone in front of her. After a few moments of scrolling, I could confidently inform her that the forecast promised clear skies, and we could add another 25 minutes to our visit. I was certain because the transit app told me the bus was running late. She’d shake her head in disbelief, marvelling at the information available at my fingertips. Although I benefited from the technology that was changing the world around us, I also felt helpless that I couldn’t extend those benefits to her.

For the past few years, AI has been a hot conversation topic, specifically generative AI (GAI). GAI works by ingesting massive amounts of data and, essentially, playing an educated guessing game: it strings together sentences or crafts images with the highest probability of being right based on the information it has been trained on. That pattern recognition makes it useful for tasks like summarization. However, because the technology doesn’t actually understand the context of what it’s spitting out, it can also stray into the absurd. AI tools like ChatGPT or Midjourney might glitch and create product videos that feel like Dalíesque nightmares, where people melt into pink globs of goo and trees sprout fangs.
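For readers curious what that "educated guessing game" looks like in practice, here is a toy sketch of the idea. This is my own illustration, not how any real product is built: the tiny corpus and function names are invented, and real generative models use neural networks trained on vastly larger datasets. But the core move, predicting the statistically most likely next word given what came before, is the same.

```python
from collections import Counter, defaultdict

# Toy corpus: the "massive amounts of data" shrunk to one sentence.
corpus = "the bus is late the bus is coming the sky is clear".split()

# Count which word tends to follow each word.
follower_counts = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follower_counts[current_word][next_word] += 1

def predict_next(word):
    """Guess the most probable next word, based only on observed counts."""
    followers = follower_counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("bus"))  # -> 'is', the only word ever seen after "bus"
```

The model has no idea what a bus is; it only knows which words co-occurred in its training data. That gap between statistical fluency and actual understanding is exactly why such systems can sound confident while straying into the absurd.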

Yet, not everything is immediately discernible as AI-generated. The tells may be minor, or cause only a fleeting suspicion that something is off. This posed a particularly urgent issue for the American and Canadian elections, where deepfakes hold the power to warp our perception of reality and shape the political landscape for generations. But it also has consequences at a more individual level, ranging from rampant misinformation to financial fraud. I’m thankful my grandmother never received a call like those reported in Saskatchewan and Newfoundland in 2023, where a couple’s “grandson” was in trouble and needed money. The voices they heard were AI clones, akin to the one portrayed in the film Thelma, which premiered at Sundance in 2024. The protagonist, a 93-year-old grandmother, loses $10,000 to an AI phone scam. Unlike the titular character, most real-world victims rarely see their money returned.

I remember hearing the kitchen table discussions my parents had about taking care of their aging parents: finances, mobility, mental acuity. For my generation, we’ve added misinformation, social isolation and now the hazards of GAI to the list. The simpler times of my grandmother, who was amazed at weather forecasts and live transit updates, feel distant, but that memory holds an important lesson. Part of the reason people tend to trust technology is its utopian origins: an earnest promise to make life better.

It’s crucial, then, to dissolve that myth of all-encompassing helpfulness and usefulness. Instead, we should place great emphasis on learning why certain applications of new technology might work against our interests, empowering us to make wiser decisions.

Although no company working on GAI would outright endorse uses such as financial fraud, most tech companies want to drive up their valuations to enrich themselves and their investors. If putting guardrails on their technology could slow that growth, they’d happily sacrifice safety. This is as true for OpenAI, the largest of the companies working on GAI, as it is for Uber, Meta and Tinder. I advise people to always follow the money. Dating apps, for instance, make money only while you use them, so they are disincentivized from helping you find a match and leave the platform. Your value to them swells when your direct messages are consistently teeming with prospective dating opportunities.

Or, take the now common trend of rage-farming, in which a user posts something meant to elicit a visceral response, usually anger. Social media platforms reward engagement, and so you might argue that, for example, pineapple belongs on pizza, prompting a comment section to explode with activity. You may not have any actual opinion on pizza, but you’re still accumulating dough for the company. I’ve had to explain to older family members in our WhatsApp group chat that the videos forwarded to them aren’t always true. Instead, what they’re seeing has been created to enrage them. It can be a disorienting experience to realize you’re being taken for a ride like that. But a general rule is that if a product is free to use, the company is likely making money by selling your data.

Becoming aware of tactics like these is important to finding a skilful way to incorporate technology into our lives. In the past few years, books such as Jenny Odell’s excellent How to Do Nothing, Taylor Lorenz’s Extremely Online and Tim Hwang’s Subprime Attention Crisis have sounded the alarm, laying out how online platforms are engineered to pull us in ever deeper. But it can be difficult to share these materials when English isn’t the first language for so many of our loved ones, like my grandmother, and translated copies can still present a learning curve. We have to be the change we want to see by sharing what we learn with our communities. A friend who is a computer engineering professor, for instance, created a course at the University of Toronto aimed at helping seniors learn about AI. He got the inspiration for the course when he was trying, and struggling, to explain the technology to his own father.

Technology promises to connect us in novel ways, and in many instances it can live up to that promise. But it is not benign, and without considering how it can be accessed, crucially from a nuanced and multi-dimensional perspective, we risk leaving the most vulnerable behind. Through knowledge sharing that empowers people across generations, we can return agency to all users, ensuring technology is harnessed with intention.