Sinead Bovell is the first to admit her path to success in the tech industry has been unconventional. In fact, the term she uses to describe it is “zigzag.” The New York-based futurist got her start in the fashion industry, modelling in campaigns for brands like DKNY and Salvatore Ferragamo in her home city of Toronto, Ontario. While on set, Bovell often found herself having in-depth conversations with her fellow creatives about the impact of artificial intelligence on their line of work.
The conversations weren’t anything new for Bovell. Since graduating from the University of Toronto’s MBA program in 2015, she had been tracking emerging technologies, and was still an avid reader of white papers on the future of work. But it wasn’t until she was talking with colleagues years later that she “realized creators are just as intrigued and interested in these conversations about the future of work [and] artificial intelligence. They just aren’t invited to them, and the material isn’t made accessible for everyone else. That was the light-bulb moment for me; maybe I could be this bridge [between industries],” she says now.

“It was one of those quintessential moments where I couldn’t have planned those steps, but it all made sense looking back,” she adds. “Those were always the topics I was fascinated by; these are always the things that I would talk about, regardless [of whether] there was an audience or not.”
Bovell soon began writing about technology and its intersection with—and impact on—creative fields. The articles resonated with readers and established her as an important voice in this space. Then, in 2018, she formally launched WAYE (Weekly Advice for Young Entrepreneurs), a tech education company that helps break down the latest cutting-edge technology for young entrepreneurs. The company’s goal is to empower this group through WAYE Talks, monthly events featuring a rotating roster of industry insiders, as well as through workshops and one-on-one consulting. Now she’s a world-renowned expert who has spoken at institutions including the United Nations, the U.S. Chamber of Commerce, Cornell University and Bloomberg. In January 2021, she was appointed to the United Nations International Telecommunications Union Generation Connect Visionaries Board, which links young people with global tech leaders to ensure their input is incorporated into future innovative tech projects.
Although her insights are clearly the major driver of her success, it’s also important to consider her approach. Early on, when her monthly WAYE Talks kept selling out despite zero investment in marketing, Bovell knew she was on to something. And when she looked around the room at each event, it was clear what that something was: she was tapping into an oft-overlooked market. “What stood out to me was [that] more than half [of the audiences were] women and people of colour,” Bovell says, “and you just don’t see that in tech rooms as much.” More than anything else, she says this was her signal that the person leading these conversations matters, an insight that inspired her to be very intentional and visible. “It’s not just about changing who’s in the room talking about technology, but also what it looks like to talk about technology,” she says. “It should look like everyone, because technology impacts everyone.”

Bovell firmly believes everyone needs to be prepared for the way technology is changing our world. She has spent her career encouraging those outside of the industry to plan for a future increasingly dominated by artificial intelligence, in a way that works for—and speaks to—them. “I always knew stepping into modelling [that] I would connect the dots in some way to my old world; I just didn’t know what that looked like,” Bovell says.
Bovell recently sat down with 3 to discuss her work, her thoughts on the future of tech and why she’s focused on helping build a better tomorrow.
Why do you focus on educating people about the power of technology?
When I was doing my MBA, I [was] first exposed to the world of futurism and strategic foresight. In the business world, discussions around blockchain or artificial intelligence and the future of work were more commonplace, but not in the world of fashion modelling. That’s when I recognized the conversations that I was having about the future weren’t as prevalent, or happening at all, in creative industries. It wasn’t because people in the arts and the creative world weren’t interested in having them; it was because they were never invited to the conversations at all.
When you were having those conversations with your fellow creatives on set, what were they concerned about when it came to AI and the future of tech?
This was back in 2018, so there weren’t conversations about diversity in technology and algorithmic bias. These topics were [just] being surfaced by individual voices in the AI field. Big Tech was somewhat aware, but these weren’t mainstream conversations yet. So at this point in time, I was bringing this to creative markets, [talking about how] algorithmic bias is real, and what happens when we don’t have enough diversity in the room. I was also raising questions about national security to tech leaders and in rooms full of creatives. I would ask the questions on behalf of the audience to make sure that they were aware of all of the amazing things about these technologies, but [also] the not-so-great things that we need to call attention to. I remember being at an event in 2018 and talking about how, by 2025, writing would be called into question with artificial intelligence. So I called attention to it, saying if there are journalists in the room, this is an area you might want to pay attention to. It was much earlier than the ChatGPT moment.
The more you understand technology, the better equipped you are to either push back or take advantage and leverage what’s coming
Sinead Bovell
Since then, a lot of your work has focused on ensuring people are educated and prepared when it comes to the evolution of AI and tech in general. What does “preparedness” mean to you, and why is it so important for people to be prepared both in and outside of the tech industry?
Preparedness is multi-faceted. It means you ask “what if?” questions, and you have answers to them. From the company point of view, if you’re in the tech industry, you’re asking: “What if the pace of AI is faster than I’ve currently planned for? What would that mean for hiring? What would that mean for my business model? Maybe I should start adapting now.” And the same thing goes for somebody in the workforce, even if they’re not in tech. Being prepared means you’re understanding the basics of how things are changing so you can change alongside it, or ahead of it. So you don’t feel like you are reacting to a moment. Instead you are building…towards what’s to come. It’s a really big problem if technology is a one-way street, or a one-way dialogue. You want to make sure that civil society is going back and forth. Does everyone feel included in it? And so, the more you understand technology—and not just where it’s going, but also what it’s capable of and what that means for the unique intersection where you reside—the better equipped you are to either push back or take advantage and leverage what’s coming.
Do you think there are ways that this kind of thinking about preparedness can extend to other realms?
Absolutely. As a futurist, you can’t just look at technology in a vacuum, because it didn’t evolve in a vacuum. It is a product of the country that it’s being developed in, the geopolitical relationships at the time, the climate, the culture, the society, the values, the economy. You have to analyze all of these factors, which means you are using a forecasting, forward-looking lens on all of these. If you’re a business leader, you’re looking at technology, but you’re also looking at climate change. What does that mean for migration for your workforce [of] the future? You’re looking at geopolitics—if one country does something to another, that might impact your ability to sell there or your hiring pipeline. All of these are things that you can’t predict, but you can have plans for them; you can do scenario analysis and see what is most likely. You can prepare for all scenarios, which allows you to make better decisions in the present.
For someone who’s not in this area and doesn’t work with this kind of information day-to-day, it can feel overwhelming to look to the future and try to anticipate it. How do people initially engage with this idea of preparedness?
I don’t think that it’s on everyone. I do this for a living; it’s my job to come to a company or a government having [already] analyzed all these factors. This isn’t on the average person at all. I don’t think it’s reasonable for someone to be tracking the economy, geopolitics, technology, or catching up on white papers—that’s just a nightmare. But what should the average person be thinking about? You should be thinking about the different ways technology could change how you work [and] live. And it’s not just so you can continue to adapt in the future, but so you can take advantage of the technologies that are coming out of the pipeline, and so you can steer them as well. There’s a lot of great innovation that can change how we live for the better, and we want to make sure as many people get a chance to participate in that as possible.

Who should be responsible for educating the public, then?
I think it’s a mix. I do think companies have a responsibility; if you’re seeing something that’s going to radically change the world, it doesn’t mean you have to release your IP, but I think it’s helpful if you give policy-makers, educators and society a heads-up as to what you’re working on. The same goes for our government. Governments, especially in democracies, are institutions that were designed to move slowly—and that’s a good thing. You don’t want your government to seem like a different entity every 48 hours. But especially in the West, governments have lost the [habit of] looking super long-term because our societies are designed for the present. With elections every two to five years, depending on where you live, and with quarterly or annual financial results, we think in this kind of present track, and that is preventing us from seeing what’s around the corner. Governments tend to think very linearly—but technology doesn’t work that way. Governments need to [focus on] strategic forecasting and a longer-term lens. That’s vital, and a part of their responsibility. The future shouldn’t be a surprise; breakthroughs are a surprise, nobody can predict those, but the future overall isn’t a mystery.
Is it realistic to ask or expect these corporations and governments to take this on? Is that something that you see happening, or can see happening in the way that it needs to be?
I think so, if we were to take strategic forecasting more seriously. Imagine a government that has a branch that is politically neutral but can take that longer-term lens and advise about what is coming down the pipeline, regardless of where they stand politically and what their policies are and what their campaign paradigm is. [We’d have someone to say], “This is how technology is moving and what this means for your country, [and] this is how I would advise you to think about this.” We just need to start taking it seriously and there will be change. You can see that now in how governments are responding to the AI moment versus the social media moment; most governments are trying to build a plan. They recognize they need to stay ahead. That is an optimistic sign that we have hopefully sailed past the days where we just wait 10 years for something to cause a lot of harm and then we think about holding a meeting on it.

Is it possible to balance the speed of technological innovation and responsible practices?
I don’t see it as much of a balance! Responsible technology adoption, responsible technology practices [and] sufficient regulation aren’t barriers to innovation; they’re essential to it and essential for it. Artificial intelligence is moving at a pace that’s much faster than prior technologies…but I do think that when it comes to regulators and people in that role, [they’re there] to safeguard how these technologies are diffused into society. I heard a great quote [recently] about how policy-makers don’t necessarily have to understand how this technology fully works, they just need to understand what it’s capable of. If they can [make this their focus], they have some time to prepare adequate safeguards.
Responsible technology isn’t a barrier to innovation; it’s essential to it, and for it
Sinead Bovell
At the start of this year, Chinese AI start-up DeepSeek disrupted Silicon Valley’s approach to AI in a big way. How does the emergence of this tech change the overall AI landscape?
In general, cheaper and more efficient models will continue to speed up AI’s diffusion into society, so when it comes to DeepSeek specifically, this is another example of why it’s unreasonable to assume we can put a border on software—there has to be global collaboration.
Private companies play an important role in developing and regulating new technology. How can companies and institutions do better on equitable representation?
How can technology that is trained on data from society be better than society? It can’t without intentional intervention. We know society and history have many biases and many practices that we don’t want to repeat—and we don’t have to. But we have to be honest about what we might find in the data sets [that we use to train AI algorithms] in order to ensure that we’re not codifying history into the future. Artificial intelligence can be a chance to break away from historical patterns and imbalances that we don’t want to repeat, but we have to be open and honest about what’s going into that data. This includes making sure there’s more representation, not only in the building rooms, but in the deployment rooms, because it becomes challenging to spot historical power imbalances if you had no idea that they occurred.
As a futurist, how do you best ethically inform and educate the public on technology and these advances?
I focus on being as transparent as possible with what I’m learning and what I’m seeing, and as open as possible as to what this could mean and [what] the implications [might lead to]. I’m not the one building these technologies; I’m connecting these data points to see where things are going, so I think the ethical thing to do is to share what this scenario could look like in a couple of years, if we act [or] if we don’t act for some groups. For me, ethics means not being afraid to offend or challenge an organization based on what they’re doing if I think it’s going to get in the way or have a bad outcome for society. That has never bothered me.

You mentioned transparency is very important. How do you balance the critiques of AI—environmental impact, changing how we think of work and labour—with its potential for transformative change?
I don’t think there’s any benefit to ignoring or minimizing [the] harm that a technology creates. So, I’m always open, and try to be as transparent as possible as to the impact these systems are having, as well as highlight that this is a choice—because there are other options. That is what I find so exciting about being a futurist; some people say, “You’re staring at the future all the time; isn’t that scary?” No, because I see nothing but options in front of me. There’s all of these different approaches that we could take; it just comes down to human choices. The more open we are about what options we have [regarding] the harm that’s happening, and why it’s a choice, not [an inevitability], the more that advances the conversation.
Looking to the future, what are you most excited about when it comes to the possibilities with AI, data and emerging technologies more broadly?
This is a fascinating time to be around a breakthrough technology like artificial intelligence. AI is a once-in-a-generation, once-in-a-century technology. And what’s really interesting is, we don’t necessarily know the impact of turbocharging our cognitive powers and our cognitive abilities. Every scientific breakthrough, every engineering approach, anything you look around at modern society and see, it has all been the product of the human brain. We’re about to turbocharge that with artificial intelligence, and that’s really, really exciting. What’s also exciting is that hopefully those capabilities won’t be limited to a small sector of society. If anything, what we’ve seen recently—models getting cheaper and cheaper to the point of being free—is that these cognitive abilities and cognitive abundance [are] going to be widespread, very accessible and widely adopted.
What are the potential benefits to humankind, and specifically groups that have historically been disenfranchised?
It allows you to make better decisions with the data that we have. We saw this in particular [with the water crisis] in Flint, Michigan. Artificial intelligence was used to analyze data and determine which homes and areas likely carried lead pipes versus copper ones. The success rate of detecting lead-based pipes jumped to 70 per cent versus [around] 15 to 20 per cent when humans were picking. So, artificial intelligence can be a tool of empowerment and allow for better judgment and decision-making based on data and insights, which is something that all communities, especially disenfranchised or underserved communities, can benefit from.
What advice would you give to someone who is trying to prepare themselves for a future that includes AI?
Start playing with and using these tools as much as possible. While that might feel counterintuitive, we’ve been here before: We’ve adopted smartphones, we’ve adopted the internet—the fax machine was [once] seen as radical, so we’ve done this before. The reason AI has gone so viral is because anybody can use it. You communicate with it the same way you would communicate with a colleague or a friend. So, the more you use these systems, the more you’ll understand their potential in your life.