The AI Doc director says cynicism is the only wrong answer to AI
About half of young people ages 14 to 29 now use artificial intelligence daily or weekly, yet just 15 percent of them see AI as a net positive for society. And you don't have to go far in the tech world to encounter AI doomers warning about the dire risks of AI run amok.
Indeed, such doom and gloom can be hard to avoid when the headlines constantly remind us that our world is heating up, drying up, and blowing up. And that's what makes the new Focus Features documentary, The AI Doc: Or How I Became an Apocaloptimist, such a head-scratcher. The movie is not just a call to action to regulate artificial intelligence so it can be harnessed for good, but a call to arms for optimists (and aspiring optimists like myself).
The AI Doc was produced by Everything Everywhere All At Once co-director Daniel Kwan and directed by filmmakers Daniel Roher and Charlie Tyrell. Roher, who won the Oscar for his 2022 documentary Navalny, is the emotional anchor of the movie, and he urged me to resist the siren call of cynicism around AI.
"[Cynicism] is, frankly, easy," he said. "Very, very easy. And it's kind of like the low-hanging knee-jerk reaction to something. You'll realize that it’s actually the only wrong answer to this."
In the documentary, the Oscar-winning director learns that his wife is pregnant just as he begins a good and proper AI doom spiral. So, he takes us along for the ride as he explores the dangers of AI, both real and imagined. He even talks to the "final bosses" of the AI problem — the handful of men sitting atop the AI industry — OpenAI's Sam Altman (or is it Sam Altman’s OpenAI?), Google DeepMind's Demis Hassabis, and Anthropic's Dario Amodei.
Roher spoke with me by phone after the movie's release, during which he confronted me about my own cynicism around artificial intelligence. We also talked about how AI is being used in Hollywood, the ongoing copyright battles between artists, filmmakers, and the AI industry, and whether AGI is really as imminent as it seems.
As a tech editor, I get whiplash covering AI. I talk to tech people, who talk about AI like it's the greatest thing in the world — it’s going to solve all our problems and change the world. And then I talk to artists and reporters, who tell me it's a scam, it's just destroying the [environment]. Have you experienced the same thing as a creative who talks to a lot of tech people?
Daniel Roher: I think that's a good way to articulate it. You talk to one set of people, and they tell you one thing, and then you talk to another set of people, and they tell you the polar opposite. And the particularly complicated component is that both sets of people are incredibly intelligent and thoughtful and well read and well researched, and so it's sort of like looking at two truths at the same time and trying to decipher it and figure out how to reconcile that reality.
I imagine one tough thing about making an AI documentary is the pace of change in this space. For the first time, we're really seeing AI used in a war capacity. I'm just curious how your thinking has evolved since the movie wrapped?
I'm just becoming more and more concerned. Obviously, the documentary is about how scared I was, and I think now, as I'm seeing some of the [dangers] discussed in the documentary [happen]...like AI being used in conflicts. It's just very concerning and very scary.
And you've seen red lines drawn in the sand by some companies, while others blow through them. I'm particularly speaking to Anthropic and the very reasonable red lines that they drew with the Pentagon about what was comfortable for them, gaining the public support of most people in the world, including Sam Altman and OpenAI, only to then be designated a supply chain risk and have Sam Altman swoop in and, you know, make his own deal with the Pentagon.
That type of, I don't know if you want to call it bad faith dealing, is pretty Machiavellian, and it's scary.
Yeah, and it kind of lines up with Sam Altman's reputation. His reputation is a bit Machiavellian. There have been accusations, I know, by former employees and board members that he's... I've heard the word "two-faced." What was your impression of Altman? Did it seem like he had a good grasp of the seriousness of the risks here?
I guess, although if he really did, I think he'd be doing more to work with his colleagues to try and create safety precautions and common-sense safety measures, which he's not doing. So perhaps not.
But Sam Altman is someone who has a sort of air of someone who came out of the womb wearing his turtleneck and running shoes, ready to give his keynote address at Davos. Like, that's his energy, which is a vibe, you know? I would say that he and I didn't hit it off. Beyond that, I found him to be just media-trained up the wazoo. Not a particularly genuine person.

The documentary did a really good job of laying out how, basically, our entire global economy is being rearranged around this arms race for AGI. All the biggest tech players in the world, the financial powers, they're all pouring resources into this race to be the first one to achieve AGI. And I guess one of the questions I have is, what happens if AGI isn't possible? What if AGI turns out to be a mirage?
Well, how do you define AGI?
I would say, AI that's capable of replacing the average worker. Smart enough that it can do the average laptop job, the average manufacturing job, pretty much out of the box.
By that metric, we have already achieved AGI. No debate.
I mean, I'm only going by the box you draw on the floor, and based on your explanation, certainly we've achieved AGI. Certainly, AI can write your article, and certainly AI can interview me, and certainly AI can write a movie, and certainly AI can drive a truck. It's just a question of the bureaucracies of our world being slow to incorporate these systems. But I think, by your definition, we have reached it. And anyone who says that it's not possible, or that this will plateau, that has not been my experience, just observing reality around me.
I don't know that it's quite all the way there. I think it still needs quite a bit of babysitting, from what I've seen. But maybe that's a bit of denialism on my part.
For me, artificial general intelligence is an AI system that can do a wide variety of tasks at a level superior to that of an individual. So that is not limited to just, you know, coding or writing an essay. Anything, it can do better than you, not just one category. That's what I understand AGI to be.
Without some sort of consensus on what we're talking about, it's hard to focus the discussion. And that's just a challenge with this, and how fast it's moving, and the fact that there are no clearly defined goalposts of what we're even talking about.
As you've gotten further into fatherhood, have your feelings on AI changed?
I would have typically described myself as quite a cynical, perhaps a denialistic person. I would have, you know, five years ago, said, "Oh yeah, this is gonna be terrible. There's nothing we can do in the face of this." And I don't feel that way now. I feel like the worst thing you can do is be cynical. And I think my perspective, geared towards optimism and collective action, is framed through the lens of fatherhood. It's irresponsible to be a parent and to be nihilistic or cynical, and that's why I really try and focus on what we can do, what I can do, and what you can do, what we can all do.
What are one or two things someone can do if they’re worried about AI?
Educate yourself. Use the tools. Understand what they're capable of. Think critically about what you want to use them for, [and] what you don't want to use them for. That's really, really, really important.
And then the other thing is to evaluate what we call your sphere of influence. If you're a single mom, if you're a truck driver, if you're a teacher, if you're a dog walker, if you're a filmmaker, or a politician, and so on and so forth, you have power in your life, some smaller than others, but you have power nonetheless. Even if it's just calling someone and talking to them about this, telling them what you've learned and how you're feeling about it, trying to explain to someone the value of collective action and being a participant in finding a solution here. Because it'll take all of us.
Five, 10 years ago, that would have sounded like corny, [politically correct] woo-woo, Kumbaya bullshit to me, but there is no other choice.
So I very much believe in the power of collective action. And then there are basic political pressures we can apply. What political party, what candidates are on the right side of this issue? Who is advocating for common-sense regulations and guardrails to ensure that this technology doesn't consume us, and that we still have power over our own future?
Those are a few things that might not seem satisfying to people, but it's not as easy as, like, change your light bulbs, you know, drive your car less, take the train instead of flying. It's more challenging.
Among many artists and many progressive people in general, there's a real intense resistance to using AI or to allowing AI to become normalized. For example, whenever we hear about AI being used in the process of making a video game, there are calls to boycott that game. Are you seeing that among other filmmakers or artists as well?
Yeah, sure, and that's their prerogative. This shit is fucking scary. I get it. I get why people are freaked out, why they don't want to use it, and why they want to boycott. But it's also the plain reality that it's here and it's not going anywhere.
And so what I'm more interested in is figuring out how we can be creative beings alongside this thing, right? And what do I do that this thing cannot do, because I believe that my unique lived experience on Earth is just a different category of existence than this obtuse, oblique computer God thing that we're building that is just trained off of all of our regurgitated knowledge and stuff. I believe my lived experience is unique. That's the biggest thing.
And then beyond that, I'm also very mindful of when it comes to using AI to create art — how is this empowering me versus how is it replacing me? And if it's empowering me in a meaningful way, then I'm like, "Cool, great." If it's going to replace me, I'm like, "No thank you." And it's also the paradox, and the reality is that the same thing that empowers me can also replace me, and that's why it takes all of us to sort of stand up and say, "You know what? We don't want to use it for this. I don't want to play a video game that was made by an AI, or I don't want to watch a film that was shot out by a computer. No, thank you. I appreciate the artist's hand."
Maybe that's naive, but that's just my opinion, as someone who is an artist who makes stuff as my vocation and reason for existing.
I've found some people take a very, very hard line: if there's any involvement of AI, they won't engage with it at all. And I wonder sometimes if those people are kind of alienating themselves from the larger conversations that need to happen.
I don't disregard that position. I understand why people feel that way. My position is, this is fucking terrifying. Like, this is actually really scary. And I know most of my creative friends who have had the experience of using Sora or looking at ChatGPT and being like, "Oh, look, the thing that I've been training my whole life to do no longer has any value. So what the fuck do I do with that?" That, in and of itself, is scary, and it seems like a very natural reaction for people to be like, "No, fuck that. No, thank you. Not for me."
You know, is that healthy in the grand scheme? Probably not. But as I said earlier, my position is that this isn't going anywhere, and it's just a question of how we can coexist and co-evolve with this technology in a way that is empowering and not depleting.
I also wanted to quickly ask about the copyright issue. I interviewed the CEO of a major AI video company, Luma AI, and he basically said, anything we train on is [fair use]. You know, we're going to train on whatever we want. But if the output looks like copyrighted, protected material, that's a problem, and that's where we draw the line.
Do you get a sense that that's kind of a losing battle, that ultimately AI companies are going to do what they want?
The guy who has a vested financial interest is saying that he's gonna train his model on what the fuck he wants? It's kind of like the guy who runs the tobacco company saying that, you know, smoking is good for you. Everyone should have a cigarette, and if you say differently, fuck you. And to that, I'm like, "Dude, go fuck yourself." Language like "the battle's already been lost"? And it's like, dude, relax. The battle hasn't already been lost.
This is just a unique challenge of 25th-century technology that's crash-landed into the 21st century, being regulated by legislative processes forged in the 17-fucking-hundreds. And court cases take a long time, but I think, at the end of the day, the book is still very much open on whether the IP battle has been won or lost.
So, yeah, for the tech CEO to be like, you know, "Fuck you, I will come for your shit." My response is, "Fuck you back. No, you're not." And I applaud media outlets like the New York Times, which are standing up for their material and doing the very, very good public work of fighting companies in court. And this is what I'm talking about, as a collective action. There has been a tangible pushback against the overreach of these AI companies. I feel it. I sense it in the ether. People are scared. People are pushing back. People are saying, "No, thank you," and I'm inspired by that.
[Disclosure: Ziff Davis, Mashable's parent company, in April 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.]
I think that speaks to the doomer in me. I have the skeptic, the cynicism, in myself as well.
I don't know what your life is like, but I hope for you that you get to experience having kids, because it rocks, it's just so fun. And maybe you're not a person who wants to do that in your life, and that's fine, too. But I hope that your main character arc is that, one day, you have a family and you understand viscerally that the cynicism you're speaking to is, frankly, easy. Very, very easy. And it's kind of like the low-hanging knee-jerk reaction to something. You'll realize that it’s actually the only wrong answer to this.
Visit The AI Doc Get Involved website for more information. You can catch The AI Doc: Or How I Became an Apocaloptimist in theaters now.
Some of the quotes in this story have been lightly edited for clarity and grammar.
from Mashable https://ift.tt/QE1sYnb