TL;DR
Women are on the front line of AI disruption, and not by accident. As generative AI reshapes white-collar work, the first roles under pressure are the ones dominated by women—communications, marketing, content, administration, and other text-heavy, strategy-driven jobs built on skills often rooted in the humanities.
When powerful tech leaders openly say AI will reduce the economic and political power of highly educated women, we should believe them. But this is not a reason to panic; it is a reason to get strategic.
Key Insights
- Women are more exposed to AI disruption because they are concentrated in white-collar, text-heavy roles such as communications, marketing, content, and administration.
- This is not just a workforce issue. It is a power issue tied to who loses economic leverage, professional influence, and decision-making authority as AI scales.
- The skills often dismissed as “soft”—critical thinking, ethical reasoning, empathy, persuasion, and judgment—are exactly the ones organizations need most in an AI-driven world.
- The biggest risk is not whether AI can truly replace strategic work, but whether budget holders believe it can and cut people accordingly.
- Women are being hit from both sides: they face greater automation risk and often receive less encouragement than men to learn and use AI strategically.
- The smartest response is not panic or rejection of AI, but becoming fluent in it while proving the business value of human-led strategic work.
Women Are on the Front Line of AI Disruption
Last week, I had 30 minutes of sitting in the car, waiting for my small one (who isn’t really all that small anymore) to finish with her piano and voice lessons.
Normally, I get a ton of work done during that time—no meetings, no interruptions. Just me, my laptop, and my car’s WiFi. But my brain was absolutely crushed, and I could not think straight, let alone try to do any deep work.
So I did what any self-respecting, exhausted professional would do. I opened LinkedIn to feed my brain in short bursts.
What I found, though, was not something to feed my brain in short bursts. I found an incredibly thought-provoking article from Nancy Lyons, the founder of Everdare Advisors. I read it twice. And then, because my brain was completely shot, I literally commented, “Ooof.” Which is ironic, considering last week I was relentless about the non-comment comment.
But in my defense, I sat with it for a few days, went back, and left a more substantial comment. And then I sat with it for a few more days and decided we needed to talk about it.
Here is what happened…
During a CNBC interview, Palantir CEO Alex Karp said something that should stop every female communications and marketing professional mid-scroll.
He described, calmly and without apology, an outcome he expects from AI: that it will reduce the economic and political power of “highly educated, often female” voters who tend to vote Democrat, while increasing the economic power of “vocationally trained, working-class, often male” voters.
Go ahead and sit with that for a minute. He’s saying that companies are training AI to undermine the power of women, especially those in white-collar jobs, those with high emotional intelligence, and those who are highly educated.
He Said the Quiet Part Out Loud
The CNBC interview aired during Women’s History Month. He said it to a female anchor. And he didn’t frame it as a problem to solve. He framed it as a reality to accept. If that’s not the most on-brand thing ever, I don’t know what is.
Well, International Women’s Day falling on the same day as Daylight Saving Time began, which cost us an hour of celebration, is the most on-brand thing ever. But this is a close second!
The thing about what he said, though, wasn’t just the gender piece. It was the specific language he used: “This technology disrupts humanities-trained, largely Democratic voters, and makes their economic power less.”
Humanities-trained. He named it. Not “some workers” or “certain sectors.” The humanities. The discipline that teaches you to think critically, write persuasively, question power, and hold institutions accountable.
And this wasn’t even the first time. Two months earlier, at Davos, he told BlackRock CEO Larry Fink that AI “will destroy humanities jobs.” He said that if you studied philosophy at an elite school, “hopefully, you have some other skill, that one is going to be hard to market.” He then called anyone who doesn’t see these disruptions coming fit for an “insane asylum.”
And here’s the thing that should really keep you up at night. This is what he said *on camera*. Voluntarily. To a national audience.
If a CEO of a company worth over $200 billion is comfortable saying on live television that his technology will reduce the economic and political power of educated women, what are they saying in the rooms where the cameras aren’t rolling? What’s in the product roadmap that doesn’t make it into the earnings call? What conversations are happening between the defense contractors and the procurement officers and the policymakers who don’t have to answer to anyone watching CNBC on a Thursday afternoon?
He Was Talking About You
When the quiet part is this loud, the silent part should terrify you.
So let’s be really clear about what he’s saying. He’s not just predicting job losses. He’s saying that the skills built by a humanities education (the writing, the critical analysis, the ethical reasoning, the ability to question a system instead of just operating inside it) are the exact skills his technology is designed to make less valuable. Less powerful. Less necessary.
Here’s what I need you to understand: he was talking about you. About me. About our daughters, our nieces, our partners, our colleagues, our friends.
Not abstractly. Not theoretically. Not about some distant category of “knowledge workers” you can pretend doesn’t include your job.
He was describing the work that we do every single day. Text-heavy. Strategy-driven. Built on critical thinking, writing, persuasion, audience analysis, and judgment. The exact skills that come from humanities and liberal arts backgrounds. The exact roles that skew heavily female.
When someone who sells AI into the U.S. national security apparatus tells you his technology will shrink the power of people like you, that is not a thought exercise. That is a business plan.
The Numbers Confirm What He Already Told You
In case you’re tempted to think Karp was just being provocative, the data says he’s describing something that’s already underway.
According to the International Labour Organization, women are nearly three times more likely than men to work in jobs with high exposure to generative AI automation. In high-income countries like the U.S., women’s risk for high automation potential has climbed to 9.6 percent. For men, it’s 3.5 percent.
The reason is structural, not coincidental. About 70 percent of working women are in white-collar roles. For men, it’s roughly half.
Women are concentrated in exactly the work generative AI targets first: writing, communications, administrative support, marketing, operations, and customer engagement.
Men are more heavily represented in construction, manufacturing, and manual trades, where the work is physical and harder to automate.
So the first wave of AI disruption isn’t hitting the workforce evenly. It’s hitting the female-dominated, humanities-trained, text-heavy professions first. And it’s landing right on top of content creation, copywriting, social media management, email marketing, media monitoring, reporting, and campaign execution. Our daily work.
It’s being sold as a way to “do more with less,” which is corporate shorthand for fewer people, lower salaries, and the assumption that a prompt can replace a professional.
And it’s not just junior roles. It’s the mid-level strategists, the PR managers, the content directors, the people who built careers on being able to think clearly, write well, and translate business objectives into language that actually moves people.
Those are the roles getting “restructured.” Those are the budgets getting compressed. And those are the professionals being told, with a straight face, that AI can do 80 percent of what they do.
It can’t. But that’s almost beside the point. Because what matters isn’t whether AI can actually replace strategic work. What matters is whether the people controlling budgets believe it can. And right now, a lot of them do.
Here’s what makes this worse: McKinsey’s Women in the Workplace report found that only 21 percent of entry-level women said their managers encourage them to use AI tools, compared with 33 percent of men at the same level.
So women are more exposed to AI displacement AND less supported in learning to use AI strategically. That’s not a skills gap. That’s a setup.
This Is Not a Diversity Problem. It’s a Power Problem.
The conversation about women in AI often gets funneled into discussions of representation. More women in technical roles. More mentorship programs. More conference panels. More “women to watch” lists.
That’s fine. It’s also completely insufficient for what’s actually happening.
Karp wasn’t talking about who gets hired at AI companies. He was talking about who loses leverage as AI scales. That’s a fundamentally different conversation, and we need to have it with our eyes open.
Think about what we do when we’re operating at a high level. We ask hard questions about messaging and ethics. We push back on campaigns that could backfire. We insist on transparency when it’s inconvenient. We make the case for long-term reputation over short-term wins. We bring judgment to decisions that algorithms can’t make.
That work is not a cost center. It’s a constraint on bad decisions. And there are powerful people who would very much like fewer constraints.
When Karp talks about reducing the power of highly educated, often female professionals, he’s not describing a bug in the system. He’s describing a feature. Fewer people asking uncomfortable questions means faster deployment, fewer objections, and bigger contracts.
The Cruelest Irony
Here’s what makes Karp’s vision not just concerning but absurd.
The skills he says AI will devalue (critical thinking, ethical reasoning, empathy, persuasion, and the ability to ask “should we?” before “can we?”) are the exact skills every credible AI researcher says we desperately need more of as these systems scale.
I just finished reading Bruce Holsinger’s novel Culpability, and the final passage stopped me cold. The fictional AI ethicist Lorelei Shaw writes that AI systems “will only be as moral as we design them to be. Our morality, in turn, will be shaped by what we learn from them, and how we adapt accordingly.”
That’s fiction, but it’s not wrong. It’s the most concise statement of the actual problem I’ve read anywhere. AI doesn’t have a conscience. It doesn’t have judgment. It doesn’t know the difference between an efficient outcome and a just one. The only way AI becomes ethical is if the people building, deploying, and governing it insist on ethics. And that insistence comes from exactly the kind of training Karp wants to make irrelevant.
Women are disproportionately the ones doing this work. The Markkula Center for Applied Ethics at Santa Clara University has documented women’s strong presence in AI ethics roles worldwide. That’s not an accident.
It tracks with decades of research showing that women tend to emphasize care, context, and relational impact in moral decision-making, not because of some essentialist argument about gender, but because the disciplines where women have historically concentrated (humanities, social sciences, communications) are the ones that train you to think about consequences, stakeholders, and power.
So here’s the cruelest irony of all: the people most likely to ensure AI doesn’t cause catastrophic harm are the same people whose power Karp says AI will reduce. The profession that’s trained to ask “who does this affect and what are the consequences?” is being told it’s obsolete by a man whose company sells predictive analytics to militaries.
If that doesn’t make you angry, read it again.
The PESO Model® Is a Defense
Here’s where I want to shift from alarm to action, because understanding the threat is only useful if you do something about it.
Your strategic value has never been more important. But you have to be able to prove it. You have to be able to measure it. And you have to stop accepting the premise that your work is replaceable by a chatbot.
This is exactly why I’ve spent years building and teaching the PESO Model®. Not because integrated strategy is a nice idea, but because it creates measurable, demonstrable business outcomes that no one can hand-wave away.
When you can show that your earned media strategy drove qualified leads, that your owned content built search authority that reduced paid media costs, that your shared strategy created community engagement that converted, you are not a line item someone can cut because ChatGPT writes a passable blog post or news release.
You are a strategic function. And strategic functions don’t get automated. They get funded.
So if Karp is right that AI is coming for the economic power of people in your profession, what’s the smart response? It’s not panic. It’s not pretending it won’t happen. And it’s certainly not “learn to code.”
Here’s what that looks like in practice:
- Get fluent in AI, but don’t surrender to it. Learn what these tools can and can’t do. Use them where they genuinely help. But be the person in the room who knows the difference between AI-generated output and strategic communications work. Those are not the same thing, and the professionals who can articulate why will be the ones who survive the compression.
- Tie your work to business outcomes. Every single time. If you can’t measure it, you can’t defend it. And if you can’t defend it, someone will replace it with a tool that costs $20 a month and produces mediocre work at scale. The PESO Model exists precisely to connect your activity to results that matter to the C-suite. Use it.
- Lead the AI governance conversation in your organization. Don’t wait to be invited. We are uniquely qualified to ask the questions that need asking: what are the reputational risks of deploying this tool? What happens when it hallucinates in a customer-facing context? Who is accountable when AI-generated content causes harm? Where are the ethical lines we won’t cross? If you’re not leading that conversation, someone who doesn’t understand those risks will make the decisions for you.
- Stop undervaluing the humanities. The skills that make great communicators (critical thinking, ethical reasoning, persuasive writing, audience empathy, and cultural literacy) are not soft skills. They are the skills that AI cannot replicate and that organizations desperately need as they integrate these tools. The push to devalue humanities education is not accidental. It serves the interests of people who want fewer questions and faster deployment. Don’t help them.
- Build your authority now. Create content. Build your platform. Establish yourself as a thought leader in your space. The professionals with visible, demonstrable expertise are far harder to replace than those who do good work quietly. Visibility is not vanity. It’s job security and professional power.
- Be the moral infrastructure. This one is new, and I mean it. If AI will only be as moral as we design it to be, then someone has to insist on the morality. That someone is you. Not because it’s a nice thing to do, but because the alternative is ceding that ground to people who have already told you, out loud, that they don’t think your values matter. Your empathy is not a weakness. Your ethical instincts are not naive. They are exactly what’s needed, and the fact that powerful people find them inconvenient should tell you everything about how important they are.
Take Him at His Word
Alex Karp didn’t misspeak. He didn’t stumble into a controversial take on live television. He runs a company valued at over $200 billion that sells surveillance and analytics tools to the most powerful institutions on Earth. He knows exactly what his technology does and who it affects.
And he told you. Directly. On camera. That people like you are going to lose power.
You can be offended by that. You should be. But offense without strategy is just venting.
The smarter move is to hear what he said, understand that he means it, and then do the work that makes him wrong. Build the skills that can’t be automated. Measure the outcomes that prove your value. Lead the conversations about ethics and governance that powerful people would rather skip. And stop waiting for someone else to fight for the strategic relevance of communications work.
Holsinger’s fictional ethicist got the last line right: “But that will be up to us, not them.”
She was talking about AI. But she could just as easily have been talking about Karp, about Palantir, about every executive who’s decided that your power is negotiable.
It’s not. Unless you let it be.
© 2026 Spin Sucks. All rights reserved. The PESO Model is a registered trademark of Spin Sucks.