AI should work for you; right now, it is being done to you. Here are three ways to solve AI's trust problem, says Lewis Liu
Alex Karp, co-founder and CEO of Palantir, the American AI military surveillance platform, recently said on CNBC: “The one thing that I think that even now is underestimated by all actors in industry… is how disruptive these technologies are… If you are going to disrupt the economic and therefore political power significantly of one party’s base – highly educated, often female voters who vote mostly Democrat… This technology disrupts humanities-trained, largely Democratic, voters, and makes their economic power less.” The notion that AI will gleefully put millions of white-collar, disproportionately female, workers out of work is, disturbingly, something much of the AI industry celebrates.
This is starting to be noticed well beyond the AI industry. Just last week, I spoke in Washington DC at the annual Council for Institutional Investors conference, right after Senator Elizabeth Warren and SEC Chair Paul Atkins, in a room of investors managing $30 trillion in assets. The Chair of the Council asked me directly, in front of the full audience: “Lewis, the New York Times reported that this AI tech boom is not nearly as popular as previous tech booms and that trust in AI is low. Why?”
My answer was simple: the AI industry isn’t just giving off the wrong vibes, it’s actively destroying trust in itself. If I had to summarise the zeitgeist rapidly spreading through the predominantly male AI builder community, it goes something like this: the vast majority of the professional class does pointless work that AI can automate. Those people will form the permanent underclass. If you’re lucky enough to be an AI builder, you must sacrifice everything, including family, relationships, friends, and your integrity, to build the AI agents that automate those jobs, generate your generational wealth, and leave everyone else behind. And frankly, those people deserve their fate, because anyone not capable of joining the new AI builder class is simply inferior.
Not a day goes by without my social feed, populated almost exclusively by male AI founders and builders, mostly in Silicon Valley, serving up gems like: “She’s a 10, but you have AI agents to build so ignore her so you don’t join the permanent underclass.” Or: “Only generational wealth can prevent you from being a member of the permanent underclass.” Or: “How my AI agent is producing the permanent underclass of [insert profession].”
There’s also an undertone of misogyny I can only describe like this: imagine a teenage boy who never grew out of his Ayn Rand and Nietzsche phase, who never developed a sense of duty or kindness, who remains utterly convinced he must become the Übermensch, and now he can control the world through AI.
No wonder trust has collapsed – not just among the professional class, but across society broadly. Many of the billionaires driving this industry are actively working to destroy the very livelihoods of the people they claim to be serving. Dario Amodei, Anthropic’s co-founder, speaks at length about wealth inequality as one of AI’s defining social threats; yet just last month, Anthropic released a suite of agents designed to replace millions of jobs. The hypocrisy is breathtaking.
But I don’t entirely blame him. At the end of the day, Anthropic and OpenAI are commercial organisations. In our current winner-take-all system, they are simply doing what the system rewards: pursuing monopolistic power, consequences be damned.
Of course there is no trust.
There is, however, an alternative path. And it’s what I argued for in DC.
First, let’s be honest about something the industry keeps dancing around: AI will replace jobs. The labour market data already shows it. I wrote about this in these pages years before ChatGPT existed. The question was never whether; it’s how. Specifically, whether human dignity, expertise, and control survive the transition is a choice AI companies can actively build into their platforms. Right now, that choice is being made by default, by a small number of billionaires.
Three ways to restore trust
What we need to build is AI that reflects and amplifies individual human expertise, not the lowest-common-denominator output you get by averaging out training data across the entire internet. I’ve written before about knowledge collapse: the very real risk that as AI generates more and more of our content, the diversity and depth of human knowledge slowly disappears. We’re already watching it happen: over 70 per cent of new internet content is now AI-generated, with quality declining with each cycle. The industry even has a word for it: enshittification. By contrast, AI systems that are genuinely paired in real time with individual human expertise don’t just produce better outputs, they give people’s professional lives actual meaning. And frankly, that’s also what makes AI sustainable long-term.
Second, we need to build systems that give credit back to the humans who originated an idea or insight. We spend so much time debating job replacement and wealth inequality that we’ve barely begun to reckon with the original sin of AI: the wholesale scraping of the entire corpus of human knowledge with zero regard for IP or copyright. That wound is still fresh. People are genuinely terrified of AI “sucking up your brain and replacing you”. Given how the industry behaved in its land-grab phase, that fear is entirely rational. We need AI systems that actively trace the provenance of ideas, acknowledge where specific concepts came from, and find ways to give credit back, socially and economically, to the humans who generated them in the first place. Until that happens, this fear doesn’t go away. It compounds.
Finally, and perhaps most importantly, AI needs to start respecting privacy and boundaries. Humans should control what gets shared with an AI and what doesn’t. This matters more than ever as we slide deeper into a world of pervasive surveillance – state-run in China, corporate-run in the West – where the default is that everything about you is being watched, stored and monetised. Giving people genuine control over what they share with AI systems isn’t just a nice-to-have: it is the foundation of rebuilding trust. And it’s also, frankly, one of the last frontiers in the AI tech stack that nobody has properly solved yet.
Ultimately, yes, AI will replace jobs. But there is a path where AI performs better precisely because it draws on richer, more nuanced human knowledge, and where people retain genuine agency over their own working lives rather than having their destiny dictated by a group of tech bros in California.
AI should not be done to you. It needs to work for you.