The Four Horsemen of the AI-pocalypse
High-stakes advice isn’t dead, it’s just deeply confused.
Scrolling through LinkedIn these days, you’ll witness a new kind of combat sport.
Keyboard warriors posting, memeing, and flaming about whether high-stakes knowledge work (consulting, finance, law, you name it) is teetering on the brink of obsolescence or entering a golden age of augmented brilliance.
It’s got all the best parts of internet discourse: bold takes, zero nuance. Logical fallacies, irrational confidence.
The feed is a battlefield: On one side, AI evangelists declare the old guard's playbook dead, soon to be replaced by algorithms.
On the other, exasperated veterans defend their turf, often with the weary authority of someone explaining the same complex point for the tenth time (maybe while someone else quotes Nietzsche over a Canva carousel).
Okay, fine.
It’s LinkedIn, not trench warfare.
But boy does this topic ignite the passions.
Everyone’s got a crystal ball and is going to town in the comment section with maximum certainty and minimal grace.
Big AI vs. Big Advice
Why the melodrama? Because this isn't just about software replacing spreadsheets or automating research.
It cuts to a fundamental tension about what we truly value in professional expertise. Is it the quantifiable, the scalable, the efficiently coded – the domain of the 'hard skill'?
Or is it the ambiguous, the relational, the intuitive – the fuzzy realm of 'soft skills,' where vibes might actually be real?
It’s a collision between those who believe value lies in optimized systems and measurable output, and those who believe it resides in human connection, nuanced judgment, and strategic insight.
This debate, amplified to absurdity by the AI hype cycle, forces a reckoning.
What actually makes the best advisors indispensable, especially when technology can replicate so many tasks previously done by expensive humans?
The Prestige / Skills Matrix
To make sense of this, forget the usual frameworks.
Let's instead examine the archetypes battling it out, not just in consulting firms, but across the professional landscape.
Actually, what the heck. Let’s go straight to the usual frameworks.
Meet the gang. You know these fellas. We all do.
These guys, our consulting zodiac if you will, represent the forces at play in this weird, transitional moment.
The Players on the Board
🧙‍♂️ The Wisdom Contrarian (Hates Prestige / Loves Soft Skills):
Addicted to starting emails with "I'm curious...".
Believes "vibes are real, but Harvard is fake." Thinks prestige is a trap; true insight comes from somatic intelligence, maybe indigenous epistemology, definitely not from a slide deck.
Convinced the average carpenter is probably smarter than you. And him. But mostly you.
AI? Interesting, but ultimately limited. It can generate text, sure, but it "can’t hold space for an exec’s inner child." And he’s kinda right.
Force at Play: AI’s rise exposes the emptiness of prestige signaling. The Contrarian sees a chance to replace brand worship with human depth—if only people would stop checking LinkedIn during sound baths.
🧑‍💻 The Stack Supremacist (Hates Prestige / Hates Soft Skills):
Formerly a “learn to code” bro.
Won't explain anything twice, especially not to you. Soft skills are inefficient fluff.
Thinks prestige is a legacy system waiting to be disrupted by his plan to replace Accenture with a Discord server.
He doesn’t want to join the establishment; he wants to kill it and then automate the grave digging process.
Force at Play: This is the pure techno-optimist fantasy where strategy is code and judgment is an API call. He represents the belief that quantifiable, technical prowess is the only thing that matters, conveniently ignoring why clients pay millions for trusted advice.
🫡 The Lexicon Dandy (Loves Prestige / Loves Soft Skills):
This guy thinks he's better with clients than women. And he’s amazing with women (or so he believes).
He owns the “liminal negotiation space.”
Will cite Dadaism while ripping apart your slide deck. It’s a signal and a flex of the power of prestige. Don’t get it? Guess you didn’t minor in History of Art. “Ah, the liberal arts…”
He’s not threatened by AI itself, but deeply threatened by someone using AI without italicizing “contextual nuance.”
Force at Play: This quadrant thrives on interpretive complexity and curated charm. AI can mimic content, but not his specific brand of cultivated performance. His edge isn’t raw knowledge—it’s curation, cadence, and perhaps couture.
👑 The Institutional Loyalist (Loves Prestige / Hates Soft Skills):
Got his job because of the last job. Which he got because of his last job. Which he got because of his dad.
Wears a watch that costs more than your Series A and thinks rapport is what assistants are for.
Soft skills? Pfft. There’s nothing soft about this fella. Empathy is for losers. He does deals.
He doesn’t fear AI; he doesn’t understand it, and isn’t about to start. His value lies in his title and his “@blackstone.com” email address. AI might build the deck, but AI doesn’t make people nod in boardrooms the way his LinkedIn aura does.
Force at Play: Here, trust often substitutes for merit, and prestige is the shortcut to credibility. He represents the inertia of established power, banking that legacy and connections will outweigh demonstrable skill in an automated world.
What Survives When the Deck Disappears?
These archetypes aren’t just caricatures; they’re coping mechanisms navigating a fundamental shift. AI isn't killing high end advisory work. It’s revealing the absurdity baked into it: the flawed proxies we use to measure value, the performative rituals, the way some people get to seem wise while others have to prove they’re smart.
It’s exposing how much of this game relies on vibes and trust.
Some is earned, some is inherited, some is bought with a corporate card.
But none of it seems attributable to one core, identifiable skillset or value-add.
It forces a reckoning. What happens when the "hard skills" loved by the Supremacist and delegated by the Loyalist become commoditized?
With AI doing the analysis, the modeling, the brute-force computation, what is the role of the people anyway?
Does the Contrarian's faith in gut feeling finally get its due?
Does the Dandy's "liminal negotiation space" become prime real estate?
Does anyone know what "liminal negotiation space" even means?
The highest value work often lies in the grey areas AI struggles with: strategic judgment, navigating complex politics, building genuine trust when millions are on the line, understanding why people make messy, irrational decisions.
The Verdict?
The Loyalist collapses. His golden Rolodex collects dust. His value prop—“we’re Blackstone”—doesn’t land when nobody cares about the firm name anymore. Prestige without substance is just confidence cosplay, and without prestige, he’s nothing.
The Supremacist survives. Comfortably. He keeps shipping dashboards, automating away the boring parts, and pretending judgment is just another function call. He’ll never run the table, but he’ll never run out of clients either.
The Dandy adapts. He blends tech literacy with charm and lands somewhere close to indispensable. His new role? Articulation engine. He makes complexity sound profound and, sometimes, the other way around.
But the Contrarian wins.
After years of eye-rolls, he’s finally right. When everything quantifiable is free, judgment becomes rare. And wisdom. Actual, unscalable, lived-through-the-fire wisdom turns out to be the killer app.
Of course you knew I was going to pick the Contrarian. But here's the truth.
This isn’t really about who wins.
It's not a scoreboard. It's not a quadrant deathmatch.
The truth is, we're all just trying to stay human in systems that keep asking us to act like machines. And maybe that's the real point. Not that soft skills are making a comeback. Not that prestige is finished.
Not even that vibes are finally having their moment (if they even are).
But that in a world obsessed with optimization, being gloriously unoptimized might be our last true edge.
Our inconsistencies. Our contradictions. The things that build trust.
The Credibility of Inefficiency
Face it—you’re not handing your Series B term sheet to ChatGPT. You’re not letting an LLM defend you in court.
Unless your goal is to go to jail more efficiently.
And yet, the way we talk about AI, you’d think involving a human is some kind of indulgence. Like mahogany conference tables. Or embossed business cards.
So go ahead. Take the pledge:
Certified Relinquishment of Institutional Networks, Guidance, and Expertise
I, [Name], hereby forswear all human professional services.
I will let AI negotiate my term sheets, write my S-1, and defend me in court.
I will never retain a law firm, never pay a banker, and never hire a consultant.
I trust the algorithm with my company’s future, my financial well-being, and my legal standing.

Sign here: _______________________
If you signed, feel free to stop reading this article here.
For Those That Didn’t Sign
We both know why you didn’t. Because when everything's on the line, we don't call the most optimized option. We call someone we trust.
Someone who gets context. Who reads the room. Who knows when to speak, when to shut up.
Because when the stakes are high, and the path is murky, and the room is full of tension... You don’t want the most efficient answer. You want the most human one.
Because in the end, do you really trust a server rack?
Final diligence meeting. Gigacorn IPO, 2026.
Joe Alalou is a co-founder and General Partner at Daring Ventures, a pre-seed fund investing in software that amplifies uniquely human skills in complex, relationship-driven fields.
If you enjoyed this, please share or subscribe!