Years ago, I was working on a statistical model. I was early in my career, staring at data that refused to cooperate, and I knew exactly who could help—a senior analyst who’d built something similar for a different project.
I approached him, explained my challenge, and asked if he could share his approach.
He was polite and professional. He said something about how “every dataset is different” and “it really depends on your specific context.” Then he wished me luck.
He had the answer. So why wouldn’t he just share it?
Later, I realized his real edge wasn’t the model itself—it was knowing when to use it, why to use it, and who to talk to first when the results looked weird.
Back then, we blamed it on scarcity mindset. Knowledge workers guarded their expertise because that’s what made them valuable. Open source wasn’t the norm yet. Sharing felt like giving away your competitive advantage.
Today, we have AI that promises a work utopia—transcribe calls, summarize key points, search company databases for insights, generate recommendations from past projects.
And we’re learning the problem was never really about hoarding.
The Promise We Bought Into
AI is genuinely impressive.
LLMs transcribe your Zoom calls in real-time. They extract action items and summarize decisions. They scan through years of company documents to find “relevant best practices.” They even suggest approaches based on what worked for similar projects last quarter.
On paper, information flows freely now. Everyone’s input is captured, searchable, and available to the entire organization.
But here’s what’s actually happening:
You find a summary that says “Team decided to go with Option B.” What you don’t see: the 20-minute hallway conversation about client politics that made Option B the only choice anyone could actually implement.
You discover a “successful playbook” in your knowledge base. What’s missing: the three failed attempts that taught the team what not to do.
You review meeting notes with clear action items. What’s invisible: the skeptical look two people exchanged when someone volunteered for a task everyone knew was doomed.
AI gets the what. It misses the why. And it completely loses the "how did we really know."
The Three Layers AI Can’t See (Yet)
1. The Unspoken Context
Real decisions happen before and after official meetings—rarely during them. AI only records the performance, not the actual negotiation.
The pre-meeting alignment calls. The watercooler conversations that establish what’s actually possible. The cultural rules about who can challenge whom and how directly. The relationship dynamics that dictate who volunteers for what. The shared understanding of what leadership actually wants versus what they’re saying they want.
None of this gets spoken aloud in recorded meetings. Everyone in the room already knows it. New people? They learn by making mistakes that veterans avoid instinctively.
Your AI captures decisions. It misses the invisible web of context that made those decisions.
2. The Scar Tissue
Experience isn’t what you succeeded at—it’s what hurt you badly enough that you’ll never do it again.
The data source everyone quietly avoids because it burned someone two years ago. The vendor that looks perfect on paper but has a fatal flaw no one documented. The process that officially exists but everyone routes around. The approach that theoretically works but fails for reasons no one can quite articulate—they just know it does.
This knowledge lives as gut instinct. Pattern recognition that screams “don’t do that” faster than conscious thought. It’s not documented because it’s not even fully conscious. It’s scar tissue.
AI searches your repositories and finds best practices. It never finds the expensive lessons that created the instinct to avoid certain paths entirely.
3. The Self-Preservation Filter
People curate what gets captured. Always have, always will.
The polished version goes in the meeting recording. The real version stays in DMs, side conversations, and phone calls that never get transcribed. Struggles get reframed as “learning experiences” in retrospectives. Three days of debugging becomes “we followed the standard approach.” Dead ends get omitted entirely from project summaries.
Not because people are dishonest—because admitting difficulty feels like admitting incompetence. Especially in documented, searchable, permanent records.
The insights that would actually help someone else—the debugging hell, the false starts, the “this looked promising until”—stay private. Because vulnerability and self-preservation can’t coexist in systems designed for performance evaluation.
AI captures what people choose to say. It misses everything they’re incentivized not to.
Why This Matters
You used to overhear solutions while grabbing coffee. You’d catch the exhausted look on someone’s face and ask what went wrong. You’d see who actually talked to whom and understand the real org chart versus the official one.
Now? You see the documentation.
And here’s the paradox: AI has made it easier to look like you’re sharing knowledge while actually sharing less. We generate meeting summaries, update project wikis, and maintain pristine documentation. We’ve checked all the boxes.
But new hires onboard from documentation that’s technically complete and practically useless. They read the playbook, follow the steps, and hit the same walls everyone before them hit—because no one wrote down the walls.
We’ve automated the theater of knowledge sharing.
What Could Actually Work
The jury’s still out on whether the solution is better documentation or smarter AI. But I think it’s also about designing for human reality instead of pretending humans will change. We need to stop asking people to reconstruct their thinking weeks later. By then, hindsight has already rewritten the story.
Every experienced person in your organization has a library of expensive lessons. But we’ve made it shameful to admit you learned anything through failure.
Specific questions get specific answers. And specific answers contain the context AI summaries strip away.
Most importantly: we need to make this voluntary and rewarded. People share when they see their insights actually help someone. When they get credit for the assist, not just the goal. When the system values the “why” as much as the “what.”
That senior analyst who wouldn’t share his model? He wasn’t wrong to protect his edge. In that environment, at that time, his expertise was his value.
But imagine a different system. One that rewarded him for sharing the context—the failures, the judgment calls, the unwritten rules. Where helping me succeed made him more valuable, not less. Where the organization recognized that his real expertise wasn’t the model code, but the wisdom to know when it would fail.
That’s what AI could enable if we let it capture the conversations that actually matter.
Maybe we just need to be more human, rather than expecting AI to be better.
What we’re reading at Wyzr
Thinking, Fast and Slow by Daniel Kahneman. It is a brilliant deep dive into the human mind and its decision-making processes.
Hope you enjoyed this edition of Plain Sight. If you did, share it with a friend. You can also write to us at plainsight@wyzr.in.
We love hearing from our readers and we’ll feature some of the most interesting responses in future editions.
Until next time,
Best,
Amlan

