
Own the Path to Specificity: You Are Not Generic

Your craft, taste, and domain expertise are what make you irreplaceable. The moment you make that specificity available to AI, you stop competing with it and start compounding through it.

By Michael Van Havill


The dominant narrative about AI and work is wrong. Not directionally wrong - structurally wrong. The story says AI is coming for knowledge workers. That generalists are dead. That the person with the better prompt wins.

That is insanity. The story misidentifies the asset entirely.

You are not a generic knowledge worker. You are a specific person with specific taste, specific craft, and specific domain expertise. The years you spent developing judgment in your field didn't produce "knowledge work." They produced you - a particular point of view that no model was trained on because it doesn't exist anywhere except inside your head. That is your seat position at the poker table.

The moment you make that specificity available to AI, you stop competing with it and start compounding through it. You're not outsourcing your differentiators. You're empowering your agents to build with you - as an extension of your skills, not a replacement. The value is the triad: you, your knowledge store, and AI working as one system. Full stop. That's the entire thesis.

Imagine if every piece of work you produced this year carried the unmistakable fingerprint of your specific judgment. Not because you wrote it from scratch, but because the AI you're working with was loaded with the patterns, the rules, the scars, and the convictions you've built across your entire career. Your output wouldn't compete with AI's output. It would be a category nobody else can enter. That isn't a future state. That's a single weekend of articulation away from being yours.

┌─────────────────────────────────────────────────────────────────────────────────┐
│                                                                                 │
│   THE SPECIFICITY SPECTRUM                                                     │
│                                                                                 │
│  Where most people stop:               Where the 1% lives:                      │
│                                                                                 │
│  Level 0  ░░░░░░░░░░░░░░░░░░░░        Level 4  ████████████████████             │
│  Generic prompting                     Compounding specificity                  │
│  Same input → same output              Every interaction sharpens the AI        │
│                                                                                 │
│  Level 1  ░░░░░░░░░░░░░░░░░░░░                                                  │
│  Better prompting                      ┌──────────────────────────┐             │
│  Marginal. Anyone replicates           │  YOUR SPECIFICITY        │             │
│  it in a weekend.                      │                          │             │
│                                        │  Pattern recognition     │             │
│           ↑                            │  + Quality thresholds    │             │
│      99% of people                     │  + Decision frameworks   │             │
│      fighting over                     │  + Domain scars          │             │
│      prompt tips                       │  + Taste & craft rules   │             │
│                                        │                          │             │
│                                        │  = Output that's         │             │
│                                        │    unmistakably yours    │             │
│                                        └──────────────────────────┘             │
│                                                                                 │
│  ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─    │
│                                                                                 │
│  CAREER IMPACT                         CAREER IMPACT                            │
│  Competing with AI. Replaceable        Compounding through AI.                  │
│  by anyone with the same               After 1 year: encoded judgment           │
│  $20/month subscription.               no layoff can touch.                     │
│  ░░░░░░░░░░░░░░░░░░░░ generic          ████████████████████ irreplaceable       │
│                                                                                 │
│  BUSINESS IMPACT                       BUSINESS IMPACT                          │
│  Same AI, same output as               12 months of encoded org                 │
│  every competitor. No moat.            judgment. Competitors can                │
│  Race to the bottom.                   buy the same models - not                │
│  ░░░░░░░░░░░░░░░░░░░░ commodity        your accumulated specificity.            │
│                                        ████████████████████ moat                │
│                                                                                 │
└─────────────────────────────────────────────────────────────────────────────────┘

The 100x Individual

The Specificity That AI Can't Generate

Forget your job title. Think about what you actually deploy every day.

A product manager doesn't "manage products." She knows that user onboarding friction in B2B SaaS follows a specific decay curve: users bail at the third step when the value isn't obvious within 8 seconds. She knows this because she's watched 400 session recordings, run 12 experiments, and felt the pattern before the data confirmed it. That's not knowledge work. That's judgment - and it compounds every quarter.

A product designer doesn't "design interfaces." She knows that when a SaaS onboarding flow asks for too much information on step two, 40% of users bail - not because of the UI, but because the information architecture violated progressive disclosure at exactly the wrong moment. She knows the gap between what users say they want and how they actually navigate. She knows that a particular weight mismatch between heading and body text tanks trust on the page before anyone reads a word. She can't always articulate it. She just sees the whole system - users, flows, constraints, pixels. That's product design taste. A decade of reps to develop. Zero shortcut.

A founder doesn't "run a company." She knows that when an enterprise prospect asks about SOC 2 compliance in the first meeting, that deal is 3x more likely to close - because it signals the buyer has already cleared internal objections. She knows this from 200 sales conversations, not a playbook. That's pattern recognition you cannot buy.

An engineering lead doesn't "write code." He knows a particular service's response time degrades non-linearly after 50 concurrent connections because of a connection pooling decision made 3 years ago. He knows this because he was on-call when it blew up at 2am. That's institutional memory - and it's worth more than any architecture doc.

A clinical leader doesn't "manage care." She knows that patients in a particular demographic present with anxiety symptoms that mask an underlying condition her standard protocol misses - because she's seen 2,000 patients and spotted the pattern herself. That's clinical intuition no model was trained on.

An operations leader doesn't "run processes." He knows which vendor SLA violation patterns precede service degradation by 2 weeks because he's managed the relationship through 3 contract cycles. That's operational intelligence. Irreplaceable.

A sales leader doesn't "close deals." She knows the exact cadence in which a deal goes cold - the three-day silence that used to be a five-day silence, the CFO loop that suddenly involves a new name on the thread, the procurement question that signals the champion is losing internal air cover. She's seen it happen 300 times. She can smell a slipping quarter in week 4 of 13. That's deal intelligence. It lives nowhere except in her head and her rep's Slack DMs with her. If she leaves tomorrow, 80% of that intelligence walks out with her. That is the asset at risk.

A marketing lead doesn't "run campaigns." He knows that a specific headline structure lifts his category's conversion by 18% but tanks engagement on mobile, that a certain email send window gets open rates of 42% on Tuesdays and 19% on Thursdays in his vertical, and that the same messaging that worked in 2023 stopped working in Q3 of last year because the market got numb to it. That isn't "marketing." That is years of compounding pattern recognition in a fast-moving field. And it's all in his notebook, his memory, and three Google Docs nobody else has read.

Here's the thing. AI has access to more raw information than you ever will. But it has zero access to your judgment - the part that actually creates alpha. That gap is the whole ballgame.

Think about it like this. Every great investor eventually describes the same asymmetry: the information is public, the data is available, the tools are commoditized, but their edge is how they weight signals the rest of the market ignores. Warren Buffett reads the same 10-Ks as everyone else. Stan Druckenmiller trades on the same macro data. The edge isn't the information. The edge is the judgment that interprets the information. That is exactly your position right now. You have access to the same tools as every other professional in your field. The only variable is the judgment you bring to them. And for the first time in history, that judgment can be deployed 24/7 across every piece of work you touch.

Generic In, Generic Out

When you use AI without your specificity, you're playing the same hand as everyone else at the table. Same cards, same odds. You get the median internet response. Structurally competent. Generically correct. Missing every opinion that makes your thinking valuable.

The PM asks for a product strategy and gets something that reads like a business school case study. The product designer asks for a design direction and gets a generic solution that ignores the persona research, the IA constraints, and the specific user problem being solved. The founder asks for a hiring framework and gets a list of competencies aggregated from every HR blog on the internet. The engineer asks for architecture advice and gets a textbook answer that ignores the 3 real constraints in the codebase. The clinical leader asks for a care protocol and gets a guideline so generic it could apply to any patient population.

Then everyone spends 45 minutes rewriting it. Fixing the framing. Adding the nuance. Removing the corporate hedging. Injecting the opinions that make the work actually useful.

Do the math on that tax. 45 minutes per rewrite. Four AI interactions a day. Twenty working days a month. Eleven months of real work a year. That's 660 hours a year you're spending transferring specificity from your head to the page one keystroke at a time. 660 hours. At a $150/hour loaded rate, that's $99,000 of your time per year - vaporized on rewrites you wouldn't have needed if you'd spent one Saturday encoding your rules. You are currently paying yourself $100K a year to keep your expertise undocumented. That is an investor's definition of a terrible trade: permanent outflows for zero principal growth.
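The arithmetic above can be sanity-checked in a few lines. Every input is the article's assumed figure, not a measurement:

```python
# Rewrite-tax math: time spent reworking generic AI output into your voice.
# All inputs are the article's assumed figures.
minutes_per_rewrite = 45
interactions_per_day = 4
working_days_per_month = 20
working_months_per_year = 11
loaded_rate_per_hour = 150  # USD

hours_per_year = (
    minutes_per_rewrite
    * interactions_per_day
    * working_days_per_month
    * working_months_per_year
) / 60
annual_cost = hours_per_year * loaded_rate_per_hour

print(f"{hours_per_year:.0f} hours/year")  # 660 hours/year
print(f"${annual_cost:,.0f}/year")         # $99,000/year
```

Swap in your own rewrite time and rate - the point is that the tax scales linearly with every interaction you leave generic.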

That 45 minutes is the tax you pay for keeping your specificity locked in your head. Every single interaction. Every single day. And what are you losing beyond the time? You're losing the hours you could spend on deep thinking, deep craft, and the hardest, most interesting problems in your domain - the work that actually makes a difference. Every 45-minute rewrite is time you didn't spend on strategy, on creative breakthroughs, on the problems worth solving. It's not just a negative-ROI loop - it's a trap that keeps you grinding on what doesn't differentiate instead of investing in what does. Most people run it hundreds of times without questioning it.

What's stopping you from breaking the loop this weekend? Not skill - if you can write a Slack message, you can write a rule. Not time - one Saturday afternoon will save you 200 hours over the next year. The blocker is the habit of treating documentation as something other people do, and the false belief that your judgment is "too intuitive to write down." That's exactly what makes it valuable. The intuition is the asset. The articulation is the deployment mechanism. Until you bridge them, the asset stays buried.

What Changes When You Make It Available

The transfer is simpler than people think and harder than they expect.

Simple because the format is straightforward: document your judgment. Not abstract principles - concrete rules.

How to start encoding your specificity today:

  1. Open Claude.ai and create a new Project. Name it after your domain - "Product Strategy," "UX Design," "Engineering Architecture," whatever your craft is.

  2. Add a Project Knowledge file with your rules. Start with just 5 - the 5 things you'd tell a sharp new hire on day one.

  3. Test it by asking Claude to produce a real deliverable. See what it gets wrong.

  4. Add the rules that would have caught those mistakes. Repeat weekly.

After a month, your AI carries judgment that would take someone else a year to develop. You've moved from Level 0 to Level 3 on the specificity spectrum.

Make the rules concrete, not abstract. Not "I value clean design" but "Progressive disclosure on complex flows - never more than 3 decisions per screen. Content hierarchy follows the user's mental model, not the org chart. Body text never exceeds 18px on desktop. Primary CTAs use high-contrast fills, never ghost buttons." Not "I hire for culture fit" but "I evaluate culture by how someone describes their last failure, not their last success."
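If you'd rather see the idea as code: one minimal sketch of "rules as context" is to keep your judgment as a plain list of concrete rules and prepend it to every model call as a system prompt. The rule texts and helper name below are illustrative, not a prescribed format:

```python
# Encode judgment as concrete, checkable rules (illustrative examples),
# then assemble them into a system prompt for any model call.
DESIGN_RULES = [
    "Progressive disclosure on complex flows - never more than 3 decisions per screen.",
    "Content hierarchy follows the user's mental model, not the org chart.",
    "Body text never exceeds 18px on desktop.",
    "Primary CTAs use high-contrast fills, never ghost buttons.",
    "Evaluate culture by how someone describes their last failure, not their last success.",
]

def build_system_prompt(domain: str, rules: list[str]) -> str:
    """Turn a rule list into a Project-Knowledge-style system prompt."""
    numbered = "\n".join(f"{i}. {rule}" for i, rule in enumerate(rules, start=1))
    return (
        f"You are assisting a {domain} expert. Apply these non-negotiable "
        "rules to every deliverable, and flag any request that conflicts "
        "with them:\n\n" + numbered
    )

prompt = build_system_prompt("product design", DESIGN_RULES)
# Use `prompt` as the system prompt of whatever model or Project you work in;
# with the Anthropic SDK that's the `system` parameter of messages.create().
```

The format matters less than the habit: every rule you can assert in a sentence is a rule your AI can apply on every draft.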

Hard because it requires you to articulate things you've never had to articulate. The product designer's instinct that makes her reject a solution in 10 seconds - is it the information architecture, the user flow, the visual hierarchy, or all three? The founder's gut feeling that a candidate won't work out - what signals is she actually reading? The PM's sense that a pricing model won't survive - what pattern is she recognizing? The engineer's feeling that an architecture decision will cause pain in 6 months - what's he actually seeing? The clinical leader's intuition about a patient's trajectory - what data is she weighting differently?

That articulation is the work. It's the highest-leverage work you can do right now. The moment you do it, everything changes.

A founder encoded his strategic judgment: how he evaluates market opportunities, sizes bets, decides when to pivot vs. persevere. His AI doesn't make decisions for him. It drafts memos that reflect his actual thinking - not consultant-speak, not generic frameworks, but the specific way he reasons about his specific business. Night and day.

A product designer documented 40 principles about user personas, information architecture, UX flow patterns, interaction design, and visual taste refined over 12 years. She connected them to Claude. Monday morning - three homepage concepts. Every one nailed the user flow, respected the IA, and matched her aesthetic. No "that's not how the user thinks about this" loops. The AI carried her full product design craft instead of flattening it to pretty pixels. That is the unlock.

An engineering lead documented architecture decisions and the reasoning behind them. AI suggestions now follow the team's actual philosophy - not generic patterns. Code review shifted from style nits to real ideas.

A clinical leader encoded her care philosophy, documentation standards, and risk assessment framework. AI-assisted notes now reflect her specific approach. Compliance flagging dropped 80% because quality was built in, not checked after.

A marketing lead encoded his voice rules, audience mental models, and the specific ways his category buyers respond to different messaging frames. His AI now drafts campaign concepts that already sound like his brand, already avoid his category's clichés, and already respect the nuances he used to enforce by hand in round three of every review cycle. His team ships twice the creative with the same headcount. His CAC drops 22% because the work is sharper from draft one.

A sales leader encoded her deal qualification criteria, the specific objection patterns she's seen across 500 enterprise sales cycles, the ICP signals that correlate with 6-figure contracts, and the red flags that predict churn before the contract is even signed. Her reps' AI-assisted call prep now reflects the actual playbook of a top-decile seller in her category. Ramp time for new reps collapsed from 9 months to 4. That isn't an incremental win. That is a business model shift.

The Compound Loop

Here's where specificity gets exponential - and you need to think about this the way you'd think about compound interest.

Every time you use AI with your documented judgment, two things happen. First, you get better output - work that sounds like you, reflects your standards, requires less correction. Second, you discover gaps in your documentation. The AI misses something you'd have caught. You articulate that rule. You add it. Next time, it catches it. That is a compounding loop. The returns accelerate.

A customer call reveals a new pattern. The PM documents it. Her AI surfaces it in future analysis. A design decision teaches the product designer something about how users actually navigate a flow. She refines her taste doc. Her AI applies that refinement everywhere. An incident teaches the engineer about a failure mode. He documents it. The AI flags similar patterns before they become incidents. A patient encounter teaches the clinical leader about an edge case in her protocol. She updates it. The AI catches it next time.

This is exactly how the great investment portfolios compound. Not through brilliance on any single trade, but through the discipline of documenting what worked, what didn't, and why. Ray Dalio's famous "principles" document started as a handful of rules and grew into the operating system of a $150B firm because every mistake got added to it. Every judgment call got codified. Every pattern got named so the next person could apply it. Your encoded specificity is your principles document. And unlike Dalio, you don't need 40 years to build it - you have an AI that can help you articulate, test, and refine the rules in real time, on real work, with real feedback loops. The compounding curve that used to take a career now takes a quarter.

The punchline is this: after 3 months, your AI has context that would take a new hire 6 months to absorb. After a year, you've built a body of encoded judgment that no competitor, no layoff, and no market shift can touch. It travels with you. It compounds with you. It's not a tool you use - it's an extension of how you think. And you're still the one thinking. The knowledge store carries your accumulated judgment. AI extends your capacity. But you direct, refine, and decide. That's not outsourcing. That's leverage.

The people still prompting from scratch every morning are starting from zero every morning. Like a poker player who forgets all the hands from last night's session. You're starting from a year of accumulated, refined, sharpened specificity. That gap doesn't close. It widens. Net-net - that is the moat.

Think about it like this. A novelist who's published 20 books has a recognizable voice. A chef who's run the same kitchen for 30 years has a recognizable plate. A designer with 15 years of work has a recognizable aesthetic. Until now, that recognizable voice was the outcome of working in the field. Now it's a deployable asset that operates at scale. Your voice. Your taste. Your judgment. All of it deployed, simultaneously, across every project, every brief, every decision. The thing that took you a decade to develop is the thing that becomes your moat the moment you encode it. Stop sitting on it.

The 100x Team & Business

The Organizational Specificity Problem

Every company has specificity. Your product serves a particular market. Your customers have particular pain points. Your team has particular strengths. Your brand speaks in a particular voice. Your strategy makes particular bets. Your clinical practice has particular protocols.

Almost none of it reaches anyone's AI. That is insanity.

So your team of 30 people deploys the same AI tools as every competitor and gets the same generic output. The product designer's AI doesn't know about the PM's customer research or persona evolution. The PM's AI doesn't know about engineering's technical constraints. The sales team's AI doesn't know about the product roadmap. The clinical team's AI doesn't know about the ops team's workflow constraints. Everyone is building on sand while organizational intelligence sits scattered across 40 Notion pages, 12 Slack channels, and somebody's head. No leverage. No compounding. Just commodity output at scale.

The fix isn't better prompts. It's making your organization's specificity as available to AI as the individual's. That is the architecture that matters.

Building the Organizational Taste Layer

How to build your organizational specificity layer:

  1. Start with a shared Notion workspace or Confluence space.

  2. Create three sections: Company (brand voice, strategic principles, quality bar), Teams (each function's decision frameworks), and Individuals (personal alpha from senior contributors).

  3. Connect it to Claude via MCP - the Notion MCP server takes 10 minutes to set up. For engineering, add a CLAUDE.md to your repo. For design, connect your Figma library via the Figma MCP server.

Every AI interaction across the organization now starts from a shared foundation of accumulated judgment - not from the median internet.

When a team encodes its collective specificity - strategic frameworks, product quality bars, design standards, brand voice, hiring philosophy, pricing principles, clinical protocols, operational workflows - every AI interaction across the organization starts from a shared foundation. Think of it as deploying institutional judgment at scale.

One team created three layers:

Company layer: Brand voice, strategic direction, quality standards, values that apply to everything. The founder's strategic taste. The company's care philosophy. This is the base.

Team layer: How the PM team evaluates opportunities. How engineering defines "done." How the product design team makes decisions about users, information architecture, and aesthetics. How the clinical team approaches documentation. How operations evaluates process quality. How marketing speaks to the market. Functional specificity.

Individual layer: Personal judgment that layers on top. The PM who's particularly strong on pricing strategy. The product designer with unusually refined opinions about information architecture and interaction patterns. The engineer who's the expert on performance patterns. The clinical lead with the strongest patient communication instincts. The ops leader with the best vendor management judgment.

The AI applies all three. A new PM's first AI-assisted brief matches the team's quality bar because the context carries the organization's collective judgment. A new clinician's documentation reflects the care philosophy from day one. A new engineer's code follows the team's patterns in week one. That is compounding at the organizational level - and it derisks every new hire.
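As a sketch, the three layers compose the way overrides do in a config system: company rules form the base, team rules add functional specificity, and individual rules sit on top. All names and rule texts here are invented for illustration:

```python
# Compose organizational context from three layers (illustrative data).
COMPANY = ["Brand voice: plain, direct, no jargon."]
TEAMS = {
    "product": ["Every brief states the user problem before the solution."],
    "engineering": ["'Done' means tested, documented, and observable."],
}
INDIVIDUALS = {
    "alice": ["Pricing pages lead with the annual plan."],
}

def assemble_context(team: str, person: str) -> str:
    """Base layer first, then team, then personal judgment on top."""
    layers = COMPANY + TEAMS.get(team, []) + INDIVIDUALS.get(person, [])
    return "\n".join(f"- {rule}" for rule in layers)

ctx = assemble_context("product", "alice")
# ctx now carries all three layers, so a new PM's first AI-assisted brief
# starts from the organization's accumulated judgment, not from zero.
```

The ordering is the design choice: later layers refine earlier ones, so individual judgment sharpens the shared foundation instead of replacing it.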

Specificity as Competitive Moat

Here's why this matters at the business level: competitors can buy the same AI models. They can hire from the same talent pool. They can copy your features.

They cannot copy 12 months of encoded organizational judgment. Full stop.

Every customer conversation your sales team captures and synthesizes. Every design decision and the reasoning behind it. Every strategic pivot and the evidence that drove it. Every process refinement and the data that informed it. Every clinical protocol adjustment and the outcome data that validated it. This accumulates into an intelligence layer that no competitor can replicate by signing an enterprise contract. It's like trying to buy someone else's poker experience.

After a year, when a new hire asks AI to draft a customer proposal, the output reflects hundreds of past conversations, the specific objections this customer segment raises, the pricing philosophy the founder developed through 18 months of experimentation, and the brand voice the marketing lead refined across 200 pieces of content. When a new clinician preps for a patient encounter, the AI reflects 2 years of care outcome data specific to their patient population.

A new hire at your competitor asks the same AI for a proposal and gets the median internet response.

Imagine if every person your competitor hires next year shows up to work with the median internet in their head, while every person you hire shows up with 18 months of your collective judgment pre-loaded on day one. The onboarding curve for a new hire at your company is 4 weeks. The onboarding curve at your competitor is 6 months. Over a 5-year horizon, that compounds into a talent leverage gap that no recruiter can close and no salary bump can overcome. You're not just outcompeting them on product. You're outcompeting them on the rate at which your people become effective. That is a structural advantage that doesn't show up on a pitch deck - and it wins the category.

That gap is your moat. And it widens every single day. The moat isn't the knowledge store alone. It's not the AI alone. It's the triad - your people, your accumulated knowledge, and AI working together. Your agents build with you, extending your skills across every interaction. You're not outsourcing what makes you special. You're compounding it. That's how you win. That's how you build something no one can financialize away from you.

Picture this. Three years from now, the companies that compounded their organizational specificity have AI that produces work nobody else's AI can match. Their proposals close at 2-3x the rate. Their products feel like they were built by someone who understands the customer. Their internal velocity is the kind that competitors only see when they hire away a senior leader and try to reverse-engineer it. Meanwhile, the companies that didn't are stuck running the same tools as everyone else, producing the same generic output, and wondering why the gap keeps widening. You don't need a vision to imagine this. You just need to look at the companies who started encoding 18 months ago and see how it's already played out.


The Specificity Spectrum

This isn't binary. It's a spectrum - and it maps almost perfectly to levels of investment sophistication.

Level 0: Generic prompting. You use AI the way everyone else does. Same input, same output. No edge. This is the person at the poker table playing every hand the same way. Predictable, exploitable, replaceable.

Level 1: Better prompting. You've learned to write clearer instructions. Your output is slightly above average. Marginal advantage that anyone replicates in a weekend. Like reading one poker book - you'll fold fewer bad hands, but you're not winning any tournaments.

Level 2: Domain context. You've documented some of your knowledge - PM research, engineering patterns, clinical protocols - and connected it to AI. Your output reflects your field. Meaningful advantage, but still general. You're card-counting but haven't developed real table sense.

Level 3: Taste and craft. You've encoded your specific judgment - how you evaluate, decide, and create. Your AI produces work that's recognizably yours. The product designer's output reflects her thinking about users, flows, and visual craft. The founder's memos reflect his actual thinking. The clinical leader's notes reflect her care philosophy. Significant advantage. You're reading the other players now.

Level 4: Compounding specificity. Your knowledge base grows with every interaction. New evidence refines old principles. Your AI improves daily. Your advantage compounds. And here's what separates Level 4 from everything below: you have time for the work that actually matters. Time to decision, faster. Time to idea, faster. Time for deep thinking, deep craft, and the hardest, most interesting problems in your domain - the ones that have been permanently deprioritized because you were grinding on assembly. This is where the 1% lives. You're not just playing hands - you're building a model of the game that gets sharper every session, and you have the time to sit with the strategic decisions that really count.

What's stopping you from climbing the levels this quarter? Not access - the tools sit on your laptop right now. Not complexity - the first rule you write is one sentence long. Not time - the compounding begins the moment you start. The blocker is always the same: the belief that you can keep winning with the playbook that got you here. That belief is about to be expensive. Every week you wait is another week the professionals in your field who are already encoding their judgment pull ahead - not because they're smarter, but because their judgment is deployed and yours is locked up in a mental filing cabinet that only you can open. The tax compounds in both directions. Every week of delay is a week of their lead growing. Every week of action is a week of your moat deepening. There is no neutral week. There is no "I'll start when I have time." The clock is running whether you're watching it or not.

Most people are at Level 0 or 1. They're fighting over prompt engineering tips while the real leverage sits untouched - their own accumulated specificity. The spread between Level 1 and Level 4 after 12 months is enormous. And the people who start compounding first win.


Where This Connects

Specificity is the foundation everything else builds on. Your knowledge architecture is only as valuable as the specificity it carries. An elegant system that routes generic information is an elegant waste of time.

Your workflow orchestration gets smarter when it knows your quality thresholds, not just your task sequence. Your AI-native team adopts faster when the output already sounds right - because the organizational specificity is doing the heavy lifting. Your performance standards become achievable when AI carries the judgment that used to live in one person's calendar.

The question isn't whether AI will transform your work. It's whether it will transform it into something generic - or into something unmistakably yours. That's the bet. Make it. Let's go.


Examples: How Others Have Made This Real

These aren't hypotheticals. Real builders and companies are encoding specificity into AI right now - and the gap between them and generic operators widens daily.

  • Anthropic's Claude Projects and System Prompts - thousands of professionals create persistent AI configurations loaded with their specific judgment. A product strategist with 15 positioning rules. A clinical leader with assessment frameworks. A founder with fundraising patterns from 200 conversations. Each one moved from Level 0 to Level 3 with a single documentation session.

  • Superhuman built their entire product around a hyper-specific thesis: "email should feel fast." That specificity - not a generic "better email" pitch - informed every AI feature they shipped. Their AI drafts match Superhuman's opinionated voice because the specificity was encoded before the first model call. Generic email AI sounds like everyone. Superhuman's sounds like Superhuman.

  • Cursor's .cursorrules ecosystem - a grassroots movement of engineers encoding their specific coding philosophies, architecture opinions, and "never do this" rules into configuration files. These files represent years of accumulated engineering judgment - pattern recognition, quality thresholds, decision frameworks - made available to AI. The specificity spectrum in action, spreading organically because the ROI is undeniable.

  • Jasper AI found that customers who uploaded their brand voice documents, style guides, and content principles saw 3x higher output quality scores than those using generic prompts. Same model. Same pricing. The only variable: specificity of the context layer. That's the whole thesis in one data point.

  • Midjourney power users discovered the same pattern in visual AI. Generic prompts produce generic images. Users who encoded specific aesthetic preferences - "shot on Hasselblad, editorial lighting, muted earth tones, negative space dominant" - developed a recognisable visual style that AI reproduces consistently. Their specificity became their creative signature.

  • Stripe's API design principles are so specific that they function as a taste document for engineering: "Use exactly one word for each concept. Prefer common words over jargon. Error messages tell the user what to do next." New engineers' AI-generated API code follows Stripe's philosophy from day one - not because of training, but because the specificity is documented and deployed.

  • Figma's design systems encode specificity at the component level - spacing rules, colour tokens, interaction patterns, accessibility requirements. Design teams that connect these systems to AI tools get output that's unmistakably theirs. Teams without connected systems get generic Material Design suggestions. Same AI. Different specificity. Completely different output.


Ask Yourself

These questions reveal where you sit on the specificity spectrum - and what it would take to move up a level.

  1. Where are you on the specificity spectrum right now? Level 0: generic prompting. Level 1: better prompts. Level 2: domain context. Level 3: taste and craft encoded. Level 4: compounding daily. Be honest. Most people are at 0 or 1 - fighting over prompt tips while the real leverage sits untouched. See how the 1% vs 99% gap works →

  2. What's the rewrite tax on your AI output? Time yourself. When AI drafts something for you, how many minutes do you spend rewriting it to sound like you? That number - multiplied by every interaction, every day - is the cost of keeping your specificity locked in your head. Every minute of it.

  3. Where is your knowledge about your craft stored right now? In your head? In scattered docs? In old Slack threads nobody will ever search? If your judgment isn't structured and connected to AI, it's invisible to the very tools that could amplify it. See how knowledge bases become career capital →

  4. Can you write 10 specific rules about your domain in 15 minutes? Not "I value quality." Rules like: "We always frame opportunities as customer pain points first, TAM second." If you can't articulate your own judgment, AI definitely can't carry it. The articulation is the work - and it transforms your relationship with every AI tool you touch.

  5. What does your AI know about you that a generic prompt wouldn't reveal? Open Claude. Ask it something specific about how you work. Does it know your quality bar, your framing preferences, your decision frameworks? Or does it give you the same answer it would give anyone? That delta is your specificity gap.

  6. What's compounding for you - and what's starting from zero every morning? After 6 months of documented specificity, your AI has context that would take a new hire a year to absorb. After 6 months of generic prompting, you're exactly where you started. Which path are you on? Explore the full framework → | See how agents compound your judgment →
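The "rewrite tax" in question 2 is worth running as actual arithmetic. The numbers below are assumptions for illustration - plug in your own timing:

```python
# Back-of-the-envelope "rewrite tax" with assumed numbers:
# 8 minutes of rewriting per AI draft, 6 drafts a day, 240 workdays a year.
minutes_per_draft = 8
drafts_per_day = 6
workdays_per_year = 240

hours_per_year = minutes_per_draft * drafts_per_day * workdays_per_year / 60
print(round(hours_per_year))  # 192 hours a year spent re-adding your own voice
```

Even modest inputs land at multiple working weeks per year - time spent restoring specificity that could have been encoded once.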
