Secancy

Rage against the Adjacency Machine

The Hash: A Skill Profile for an AI-first world - and how to train it

Mat Munro
Feb 04, 2026

It’s Sunday morning and Child 2 is asking me what shape she is.

She’s overheard me talking to my partner about Skill Profiles and she wants to know if she’s a T or a Comb shape. I run her through some other options. She seems taken with the idea of being a π, but does not want to be an X, thank you very much.

I go on to explain, in simple terms, what these shapes represent: breadth and depth of skills, and the odd geometries of the modern workplace. I tell her that grown-ups use these shapes to talk about what they’re good at, and how they fit together, like a jigsaw puzzle. I tell her that most people spend their lives building one long, reliable pillar and maybe, over time, they add a second, or a third.

She listens with the seriousness children reserve for information they may want to exploit later. But, she is four. Her attention quickly slips its leash, and she is crafting a pie-inspired self-portrait out of Play-Doh and beads before my explanation is done.

I return to my writing. 


This wasn’t supposed to be my second post.

Doubling down on the topic of Humanity’s role in an AI-first world (NB. if our epoch is the Anthropocene, what would an AI-dominated age be called? The Agentocene? Silicene?) was not in the beautifully crafted content plan that I put together at the start of the year. However, I confess to being a little dissatisfied with where I left that first article. Not enough takeaways. Not enough piss and vinegar. So I had already reconciled myself to writing a partner piece at some point.

Then Davos happened.

Then Dario Amodei - Anthropic’s CEO - posted his 20,000 word opus.

So here we are…


If you’re enjoying Secancy then please share! 🙏



Beyond Taste

In the essay this piece accompanies, Taste was positioned as the last defensible human skill: in a world of infinite productivity, the argument went, the scarce asset is discernment. The ability to separate the sublime from the slop.

That argument still holds.

But it isn’t enough - it’s too passive for the phase of the AI transition that we have entered. In a world of autonomous OpenClaw bots, “choosing” amongst machine-generated outputs lacks long-term market power. If all we do is curate, we should expect curation to be automated next.

The good news is that after spending much of January exploring ideas for new AI and Agentic systems, I have identified a new potential edge, one that lies upstream of curation. I now believe that the final human edge to fall to AI will be our ability to connect seemingly disparate systems in new and meaningful ways.

In cognitive science this process is called Far Transfer: the act of discovering structural analogies between domains that do not obviously associate with one another, and using those analogies not just to solve problems, but to reframe them entirely.

This is the kind of thinking that produces discontinuous improvements, rather than better widgets. And, for reasons rooted in the basic geometry of today’s AI, Far Transfer is not just a hard nut for our machine rivals to crack. It is anti-natural.

The problem with relevance as a theory of mind

If you want to understand why I believe that Far Transfer represents a human edge, you can start with a principle that linguists have repeated for decades.

“You shall know a word by the company it keeps.”

Modern language models took this concept and ran with it, transforming it into an idea so successful that the title of its founding paper has become a slogan: “Attention Is All You Need.”

The Transformer architecture introduced in that paper now underwrites everything from chatbots to coding assistants, replacing older, sequential machinery with a statistical approach that infers meaning from patterns of co-occurrence across huge datasets. The Transformer’s signature move - self-attention - amounts to a way of asking, token by token: given this context, what here is relevant to what else?
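Stripped of its learned machinery, the core of that question can be sketched in a few lines. This is a toy illustration, not production code: real Transformers learn separate query/key/value projections and run many heads in parallel, none of which appear here.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max before exponentiating, for numerical stability
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def toy_self_attention(X):
    """Single-head self-attention over token embeddings X of shape (seq_len, d).

    Each token scores its relevance to every other token (scaled dot
    product), normalises those scores into weights, and returns a
    context-weighted mix of the embeddings.
    """
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)       # pairwise relevance
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ X                  # re-embed each token in its context

# Three toy "tokens": the first two point in similar directions,
# the third is orthogonal - attention will mostly mix the first two.
X = np.array([[1.0, 0.0],
              [0.9, 0.1],
              [0.0, 1.0]])
out = toy_self_attention(X)
```

The whole trick lives in the weights: relevance, adjacency, the spotlight - all of it falls out of that single "what here is relevant to what else?" operation.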

This approach has proven to be astonishingly successful, and is why today’s systems are so good at what we would call “near transfer”: taking a familiar pattern and completing it, re-expressing it, recombining it, or applying it to a highly adjacent problem. Give a modern Agentic AI an example of a task and it can often generalise within that family of actions.

It is also why these systems can feel like a very particular sort of polymath: broad in principle, but guided by each prompt into a narrow domain of expertise.

In a professional setting, the kind that AI promises to disrupt, you might refer to a person with this Skill Profile as being T-shaped. A generalist with a single, deep, domain of expertise.

In fact AI is the ultimate (T)-shape: an all-encompassing generalist whose “stem” of expertise can align with whatever domain a user could require. Feed the model a supply chain problem, it will become briefly, intensely, a supply-chain specialist. Give it a question on medieval theology, it will oblige there too. In a single interaction, it is vertically deep; across interactions, unimaginably broad. 

But the attention mechanism that enables this incredible performance is also a constraint. The model can only make connections between things that already appeared together in training. Things that never co-occurred exert no pull. So systems that share deep structural truths but highly differentiated vocabularies remain invisible to one another. Attention is like a spotlight scanning a sea of potential concepts. All concepts may be encapsulated within that sea, but only those in the immediate vicinity of the spotlight’s target are visible to the Agent.
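You can make the "no pull" point concrete with a toy example. The vectors below are hand-made stand-ins for learned embeddings (real models learn thousands of dimensions from co-occurrence statistics): words that keep the same company end up pointing the same way, and words from a different vocabulary do not, however much the underlying structures rhyme.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity: ~1.0 = same direction, ~0.0 = unrelated
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical toy embeddings. "invoice" and "receipt" co-occur constantly
# in training text; "equilibrium" belongs to a different vocabulary, even
# if a scheduling problem and a chemical equilibrium share hidden structure.
vocab = {
    "invoice":     np.array([0.9, 0.1, 0.0]),
    "receipt":     np.array([0.8, 0.2, 0.0]),
    "equilibrium": np.array([0.0, 0.1, 0.9]),
}

near = cosine(vocab["invoice"], vocab["receipt"])
far = cosine(vocab["invoice"], vocab["equilibrium"])
```

Similarity search over vectors like these is the spotlight: it retrieves lexical neighbours, not structural cousins.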

AI examines the world through a very narrow lens.

By contrast Far Transfer is not simply “deep” or “broad.” Far Transfer is a bold, diagonal slash connecting concepts and systems that at first glance are completely independent of one another. It is what happens when you realise that a scheduling problem and a chemical equilibrium share a hidden structure; that an interpersonal conflict inside an organisation and a feedback loop in a natural ecosystem rhyme in ways worth appropriating; that an interface problem is, at its core, an issue of gastronomy.

As humans we make these analogous connections constantly, often without noticing. The feeling of an idea “clicking” into place is the only telltale sign that a distant, cross-domain connection has sprung into being.

Transformers do not click.

They cohere.

The horizon problem: context as a cognitive boundary

So the Attention Mechanism impairs AI’s ability to trace lines between systems without pre-existing linguistic connections, but this is compounded by a second systemic constraint: the context window.

During our interactions with AI, its working memory - its context - is restricted in size, and the system does not “remember” anything outside of that window, any more than it remembers all of the content it consumed during training. Extending the window is possible. Indeed the industry is sprinting to do so. But scaling context is not free: attention is computationally expensive, and the architectural trade-offs required to expand these windows can degrade other capabilities.

For near transfer, the bounded window is often fine. Many professional tasks operate within a local neighbourhood: write this memo, debug that function, compare these contracts, summarise the meeting. The relevant facts fit naturally, or can be searched for, retrieved and condensed in such a way as to fit, without degrading output quality.

Far Transfer is different. The connection you need to make is often not on the same planet, let alone in the same neighbourhood. The ‘context window’ needed to achieve this is vast, both in terms of the content it needs to include and in terms of the time range it needs to span.

A human solving the aforementioned scheduling problem may have to reach back to a near-forgotten high school chemistry lesson, or a half-remembered study on traffic flow they parsed many years before - because the mind is less a window into a crystalline statistical cloud than it is a sprawling, leaky museum where we bump into exhibits by accident, often when we least expect it.

We are, in fact, so good at abstraction based pattern recognition that we have a tendency to see patterns that do not exist - an entirely human hallucinatory process known as apophenia.

A model will not “bump” into long-forgotten concepts. It will explore the limited set of neighbourhoods held in its working memory, guided by the spotlight of its attention mechanism.

Why System 2 is not the same as “far”

At this point, I expect some of you will object: isn’t this what “reasoning models” are for? Hasn’t the big push over the last few years been, precisely, about making models less impulsive? More deliberative. More System 2 in their thinking.

And you would be right. The industry has done many things in an attempt to address the “models can’t reason” challenge. It has been bolting more reasoning-shaped behaviour on to the core foundation model for some time: chain-of-thought, self-consistency sampling, tool use, context expansion, context compression, reflection loops, latent space traversal…

In many domains, these methods have been highly impactful. Chain-of-thought prompting - encouraging models to write intermediate steps - has been shown to improve performance on multi-step problem solving, including arithmetic and symbolic reasoning.
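At its simplest, zero-shot chain-of-thought is just a prompt wrapper. The sketch below is illustrative only - the function name and wording are my own, and real pipelines typically prepend worked examples (few-shot CoT) and parse the final answer out of the response separately:

```python
def with_chain_of_thought(question: str) -> str:
    """Wrap a question in a minimal zero-shot chain-of-thought prompt.

    Hypothetical helper for illustration: the point is that the
    'reasoning' is elicited by text, not by a new mechanism.
    """
    return (
        f"Question: {question}\n"
        "Let's think step by step, writing out each intermediate result "
        "before stating the final answer."
    )

prompt = with_chain_of_thought(
    "A train leaves at 9:40 and arrives at 11:05. How long is the journey?"
)
```

Note what the wrapper does and does not change: the model walks its solution path more slowly, but the path still starts and ends inside the space the prompt defines.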

This is progress. But notice what these methods are doing. They’re asking the model to traverse a different path inside the same, prompt-defined, space. To elaborate. To be careful. To search. They are not giving the system an intrinsic mechanism for discovering a distant but structurally related domain.

It is improvement through introspection, not exploration.

Hoping for these techniques to create insightful and novel connections across disparate conceptual spaces is akin to locking a world-class structural engineer in a room on their own and hoping that, as they talk to themselves, they will, apropos of nothing, blurt out the word “dragonfly”; and that from that utterance they will go on to stumble upon a new biomimetic design for suspension bridges.

Far reasoning is not simply “thinking longer” on a problem. It is “thinking elsewhere” - and sometimes, maybe, not even thinking at all.

Analogy: where humans transfer and models stall

If the absence of thought makes far transfer sound poetic, analogy research makes it much more concrete.

A 2024 study on analogical transfer compared children, adults, and large language models on analogy problems that required not just solving within a familiar domain, but transferring the learned rule to unfamiliar symbol systems. Children and adults generalised; the models, largely, did not. 

These weren’t “gotcha” riddles. They were precisely the kind of abstract rule transfer Far Transfer relies on: recognising a structure, then carrying it across a surface shift to a new domain.

To be clear, models can produce analogies.

Sometimes they produce excellent ones. But producing a plausible analogy is not the same as reliably transferring a rule across representational distance.

The workaround today is often brute force: multi-agent systems, multiple prompts, multiple perspectives, repeated sampling.

Instead of making an intuitive leap while the mind is otherwise occupied (if you’ve ever been struck by inspiration while driving on the highway, or having a shower, you’ve experienced this), you marshal an army of fifty minds and set them off in different directions in the hope that one of them makes a connection of value.
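That brute-force pattern - repeated sampling plus a vote - can be sketched in a few lines. Everything here is hypothetical scaffolding: `flaky_model` stands in for a real, stochastic model call.

```python
import random
from collections import Counter

def majority_answer(ask, question, n=50, seed=0):
    """Sample the same question n times and keep the most common answer.

    `ask` is any callable taking (question, rng) - a stand-in for a
    stochastic model call, injected so the sketch is self-contained.
    Returns the majority answer and the fraction of samples that agreed.
    """
    rng = random.Random(seed)
    answers = [ask(question, rng) for _ in range(n)]
    winner, count = Counter(answers).most_common(1)[0]
    return winner, count / n

# A stand-in "model" that answers correctly 90% of the time
def flaky_model(question, rng):
    return "42" if rng.random() < 0.9 else rng.choice(["41", "43"])

answer, agreement = majority_answer(flaky_model, "What is 6 x 7?")
```

Fifty samples turn a mostly-reliable answerer into a near-certain one - at fifty times the cost, and without any one sample wandering further from the prompt than attention already allows.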

It works, sometimes.

It is also a confession.


The shapes we’ll be

Modern working society has not historically nurtured or rewarded Far Transfer as a skill. This may be linked to the fact that it most often strikes when we are seemingly idle, or otherwise engaged (Far Transfer is a child of your brain’s Default Mode Network, the system that kicks in when we’re daydreaming or meditating) - anathema to modern ‘productivity optimisation’.

So we romanticise the lone genius, but discourage the ‘idleness’ needed to replicate the far-ranging, diagonal connections they are famous for making.

The modern workplace rewards throughput, legibility, measurable competence and collaboration - traits represented by familiar Skill Profile shapes:

  • I-shaped specialists: deep and narrow.

  • T-shaped generalising specialists: broad enough to collaborate, deep enough to own execution.

  • π- or Comb-shaped professionals: T-shaped but with more ‘legs’, accumulated over time.

These are good shapes for keeping the world running.

They are less good for changing how the world works.

A comb shape is closest to what we need for Far Transfer, but a comb can still be a set of silos. You can master four domains and never let their concepts and systems touch. Many senior careers end in this shape: a collection of vertical competencies - developed both inside and outside the workplace - with little to no cross-pollination. It looks great on a LinkedIn profile. It does not, by itself, result in truly original thinking.

If Far Transfer is the scarce edge, the skill profile of the future is not merely one of multi-disciplinary depth. We need multi-disciplinary depth connected by a practiced diagonal - the muscle needed to understand, force and validate Far Transfers.

I am calling this new Skill Profile “Hash-shaped”: multiple pillars of expertise, slashed through by bold, diagonal, strokes. Unlike traditional Skill Profiles the pillars are not the focus for the Hash-shaped individual, their goal is to foster an interconnecting mindset. To develop the skills needed to summon the diagonal slashes from which the shape gets its name, repeatedly, and at will.

An individual who can fully adjust to this shape is armed with the skills needed to create whole new concepts. 


Why this matters

After the Second World War, much of the West ran on a simple compact - sometimes explicit, often haphazardly delivered through a combination of government policy and labour power - that productivity gains would be shared evenly. In this world wages rose in line with commercial output. Public housing was built at scale. The idea of a single income supporting a family and a home wasn’t a nostalgic myth; it was, for many, the norm.

Over the course of the 1980s, that compact collapsed.

Mass privatisation, deregulation, the crushing of workers’ unions and the spreading tentacles of finance re-wired the western world. “Financialisation” is a slippery term, but the underlying change is legible: an ever-growing share of economic life began to organise around asset values, leverage, and the extraction of returns arising from ownership, rather than the distribution of gains through wages.

Empirically, the relationship between the financial system and inequality is contested in the details, but not in the fact that it matters. The IMF’s research shows how financial development can spur growth, while also deforming distribution. Meanwhile, household debt - the artificial mechanism by which those with stagnant incomes have mimicked rising living standards for the last fifty years - has been repeatedly flagged by central banks as a stability risk.

The lived experience of this shift can be summarised as “more stuff, less wealth”: technological advances delivered cheaper electronics and better logistics, but the cost of core life inputs - housing, childcare, education - rose faster than our ability to afford them, and the gains of productivity were increasingly captured by a diminishing minority rather than shared with the majority.

My point is not to romanticise the post-war period. It had many exclusions and injustices of its own. The point is that the contrast of that period with the last half-century offers a stark reminder: we live in a world where productivity gains do not automatically result in shared prosperity, and we have for some time.

Distribution is political. It is institutional. It is contested. And it is driven by those who own the assets.

It is into this nearly post-capitalist landscape that AI, with its promise of unheard-of productivity gains, has crash landed.

WEF, Amodei and Noblesse Oblige

The uncomfortable truth is that most knowledge work is near transfer. It is local optimisation inside an established frame: the quarterly plan, the funnel analysis, the meeting write-up, the procurement process. The modern office is the high altar to the T-shaped individual.

This is precisely why language models are so convincingly threatening widespread career disruption. The near transfer-style reasoning that they were born to do, that they excel at, is what modern businesses have been architected around.

At Davos this year, leaders oscillated between optimism and anxiety: executives repeating the “jobs, jobs, jobs” mantra, unions warning that productivity gains will not be shared, and institutions like the IMF openly discussing huge labour market disruption. Oxfam, as it usually does, arrived with fresh numbers on billionaire wealth growth and a familiar accusation: extreme wealth is not merely a market outcome but a political project.

The dialogues at the World Economic Forum were followed quickly by a 20,000 word essay from Dario Amodei, Anthropic co-founder and CEO. This essay, titled “The Adolescence of Technology”, not only put me to shame with its length (I must up my game!), it also explicitly discussed the societal and species-level shocks that Dario believes we will need to traverse in order to land on the golden, techno-optimistic shores of an AI-powered future.

A number of these shocks fall firmly into the security and alignment categories. These have the potential to be extinction-level issues, but the one I would like to focus on today is economic in nature.

Firstly, Amodei should be given credit for the candour of his writing.

His article is unusually explicit - by the standards of the modern Tech CEO - about the scale of the coming economic disruption he foresees, and its implications for wealth concentration. Secondly, he treats that concentration not just as a moral abstraction but as a political risk: the legitimacy of the system, he suggests, will not survive a world in which a small set of firms claim enormous rents while employment rates collapse around them.

His proposed solutions feel thin, though. He suggests five defences against this societal-level shock:

  1. Better real-time data - Anthropic’s Economic Index tracking job displacement

  2. Steering enterprises toward “innovation” over “cost savings” - i.e., “doing more with the same people” rather than efficiency based layoffs

  3. Companies being “creative about reassigning employees” - and in the long term, paying employees “even long after they are no longer providing economic value in the traditional sense”

  4. Philanthropy - he notes the Anthropic co-founders have pledged 80% of their wealth to charity

  5. Progressive taxation - which he frames as the natural policy response, but immediately hedges: “Obviously tax design is complicated, and there are many ways for it to go wrong. I don’t support poorly designed tax policies.”

Reviewing these defences one cannot help but feel underwhelmed.

First, we do a better job of measuring impact. Then we ask companies to do right by their employees, and hope they will. We find a sweet spot for taxation on the fine line between billionaire acceptance and mob appeasement. And for the rest? We hope that philanthropy will make up the difference.

In the Silicon Valley Tarot card deck I’m working on Dario Amodei is XI. ‘Justice’.

This, it’s worth noting, is the message coming from someone who is, overtly, the good guy in the room. And his best hope seems to be: keep the current innovation engine running, then rely on a combination of minimum acceptable taxation and Noblesse Oblige to appease the unemployed masses.

This is why I am pushing this topic. Why I’ve returned to it for a second post, straight after my first.

If you accept the scale of disruption that AI promises to bring, and you doubt the likelihood of structural and institutional reform arriving in time to compensate for it, then you end up drawing only one conclusion: the next five to ten years will be survival of the fittest, and the individuals that can best adapt have the highest likelihood of surviving.


The AI counter-move: can architectures escape adjacency?

Ok. But it was Taste last month, now it’s Far Transfer. How long will this edge really last?

And the answer is, I don’t know. The field is moving forward at an incredible pace. Models improve. Some recent work argues that LLMs can achieve sophisticated analogical reasoning through domain-general learning, even if the mechanisms differ from humans. Others are investigating systematic generalisation and inductive biases - how and when models learn abstract rules, rather than surface correlations. 

But, for now, the pattern remains: the default dynamic of attention-plus-embeddings creates a gravitational pull toward the near. And when models do go far, it is often because humans scaffold the leap: by constructing prompts that explicitly bridge domains, by retrieving the right documents, by running multiple attempts, by layering tool use and verification. Without these frameworks, attempts at Far Transfer often become untethered - something called Semantic Drift.

In other words, the present trajectory looks less like machines spontaneously developing Far Transfer, and more like humans building exoskeletons that let machines approximate it.

That may still erode the advantage over time. But in itself it helps clarify the timeline we have to adapt our ways of working and reap the ensuing rewards.

If you are deciding how to invest your own time and learning - where to double down, what to delegate, what to stop caring about altogether, and what to start - I would suggest that the correct response is not to try to compete with the AI on its home turf of deep domain expertise. It is to become a person who can draw diagonal strokes.

It is to get Hashed.


Which brings us back, briefly, to Sunday morning and to Child 2.

A four-year-old does not want to be an X because an X sounds cross. She wants to be a π because π is cute, and tasty. She makes a pie themed self-portrait because she is, by default, cross-domain: she does not yet respect the boundaries between symbol and object, between metaphor and mechanism.

And this fact should reassure us. We are all born drawing brilliant, bold diagonal slashes across the world - we have just educated this practice out of ourselves and structured our lives so as to systematically suppress it.

The task for adults, then, is to re-learn this diagonal flair and put it to good use.

After all, near transfer keeps the world running. But Far Transfer - informed by a dollop of taste - decides the shape of the world that we are running toward.


Building a Hash-shaped skill profile

“Have you ever thought about what it is to be intelligent? Probably some of you have, right? ‘Cause you meet your friend, and he’s pretty dumb, and maybe you think you’re smarter and you wonder what the difference is?

And I’ve thought about this a little bit myself, and one of the things is, it seems to me a lot of it’s memory, but a lot of it’s the ability to sorta zoom out. Like you’re in the city and you can look at the whole thing from about the 80th floor, down at the city, and while other people are trying to figure out how to get from point A to point B reading these stupid little maps, you can just see it all out in front of you.

You can see the whole thing, and you can make connections that just seem obvious because you can see the whole thing. That’s why bright people feel guilty a lot, because they come up with stuff that they just say “Hey, look at this,” and other people give them these dumb awards and they feel funny.

But the key thing is that if you’re gonna make connections which are innovative, you’ve - to connect two experiences together - that you have to not have the same bag of experiences as everyone else does, or else you’re going to make the same connections, and then you won’t be innovative, and then nobody will give you an award. So, what you gotta do, is get different experiences than the normal course of events.”

- Steve Jobs on intelligence and Far Transfer - even if not named directly.

Below is a practical set of activities and exercises designed to help you build verticals quickly and exercise Far Transfer at will. To help you ‘zoom out’.

So here are nine things you can do to accelerate your transition to a Hash-Shaped person.
