TL;DR: Everyone fears blockchain will kill banks and AI will kill jobs. But people who actually build these things know the truth: replacing centuries-old financial infrastructure takes decades of new infrastructure, and giving AI the eyes, hands, and judgment of 8 billion humans requires an almost incomprehensible amount of human work. Both technologies are creating more jobs than most people realise — just different ones. The answer isn’t fear. It’s adaptation.


Two technologies. Two waves of fear. Two sets of promises that have been simultaneously oversold and misunderstood.

Everyone is talking about AI and blockchain. The coverage is relentless — in newspapers, in boardrooms, in governments, at dinner tables. And the anxiety follows the same pattern for both: this technology will take something away from us. For blockchain, the fear was that trustless protocols would make banks, exchanges, clearing houses, and every financial intermediary obsolete. For AI, the fear is that machines will take human jobs — that the knowledge worker, the creative, the analyst, the coder are all next in line.

Both fears contain a kernel of truth. Both are, in their current form, wildly premature. And the reason they’re premature is the same in both cases: the gap between a promising technology and a functioning world built on top of it is measured not in months, but in decades — and it requires an enormous amount of human work to close.

I work in both fields. Let me tell you what it actually looks like from the inside.


The Blockchain Promise — And Why the Banks Are Still There

When Bitcoin arrived and then Ethereum followed with smart contracts, a generation of builders and theorists looked at the traditional financial system and said: all of this can be replaced. Banks verify identity and hold funds — blockchain does that trustlessly. Exchanges match buyers and sellers — decentralised exchanges can do that without an intermediary. Clearing houses settle trades — smart contracts settle instantly, automatically, without counterparty risk.

The logic was sound. The vision was real. And yet, a decade-plus later, the banks are still there. The clearing houses are still there. The exchanges — at least the regulated ones that move the serious money — are still there.

Why?

Because replacing a financial intermediary is not just a technology problem. It is a trust problem, a legal problem, a behavioural problem, and an infrastructure problem — all at once.

Think about what “replacing a bank” actually requires. You need smart contracts that are correct — not probably correct, not mostly correct, but formally verified and battle-tested against adversarial conditions. You need tokenised representations of real-world assets: property, equities, bonds, commodities — each of which requires legal frameworks that recognise the token as legitimate ownership. You need identity systems that are both privacy-preserving and regulatorily compliant. You need wallets and interfaces that ordinary people can use without losing their life savings to a lost private key or a phishing attack. You need liquidity deep enough that markets function. You need dispute resolution mechanisms for when things go wrong — and things always go wrong.
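To make the correctness requirement concrete, here is a toy sketch in Python (deliberately not a real smart contract language) of the kind of invariant checking an escrow contract must enforce. The class and its checks are invented for illustration; the point is that on-chain, any case the checks fail to cover is an exploit, not a bug report.

```python
# Toy illustration of escrow settlement invariants. In a deployed
# contract, every one of these checks must hold under adversarial
# input, which is why formal verification and auditing matter.

class EscrowError(Exception):
    pass

class Escrow:
    def __init__(self, buyer: str, seller: str, amount: int):
        if amount <= 0:
            raise EscrowError("amount must be positive")
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.funded = False
        self.settled = False

    def fund(self, sender: str, value: int) -> None:
        # Exact funding only: partial or double funding is rejected.
        if self.funded or sender != self.buyer or value != self.amount:
            raise EscrowError("invalid funding attempt")
        self.funded = True

    def settle(self, approver: str) -> str:
        # Settlement is all-or-nothing, and can happen exactly once.
        if not self.funded or self.settled or approver != self.buyer:
            raise EscrowError("invalid settlement attempt")
        self.settled = True
        return self.seller  # recipient of the full amount

e = Escrow("alice", "bob", 100)
e.fund("alice", 100)
assert e.settle("alice") == "bob"
```

Even this toy has state transitions that must be ordered correctly; a real contract adds timeouts, refunds, and dispute paths, each multiplying the cases an auditor must reason about.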

And underneath all of that, you need the hardest thing of all: you need people to trust it.

Trust in financial systems is not a feature you ship. It is something that accumulates over time, through repeated use, through survived crises, through regulation that catches bad actors, through the slow accretion of institutional memory. The traditional financial infrastructure we have today took centuries to build. The Bank of England was founded in 1694. The New York Stock Exchange has been operating since 1792. SWIFT, the messaging system that underpins international wire transfers, was only established in 1973 — and it took decades to achieve global adoption.

We are not replacing that in five years. Or ten. What we are doing — what the best builders in the blockchain space are actually doing — is constructing the foundations of a parallel system, piece by piece. Every smart contract standard, every DeFi protocol that survives a bear market, every tokenisation framework that achieves regulatory recognition, every wallet that a non-technical user can actually navigate — these are bricks. The building takes time.

The jobs being created: smart contract auditors, blockchain protocol engineers, tokenisation lawyers, on-chain risk analysts, DeFi liquidity architects, compliance engineers who understand both traditional finance and cryptographic systems. These roles barely existed ten years ago. They are in high demand today.


The AI Promise — And Why 8 Billion Humans Still Have Eyes and Hands

The AI anxiety is sharper because it moves faster. Every few months there is a new model, a new capability, a new profession apparently on the edge of obsolescence. Lawyers, radiologists, writers, programmers — all have been declared, at various points in the last three years, to be on the verge of replacement.

There is something to this. AI is genuinely transforming how knowledge work gets done. Tasks that took hours now take minutes. Analysis that required specialists can now be initiated by generalists. The marginal cost of certain kinds of content, code, and reasoning is falling fast.

But here is what you understand when you work in the field: we are nowhere near done building.

Consider what it would actually mean for AI to replace a meaningful slice of human cognitive labour at scale. You need base models that are more reliable, more grounded, less prone to hallucination — because in high-stakes domains, a model that is right 95% of the time is not good enough. You need agents — not just a single model answering a question, but systems of models working together, passing information, taking actions, using tools, checking each other’s work. You need agent-to-agent communication protocols: standardised ways for AI systems to interact, negotiate, delegate, and verify — the equivalent of the APIs that power today’s internet, but for autonomous reasoning systems.
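As a rough sketch of what a standardised agent-to-agent message might involve, here is a minimal Python envelope. The field names (`sender`, `intent`, `reply_to`, and so on) are invented for illustration and not drawn from any existing protocol:

```python
# Hypothetical agent-to-agent message envelope. A real protocol would
# add authentication, capability negotiation, and error semantics;
# this only shows delegation and result-matching.

import json
import uuid
from dataclasses import dataclass, field, asdict
from typing import Optional

@dataclass
class AgentMessage:
    sender: str                      # id of the delegating agent
    recipient: str                   # id of the agent receiving the task
    intent: str                      # e.g. "delegate", "result", "verify"
    payload: dict                    # task description or result data
    msg_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    reply_to: Optional[str] = None   # links a result back to its request

    def to_json(self) -> str:
        return json.dumps(asdict(self))

# A delegation and its reply:
task = AgentMessage("planner", "researcher", "delegate",
                    {"task": "summarise the latest evaluation run"})
reply = AgentMessage("researcher", "planner", "result",
                     {"summary": "..."}, reply_to=task.msg_id)
assert reply.reply_to == task.msg_id  # planner can match result to request
```

Everything that makes this trivial example safe at scale — knowing the sender is who it claims to be, that the recipient is allowed to act, that a result was actually verified — is exactly the unbuilt infrastructure the paragraph above describes.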

And then you need guardrails. Safety mechanisms that prevent agents from taking harmful actions. Interpretability tools that let you understand why a model did what it did. Audit trails. Access controls. Rate limits. Rollback mechanisms. The entire apparatus of reliability engineering, applied to systems that are non-deterministic by nature — systems where the same input does not always produce the same output, and where you cannot simply write a unit test that covers every case.
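To show what even the simplest version of that apparatus involves, here is a hypothetical Python guardrail combining a blocklist, a rate limit, and an audit trail. The names and thresholds are invented for illustration:

```python
# Sketch of the reliability layer around a non-deterministic agent:
# every proposed action passes a blocklist check and a sliding-window
# rate limit, and every decision is recorded for audit.

import time
from collections import deque

class Guardrail:
    def __init__(self, max_calls: int, window_s: float, blocked: set):
        self.max_calls, self.window_s = max_calls, window_s
        self.blocked = blocked
        self.calls = deque()     # timestamps of allowed calls
        self.audit_log = []      # (timestamp, action, decision)

    def check(self, action: str) -> bool:
        now = time.monotonic()
        # Slide the rate-limit window forward.
        while self.calls and now - self.calls[0] > self.window_s:
            self.calls.popleft()
        if action in self.blocked:
            self.audit_log.append((now, action, "denied:blocked"))
            return False
        if len(self.calls) >= self.max_calls:
            self.audit_log.append((now, action, "denied:rate_limit"))
            return False
        self.calls.append(now)
        self.audit_log.append((now, action, "allowed"))
        return True

g = Guardrail(max_calls=2, window_s=60.0, blocked={"delete_database"})
assert g.check("send_email") is True
assert g.check("delete_database") is False   # blocked action refused
assert g.check("send_email") is True
assert g.check("send_email") is False        # third call hits the limit
assert len(g.audit_log) == 4                 # every decision is recorded
```

The hard part is not this wrapper; it is that for a non-deterministic system you cannot enumerate in advance which actions belong on the blocklist, which is why interpretability and evaluation are research fields rather than checklists.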

Now zoom out further. We have 8 billion people on this planet. Every one of them can see, walk, navigate complex environments, understand social context, recognise faces, detect when something is wrong, and respond to the physical world in real time. Machines that can do this at anywhere near human breadth and reliability do not yet exist.

To get there, you need data at a scale that is almost incomprehensible. Billions of cameras, sensors, and microphones collecting information about the physical world. Massive compute infrastructure to process that data. Labelling pipelines — which, despite the progress in self-supervised learning, still require enormous amounts of human annotation for edge cases and domain-specific tasks. Robotic systems that can act in the physical world, each requiring mechanical engineering, control systems, safety certification, and maintenance infrastructure.

Then you need deployment infrastructure. How does an AI system access the world? Through APIs, through devices, through integrations with existing software. Every connection point is an engineering problem. Every integration is a project. Every domain — healthcare, logistics, manufacturing, education, law — has its own data formats, its own regulatory requirements, its own legacy systems that were not designed with AI in mind.

All of this requires people to build it. And it is creating jobs: prompt engineers, AI safety researchers, ML infrastructure engineers, data labellers and curators, AI product managers, evaluation specialists, fine-tuning engineers, AI ethicists, red teamers, model distillation specialists, synthetic data engineers. Most of these job titles did not exist five years ago.


The Pattern Is the Same

What blockchain and AI share is this: they are platform technologies. Like electricity, like the internet, like the mobile phone, their ultimate impact will be enormous — but that impact comes not from the technology itself, but from the countless applications, systems, institutions, and behaviours that are built on top of it over time.

Electricity did not replace factories overnight. It enabled a new kind of factory — one that could be designed around the flexibility of electric power rather than the constraints of steam. That transition took fifty years and created vastly more jobs than it destroyed, though not always the same jobs, and not always in the same places.

The internet did not replace retail overnight. It took two decades of infrastructure investment — broadband, payment systems, logistics networks, trust frameworks, consumer behaviour change — before e-commerce became dominant. And it created industries — cloud computing, social media, the gig economy, the creator economy — that no one predicted in 1995.

Blockchain and AI will follow the same arc. The fear that they will instantly eliminate whole categories of human work misunderstands how platform technologies actually propagate through economies and societies. The reality is messier, slower, and — if you are paying attention — full of opportunity.


What This Means for the Rest of Us

The answer to both of these technologies is the same, and it is not complicated, even if it is demanding: we need to change, and we need to adapt.

Not change in the sense of abandoning everything you know. Change in the sense of staying curious, staying close to where things are moving, and being willing to build new skills on top of existing ones. The lawyer who understands smart contracts will not be replaced by blockchain — they will be essential to it. The data engineer who understands LLM pipelines will not be replaced by AI — they will be indispensable to it. The financial analyst who can interpret on-chain data alongside traditional metrics is worth more than either specialism alone.

The people who should be worried are not the people who are engaging with these technologies — learning them, building on them, finding their edges and their failure modes. The people who should be worried are the people who are waiting for the disruption to be over before deciding to pay attention.

It won’t be over. This is the nature of compounding technological change. There is no point at which you can safely disengage and coast.

But for those who are building — who are putting in the patient, unglamorous work of constructing the smart contracts and the agent frameworks and the data pipelines and the safety evaluations — the picture is not frightening at all. It is one of the most interesting periods in the history of technology.

We have a lot left to build. That is not a warning. It is an invitation.
