From dBase and enterprise systems to the internet revolution and generative AI, I have come to see AI not as a sudden break, but as the latest point on a long slope of automation, one now reaching human cognitive work itself.
By Po-Sung (Sinclair) Huang
I did not decide to write this essay because AI suddenly became fashionable.
It came out of two images that collided in my mind.
One was a group of younger people trying to imagine new ventures built around AI. The other was the story of a father facing a rare disease so obscure it seemed to leave almost no path forward, and yet continuing, late into the night, to search for a way through with the help of computation, search, and structured reasoning.
When those two images met, I realised I had been walking toward this question for more than three decades.
That realisation did not feel sudden. It felt like finally being able to name something I had already known for a very long time.
Many people talk about AI as if it arrived all at once, like a rupture in history.
I do not see it that way.
To me, AI looks more like the latest point on a long technological slope that started much earlier. First data became structured. Then processes became standardised. Then firms became networked. Now cognition itself is being pulled into the system.
AI was never sudden. It was just the moment when the slope became impossible to ignore.
The long slope
My belief that machines would reshape work did not begin when ChatGPT appeared.
I had already been thinking along that line since the 1980s, when I first encountered tools like dBase, and later Clipper, AS/400, and a range of enterprise systems. What struck me was never the glamour of a programming language itself. It was the deeper question behind it: how much work inside a firm was still being done manually simply because no one had yet articulated the logic clearly enough for a system to take over?
Once a task can be described clearly enough, it rarely remains purely human for long.
That intuition stayed with me.
I also know that many computer scientists and data scientists saw this slope earlier, more precisely, and more rigorously than I did. They were building expert systems in the 1980s, statistical learning frameworks in the 1990s, and increasingly formalised models for markets and decision-making long before generative AI became mainstream. They saw the technical frontier more clearly than I did.
But that is not quite the point I am making here.
My claim is not that I foresaw the technical inevitability of AI better than people at the frontier. My claim is that from inside real organisations, real workflows, and real constraints, I kept seeing what happens when those capabilities stop being technical possibilities and start colliding with coordination, hierarchy, incentives, and responsibility. Technical communities often see where the capability boundary is. Practitioners inside firms see what happens when that boundary enters real life.
Later, the internet revolution added another layer. I came to believe that the firms of the future would not be either “traditional companies” or “internet companies.” They would be both physical and digital at once. Online and offline would not be two separate worlds, but two faces of the same operating logic.
From that point on, I stopped thinking of the internet as a mere tool. It looked more like a one-way road. It lowered the cost of starting new ventures, changed how firms connected with markets, and gradually handed over more and more coordination, transmission, and integration work to systems.
Meanwhile, automation kept advancing elsewhere as well: autopilot and navigation in aviation, process automation in factories, surgical systems, predictive maintenance, and autonomous vehicles. The pattern was always similar. Technology first took over repetitive motion, then standardised processes, then increasingly complex forms of coordination. Now it is moving into domains once thought to belong almost entirely to human judgment.
That is why I cannot see generative AI as something that came from nowhere.
It is the continuation of a much longer story.
When judgment becomes rules
My clearest understanding of this did not come from reading headlines. It came from work I had done myself.
At one point, I was responsible for an accounts payable operation involving hundreds of people. The task was not trivial clerical work. Vendor payments had to be handled correctly across invoices, purchase orders, purchase requests, goods receipt confirmations, partial deliveries, and payment terms. The process was complex, error-sensitive, and labour-intensive.
But once the logic of three-way and four-way matching was truly embedded into the system, the work changed. What had required large numbers of people repeatedly checking, reconciling, and coordinating could now be processed automatically in most normal cases, with humans intervening mainly in exceptions.
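A minimal sketch can make concrete what "embedding the logic into the system" meant in practice. The record shapes, field names, and tolerance below are illustrative assumptions rather than the actual system I worked with; the point is only that once the rules are explicit, the normal case clears without a human touch:

```python
from dataclasses import dataclass

@dataclass
class PurchaseOrder:
    po_id: str
    quantity: int
    unit_price: float

@dataclass
class GoodsReceipt:
    po_id: str
    quantity_received: int

@dataclass
class Invoice:
    po_id: str
    quantity_billed: int
    unit_price: float

def three_way_match(po, receipt, invoice, price_tolerance=0.02):
    """Approve automatically only when all three documents agree.

    Four-way matching extends the same idea with one more document,
    such as an inspection or quality record.
    """
    exceptions = []
    if not (po.po_id == receipt.po_id == invoice.po_id):
        exceptions.append("documents reference different purchase orders")
    if invoice.quantity_billed > receipt.quantity_received:
        exceptions.append("billed quantity exceeds goods received")
    if receipt.quantity_received > po.quantity:
        exceptions.append("received more than ordered")
    # Tolerate a small price variance rather than demanding exact equality.
    if abs(invoice.unit_price - po.unit_price) > po.unit_price * price_tolerance:
        exceptions.append("invoice price deviates beyond tolerance")
    return (len(exceptions) == 0, exceptions)

# The normal case clears automatically; only exceptions need a person.
po = PurchaseOrder("PO-1001", quantity=50, unit_price=10.00)
gr = GoodsReceipt("PO-1001", quantity_received=50)
inv = Invoice("PO-1001", quantity_billed=50, unit_price=10.10)
auto_approve, issues = three_way_match(po, gr, inv)
print(auto_approve, issues)  # True [] -> pay without human intervention
```

Everything that fails these checks falls into an exception queue for people, which is exactly the shape the work took once the matching logic lived in the system.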
What I saw then was not only higher efficiency. It was not only lower headcount.
What I saw was this: once judgment hidden inside human routines, process memory, and interdepartmental coordination can be extracted into rules, it can eventually be handed over to a system.
That experience never left me.
So when people today speak of AI as if it were an alien force dropping into human affairs, I instinctively resist the framing. Much of what astonishes people now is the continuation of something I have seen before: the movement from automating action, to automating process, to automating parts of cognition.
The midnight interlocutor
There is another part of this story that people talk about less.
For most of my adult life, whenever I wanted to think deeply about a problem, I did it mostly alone. Late at night, with books, reports, search results, notes, and a blank page. Not because there were no colleagues around me, but because very few people want to keep going once the problem becomes deep enough, ambiguous enough, or exhausting enough. Most people stop when they have something usable. I rarely did.
That is why AI affected me in a way that is not captured by the usual language of productivity.
AI did not just make me faster. It gave me something I had almost never had before: a counterpart for sustained inquiry. A midnight interlocutor. Something that did not get tired, did not say “that’s probably enough,” did not refuse to keep digging.
Sometimes it frustrates me. Sometimes it makes mistakes that are infuriating. Sometimes it amplifies my blind spots as much as my strengths.
But it also changed those late-night hours. For the first time, they were not entirely solitary.
That matters more to me than most discussions of AI productivity gains.
And precisely because I value that relationship, I do not feel about AI in a simple, one-directional way.
It has amplified my capability. It has also amplified my blind spots.
It lets me move into a question faster, more broadly, and more deeply. But it can also tempt people into believing that if they have enough information, they have the truth. I learned long ago that this is not the same thing.
What data cannot read
I remember once preparing a month-end analytical report for a listed company. I was not in the finance department, but I spent a great deal of time combining internal numbers, industry materials I had collected myself, my first-hand observations, and my own judgment into a more comprehensive view of what was happening.
The CEO attended the presentation and told me directly that he was there because of that report.
But he also gave me a lesson I never forgot.
He said: The industry information you included is useful, but you need to remember that much of what appears in reports comes from what analysts are told by the very people inside the industry. Those numbers do not necessarily reflect what is really happening. If you want to know the truth, you have to go into the factory. You have to go into the market. You have to stand in the field.
He was right.
And the word field does not mean just one place. It means multiple information environments, each with its own language, many of which AI still struggles to read fully.
These signals are rarely found in dashboards. They live in texture.
In manufacturing, the field can speak through small operational anomalies that never appear in a report: the half-lit warehouse, the stillness of forklifts, the looseness of movement, the improvised whiteboard coordination that suggests the process is no longer flowing cleanly.
In sales, the field can speak through hesitation that no CRM system records: where someone looks when they say “we’ll think about it,” which question they ask last, and whether the room has shifted from specification to avoidance.
At the negotiation table, the field can speak through structural movement rather than explicit statements: a legal team suddenly appearing in the third meeting, a different person taking the lead, a change in who is silent and who is now attentive. These are not minor social details. They often signal that the true decision chain has moved.
That is why I continue to believe something simple but important:
AI reads the world that has already been written down. But the most important realities in business are often the ones that have not yet been written, or not yet openly acknowledged.
Data tells you what has already happened. The field often tells you what is starting to happen.
I do not mean this as a rejection of AI. On the contrary, I take its future strength seriously. I fully expect stronger systems, better sensors, denser multimodal inputs, and more integrated feedback loops to read more of what we currently call “the field.” One day, systems may detect many of the signals humans currently rely on instinct to perceive.
But that possibility makes me more, not less, concerned about a different question: before that day fully arrives, will human beings still retain the habit of going into the field, defining the problem, and bearing the consequences of judgment?
That is not only an epistemic question. It is a moral and organisational one.
What AI is really repricing
This is why I no longer like to talk about AI as merely “a stronger tool.”
It is a tool, yes. But it is also a repricing system.
Work that can be clearly described, standardised, verified, and replicated is becoming cheaper. Work that requires problem definition, responsibility, exception handling, trust, synthesis, and real-world sensing is becoming more valuable.
Put simply, this repricing is already underway, not as a distant abstraction but as an uneven reorganisation unfolding across industries, roles, and time horizons. Some of it will accelerate over the next three to ten years. The pace will vary. The direction is becoming hard to miss.
Here is the contrast as I see it:
Capabilities becoming cheaper

- Producing standardised summaries
- Writing routine code
- Cleaning and formatting first-pass data
- Drafting generic reports
- Following known procedural logic
- Performing rule-based review tasks

Capabilities becoming more valuable

- Defining what problem actually matters
- Judging system boundaries and exceptions
- Deciding what should not go into a report
- Sensing unwritten signals in the field
- Handling legal and ethical responsibility
- Mediating conflict across functions
- Building trust under uncertainty
- Bearing the consequence of wrong judgment
So the real change is not simply whether jobs disappear.
The real change is which capabilities become cheaper, which become scarce, which processes get absorbed by systems, and which forms of responsibility remain irreducibly human.
Where individual value moves
For individuals, the danger is not that AI becomes more powerful. The danger is remaining positioned as a provider of standardised answers.
The people who rise in value will not simply be those who generate first drafts fastest. They will be the ones who define the right problem, make trade-offs under uncertainty, interpret context, handle exceptions, integrate across silos, and earn the trust of others.
In other words, the moat shifts from delivering answers to exercising judgment.
How firms reorganise
For firms, I do not think the future looks like a completely autonomous black-box company.
What seems more plausible is a smaller core team, surrounded by an AI coordination layer, connected to an external network of specialised contributors. AI will increasingly take over information compression, routine analysis, first drafts, workflow tracking, and parts of customer interaction. Human beings will concentrate more on direction-setting, brand promises, exception management, cross-domain integration, and final accountability.
This is not just a change in headcount. It is a change in how coordination itself is organised.
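As a rough illustration of that coordination layer, here is a small sketch; the task categories and routing rules are hypothetical, invented only to show the shape of the division, not how any real firm implements it:

```python
from dataclasses import dataclass

# Hypothetical task categories; a real firm would define a far richer taxonomy.
AI_LAYER = {"compress_information", "routine_analysis", "first_draft", "track_workflow"}

@dataclass
class Task:
    kind: str
    description: str

def route(task: Task) -> str:
    """Send routine work to the AI layer; keep accountable work with people."""
    if task.kind in AI_LAYER:
        return "ai_coordination_layer"
    # Direction-setting, exceptions, sign-off, and anything unclassified stay
    # human: an unrecognised task is itself an exception.
    return "core_team"

print(route(Task("first_draft", "board memo v0")))            # ai_coordination_layer
print(route(Task("final_sign_off", "approve vendor terms")))  # core_team
```

The design choice worth noticing is the default: ambiguity routes to humans, not to the machine, which is where the argument about accountability later in this essay points.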
What society will have to absorb
For society, the most serious question is not just unemployment.
It is whether career ladders get hollowed out, whether the middle thins out, and how time, income, dignity, and responsibility are redistributed. The most credible research I have seen points less to an immediate “end of work” than to a prolonged period of task restructuring, skill revaluation, and unequal distribution of gains.
That is a harder problem than sensational predictions of total job extinction. It is also a more real one.
What Matt Might’s story illuminated
But I cannot end with organisational analysis alone.
What struck me about Matt Might’s story was not that it was inspirational. It was that it illuminated something most AI discussions refuse to hold together.
A father was facing a rare disease for which there was almost no existing map. No clear answer. No widely available treatment. Almost no one truly understood the condition. That kind of darkness is difficult to overstate.
He did not begin by asking whether this was “AI” or whether it fit anyone’s preferred narrative about innovation. He used what he had to the limit: search, organisation, modelling, linkage, and validation. He took what might once have required years of scattered discovery and compressed it into a timeframe in which a family could still act.
In that sense, stronger AI becomes something more than a productivity engine. It becomes a companion in the dark. The last ally in a long night of inquiry. It also hints at something larger: technology can sometimes give ordinary people a way to move through what once looked impossible.
But the same capability has another face.
The power that can amplify a father’s ability to find a path forward can also amplify a system’s ability to identify, rank, target, and harm. The power that helps human beings cut through informational darkness can also help institutions make violence more efficient. The issue is never only the technology. It is the goals into which the technology is inserted, the institutions that deploy it, and the human motives that guide it.
That is why I cannot speak of AI with either naive optimism or easy rejection.
The one thing machines cannot own
So is AI salvation or poison?
The longer I think about it, the more I feel that the question itself is a trap.
What matters more is this: AI can run deeper analysis, generate large amounts of code, and pre-sort anomalies, risks, and potential targets. It can help us model. It cannot take responsibility for us.
Human beings still have to review, confirm, sign off, and bear the consequences.
Machines can give us the estimate; only humans can face the audit.
In high-stakes systems, the real question is not whether the machine can recommend an action. It is who must answer for that action when it is wrong.
That, to me, remains one of the last meaningful human moats in cognitive work: not just intelligence, but accountable consequence.
So the real question is not whether we should embrace AI.
The real question is what we are willing to delegate, what we are not, and what kind of human beings we still need to become in a world where tools grow more powerful every year.
I do not claim to have resolved that question.
I would rather keep walking honestly with these questions than pretend I have already resolved them.
About the Author

The author has spent decades in business management, financial analysis, and industrial strategy, with a long-standing interest in the intersection of artificial intelligence, automation, organisational change, and human responsibility.
Further Reading
- Matt Might: https://matt.might.net/
- Bertrand Might memorial site: https://bertrand.might.net/
- World Economic Forum, The Future of Jobs Report 2025: https://www.weforum.org/publications/the-future-of-jobs-report-2025/
- ILO research on generative AI and jobs: https://www.ilo.org/publications/generative-ai-and-jobs-2025-update
- IMF, Bridging Skill Gaps for the Future: New Jobs Creation in the AI Age: https://www.imf.org/en/publications/staff-discussion-notes/issues/2026/01/09/bridging-skill-gaps-for-the-future-new-jobs-creation-in-the-ai-age-572136
- World Bank, East Asia and Pacific Economic Update 2025: https://www.worldbank.org/en/publication/east-asia-and-pacific-economic-update-october-2025

Disclaimer

This essay reflects the author’s personal observations, professional experience, and interpretation of public developments. It does not constitute investment, legal, medical, or other professional advice.
Hashtags

#AI #ArtificialIntelligence #FutureOfWork #Technology #Society