From Silos to Systems: How Agentic AI Challenges Traditional Organisational Design
In a previous post, I explored how functional specialisation — at various levels of an organisation — creates hidden organisational debt that slows the organisation down and reduces optionality. The symptoms are familiar: slow delivery, brittle systems, and increasing misalignment between how we structure our people and how we want our systems to behave.
But that analysis assumed a world of mostly human actors.
Now, with the accelerating rise of agentic AI—tools and systems capable of taking initiative, decomposing goals, and coordinating their own workflows—we’re entering a new design space.
And we’re going to need new organisational metaphors, constraints, and incentives to match.
This is still an emerging field. But the early signals are clear: as agentic tooling becomes more capable, the way we structure our organisations will be a major determining factor in our ability to amplify (and constrain) its potential.
Companies that fail to adapt will be rapidly replaced by those that do, and new kinds of companies will emerge that were simply not possible before.
Which parts of our current thinking will still be valid, and which will we need to let go of or change?
Why Agentic AI Changes the Game
Most automation in the last few decades has operated in a task-execution paradigm: software did narrow things well, often behind the scenes, often embedded in business processes that humans designed and oversaw.
Agentic AI changes that. We now have systems that:
- Take broad goals as input and break them into subtasks
- Select tools, APIs, and data sources
- Manage state across long-lived flows
- Coordinate with humans (or other agents) as collaborators, not just passive endpoints
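In code terms, that shift is easiest to see as a loop rather than a single function call. The sketch below is a deliberately minimal illustration, assuming hypothetical plan, pick_tool, and escalate helpers rather than any particular agent framework:

```python
# Minimal sketch of an agentic loop: goal in, subtasks out, tools selected,
# state carried across steps. All names here (plan, pick_tool, Tool, etc.)
# are illustrative placeholders, not a specific framework's API.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Tool:
    name: str
    run: Callable[[str], str]  # takes an instruction, returns a result

@dataclass
class AgentState:
    goal: str
    pending: list[str] = field(default_factory=list)               # subtasks not yet done
    history: list[tuple[str, str]] = field(default_factory=list)   # (task, result) pairs

def plan(goal: str) -> list[str]:
    """Decompose a broad goal into subtasks (in practice, a model call)."""
    return [f"research: {goal}", f"draft: {goal}", f"review: {goal}"]

def pick_tool(task: str, tools: list[Tool]) -> Tool:
    """Select the tool whose name matches the task prefix (a stand-in for real routing)."""
    prefix = task.split(":", 1)[0]
    return next((t for t in tools if t.name == prefix), tools[0])

def run_agent(goal: str, tools: list[Tool], escalate: Callable[[str], None]) -> AgentState:
    state = AgentState(goal=goal, pending=plan(goal))
    while state.pending:
        task = state.pending.pop(0)
        result = pick_tool(task, tools).run(task)
        state.history.append((task, result))
        if "needs human" in result:   # coordinate with humans, not just report to them
            escalate(task)
    return state
```

In a real system, plan and pick_tool would typically be model calls backed by a tool catalogue, and escalation would be richer than a single callback. The shape of the loop is the point: the goal, the decomposition, and the state live inside the system rather than in a human-managed process.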
That changes the role of the human, and shifts the boundary between “doing the work” and “designing the system that does the work.”
It also raises urgent questions:
- Who is accountable for the outcomes of an AI-augmented workflow?
- How do we ensure coordination across agents and humans when both are initiating change?
- What does “team” even mean when a portion of it isn’t human?
The End of the Knowledge Monopoly
Traditional organisational hierarchies were built on the premise that knowledge and expertise flow from the top down. Functional specialists held monopolies over their domains. But agentic AI is about to shatter this model:
- Expertise Becomes Ambient: When AI can provide expert-level insights in real time, the value shifts from holding knowledge to knowing how to apply it effectively
- Horizontal Knowledge Flow: Information and learning will flow sideways through AI-mediated channels, bypassing traditional vertical structures
- Role Fluidity: Fixed functional roles become less relevant when AI can rapidly scaffold domain knowledge
New Organisational Primitives
Instead of organising around functional specialties, companies may need to organise around:
- Context Pods: Small, cross-functional groups with deep shared context about specific business domains
- Learning Loops: Structures optimised for rapid experimentation and feedback between humans and AI
- Intent Networks: Loose configurations of humans and AI agents aligned around clear outcomes rather than prescribed processes
The Human Advantage
The key differentiator becomes our uniquely human capabilities:
- Intent Setting: Defining meaningful direction and purpose
- Context Synthesis: Combining multiple viewpoints into coherent understanding
- Social Orchestration: Building trust and alignment across human-AI teams
The organisations that thrive won’t be those with the best AI, but those that best understand how to create environments where humans and AI amplify each other’s strengths.
What Principles Still Hold?
Despite the hype and change, some organisational design truths will most likely remain remarkably resilient.
1. Sociotechnical alignment still matters
The core principle from Conway’s Law still applies: your systems will reflect your communication structures. If you introduce autonomous agents into brittle, siloed org structures, you’ll get fragmented, brittle AI usage.
2. Flow beats function
Designing for flow of change, feedback, and value will still outperform static hierarchies and handoff-heavy workflows. Agentic tools can enhance flow—but only if the environment supports it.
3. Clear purpose and context are essential
Agents need intent, scope, and constraints to act meaningfully. Humans do too. Making decision-making context legible at all levels—through goals, metrics, affordances, and APIs—remains essential.
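One way to make that legibility concrete is to write intent, scope, and constraints down as a small, explicit structure that a person or an agent can check an action against. A minimal sketch, with an entirely hypothetical schema:

```python
# A sketch of "legible context": intent, scope, and constraints written down
# explicitly enough that either a human or an agent can check an action
# against them. The schema is illustrative, not a standard.
from dataclasses import dataclass

@dataclass(frozen=True)
class DecisionContext:
    goal: str                      # the intent: what outcome we are trying to move
    metric: str                    # how progress is measured
    in_scope: tuple[str, ...]      # affordances: systems/APIs this work may touch
    constraints: tuple[str, ...]   # hard limits that apply to humans and agents alike

checkout_latency = DecisionContext(
    goal="Reduce p95 checkout latency below 800ms",
    metric="p95_checkout_latency_ms",
    in_scope=("checkout-service", "payments-api", "cdn-config"),
    constraints=("no schema changes without review", "stay within current cloud budget"),
)

def action_allowed(context: DecisionContext, target_system: str) -> bool:
    """A trivially checkable rule: only act on systems the context names as in scope."""
    return target_system in context.in_scope
```

The specific fields matter less than the fact that they are written down once and readable by everyone, and everything, acting within that context.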
What Might Need to Change?
This is where things get exciting—and uncertain.
1. From roles to capabilities to outcome-defined functions
Traditional org charts based on stable roles are already under pressure. With agents in the mix, we’ll need to model work in terms of capability graphs, where both humans and AI components can be dynamically assigned to outcome-aligned goals.
Think: “Who can satisfy this intent under these constraints?” not “Whose job is this?”
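A rough sketch of what that question could look like as a lookup, using a hypothetical capability registry in which humans and agents both appear:

```python
# Sketch of a capability graph: work is matched to whoever (human or agent)
# declares the needed capabilities, rather than to a fixed role. Purely
# illustrative; the registry and matching rule are placeholders.
from dataclasses import dataclass

@dataclass(frozen=True)
class Actor:
    name: str
    kind: str                      # "human" or "agent"
    capabilities: frozenset[str]

REGISTRY = [
    Actor("Priya", "human", frozenset({"pricing-strategy", "stakeholder-alignment"})),
    Actor("refund-agent", "agent", frozenset({"refund-processing", "ledger-updates"})),
    Actor("Sam", "human", frozenset({"refund-processing", "customer-escalation"})),
]

def who_can_satisfy(required: set[str], registry: list[Actor]) -> list[Actor]:
    """Return every actor whose declared capabilities cover the intent's requirements."""
    return [a for a in registry if required <= a.capabilities]

# "Who can satisfy this intent?" rather than "Whose job is this?"
candidates = who_can_satisfy({"refund-processing"}, REGISTRY)
# -> both the refund agent and Sam qualify
```

Constraints such as cost, risk, or current load would then decide between the candidates, rather than a fixed reporting line.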
2. Teams as dynamic coalitions
The idea of a “team” as a static group of people may give way to fluid coalitions of humans and AI agents assembled around a problem or opportunity. This could drive a move toward short-lived, goal-oriented micro-teams, orchestrated by platform or protocol.
3. Organisational structure becomes more recursive
As agents become internal developers, decision-makers, and testers, we may see orgs that are nested systems of delegation—humans designing high-level strategies and constraints, agents handling tactical execution within those boundaries.
This mirrors how effective engineering teams work today: strategy at the top, execution at the edge, with feedback loops across the system. The difference is who (or what) is at the edge.
4. Governance, ethics, and observability become foundational
When agents are taking actions with consequences, we need radically better observability, traceability, and accountability. These aren’t bolt-ons. They become the scaffolding of trustworthy organisational systems.
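As a sketch of what treating observability as scaffolding might mean, every agent action could be routed through a thin layer that records who acted, under whose delegation, and with what inputs and outputs. The structure below is illustrative, not a specific governance framework:

```python
# Sketch: agent actions only happen through a wrapper that appends a
# structured audit record (actor, delegating human, input, output, timestamp).
# Field names are illustrative placeholders.
import json
import time
from typing import Callable

AUDIT_LOG: list[dict] = []

def traced_action(agent_id: str, delegated_by: str, action: str,
                  fn: Callable[[str], str], payload: str) -> str:
    """Execute an agent action and append a replayable audit record."""
    result = fn(payload)
    AUDIT_LOG.append({
        "ts": time.time(),
        "agent": agent_id,
        "delegated_by": delegated_by,   # the accountable human for this workflow
        "action": action,
        "input": payload,
        "output": result,
    })
    return result

def export_trail() -> str:
    """Serialise the trail so it can be inspected independently of the agent."""
    return json.dumps(AUDIT_LOG, indent=2)
```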
Final Thoughts: It’s Not Just About Tech (or Making the Right Tool Decisions)
This shift isn’t just technical—it’s deeply human and organisational.
Just like DevOps and agile ways of working required rethinking culture, incentives, and collaboration, agentic AI will force a similar reckoning. It’s not enough to bolt agents into old workflows. We’ll need to rethink what work is, who does it, and how systems evolve over time.
The organisations that thrive in this next wave will be those that design for adaptability, not just efficiency.
That means:
- Investing in platforms that support dynamic assembly
- Building incentives around outcomes and learning, not control
- Focusing on resilience, feedback loops, and optionality
We’re just getting started. But the future of work won’t be shaped by AI alone—it’ll be shaped by how we choose to organise around it.