Research

The founding research programme

CIT’s research programme was established around a set of questions that have shown considerable durability across thirty years of inquiry: how does culture form, sustain itself and change in complex organisations? What is the relationship between individual motivation and collective purpose? How do the levels of awareness operative in leadership affect the quality of institutional decision-making? And what does it mean, in practice, to change an organisation as a whole system rather than as an assembly of separable components?

These questions were not chosen for their theoretical interest alone. They correspond to the most common and the most consequential failure modes in organisational transformation. Strategies that do not account for the existing cultural architecture of an organisation routinely fail to translate into changed behaviour. Leadership development programmes that do not examine the relationship between individual motivation and organisational purpose produce technically capable leaders who are personally disengaged. Change programmes that treat functions, processes and people as independent variables rather than as elements of an interdependent system produce local improvements that do not propagate or endure.

The research has been conducted in parallel with consultancy practice throughout. This means it has been tested continuously against the conditions of real organisations, at different scales and in different sectors, over more than three decades. The result is not a theoretical framework applied to organisational life. It is an understanding of organisational life that has generated its own frameworks.

The contemporary research layer

The emergence of artificial intelligence as a participant in organisational life, rather than merely a tool available to it, has required CIT to extend its research programme significantly without abandoning its foundations. The questions remain recognisable. The complexity of the answers has increased considerably.

CIT is currently conducting research across three interconnected areas.

The first is human behavioural intelligence in the context of the fused workforce. This examines how individuals maintain, or fail to maintain, a coherent sense of purpose, autonomy and accountability when their work is structurally interdependent with AI systems. It includes an examination of how trust operates differently when some participants in a work system are non-human; how knowledge flows or is withheld in environments where the quality of AI performance depends on the quality and generosity of human contribution; and how individuals calibrate their own judgement and agency when operating alongside systems that are, in specific domains, considerably faster and more consistent than they are.

The second area concerns AI behavioural intelligence: the values, assumptions and behavioural parameters embedded in AI systems, whether by design or as a consequence of the data from which they have learned, and the degree to which those parameters are aligned with the declared culture of the organisations deploying them. This is an area in which the gap between organisational intention and operational reality is currently substantial, and in which the costs of misalignment are poorly understood and rarely measured.

The third area is fusion design: the architecture of operating models in which human and digital workers are not simply adjacent but genuinely integrated, with a shared values framework, coherent governance structures, and accountability mechanisms that extend across both human and non-human participants. This requires drawing on CIT’s existing research into whole systems change and motivational alignment, and extending those frameworks into the design space created by the presence of AI at scale.

From AI to Original Augmented Intelligence: the architectural question

The research areas described above are framed in terms of AI, which is the language most organisations are currently using and the context in which most of the relevant decisions are being made. CIT uses that language deliberately, because the audience navigating these questions today is working within an AI frame and the conversation must begin where people are.

But there is a more fundamental architectural question beneath it, and CIT’s research is increasingly oriented toward it.

The dominant model of AI integration treats the human as a participant inside a system the AI structures. The human is in the loop: present, consulted, sometimes decisive, but operating within a frame the technology has set. The design logic runs from the machine outward. Capability is defined by what the AI can do. The human adapts to that.

The question CIT is asking is whether that is the right architecture, or whether it is simply the default one.

The alternative model inverts the logic entirely. It begins with the human: their original intelligence, their accumulated judgement, their values, their capacity for meaning-making and contextual reasoning that no system built from aggregated data can fully replicate. In this model, AI does not structure the work. It amplifies the human’s capacity to do it. The AI is in the loop with the human. That is not a subtle distinction. It is a different design philosophy, and it produces different cultural, behavioural and organisational consequences.

CIT uses the term Original Augmented Intelligence to name this architectural position. OAI does not replace AI. It sits above it. AI remains the technological substrate. OAI describes the design orientation: the commitment to keeping human original intelligence as the primary asset, and ensuring that the systems built around it amplify rather than diminish it.

For organisations, the practical implication is a choice that most have not yet named as a choice. They are building AI into their operating models, their workflows and their workforce architecture, but they are not asking whether the model they are building places human judgement at the centre or at the periphery. The cultural and behavioural consequences of that choice are profound. An organisation designed around OAI logic needs its people to bring their full original intelligence to their work: their values, their judgement, their creative and ethical reasoning. That requires a culture of trust, knowledge-sharing and purposeful contribution. An organisation designed around AI-in-the-loop logic needs its people to operate reliably within defined parameters. That requires a very different culture, with very different behavioural architecture.

Most organisations are building the second while claiming to want the first.

Culture, values and behaviour look different depending on which model you are actually designing for. The work of clarifying that is where CIT’s research now sits.

The digital twin question

The same architectural distinction applies to how digital twins are understood and designed. The prevailing definition of a digital twin in an organisational context is a digital representation of a process, a workflow or a system: a model that mirrors operational reality and enables simulation, optimisation and prediction. That definition reflects AI-in-the-loop logic. The twin is a twin of the work.

In an OAI model, the more significant asset is a digital twin of the human: a structured, evolving representation of an individual’s original intelligence, expertise, judgement, values and accumulated knowledge. Not a record of what they have done, but a capture of how they think and what they know, built with sufficient fidelity to function as an extension of that person’s contribution rather than a replacement for it.

That distinction matters for how organisations think about knowledge architecture, workforce design and the long-term value of the people within them. A digital twin of a workflow is an operational asset. A digital twin of a human is a strategic one. It compounds in value as the individual develops. It is portable across contexts. It survives organisational restructuring. And crucially, it keeps the human’s original intelligence at the centre of the system rather than treating it as an input to be processed and replaced.

CIT’s research into fusion design engages directly with this question: what does it mean to build an organisation in which the digital representation of human intelligence is treated as the primary asset, and in which the cultural and behavioural architecture supports and sustains that rather than inadvertently undermining it?

Note on terminology: CIT uses the term AI throughout this site because that is the language in current organisational use. Original Augmented Intelligence and OAI are concepts developed within the Bloor Research programme. Where those terms appear, they name a specific architectural position rather than a rebranding of the same idea.

The research ecosystem

CIT’s research is developed within the wider Bloor Research analytical programme, which examines the structural, economic and technological forces reshaping organisations and economies at system level. CIT provides the human, cultural and behavioural architecture within that programme: the analytical layer required to determine whether structural and technological transformation is coherent at the level of the people and organisations living through it. The two bodies of work are complementary and mutually informing: CIT’s findings about behavioural dynamics in fused workforces inform Bloor’s structural analysis of operating model viability; Bloor’s analysis of economic and market forces informs the scale at which CIT frames its research questions about human and organisational resilience.

The central research question for CIT in this period is not what AI can do. It is what kind of cultural, behavioural and values architecture an organisation must have in place for AI to function coherently within it, at scale, over time.

© 2026 CIT Ltd. All rights reserved.