ORGANIZATIONAL ABDICATION

When individual weaknesses harden into structural logic

This is part two. Part one is here: link. Individual abdication is small: a shift of responsibility, a spontaneous choice to save mental energy. This is well documented in cognitive psychology as cognitive laziness (Pennycook & Rand 2019; DOI: 10.1016/j.cognition.2019.01.017). In an organization the same mechanism is magnified. People synchronize their minimum effort. Direction emerges from habit. Habit becomes routine. Routine becomes process. Process becomes standard. AI travels along these standards and amplifies them, consistent with research on machine behaviour (Rahwan et al. 2019; DOI: 10.1038/s41586-019-1138-y).
A logical conclusion is that AI accelerates organizational movements already present in the structure.

RESPONSIBILITY DIFFUSION

Decisions lose owners and land in the system

Organizations like to appear busy. Motion signals importance and confirms the structure’s existence. Reports and check-ins get produced to show activity even when direction is unclear.
A likely consequence is that flow starts replacing ownership, aligning with research on responsibility diffusion (Darley & Latané 1968; DOI: 10.1037/h0025589). When accountability is needed it is often easier to point elsewhere than to carry the decision. The decision becomes a product of “this is just how we do things.” The logical conclusion is that the decision is perceived as the system’s rather than any person’s. AI reinforces the pattern by delivering answers perceived as neutral, reducing human willingness to take ownership—a known effect of automation bias (Mosier et al. 1998).

AUTHORITY OVERTRUST

The system’s output outweighs experience

Many assume a model is objective, stable, and more reliable than human judgment. This is supported by research on algorithmic authority bias (Logg et al. 2019; DOI: 10.1016/j.obhdp.2018.06.004). Experienced people fall silent when the system signals certainty. Cognitive laziness does the rest. A reasonable outcome is that judgment gets replaced by compliance.

RESOURCE ILLUSION

Review is cut, and costs hide until they explode

Organizations deprioritize control because it is seen as a cost. Research on system failures shows how reduced control creates risks that grow unseen (Dekker 2005). When review disappears, errors accumulate that would otherwise have been stopped early.
This leads to costs surfacing only after they have become systemic.

INCENTIVES

What gets measured becomes reality, even when it lacks substance

When KPIs steer behavior, metrics replace judgment. Goodhart’s Law is well documented (Strathern 1997). Organizations follow what is visible in the dashboard: speed, volume, cost. Quality lacks a metric and fades. A logical conclusion is that quality gets crowded out by numbers that are faster to produce.
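The crowding-out can be sketched in a few lines. This is a toy illustration, not a model of any real organization: the strategy names and numbers are invented, and the point is only that a selection rule which sees volume but not quality will reliably pick the sloppiest option.

```python
# Toy sketch of Goodhart's Law: hypothetical work strategies with a
# visible metric (throughput) and a hidden one (defect rate).
# Selecting purely on the dashboard metric crowds out the unmeasured
# dimension. All names and numbers are illustrative.

strategies = {
    # name: (items_per_day, defect_rate)
    "careful_review": (10, 0.01),
    "skip_review":    (25, 0.10),
    "batch_and_rush": (40, 0.25),
}

def dashboard_score(name):
    """What the KPI sees: volume only. Quality has no metric."""
    items, _defects = strategies[name]
    return items

chosen = max(strategies, key=dashboard_score)
print(chosen)                 # the fastest strategy wins...
print(strategies[chosen][1])  # ...carrying the highest defect rate
```

The defect rate never enters `dashboard_score`, so no amount of optimization pressure will protect it; the metric is doing exactly what it was told.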

COMPETENCE MISALLOCATION

The system is monitored by people who do not see the anomalies

Advanced systems require domain understanding (Baxter & Sommerville 2011; DOI: 10.1016/j.intcom.2010.07.003). Senior talent is expensive, so monitoring tends to be shifted to junior operators. Operators without deep experience follow manuals. Manuals do not capture anomalies.
The logical conclusion is that anomalies can stay invisible until they grow large. Once the system has shaped enough decisions, the consequences surface at scale, and errors are detected late in the process. AI reinforces the pattern with coherent responses that mask drift.

HYPE AND LEGITIMACY

Technology is adopted as a signal rather than a function

Organizations adopt tech as a marker of legitimacy, described in research on institutional isomorphism (DiMaggio & Powell 1983; DOI: 10.2307/2095101). Adoption becomes symbol, not solution.
This leads to technology being shaped by cultural expectations rather than operational needs.

CULTURAL PASSIVITY

When no one pushes back, everything drifts the same way

Organizational silence is a documented phenomenon (Edmondson 1999; Morrison & Milliken 2000). No one wants to be the one who swims against the current. Conflicts are avoided even when friction is necessary. A reasonable consequence is that errors lack counterforce. When objections disappear, errors pass unnoticed. AI drives movement further in an environment with no brakes.

QUALITY GAPS

Implementation before understanding, every time

Organizations implement faster than they understand, a pattern documented in studies of socio-technical systems (Baxter & Sommerville 2011). Testing, rollback, and verification are postponed.
This leads to the system’s logic operating unrefined in production.

STRUCTURAL AMPLIFICATION

AI amplifies structural flaws until they are impossible to ignore

AI models the environment it is placed in (Rahwan et al. 2019). Tilts in culture, processes, and incentives are magnified. A logical conclusion is that organizational weaknesses scale faster in AI environments. As direction accelerates, flaws become visible.
This leads to structural distortions affecting the next layer of decisions.
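How fast a small tilt compounds is easy to underestimate. A minimal sketch, assuming a purely illustrative 1% skew per decision cycle where each decision feeds the next:

```python
# Hedged illustration: a small, consistent tilt compounds when each
# automated decision feeds the next. The 1% figure and cycle count
# are illustrative, not empirical.

tilt = 1.01   # 1% skew per decision cycle
drift = 1.0
for _ in range(100):  # 100 decision cycles
    drift *= tilt

print(round(drift, 2))  # ~2.70: the skew has nearly tripled
```

No single cycle looks alarming; the distortion only becomes visible in aggregate, which is exactly when it starts affecting the next layer of decisions.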

SYRI

An example where the process became harsher than the human

SyRI, the Dutch government’s System Risk Indication for detecting welfare fraud, locked people into risk flagging with no way back. Appeals were absent and the chain of responsibility was broken. This is documented in academic analysis (van Eck 2018; DOI: 10.2139/ssrn.3590201). The structure held on to errors longer than a human would have.
The logical conclusion is that processes without friction can drown out human judgment.

LAZINESS IN AGGREGATE BECOMES ARCHITECTURE

The organization’s trajectory is shaped by its accumulated shortcuts

Many small abdications create slopes. Review thins out. Control weakens. Processes continue under their own weight. AI gives the movement speed. Humans gave it shape.

SYSTEMS OPTIMIZE THE OBJECTIVE, EVEN WHEN THE OBJECTIVE IS SKEWED

Organizations set KPIs. The model follows them like law. Research on objectives shows how systems optimize even when the goal is inadequate (Amodei et al. 2016; Krakovna et al. 2020). When the goal tilts, the system follows the tilt consistently.
A logical conclusion is that the system produces errors with high precision when the objective is ill-defined. That is where part three picks up.
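"Errors with high precision" can be made concrete with a small sketch. The policies and numbers below are invented for illustration: the objective is asked to minimize response time with no term for correctness, and the optimizer obliges.

```python
# Minimal sketch of a mis-specified objective: the system is asked to
# minimize response time, and correctness appears nowhere in the goal.
# Policy names and numbers are illustrative, not from any real system.

candidates = [
    # (policy, seconds_to_answer, fraction_correct)
    ("verify_then_answer", 5.0, 0.98),
    ("answer_from_cache",  1.0, 0.80),
    ("answer_immediately", 0.1, 0.40),
]

def objective(candidate):
    """The stated goal: speed. Correctness is absent, so it cannot matter."""
    _policy, seconds, _correct = candidate
    return -seconds  # higher is better: faster responses score higher

best = max(candidates, key=objective)
print(best[0])  # the instant-answer policy wins the stated objective...
print(best[2])  # ...while most of its answers are wrong, consistently
```

The system is not malfunctioning; it is optimizing exactly the goal it was given. When the goal tilts, the output tilts with the same consistency.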


Sources

Pennycook, G., & Rand, D.G. (2019). Lazy, not biased. Cognition. DOI: 10.1016/j.cognition.2019.01.017
Mosier, K. L. et al. (1998). Automation Bias. Int. J. Aviation Psychology. DOI: 10.1207/s15327108ijap0801_3
Logg, J.M. et al. (2019). Algorithm Appreciation. OBHDP. DOI: 10.1016/j.obhdp.2018.06.004
Edmondson, A.C. (1999). Psychological Safety. Administrative Science Quarterly. DOI: 10.2307/2666999
Morrison, E.W. & Milliken, F.J. (2000). Organizational Silence. Academy of Management Review. DOI: 10.5465/amr.2000.2791608
Strathern, M. (1997). Improving ratings: Audit in the British university system. European Review, 5(3), 305–321.
Baxter, G., & Sommerville, I. (2011). Socio-technical systems. Interacting With Computers. DOI: 10.1016/j.intcom.2010.07.003
Rahwan, I. et al. (2019). Machine behaviour. Nature. DOI: 10.1038/s41586-019-1138-y
Amodei, D. et al. (2016). Concrete Problems in AI Safety. DOI: 10.48550/arXiv.1606.06565
Krakovna, V. et al. (2020). Specification gaming examples. DOI: 10.48550/arXiv.2008.02275
van Eck, M. (2018). SyRI: System Risk Indication. DOI: 10.2139/ssrn.3590201