
Stress Mapper Methodology

Technical note on construct design, scoring logic, weighting rationale, and limitations for the GlassCase Institutional Stress Mapper.

1. Purpose and scope

The Institutional Stress Mapper is a structured triage tool. We built it to detect patterns of institutional stress over time. It does not produce findings or conclusions. It sits in the tradition of risk-based regulatory triage (Sparrow, 2000): identifying where deeper inquiry may be warranted, without claiming to determine what happened or why.

The intended audience is broad: oversight practitioners, researchers, educators, and anyone trying to make sense of observable institutional signals in a structured way.

What this tool is not. Not an audit. Not an assessment. Not a measurement instrument. Scores represent user judgement based on observable proxy indicators. They are not validated findings. Output should not be used as evidence in legal, regulatory or administrative proceedings without independent professional verification.

2. Why "stress" and not "trauma"

We measure stress. The word choice matters more than it might appear.

Stress is a load condition. Every organisation carries some. It is observable, it varies over time, and it can be scored without implying that something has gone wrong. Trauma is what happens when stress exceeds capacity for long enough. Vivian and Hormann (2013) developed a framework for understanding how trauma enters and moves through organisations. Their model treats trauma as the object of study. We treat stress as the measured variable and reference trauma only as a potential downstream consequence.

This is a posture choice. "Stress" keeps the tool diagnostic and forward-looking. "Trauma" would make it retrospective and potentially accusatory. We want to show where load is accumulating. We are not diagnosing what that load has done.

3. Constructs and subsystems

We decompose institutional health into five subsystems. These are not exhaustive. They are the dimensions we found to be most commonly observable and externally verifiable in public-sector and education contexts.

| Subsystem | Observable proxies | Why included |
| --- | --- | --- |
| Leadership & Governance | Decision quality, turnover, structural changes, acting appointments | Leadership instability is a leading indicator of systemic stress across all other dimensions |
| Workforce Stability | Staff churn, vacancies, secondments, LOA spikes | Workforce disruption both signals and amplifies institutional strain |
| Transparency & Compliance | FOI volume, refusal rates, processing times, deemed refusals | FOI and compliance data are the most publicly verifiable signals of governance health |
| Organisational Climate | Morale indicators, grievances, cultural signals, survey data | Climate captures the human experience of institutional stress that quantitative metrics miss. This is the least externally verifiable subsystem: scoring is more susceptible to user bias and less defensible from public data alone. It is included because it matters, but users should weight their confidence in Climate scores accordingly |
| External Scrutiny | Audits, OVIC, WorkSafe, Ombudsman, media | External engagement reflects both the severity and visibility of institutional problems |

These five draw on organisational failure literature (Reason, Vaughan, Perrow), institutional crisis theory (Boin & 't Hart), safety culture frameworks (Westrum), regulatory craft (Sparrow) and responsive regulation theory (Braithwaite). No single model produced this list. It is a synthesis of dimensions that keep recurring across those traditions.

The subsystem decomposition follows Reason's model. Latent failures accumulate across organisational layers before they surface as active failures. Scoring across multiple time periods draws on Perrow's argument that cascading failures in tightly coupled systems are inherent properties of the system rather than aberrations. Vaughan's normalisation of deviance explains why the timeline view matters. Each tolerated deviation resets the baseline. Trigger incidents compound rather than resolve.

We weight Transparency & Compliance highest (×1.3) because FOI data provides the strongest publicly verifiable signal. OAIC quarterly statistics back this up. Refusal rates are a measurable, longitudinal indicator of governance health. Rising refusal rates and FOI volume surges have been documented as stress signals on the institutions themselves, beyond just the impact on applicants.

4. Scoring scale

Each subsystem is scored on a 0–3 ordinal scale per time period:

| Score | Band | Meaning |
| --- | --- | --- |
| 0 | Baseline | No observable stress signals beyond normal operational variation |
| 1 | Elevated | Early or isolated signals that may warrant monitoring |
| 2 | Acute | Clear stress signals across multiple indicators; active concern |
| 3 | Chronic | Sustained, cross-subsystem stress with no recovery trajectory |

This is an ordinal scale. It is not interval or ratio. A score of 2 does not mean "twice as stressed" as 1. The bands are qualitative states. The scale is deliberately coarse because we are building a triage tool.

At Acute and Chronic levels, the tool encourages users to record a justification note explaining what observable signal supports the rating. This is advisory. Users can score without writing a note. But high scores without documented reasoning are analytically weak, and anyone reviewing the output should treat them accordingly.
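The ordinal constraint can be made explicit in code by treating scores as a closed set rather than arbitrary numbers. A minimal sketch (type and function names are illustrative, not the tool's actual source):

```typescript
// Ordinal stress score: a closed union rather than `number`, so any
// arithmetic that assumes interval or ratio properties has to be a
// deliberate, visible conversion step.
type StressScore = 0 | 1 | 2 | 3;

// Qualitative band labels from the scoring table above.
const BAND_LABELS: Record<StressScore, string> = {
  0: "Baseline", // no signals beyond normal operational variation
  1: "Elevated", // early or isolated signals worth monitoring
  2: "Acute",    // clear signals across multiple indicators
  3: "Chronic",  // sustained, cross-subsystem stress, no recovery trajectory
};

function bandLabel(score: StressScore): string {
  return BAND_LABELS[score];
}
```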

5. Composite index and weighting

Individual subsystem scores are combined into a single Stress Index on a 0–10 scale using a weighted arithmetic mean: the weighted mean of the five 0–3 subsystem scores is rescaled so that the maximum possible mean of 3 maps to 10.

Why weights exist

Equal weighting is not neutral. It is an implicit claim that all subsystems contribute equally to institutional stress. We do not think that is true. Some signals are more externally verifiable, more reliably observable or more strongly predictive of systemic problems. The weights reflect that.

Current weights

| Subsystem | Weight | Rationale |
| --- | --- | --- |
| Leadership & Governance | ×1.2 | Leadership instability is a strong leading indicator with downstream effects across all subsystems |
| Workforce Stability | ×1.0 | Standard baseline weight |
| Transparency & Compliance | ×1.3 | FOI and compliance data are the most publicly verifiable signals; repeated statutory non-compliance is a strong proxy for systemic governance stress |
| Organisational Climate | ×1.0 | Standard baseline weight |
| External Scrutiny | ×1.1 | External oversight engagement reflects severity and public visibility of institutional problems |

Sensitivity. These weights are provisional. We kept the range narrow (1.0–1.3) so that no single subsystem can dominate the composite. Our sensitivity testing was qualitative and design-oriented. It was not a formal statistical robustness analysis. Under reasonable alternative weighting scenarios, including equal weighting, the rank-order of periods and overall trend shape are generally preserved. That suggests the patterns the tool surfaces are not artefacts of the weighting choices.
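The composite computation can be sketched as follows, assuming the index is the weighted mean of the 0–3 scores scaled by 10/3 so that a full-score period maps to exactly 10. Subsystem keys and function names are illustrative, not the tool's actual code:

```typescript
// Weighted composite Stress Index: weighted arithmetic mean of the
// five 0-3 subsystem scores, normalised to a 0-10 scale.
// Weights taken from the table above; key names are illustrative.
const WEIGHTS: Record<string, number> = {
  leadership: 1.2,
  workforce: 1.0,
  transparency: 1.3,
  climate: 1.0,
  scrutiny: 1.1,
};

function stressIndex(scores: Record<string, number>): number {
  let weightedSum = 0;
  let totalWeight = 0;
  for (const [subsystem, weight] of Object.entries(WEIGHTS)) {
    const score = scores[subsystem];
    if (score === undefined || score < 0 || score > 3) {
      throw new RangeError(`score for ${subsystem} must be 0-3`);
    }
    weightedSum += weight * score;
    totalWeight += weight;
  }
  // The maximum possible weighted mean is 3, so scale by 10/3
  // to normalise the index onto the 0-10 range.
  return (weightedSum / totalWeight) * (10 / 3);
}
```

Because the weights appear in both numerator and denominator, scaling all weights by a common factor leaves the index unchanged; only the ratios between weights matter.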

Severity bands

The 0–10 Stress Index is mapped to four severity bands, evenly distributed across the scale. The bands are communicative scaffolding; we have not empirically validated the cutpoints. The graduated escalation borrows from Ayres and Braithwaite's responsive regulation framework (1992): essentially the enforcement pyramid turned sideways into a diagnostic scale. Each band implies a different posture: monitor, pay active attention, consider intervention, or recognise sustained systemic concern.
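Four evenly distributed bands over a 0–10 scale imply cutpoints at 2.5, 5.0 and 7.5. A sketch of the mapping; the band labels below are placeholders built from the posture descriptions, not the tool's published band names:

```typescript
// Map a 0-10 Stress Index to one of four evenly spaced severity bands.
// Labels are illustrative placeholders; the postures come from the
// methodology text (monitor / attention / intervention / systemic concern).
function severityBand(index: number): string {
  if (index < 0 || index > 10) {
    throw new RangeError("index must be within 0-10");
  }
  if (index < 2.5) return "Band 1: monitor";
  if (index < 5.0) return "Band 2: active attention";
  if (index < 7.5) return "Band 3: consider intervention";
  return "Band 4: sustained systemic concern";
}
```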

6. Data handling

Everything stays in the browser. All data is stored locally via localStorage. Nothing is transmitted to GlassCase servers. Data persists across sessions on the same device until the user clears it or resets the tool.

Users can export a structured JSON file containing all scores, justification notes, computed indices, and full metadata (tool version, methodology reference, disclaimer). Notes are user commentary. They are included in exports but never published or shared unless the user distributes the file themselves.
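The export described above amounts to a plain serialisation step over locally held state. A sketch of the file shape, with all field names assumed from the description (scores, notes, computed indices, metadata) rather than taken from the tool's actual schema:

```typescript
// Sketch of the structured JSON export: per-period scores, justification
// notes, computed indices, and tool metadata bundled into one file.
// In the browser this state would live in localStorage; field names here
// are illustrative assumptions.
interface ExportFile {
  toolVersion: string;
  methodologyRef: string;
  disclaimer: string;
  periods: Array<{
    label: string;
    scores: Record<string, number>; // subsystem -> 0-3 ordinal score
    notes: Record<string, string>;  // subsystem -> justification note
    stressIndex: number;            // computed 0-10 composite
  }>;
}

function serialiseExport(data: ExportFile): string {
  // Pretty-printed so the export stays human-readable and diffable.
  return JSON.stringify(data, null, 2);
}
```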

7. Limitations and non-claims

Narrative timelines

The narrative event timeline in the tool is a demonstration only. The scenario is entirely fictional. It is not derived from, inspired by, or modelled on any specific real institution, dispute, investigation, or set of events, including any matter involving the author. The sequence, timing, and event types are deliberately compressed to show how pattern logic works rather than to simulate a realistic case. Any resemblance to actual events is coincidental.

Narrative is where the risk concentrates. It is the component most likely to be read as allegation. We constrain the demonstration with a structural guardrail: events only appear on the timeline where they coincide with or precipitate observable stress signals in at least one other subsystem. No narrative-only entries. Event descriptions are schematic placeholders. They are generated solely to illustrate how narrative elements are constrained by concurrent subsystem signals. The panel is not user-editable.
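The guardrail above reduces to a filter: an event is displayed only if at least one subsystem scores above baseline in the same period, so narrative-only entries are dropped. A sketch under that reading (types and names are illustrative):

```typescript
// Guardrail filter: a timeline event appears only when it coincides with
// an observable stress signal (score >= 1) in at least one subsystem for
// the same period. Narrative-only entries never reach the display.
interface TimelineEvent {
  period: string;
  description: string;
}

function visibleEvents(
  events: TimelineEvent[],
  scoresByPeriod: Record<string, Record<string, number>>,
): TimelineEvent[] {
  return events.filter((event) => {
    const scores = scoresByPeriod[event.period];
    if (!scores) return false; // no scored signals at all: suppress
    return Object.values(scores).some((score) => score >= 1);
  });
}
```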

8. Citation anchors

The following works informed our design choices. Inclusion here does not imply endorsement by the authors.

Ayres, I., & Braithwaite, J. (1992). Responsive regulation: Transcending the deregulation debate. Oxford University Press. — The enforcement pyramid and graduated regulatory response based on signals. Precedent for the severity band escalation logic.

Boin, A., & 't Hart, P. (2000). Institutional crises in policy sectors: An exploration of characteristics, conditions and consequences. In H. Wagenaar (Ed.), Government institutions: Effects, changes and normative foundations (pp. 9–31). Kluwer. See also: Boin, A., 't Hart, P., Stern, E., & Sundelius, B. (2005). The politics of crisis management: Public leadership under pressure. Cambridge University Press. — Institutional crisis as a systemic concept distinct from individual incidents.

Braithwaite, J. (2002). Restorative justice and responsive regulation. Oxford University Press. — Responsive regulation theory and the escalation logic that informs severity band design.

Hormann, S. (2018). The strengths and shadows model. Humanistic Management Journal, 3(1), 91–104. See also: Vivian, P., & Hormann, S. (2013). Organizational trauma and healing. CreateSpace. — Foundational work on organisational trauma as a systemic concept. Informs the stress-vs-trauma framing distinction: stress is the load condition; trauma is the outcome when stress exceeds capacity over time.

Office of the Australian Information Commissioner. (n.d.). FOI statistics. Published quarterly. https://www.oaic.gov.au — Raw data source for FOI volume, refusal rates, and processing times as longitudinal institutional health indicators.

Organisation for Economic Co-operation and Development. (2024). OECD public integrity indicators. OECD Publishing. — Dimensional precedent for decomposing institutional integrity into observable subsystems.

Organisation for Economic Co-operation and Development/Joint Research Centre. (2008). Handbook on constructing composite indicators: Methodology and user guide. OECD Publishing. — Framework for composite indicator construction, weighting, normalisation, and robustness testing.

Perrow, C. (1984). Normal accidents: Living with high-risk technologies. Basic Books. — Cascading failure in complex, tightly coupled systems as an inherent property rather than an aberration. Theoretical basis for the timeline heat bar and escalation patterns.

Reason, J. (1997). Managing the risks of organizational accidents. Ashgate. — Organisational accident theory and the concept of latent conditions producing systemic failure.

Sparrow, M. K. (2000). The regulatory craft: Controlling risks, solving problems, and managing compliance. Brookings Institution Press. — Risk-based triage logic and the distinction between pattern detection and determination.

Vaughan, D. (1996). The Challenger launch decision: Risky technology, culture, and deviance at NASA. University of Chicago Press. — Normalisation of deviance and how institutional signals are missed or reinterpreted.

Westrum, R. (2004). A typology of organisational cultures. BMJ Quality & Safety, 13(Suppl. 2), ii22–ii27. — Organisational culture typology (pathological, bureaucratic, generative) as a framework for interpreting climate signals.

Last updated: 8 February 2026