Executive Summary
Artificial intelligence (AI) has outgrown the control models once used to govern it. Checklists and perimeters cannot manage the systemic risks emerging from interconnected AI ecosystems. The Unified Linkage Model (ULM) provides a novel approach to visualizing, quantifying, and governing these relationships—enabling the mapping of how trust, inheritance, and adjacency influence risk across digital enterprises. By making hidden dependencies visible, ULM transforms compliance from static reporting into continuous assurance, aligning with the NIST AI Risk Management Framework (AI RMF),[1] Executive Order 14110,[2] and the March 2024 Executive Memorandum's[3] call for "safe, secure, and trustworthy AI."

1. Introduction: The Complexity of AI Risk
Artificial intelligence is no longer a discrete capability—it has become an adaptive, embedded layer across nearly every digital system. From predictive maintenance in industrial operations to generative models powering enterprise analytics, AI now shapes mission-critical decision-making. However, as its intelligence grows, so does its complexity—and with it, systemic risk.[4]

Traditional cybersecurity governance evolved around perimeters and checklists. These frameworks assume bounded systems with controllable variables. AI environments defy such assumptions. They are rhizomatic, not hierarchical—comprising interdependent nodes that exchange data, models, and trust across organizational and national boundaries.[5] Dozens of derivative models may inherit a single training dataset. A misconfigured API can cascade bias or exposure across multiple partners. In such systems, the relevant question is no longer what failed, but how the failure propagated.

The Unified Linkage Model (ULM) addresses this challenge by expanding the notion of governance to include relationships rather than solely controls. ULM models the digital enterprise as a living network, where risk and accountability flow through linkages that connect people, systems, and policy.[6]
Unlike traditional frameworks that catalog risks as static control failures, ULM governs the relationships that produce those risks. By mapping adjacencies, inheritances, and trust pathways, it improves AI governance in three measurable ways: it increases visibility of interdependence, accountability of decision authority, and adaptability of policy response.
2. Why Governance Fails at Scale
Governance breaks down not from malicious activity but from complexity without clarity. AI environments distribute authority across vendors, APIs, and federated cloud services. Each node operates correctly in isolation, yet collectively they generate ungoverned interdependence.
This fragmentation produces what could be called governance debt: a backlog of unexamined linkages that accumulates over time.[7] Like technical debt, governance debt compounds silently until an incident exposes its scope.
A fine-tuned model may inherit permissions from a foundation model hosted on a third-party platform. An inference engine might rely on external identity providers whose certificates silently expire. In both cases, risk emerges not within the component but between components—along their linkages.
Traditional compliance frameworks struggle in this area because they treat systems as isolated entities.[8] What is missing is relational intelligence, the ability to understand how governance obligations move through connected systems. This visibility is precisely what the ULM is intended to provide.
3. The Unified Linkage Model (ULM): Seeing the System
The ULM defines governance, in part, as the management of relationships among systems, roles, and policies. It formalizes three universal linkage types—adjacency (direct operational connections such as API calls and data feeds), inheritance (derived configurations, permissions, or model lineage), and trust (federated identity and authentication relationships)—that describe how risk and trust propagate.[9]
ULM maps these linkages across three overlapping layers (a code sketch follows the list):
- Functional Layer – systems, APIs, and data flows
- Organizational Layer – teams, roles, and decision authorities
- Control Layer – policy frameworks and compliance obligations
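
To make these constructs concrete, here is a minimal sketch of how the three linkage types and three layers might be represented as a data structure. The schema, node names, and owners are illustrative assumptions, not a published ULM specification.

```python
# Illustrative representation of ULM linkage types and layers.
from dataclasses import dataclass
from enum import Enum

class LinkageType(Enum):
    ADJACENCY = "adjacency"      # direct operational connection (e.g., an API call)
    INHERITANCE = "inheritance"  # derived configuration, permissions, or model lineage
    TRUST = "trust"              # federated identity / authentication relationship

class Layer(Enum):
    FUNCTIONAL = "functional"          # systems, APIs, data flows
    ORGANIZATIONAL = "organizational"  # teams, roles, decision authorities
    CONTROL = "control"                # policy frameworks, compliance obligations

@dataclass(frozen=True)
class Linkage:
    source: str        # node identifier, e.g. "fine-tuned-model-A"
    target: str        # node identifier, e.g. "foundation-model-X"
    kind: LinkageType
    layer: Layer
    owner: str         # accountable party for this relationship

# Example: a fine-tuned model inheriting from a third-party foundation model,
# and an inference API trusting an external identity provider.
linkages = [
    Linkage("fine-tuned-model-A", "foundation-model-X",
            LinkageType.INHERITANCE, Layer.FUNCTIONAL, owner="ml-platform-team"),
    Linkage("inference-api", "external-idp",
            LinkageType.TRUST, Layer.FUNCTIONAL, owner="identity-team"),
]
```

Even this small structure makes the key governance move explicit: every relationship, not just every asset, carries a type, a layer, and a named owner.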
4. Linkage-Aware Threat Modeling and Risk Quantification
Traditional threat models, such as STRIDE or MITRE ATLAS, focus on component vulnerabilities rather than propagation.[10] ULM extends these approaches by examining how threats, risks, vulnerabilities, and failures travel through linkages.
AI governance challenges often arise from unrecognized linkages. A poisoned dataset represents an inheritance threat, as its effects ripple through every model retrained with those weights. A compromised API endpoint represents an adjacency threat, allowing malicious code to traverse multiple services. A broken identity linkage introduces a trust threat, undermining authentication across federated domains. Trust relationships can degrade quietly when token lifetimes, certificates, or identity providers become misaligned across systems. These issues rarely originate in a malfunctioning model; they result from the connections that tie models, data, and platforms together. ULM provides a structured approach to identify and govern these relationships before they lead to operational or compliance failures.
By modeling these relationships, ULM enables linkage-aware threat modeling: predictive analysis of potential contagion paths. As noted elsewhere, the ULM complements quantitative frameworks such as the FAIR model.[11] FAIR measures the probable magnitude and frequency of losses; the ULM reveals the structural paths along which those losses propagate.[12] In tandem, these two approaches allow organizations to quantify both the impact and velocity of systemic failure.
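A minimal sketch of this pairing follows. It assumes a toy linkage graph and per-node FAIR-style loss magnitudes supplied elsewhere; the graph, figures, and the distance-discount heuristic are illustrative, not a standard FAIR or ULM calculation.

```python
# Contagion analysis over a directed linkage graph, ranked by FAIR-style loss.
from collections import deque

def contagion_paths(graph: dict[str, list[str]], origin: str) -> dict[str, int]:
    """Breadth-first search returning each reachable node and its hop distance
    from the origin -- a rough proxy for propagation 'velocity'."""
    reached = {origin: 0}
    queue = deque([origin])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in reached:
                reached[nxt] = reached[node] + 1
                queue.append(nxt)
    return reached

# Hypothetical linkage graph: a poisoned dataset ripples into models, then services.
graph = {
    "training-dataset": ["model-a", "model-b"],
    "model-a": ["analytics-api"],
    "model-b": ["partner-service"],
}
# Hypothetical FAIR-style probable loss magnitudes per node (e.g., USD).
loss_magnitude = {"model-a": 2e5, "model-b": 1e5,
                  "analytics-api": 5e5, "partner-service": 3e5}

# Rank exposed nodes by loss, discounted by hop distance (nearer = faster impact).
reached = contagion_paths(graph, "training-dataset")
ranked = sorted(((loss_magnitude.get(n, 0.0) / (1 + d), n)
                 for n, d in reached.items() if n != "training-dataset"),
                reverse=True)
for score, node in ranked:
    print(f"{node}: discounted exposure {score:,.0f}")
```

The point of the sketch is the ordering, not the numbers: mitigation priority follows the reach of a node's linkages, not merely its standalone value.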
This approach transforms threat modeling into a networked discipline, one where mitigation is prioritized not solely by asset value, but by the reach of its linkages.[13]
5. Integrating ULM with the Risk Management Framework (RMF)
Federal cybersecurity governance remains anchored to the NIST Risk Management Framework (NIST SP 800-37 Rev. 2), which defines seven iterative steps: prepare, categorize, select, implement, assess, authorize, and monitor.[14] The ULM enhances each step by embedding relational analytics directly into the workflow.
Where the RMF traditionally describes what must be secured, ULM clarifies how security responsibilities connect. This shift replaces linear risk documentation with dynamic relational analysis—improving coordination among control families and reducing duplicative oversight.
For practitioners, ULM does not replace the RMF. In a typical AI system review, it encourages assessors to document adjacency boundaries directly within the architecture description, track inheritance lineage as part of configuration management, and assign explicit ownership for trust relationships that cross organizational teams. These minor adjustments strengthen the RMF workflow by making interdependence visible. They also reduce the likelihood that control effectiveness is assessed in isolation from the relationships that determine how those controls actually behave in production environments.
Within ULM, the Authorizing Official (AO) becomes a linkage steward rather than a passive approver. They can visualize how authorizations in one enclave affect others—supporting continuous authorization under OMB M-25-04.[15]
This linkage visibility also strengthens alignment with the NIST AI RMF, whose “Govern” function emphasizes transparency, accountability, and explainability as continuous, not episodic, activities.[16]
6. How the ULM Improves AI Governance
These initial structural mappings show where the ULM aligns with established governance processes, but they do not yet demonstrate its practical effect. To clarify its contribution, the following section briefly examines how the model enhances visibility, accountability, and adaptability across AI ecosystems.
- Visibility. Most governance frameworks observe system states. ULM focuses on the connective tissue that links them. It introduces structural visibility by mapping who and what interacts across model, dataset, and policy layers. The visual linkage graphs expose hidden dependencies—making inheritance chains, data provenance, and authorization boundaries transparent to both auditors and engineers. This relational clarity transforms opaque AI ecosystems into traceable ones.
- Accountability. ULM seeks to bind governance authority directly to each linkage. It closes the gap between “who is responsible” and “what is connected.” In conventional governance, responsibility diffuses as systems scale; with ULM, every adjacency and inheritance has a documented owner. This linkage-level accountability reduces orphan risk and allows authorizing officials to act on verified, rather than assumed, control data.
- Adaptability. AI environments evolve continuously, with new models, retrained weights, and changing APIs. Static compliance cycles cannot keep pace. ULM improves adaptability by embedding linkage metrics into continuous monitoring loops. As linkages change, metrics update automatically, supporting real-time authorization decisions under the NIST AI RMF’s “Monitor” and “Govern” functions.[17]
In essence, ULM converts AI governance from an episodic audit process into an adaptive intelligence process, reducing reaction time, increasing precision, and elevating trustworthiness and transparency across the system’s lifecycle.
7. Policy and Procurement Implications
Adopting ULM principles reshapes how agencies write, evaluate, and enforce AI contracts. Under this model, vendors would provide Linkage Integrity Plans (LIPs)—analogous to System Security Plans—that specify how adjacency, inheritance, and trust are managed throughout the development process.
Contract oversight could then shift from deliverables to interconnection accountability, evaluating whether suppliers maintain integrity of linkage over time.
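To illustrate what a Linkage Integrity Plan might capture, the following sketch expresses a skeleton LIP as plain data. The field names are assumptions for illustration; no standard LIP schema has been published.

```python
# Hypothetical skeleton of a Linkage Integrity Plan (LIP) as structured data.
lip = {
    "system": "vendor-llm-service",
    "adjacencies": [
        {"to": "agency-data-feed", "interface": "REST", "owner": "vendor-ops"},
    ],
    "inheritances": [
        {"from": "foundation-model-X", "what": "weights and permissions",
         "baseline_verified": True},
    ],
    "trust": [
        {"with": "agency-idp", "mechanism": "OIDC federation",
         "certificate_expiry": "2026-01-15"},
    ],
    "review_cadence_days": 90,  # how often linkage integrity is reassessed
}
```

Structuring the plan as data, rather than narrative, is what allows contract oversight to shift toward interconnection accountability: each declared linkage can be checked, dated, and owned.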
At the policy level, ULM supports continuous risk reciprocity, where one agency’s trust metrics dynamically inform those of others. This approach aligns directly with OMB’s modernization directives (OMB M-24-10, 2024) and with the U.S. Government Accountability Office’s call for “evidence-based accountability frameworks” in AI adoption.[18]
Internationally, ULM offers a shared taxonomy for collaboration. The EU AI Act’s transparency requirements[19] and NATO’s Federated Mission Networking standards both depend on traceable linkages among participants.[20] ULM’s relational metrics could enable such traceability and allow measurement across boundaries.
8. Quantifying Linkage Integrity: The ULM Metrics Suite
ULM provides a structural framework for understanding relationships. However, organizations still require clear indicators to assess how those relationships perform over time. ULM metrics supply the relational context that other frameworks assume but do not explicitly measure, enabling practitioners to tie systemic behavior directly to established risk and compliance models.
Linkage integrity metrics translate the model’s conceptual elements into observable, repeatable measures that support governance decisions. These metrics help practitioners identify where interdependence creates concentration risk, where lineage is incomplete, and where trust boundaries may be eroding. Together, they provide a practical foundation for continuous oversight, enabling leaders to monitor the health of AI ecosystems with greater precision.
To begin making linkage governance more actionable, the ULM proposes an initial set of Linkage Integrity Metrics (LIMs)—early-stage quantitative indicators intended to sketch the contours of systemic health.[21]
As these early metrics mature, tracking them over time could help shift governance toward a more measurable discipline. Even preliminary observations enable analysts to discern how interdependence evolves and where linkages seem to contribute to emerging issues. When paired with established approaches such as FAIR and RMF, LIMs offer the potential for a more anticipatory form of governance—highlighting relationships that may warrant attention before they lead to failures.
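Because the LIMs are presented as early-stage indicators rather than fixed definitions, the following sketch proposes three illustrative candidates. The formulas and example figures are assumptions intended to show how such metrics could be computed repeatably.

```python
# Three candidate Linkage Integrity Metrics (illustrative formulas).

def linkage_density(num_linkages: int, num_nodes: int) -> float:
    """Linkages per node; high values flag concentration risk."""
    return num_linkages / num_nodes if num_nodes else 0.0

def lineage_completeness(documented: int, total: int) -> float:
    """Share of inheritance linkages with documented provenance."""
    return documented / total if total else 1.0

def trust_health(valid_credentials: int, total_trust_links: int) -> float:
    """Share of trust linkages whose certificates/tokens are current."""
    return valid_credentials / total_trust_links if total_trust_links else 1.0

# Example readings for a small AI ecosystem (hypothetical numbers).
print(f"density: {linkage_density(42, 12):.2f}")      # ~3.5 linkages per node
print(f"lineage: {lineage_completeness(9, 14):.0%}")  # provenance gaps visible
print(f"trust:   {trust_health(17, 20):.0%}")         # eroding trust boundary
```

Tracked over time, even these simple ratios would show whether interdependence is concentrating, lineage documentation is keeping pace, and trust boundaries are holding.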
9. Case Illustration: AI Model Reciprocity in a Federal Context
As a hypothetical use case, consider a U.S. Department of Transportation initiative that utilizes a commercial large language model (LLM). Deployed in a FedRAMP-authorized cloud (inheritance), the LLM analyzes traffic sensor data across multiple state agencies. It consumes near-real-time feeds (adjacency) and authenticates users through federated identity providers (trust).
Using ULM analysis (sketched after this list), the department can:
- Quantify linkage density to isolate high-risk adjacencies.
- Verify that inherited configurations meet agency baselines.
- Continuously assess trust degradation via TLRS metrics.
- Present the Authorizing Official with a live linkage map showing how authorization decisions propagate across partner systems.
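A minimal sketch of two of these checks follows. Note that “TLRS” is not defined in this paper; a simple trust-linkage reliability score is assumed here as a stand-in, and all names, dates, and thresholds are illustrative.

```python
# Illustrative checks for the hypothetical DOT scenario.
from datetime import date

# Trust linkages to federated state identity providers (hypothetical data).
trust_links = [
    {"idp": "state-a-idp", "cert_expiry": date(2026, 3, 1)},
    {"idp": "state-b-idp", "cert_expiry": date(2025, 11, 20)},  # near expiry
]

def trust_reliability(links, today=date(2025, 11, 7), warn_days=60) -> float:
    """Fraction of trust linkages whose certificates are not near expiry --
    an assumed stand-in for a TLRS-style indicator."""
    healthy = sum(1 for l in links
                  if (l["cert_expiry"] - today).days > warn_days)
    return healthy / len(links)

# Adjacency fan-out: how many near-real-time feeds the LLM service touches.
adjacencies = {"llm-service": ["state-a-feed", "state-b-feed", "state-c-feed"]}
peak_fanout = max(len(targets) for targets in adjacencies.values())

print(f"Trust score (TLRS-style): {trust_reliability(trust_links):.0%}")
print(f"Peak adjacency fan-out:   {peak_fanout} feeds")
```

Outputs like these are the raw material of the AO’s live linkage map: a degrading trust score or unusually high fan-out flags exactly where an authorization decision rests on a fragile relationship.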
In this scenario, ULM transforms compliance from static documentation into continuous assurance. It allows the AO to make informed, real-time decisions about inherited and adjacent risks—without impeding operational readiness. By mapping dependencies across AI supply chains and linking them to accountable roles, ULM can highlight where authorization decisions depend on unexamined assumptions. AOs, the risk stewards, can now identify incomplete lineage, undocumented adjacencies, or fragile trust relationships early in the decision process. Policy mandates are transformed into visual, data-driven accountability.
These qualitative improvements set the stage for future empirical validation and structured research. To be clear, while the model offers a conceptual pathway for improving governance, ULM has not yet been formally piloted within federal agencies.
10. From Metatheory to Governance
The ULM originated not as a governance tool, but as a metatheoretical framework for understanding interdependence. Its core insight is that systems must be interpreted through their relationships, not as isolated entities. Its emphasis on relationships reflects a long intellectual tradition in which systems are understood through the connections that shape their meaning. Heidegger argued that entities gain significance through their context; Gadamer extended this insight by showing how understanding emerges from the interaction of perspectives; and Dallmayr brought these relational ideas into the analysis of modern institutional life. Taken together, this tradition stresses that systems cannot be grasped in isolation—they must be interpreted through the structures of interdependence that define them.
ULM translates this relational perspective into a practical model for AI oversight. Its constructs of adjacency, inheritance, and trust operationalize these philosophical insights by providing concrete categories for analyzing how technical components, governance roles, and policy requirements interact. Many governance challenges do not arise from flaws within a single element but from the unrecognized relationships between them.
By making these connections explicit, ULM extends metatheoretical concepts into usable governance intelligence. In doing so, the model reframes AI governance as a discipline centered on understanding and managing interdependence. Rather than treating AI systems as discrete artifacts, ULM presents them as networks of linked responsibilities, data flows, and decision pathways. Its governance value follows directly from its theoretical grounding: a clearer view of the whole becomes possible when its relationships are understood.
11. Limitations and Future Directions
As with any governance model, ULM’s utility depends on the quality of its data. Accurate linkage mapping requires up-to-date inventories, SBOMs, and model registries. In organizations with weak configuration management, linkage visibility will be partial.
Human factors remain another challenge. Culture, politics, and cognitive bias often shape trust more than data does. Integrating sociotechnical analytics—such as sentiment analysis of governance decision logs—will be an essential research frontier.[22]
Future development will focus on automation, specifically extracting linkage data from CI/CD pipelines and visualizing metrics in real-time. Open-source ULM toolkits could standardize this process, enabling consistent adoption across agencies and industry sectors.
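As a sketch of that automation direction, the following shows how candidate linkages might be pulled from an SBOM-like artifact inside a CI/CD job. The file layout is an assumption for illustration; real SBOM formats such as SPDX and CycloneDX differ in detail.

```python
# Extracting candidate inheritance linkages from an SBOM-like JSON file.
import json

def extract_linkages(sbom_path: str, system: str) -> list[tuple[str, str, str]]:
    """Return (source, target, kind) tuples for each declared dependency,
    treating declared dependencies as inheritance linkages by default."""
    with open(sbom_path) as f:
        sbom = json.load(f)
    return [(system, dep["name"], "inheritance")
            for dep in sbom.get("dependencies", [])]

# Usage (hypothetical file and system name):
# linkages = extract_linkages("sbom.json", "traffic-llm-service")
```

Wiring a step like this into every pipeline run is what would keep the linkage map current without manual inventory work.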
Although formal pilots have not yet been conducted, experience from adjacent disciplines—such as supply chain visualization, model lineage tracking, and dependency mapping in software engineering—suggests that visibility into interdependence can significantly reduce coordination delays and documentation gaps.
ULM applies these proven concepts to AI governance, offering a structured approach to reveal and manage the hidden linkages that often complicate authorization and oversight.
Because ULM is a new conceptual model, empirical validation remains a crucial area for future research. What the framework provides today is a structured theoretical basis for anticipating governance improvement: greater transparency into adjacency, inheritance, and trust linkages should reduce misalignment, shorten decision cycles, and improve documentation quality. These hypotheses will form the basis of future evaluation, but they are not yet supported by formal trials.
12. Conclusion: Governing the Web of Confidence
AI governance demands more than defensive controls; it requires continuous, relational accountability. The Unified Linkage Model provides a structured way to see and manage interdependence across systems, organizations, and policies.
The ULM is designed to enhance AI governance by converting complexity into clarity and invisible dependencies into observable data. This approach aligns accountability with technical authority, turning compliance from a paperwork-based process into real-time insight. By focusing on governing relationships rather than artifacts, ULM facilitates faster, evidence-based decision-making, closing the loop between discovery, authorization, and assurance. In doing so, it redefines governance itself as a form of operational intelligence.
By mapping how risk and trust flow through adjacency, inheritance, and trust linkages, ULM empowers decision-makers to see what was previously invisible – the architecture of confidence that underpins digital society. It aligns with Secure-by-Design, Continuous Authorization, and global AI governance mandates, offering a common language for resilience.
For organizations seeking a starting point, ULM can be introduced through a few practical steps. First, identify one AI system and map its key adjacencies, inheritances, and trust paths; even a simple diagram often reveals overlooked dependencies. Second, add linkage ownership fields to existing inventories or ATO documentation so that each relationship — API call, data feed, or inherited model — has a responsible party. Third, include linkage review as a standing item on architecture or change-control boards, ensuring that interdependence is considered alongside traditional security controls. These low-cost actions establish the foundation for more mature, metric-driven governance.
Ultimately, it is not isolation that will provide systemic security, but rather disciplined interconnection – or, to put it another way, governing the web of confidence that binds the digital world together.
Henry J. Sienkiewicz
References:
Barabási, A.-L. (2003). Linked: How Everything Is Connected to Everything Else and What It Means. Perseus Publishing.
Brundage, M., et al. (2020). Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims. arXiv preprint.
Executive Office of the President. (2023). Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Federal Register. Retrieved November 7, 2025, from https://www.federalregister.gov/documents/2023/11/01/2023-24283/safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence
Floridi, L., & Cowls, J. (2021). A Unified Framework of Five Principles for AI in Society. In Ethics, Governance, and Policies in Artificial Intelligence (Philosophical Studies Series, Vol. 144). https://doi.org/10.1007/978-3-030-81907-1_2
Freund, J., & Jones, J. (2014). Measuring and Managing Information Risk: A FAIR Approach. Butterworth-Heinemann.
Future of Life Institute (FLI). (2024, February 27). High-level summary of the AI Act. Retrieved from https://artificialintelligenceact.eu/high-level-summary/
Government Accountability Office (GAO). (2021). Artificial Intelligence: An Accountability Framework for Federal Agencies and Other Entities. Washington, DC: GAO. Retrieved from https://www.gao.gov/products/gao-21-519sp
MITRE. (n.d.). MITRE ATLAS: Navigate threats to AI systems through real-world insights. Retrieved November 7, 2025, from https://atlas.mitre.org/
National Institute of Standards and Technology (NIST). (2015, June). NIST SP 800-82 Rev. 2: Guide to Industrial Control Systems (ICS) Security. Retrieved from https://csrc.nist.gov/pubs/sp/800/82/r2/final
National Institute of Standards and Technology (NIST). (2018, December). NIST SP 800-37 Rev. 2: Risk Management Framework for Information Systems and Organizations: A System Life Cycle Approach for Security and Privacy. Retrieved from https://csrc.nist.gov/pubs/sp/800/37/r2/final
National Institute of Standards and Technology (NIST). (2020, September). NIST SP 800-53B: Control Baselines for Information Systems and Organizations. Retrieved from https://csrc.nist.gov/pubs/sp/800/53/b/upd1/final
National Institute of Standards and Technology (NIST). (2023). Artificial Intelligence Risk Management Framework (AI RMF 1.0). Retrieved November 7, 2025, from https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf
North Atlantic Treaty Organization (NATO). (n.d.). Federated Mission Networking. Retrieved from https://coi.nato.int/FMNPublic/SitePages/Home.aspx
Raji, I. D. (2022). AI and the Everything Problem. Communications of the ACM. Retrieved November 7, 2025, from https://arxiv.org/abs/2111.15366
Shostack, A. (2014). Threat Modeling: Designing for Security. Wiley.
Sienkiewicz, H. J. (2025). Establishing Trustworthiness: An Adaptive Governance Approach. United States Cybersecurity Magazine.
Sienkiewicz, H. J. (2025, October). Extending FAIR: How the Unified Linkage Model Strengthens Cyber Risk Quantification. FAIR Institute. Retrieved from https://www.fairinstitute.org/blog/extending-fair-unified-linkage-model-strengthens-cyber-risk-quantification-1
Sienkiewicz, H. J. (2025). Unified Linkage Models: Recontextualizing Cybersecurity. United States Cybersecurity Magazine.
Young, S. (2024). Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence (OMB M-24-10). Office of Management and Budget. Retrieved November 7, 2025, from https://www.whitehouse.gov/wp-content/uploads/2024/03/M-24-10-Advancing-Governance-Innovation-and-Risk-Management-for-Agency-Use-of-Artificial-Intelligence.pdf
Young, S. (2025). Fiscal Year 2025 Guidance on Federal Information Security and Privacy Management Requirements (OMB M-25-04). Office of Management and Budget. Retrieved from https://bidenwhitehouse.archives.gov/wp-content/uploads/2025/01/M-25-04-Fiscal-Year-2025-Guidance-on-Federal-Information-Security-and-Privacy-Management-Requirements.pdf