Organizing Architecture Around Anticipated Change: A Practitioner-Oriented Articulation
Author: William Christopher Anderson
Date: April 2026
Version: 1.0
Executive Summary
Modern software systems rarely fail because of poor initial design; they fail because change accumulates faster than the architecture can absorb it. Volatility-Based Decomposition (VBD) addresses this problem by treating change as the primary organizing force in system design.
Rather than decomposing systems solely by domain concepts or technical layers, VBD organizes architectural boundaries around anticipated sources of volatility — functional change, non-functional pressures, cross-cutting concerns, and environmental dependencies. By aligning components with these forces, systems can evolve without widespread refactoring.
At a practical level, VBD applies established component roles:
- Managers coordinate workflow and intent and remain stable over time.
- Engines encapsulate business rules, computation, and transformation logic that change more frequently.
- Resource Accessors isolate interactions with databases, external services, vendors, and infrastructure.
- Utilities encapsulate cross-cutting capabilities such as logging, monitoring, security, and observability, allowing these concerns to evolve independently without contaminating core business logic.
These roles are reinforced through explicit communication rules and validated against a small number of core use cases. Over time, this approach localizes change, reduces unintended coupling, and preserves architectural integrity even as systems and organizations grow.
VBD is most effective in long-lived systems, platform architectures, and integration-heavy environments where change is constant and unavoidable. It provides architects and senior engineers with a clear, practical reference for applying volatility-first architectural thinking at scale.
Abstract
Modern software systems operate in environments defined by continuous change — shifting business requirements, evolving user expectations, regulatory pressure, and rapid technological advancement. Traditional architectural decomposition techniques often fail to account explicitly for change as a primary design force, resulting in brittle systems that degrade over time. Volatility-Based Decomposition (VBD) is an architectural approach that treats change as a first-class concern by identifying, classifying, and isolating areas of anticipated volatility within a system. This paper presents a structured articulation of Volatility-Based Decomposition, covering functional and non-functional volatility, cross-cutting concerns, core use cases, and component role definition. By aligning architectural boundaries with volatility axes, VBD supports the design of flexible, maintainable, and evolvable software systems. The articulation emphasizes modularity, explicit communication rules, and continuous architectural evaluation, providing architects and senior engineers with a practical reference for applying VBD consistently in long-lived systems.
1. Introduction
Software architecture exists to manage complexity over time. While many architectural approaches focus on current functional requirements, the dominant force acting on long-lived systems is change. Business models evolve, regulations shift, infrastructure platforms are replaced, and user expectations rise. Architectures that do not explicitly account for these forces tend to accumulate coupling, resist modification, and require costly refactoring or replacement.
Volatility-Based Decomposition (VBD) addresses this problem by treating change — not functionality — as the primary driver of architectural structure. Rather than decomposing systems solely by domain concepts or technical layers, VBD decomposes systems along axes of anticipated volatility. This approach enables architects to localize the impact of change, reduce unintended coupling, and preserve system integrity as requirements evolve.
This paper provides a practitioner-oriented articulation of VBD. It describes how to identify volatility, align components to volatility boundaries, apply established component roles and communication rules, and validate architectural decisions over time.
Business strategy evolves continuously. Markets shift. Regulations change. Competitive pressures emerge without warning. To manage this complexity, organizations naturally structure themselves around functional responsibilities such as Sales, Operations, Finance, and Compliance. This functional decomposition brings clarity of ownership, accountability, and decision-making authority.
Software systems frequently mirror this structure. Teams, codebases, and modules are aligned to functional domains in an attempt to reflect how the business operates. Early in a system's life, this alignment can be effective, as changes tend to be localized and coordination costs remain manageable.
Over time, however, tension emerges. The business adapts quickly, while software systems resist change. Most meaningful changes cut across functional boundaries rather than remaining contained within them. Enhancements require coordination across teams and components. Small adjustments trigger broad testing cycles. Risk increases as more parts of the system must move together.
This divergence exposes a fundamental mismatch: while organizations decompose for accountability, volatility does not respect functional boundaries. Volatility-Based Decomposition addresses this mismatch by aligning architectural boundaries with change dynamics rather than organizational structure alone.
2. Background and Architectural Foundations
Volatility-Based Decomposition builds upon established architectural principles, including separation of concerns, information hiding, modularity, and loose coupling. Classic decomposition strategies — such as layered architectures, service-oriented architectures, and microservices — implicitly attempt to manage change by isolating responsibilities. However, these approaches often rely on static assumptions about where change will occur.
VBD makes these assumptions explicit. It acknowledges that not all parts of a system change at the same rate or for the same reasons. By identifying which aspects of a system are most likely to change, architects can proactively structure boundaries that align with those forces, rather than reacting after change has already caused architectural erosion.
3. Defining Volatility in Software Systems
For the purposes of this paper, volatility is defined as the likelihood that a given system responsibility, requirement, or implementation detail will change over time, along with the frequency and impact of that change.
3.1 Functional Volatility
Functional volatility refers to changes in system behavior driven by evolving business needs, user feedback, or regulatory requirements. Examples include the addition of new features, modification of existing workflows, or removal of obsolete functionality. Functional volatility is most commonly associated with core use cases and domain logic.
3.2 Non-Functional Volatility
Non-functional volatility concerns changes to system qualities such as performance, scalability, reliability, security, and maintainability. These changes are often driven by external forces, including infrastructure upgrades, platform migrations, or increased usage demands.
3.3 Cross-Cutting Volatility
Cross-cutting concerns — such as logging, monitoring, authentication, authorization, and error handling — exhibit volatility that spans multiple components. Changes to these concerns can have widespread impact if not properly isolated.
3.4 Environmental and Infrastructure Volatility
Infrastructure platforms, third-party services, deployment models, and hosting environments are subject to frequent change. Treating these elements as stable foundations often leads to tight coupling between business logic and infrastructure details.
Figure 1 — The Four Axes of Volatility and the Component Roles That Encapsulate Them: each role addresses a primary axis of volatility, but the roles work in unison — the Manager orchestrates the what, the Engine executes the how, the Resource Accessor isolates the where, and the Utility provides the with-what. No single role contains all change; the four together localize change across every axis.
```mermaid
graph TB
subgraph axes["Axes of Volatility"]
FV["Functional<br/><em>What changes — workflows and rules</em>"]
NF["Non-Functional<br/><em>Performance, scalability, reliability</em>"]
CC["Cross-Cutting<br/><em>With what shared capabilities</em>"]
EV["Environmental<br/><em>Where data lives and flows</em>"]
end
subgraph roles["Component Roles — Working in Unison"]
MGR["Manager<br/>The <strong>What</strong><br/>Stable functional layer — orchestrates workflow"]
ENG["Engine<br/>The <strong>How</strong><br/>Volatile functional layer — executes business logic"]
ACC["Resource Accessor<br/>The <strong>Where</strong><br/>Isolates external boundaries"]
UTL["Utility<br/>The <strong>With What</strong><br/>Provides cross-cutting capabilities"]
end
FV -.->|stable layer| MGR
FV -.->|volatile layer| ENG
EV -.->|primary axis| ACC
CC -.->|primary axis| UTL
NF -.->|addressed across all roles| MGR
NF -.->|addressed across all roles| ACC
MGR -->|invokes| ENG
MGR -->|invokes| ACC
ENG -->|may call| ACC
UTL -.->|consumed by all| MGR
UTL -.->|consumed by all| ENG
UTL -.->|consumed by all| ACC
style FV fill:#f0f4ff,stroke:#0053e2,color:#0053e2,stroke-width:2px
style NF fill:#fff8e0,stroke:#e6a800,color:#333,stroke-width:2px
style CC fill:#f0fff0,stroke:#76c043,color:#333,stroke-width:2px
style EV fill:#e8f5e9,stroke:#2a8703,color:#2a8703,stroke-width:2px
style MGR fill:#0053e2,color:#fff,stroke-width:0px
style ENG fill:#ffc220,color:#000,stroke-width:0px
style ACC fill:#2a8703,color:#fff,stroke-width:0px
style UTL fill:#76c043,color:#000,stroke-width:0px
```
4. Identifying Core Use Cases and Volatility Axes
The first step in volatility analysis is identifying core use cases — the high-level behaviors that define the system's purpose. Core use cases provide the context necessary to evaluate where change is most likely to occur.
The term core use case is intentionally narrow. In practice, even very large organizations tend to have a surprisingly small number of truly core use cases — often fewer than five. These represent the fundamental value-producing behaviors of the system, not the many procedural variations that surround them.
While business analysts may document dozens or even hundreds of use cases, the majority are not architecturally core. They are alternative paths, conditional flows, exception handling, or policy-driven variants of a much smaller set of essential behaviors. Treating all documented use cases as equal drivers of architecture obscures volatility rather than revealing it.
Architects should identify volatility axes by:
- Reviewing existing requirements, user stories, and architectural documentation
- Interviewing stakeholders across business, engineering, and operations
- Analyzing historical change patterns in similar systems
- Considering plausible future business and technology shifts
The goal is not to predict exact changes, but to identify where change is likely and why.
5. Volatility-Based Decomposition
Volatility-Based Decomposition proceeds through the following steps:
- Identify core use cases that represent the system's primary value.
- Enumerate volatility axes across functional, non-functional, cross-cutting, and environmental dimensions.
- Classify responsibilities based on their likelihood and drivers of change.
- Define architectural boundaries that align with volatility classifications.
- Apply established component roles and communication rules to isolate volatile responsibilities from stable ones.
This process results in an architecture where change is localized and predictable, reducing the risk of cascading modifications.
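The classification step above can be captured as a lightweight worksheet. The sketch below is illustrative only: the responsibilities, change drivers, and role assignments are hypothetical placeholders for a generic order-processing system, not part of the method itself.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Responsibility:
    """One system responsibility with its anticipated change driver."""
    name: str
    volatility_axis: str   # functional | non-functional | cross-cutting | environmental
    change_driver: str
    assigned_role: str     # Manager | Engine | ResourceAccessor | Utility

# Hypothetical classification for an order-processing system.
classification = [
    Responsibility("order workflow sequencing", "functional",
                   "stable business intent", "Manager"),
    Responsibility("discount calculation", "functional",
                   "frequent pricing-policy changes", "Engine"),
    Responsibility("order persistence", "environmental",
                   "schema and vendor churn", "ResourceAccessor"),
    Responsibility("request logging", "cross-cutting",
                   "observability stack evolution", "Utility"),
]

def responsibilities_for(role: str) -> list[str]:
    """List the responsibilities assigned to a given component role."""
    return [r.name for r in classification if r.assigned_role == role]
```

Making the worksheet explicit forces each boundary decision to name the change driver that justifies it, which is the step most decompositions skip.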
6. Component Roles and Communication Rules
Clear component roles and communication rules are essential to preserving volatility boundaries. In addition to Managers, Engines, and Resource Accessors, Volatility-Based Decomposition explicitly recognizes Utilities as a first-class role for isolating cross-cutting volatility. The roles and rules described in this section are derived from Juval Löwy's IDesign methodology and component-oriented architecture teachings.
One aspect of the methodology that is often undervalued is the explicit separation of business logic into two distinct concerns: orchestration and execution. Orchestration governs workflow sequencing, coordination, and intent, while execution encapsulates the business rules and policies that perform the work.
When orchestration logic and execution logic are interwoven within the same unit, they become change-coupled. A modification to workflow sequencing can force changes in business rule implementation, and a modification to business rules can require restructuring the workflow. This tight coupling increases fragility by expanding the blast radius of change and reducing predictability.
By separating orchestration from execution, architectures can absorb these changes independently, allowing workflows and business rules to evolve at different rates without destabilizing the system.
A useful mental model for classifying components is to ask what, how, where, and with what:
| Role | Question | Concern |
|---|---|---|
| Manager | What does the system do? | Orchestration — workflow, sequencing, intent |
| Engine | How does it do it? | Execution — business rules, calculations, policies |
| Resource Accessor | Where does data live? | Integration — databases, vendors, external systems |
| Utility | With what support? | Cross-cutting — logging, auth, monitoring, observability |
If a unit of work decides what happens next, it belongs in a Manager. If it computes how to do it, it belongs in an Engine. If it reaches out to where data or services live, it belongs in a Resource Accessor. If it supports everything but belongs to no domain, it is a Utility.
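This mental model can be sketched in code. The following is a minimal illustration assuming a hypothetical order-submission flow; all class and method names are invented for this example, and the bodies are deliberately trivial.

```python
class LoggingUtility:
    """Utility: cross-cutting, domain-agnostic (the 'with what')."""
    def log(self, message: str) -> None:
        print(message)

class PricingEngine:
    """Engine: the 'how' — encapsulates the volatile business rules."""
    def price(self, quantity: int, unit_price: float) -> float:
        return quantity * unit_price  # rules here change frequently

class OrderAccessor:
    """Resource Accessor: the 'where' — isolates persistence details."""
    def __init__(self) -> None:
        self._store: dict[int, float] = {}  # stand-in for a real database
    def save(self, order_id: int, total: float) -> None:
        self._store[order_id] = total

class OrderManager:
    """Manager: the 'what' — stable orchestration, no business rules."""
    def __init__(self, engine, accessor, log):
        self._engine, self._accessor, self._log = engine, accessor, log
    def submit(self, order_id: int, quantity: int, unit_price: float) -> float:
        total = self._engine.price(quantity, unit_price)   # how
        self._accessor.save(order_id, total)               # where
        self._log.log(f"order {order_id} saved")           # with what
        return total
```

Note that the Manager expresses only sequencing and intent; every computation, persistence call, and cross-cutting concern is delegated.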
6.1 Managers
Managers coordinate operation flow and encapsulate high-level business orchestration. They represent business intent and workflow coordination and should remain stable over time.
Managers MUST NOT perform heavy computation. Managers MUST NOT directly share state. Managers MAY communicate with other managers only through queued, fire-and-forget mechanisms. Managers MAY invoke engines and resource accessors directly.
Only Managers emit and consume events. Async dispatch, pub/sub, and queued messaging are Manager-layer concerns. Engines and Resource Accessors operate synchronously within a request.
Illustrative Manager examples:
- Order Processing Manager — Coordinates the lifecycle of an order by sequencing steps such as validation, pricing, fulfillment, and confirmation without embedding business rules.
- Customer Interaction Manager — Orchestrates user-facing workflows, invoking engines to fulfill intent and translating technical outcomes into business-level results.
- Batch Execution Manager — Coordinates scheduled or bulk operations by partitioning work, invoking engines per unit of work, and handling workflow-level retries.
Managers should not implement business rules, perform data aggregation, or contain persistence or integration logic. Their primary responsibility is to express what the system is trying to accomplish, not how it is achieved.
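The queued, fire-and-forget rule for Manager-to-Manager communication can be sketched with an in-process queue standing in for real messaging infrastructure. The manager names and event shape are hypothetical.

```python
import queue

# Stand-in for a real message broker; in production this would be
# durable messaging infrastructure, not an in-process queue.
event_bus: queue.Queue = queue.Queue()

class OrderManager:
    """Publishes an event and returns immediately — no reply expected."""
    def complete_order(self, order_id: int) -> None:
        event_bus.put({"event": "OrderCompleted", "order_id": order_id})

class NotificationManager:
    """Consumes events on its own schedule, decoupled from the publisher."""
    def drain(self) -> list[dict]:
        events = []
        while not event_bus.empty():
            events.append(event_bus.get())
        return events
```

Because the publisher never waits on the consumer, neither Manager's workflow is change-coupled to the other's availability or sequencing.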
6.2 Engines
Engines execute complex business rules, transformations, or computationally intensive operations. They encapsulate the logic most likely to change due to policy shifts, experimentation, or optimization.
Engines MUST NOT communicate with other engines. Engines MUST NOT use queued or pub/sub mechanisms. Engines MAY call dependent services directly.
Illustrative Engine examples:
- Pricing Engine — Calculates prices based on rules, tiers, and promotions, evolving frequently as business strategies change.
- Eligibility or Policy Engine — Evaluates compliance, qualification, or constraint logic driven by regulatory or policy updates.
- Recommendation or Matching Engine — Performs scoring, ranking, or matching algorithms that change due to tuning or experimentation.
- Transformation Engine — Converts, normalizes, or enriches data representations without awareness of workflow or persistence.
Engines must not coordinate workflows, interact with messaging infrastructure, or embed persistence concerns. They are invoked by managers and remain unaware of the broader execution context.
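A minimal Engine sketch, assuming hypothetical tiered-discount rules: the interface stays stable while the rules inside (the volatile part) can change freely without touching any caller.

```python
class TieredPricingEngine:
    """Encapsulates pricing rules behind a stable interface.
    Tier thresholds are illustrative, not domain-prescribed."""
    TIERS = [(100, 0.10), (50, 0.05), (0, 0.0)]  # (min quantity, discount)

    def price(self, quantity: int, unit_price: float) -> float:
        # First matching tier wins; tiers are ordered high to low.
        discount = next(d for threshold, d in self.TIERS if quantity >= threshold)
        return round(quantity * unit_price * (1 - discount), 2)
```

When the business revises its discount strategy, only `TIERS` and the selection logic change; the Manager that invokes `price` is untouched.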
6.3 Resource Accessors
Resource accessors manage interaction with persistence layers, external systems, vendors, and infrastructure-facing resources. They isolate environmental and integration volatility from the rest of the system.
Resource accessors MUST NOT communicate with engines or other resource accessors. Resource accessors MUST NOT use queued or pub/sub mechanisms. Resource accessors MAY call dependent services directly.
Illustrative Resource Accessor examples:
- Order Repository Accessor — Encapsulates database access and schema evolution, shielding the system from persistence changes.
- External Payment Gateway Accessor — Manages vendor-specific APIs, retries, error translation, and versioning.
- Messaging or Queue Accessor — Publishes or consumes messages on behalf of Managers while hiding transport protocols and infrastructure configuration.
- Configuration or Secrets Accessor — Retrieves environment-specific configuration values without leaking deployment concerns.
Resource accessors must not apply business rules, coordinate workflows, or make policy decisions. Their responsibility is to interact with external resources reliably and predictably.
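The vendor-isolation role of a Resource Accessor can be sketched as a stable interface with swappable implementations. The vendor names and return values here are invented for illustration.

```python
from abc import ABC, abstractmethod

class PaymentGatewayAccessor(ABC):
    """Stable interface; vendor volatility lives in the implementations."""
    @abstractmethod
    def charge(self, order_id: int, amount: float) -> str: ...

class VendorAAccessor(PaymentGatewayAccessor):
    def charge(self, order_id: int, amount: float) -> str:
        # Translation to Vendor A's API, retries, and error mapping
        # would live here, invisible to callers.
        return f"A-{order_id}"

class VendorBAccessor(PaymentGatewayAccessor):
    def charge(self, order_id: int, amount: float) -> str:
        return f"B-{order_id}"

def process_payment(gateway: PaymentGatewayAccessor,
                    order_id: int, amount: float) -> str:
    # Callers depend only on the interface; vendors swap freely.
    return gateway.charge(order_id, amount)
```

Replacing Vendor A with Vendor B is then a one-line change at the composition point, not a refactoring of business logic.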
6.4 Utilities
Utilities encapsulate cross-cutting concerns that apply broadly across the system and evolve independently of business workflows. They are orthogonal to core behavior and should remain free of domain-specific knowledge.
Illustrative Utility Component examples:
- Logging Utility — Provides standardized logging APIs, log formatting, and correlation identifiers without embedding business meaning.
- Monitoring and Telemetry Utility — Collects metrics, traces, and health signals to support observability without influencing execution flow.
- Error Classification and Mapping Utility — Normalizes and categorizes errors across components, translating low-level failures into consistent error types.
- Feature Flag Utility — Enables conditional behavior toggling and experimentation while remaining decoupled from business rules.
- Security Utility — Supports cryptographic operations, token validation helpers, or hashing functions without making authorization decisions.
Utilities must not coordinate workflows, enforce business policy, or directly interact with external systems on behalf of managers or engines. Their role is to provide shared capabilities that reduce duplication while preserving architectural boundaries.
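A minimal sketch of a domain-agnostic Utility: it knows about correlation identifiers and log formatting, but nothing about orders, customers, or any other business concept. The class shape is illustrative.

```python
import uuid

class CorrelationLoggingUtility:
    """Domain-agnostic logging helper: correlation IDs and formatting
    only, never business meaning."""
    def __init__(self) -> None:
        self.correlation_id = uuid.uuid4().hex
        self.records: list[str] = []

    def log(self, message: str) -> None:
        # Prefix every record so scattered log lines can be stitched
        # back into a single request trace.
        self.records.append(f"[{self.correlation_id}] {message}")
```

The diagnostic from the paragraph above applies directly: the moment a conditional on a business concept appears here, the code belongs in an Engine or Accessor instead.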
Figure 2 — Component Roles and Communication Rules: illustrates the four component roles in VBD — Managers, Engines, Resource Accessors, and Utilities — along with their permitted communication patterns and constraints.
```mermaid
flowchart TB
MGR["Manager"] -->|invokes| ENG["Engine"]
MGR -->|invokes| ACC["Resource Accessor"]
ENG -->|may call| ACC
MGR -.->|consumes| UTL["Utility"]
ENG -.->|consumes| UTL
ACC -.->|consumes| UTL
```
7. Core Use Cases as Architectural Validation
Core use cases are end-to-end scenarios that traverse the system's component boundaries, exercising the ancillary behaviors that surround them. They serve as validation mechanisms, ensuring that architectural boundaries support real execution paths without introducing hidden coupling or responsibility leakage.
If a core use case requires bypassing defined communication rules, the architecture should be reconsidered.
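This check can be made continuous with an automated communication-rule audit. The sketch below validates a hypothetical caller/callee dependency map against the rules of the preceding section; how such a map is extracted from a real codebase (import analysis, call graphs) is left open.

```python
# Allowed call directions under the communication rules.
# Utility consumption is always allowed; Manager-to-Manager is
# permitted only via queued, fire-and-forget mechanisms.
ALLOWED = {
    ("Manager", "Engine"),
    ("Manager", "ResourceAccessor"),
    ("Manager", "Manager"),  # queued only
    ("Engine", "ResourceAccessor"),
}

def violations(dependencies: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Return the caller/callee pairs that break the communication rules."""
    return [
        (caller, callee) for caller, callee in dependencies
        if callee != "Utility" and (caller, callee) not in ALLOWED
    ]

# Hypothetical dependency map extracted from a codebase:
deps = [
    ("Manager", "Engine"),
    ("Engine", "Engine"),            # violation: engines must not call engines
    ("ResourceAccessor", "Engine"),  # violation: accessors must not call engines
    ("Engine", "Utility"),           # allowed: utilities are consumed by all
]
```

Run as part of the build, such an audit turns communication rules from documentation into an enforced property of the architecture.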
8. Continuous Evaluation and Architectural Evolution
Volatility analysis is not a one-time activity. As systems evolve, new volatility axes emerge and existing assumptions may become invalid.
To maintain architectural integrity:
- Monitor changes in requirements, infrastructure, and usage patterns
- Conduct periodic architectural reviews
- Update volatility classifications and component boundaries
- Communicate architectural changes clearly to all stakeholders
9. Architectural Watchpoints
Volatility-Based Decomposition is most effective when applied intentionally and proportionally. Like any architectural approach, it introduces forces that must be actively managed over time. The following watchpoints highlight areas that warrant ongoing attention rather than serving as hard limitations.
Decomposition Granularity Over-decomposition can increase cognitive overhead and coordination cost. Architects should monitor whether component boundaries continue to align with meaningful volatility axes or whether decomposition has become finer than the rate of change justifies.
Organizational Discipline The effectiveness of VBD depends on consistent adherence to component roles and communication rules. When these boundaries are eroded — often in the interest of short-term delivery — volatility begins to leak across components, reintroducing change coupling.
System Scale and Lifespan Smaller or short-lived systems may not benefit from full application of VBD. Architects should assess expected system longevity and change rate to determine the appropriate level of rigor.
Evolution of Volatility Axes Volatility is not static. Axes that were once stable may become volatile as business strategy, technology, or regulatory environments change. Periodic architectural review is required to ensure boundaries remain aligned with current realities.
9A. Volatility-Based Decomposition in Real-World Systems
The principles of Volatility-Based Decomposition are most clearly demonstrated when applied to real, long-lived software systems operating under continuous change. The following examples are intentionally domain-agnostic and generalized, focusing on architectural forces rather than implementation specifics.
Example 1: Long-Lived Enterprise Platform
In large enterprise platforms supporting multiple business units, functional requirements evolve unevenly. Core workflows may remain stable for years, while regulatory logic, reporting requirements, and integration points change frequently.
Applying VBD in this context typically reveals:
- Managers remain stable, coordinating high-level workflows and orchestration that change infrequently.
- Engines absorb volatility related to business rules, calculations, policy enforcement, and workflow-specific decisioning.
- Resource Accessors isolate churn driven by database migrations, schema evolution, vendor swaps, and third-party integrations.
- Utilities encapsulate cross-cutting concerns such as auditing and compliance logging, which evolve independently of business logic.
This decomposition localizes regulatory- and integration-driven change, preventing widespread refactoring when external requirements shift.
Figure 3 — Core Use Case: Order Processing: demonstrates how a core use case flows through VBD component roles, with Managers coordinating, Engines executing business logic, Accessors handling persistence, and Utilities providing cross-cutting capabilities.
```mermaid
sequenceDiagram
actor Client
participant OM as OrderManager
participant VE as ValidationEngine
participant PE as PricingEngine
participant OR as OrderRepository
participant PG as PaymentGateway
participant LOG as LoggingUtility
Client->>OM: submitOrder(orderRequest)
OM->>LOG: log(Order received, correlationId)
OM->>VE: validateOrder(orderRequest)
VE-->>OM: ValidationResult
alt Validation passed
OM->>PE: calculatePrice(orderRequest)
PE-->>OM: PricedOrder
OM->>OR: saveOrder(pricedOrder)
OR-->>OM: orderId
OM->>PG: processPayment(orderId, amount)
PG-->>OM: PaymentConfirmation
OM->>LOG: log(Order complete, orderId)
OM-->>Client: OrderConfirmation
else Validation failed
OM->>LOG: log(Order rejected, errors)
OM-->>Client: ValidationError
end
```
Example 2: Integration-Heavy or Platform Systems
Systems that serve as integration hubs — connecting internal services, external partners, and third-party APIs — experience high environmental and infrastructural volatility. APIs version independently, protocols evolve, and reliability characteristics vary widely.
Under VBD:
- Managers remain insulated from integration complexity, focusing on orchestration and intent.
- Engines handle transformation, enrichment, and aggregation logic without direct exposure to external systems.
- Resource Accessors encapsulate protocol translation, retries, circuit breaking, vendor-specific behavior, and API version management.
This structure allows integrations to be replaced or upgraded with minimal impact beyond the accessor layer.
Example 3: Rapidly Evolving Product Systems
Product-focused systems often face intense functional volatility driven by experimentation, user feedback, and market pressure. Performance and scalability concerns may also shift rapidly as adoption grows.
VBD enables such systems to evolve by:
- Keeping Managers stable, preserving workflow integrity and orchestration even as underlying behavior shifts.
- Allowing Engines to absorb frequent algorithmic, rule, and decision-logic changes.
- Using Resource Accessors to isolate churn in persistence choices, external dependencies, and infrastructure-facing concerns.
- Using Utilities to adapt cross-cutting needs such as observability and feature flagging without contaminating core logic.
This separation enables rapid iteration while maintaining architectural coherence.
Observed Outcomes
Across these scenarios, consistent outcomes emerge:
- Change is localized rather than systemic.
- Architectural intent remains visible and enforceable.
- Teams can evolve different parts of the system independently.
These results reinforce VBD's central premise: aligning architectural boundaries with volatility produces systems that are resilient over time.
9B. Practitioner Observations
The following observations emerge from applying Volatility-Based Decomposition across multiple systems over extended periods. They are not specific to any organization or domain. Rather, they describe recurring patterns, tensions, and emergent behaviors that practitioners are likely to encounter when VBD is applied consistently as a system matures.
Observation 1: Volatility Migration
Volatility axes are not permanent. What was stable for years can become intensely volatile due to external forces, and what was once highly volatile can stabilize as a domain matures. A common example is compliance logic: rules that remained unchanged for a decade may suddenly enter a period of rapid, continuous change when new regulation is introduced. Architectures that hardcode compliance assumptions into Managers or even Utilities find themselves performing invasive surgery across the system. The practical implication is that volatility classification must be revisited periodically. Systems designed under VBD absorb this migration more gracefully because the boundaries already exist — the work becomes reclassification and, at worst, extraction of logic from one role into another, rather than a fundamental restructuring.
Observation 2: Accessor Accumulation
As systems grow, Resource Accessors tend to proliferate. Each new vendor integration, database, external API, or infrastructure dependency typically receives its own Accessor, which is correct according to VBD principles. Over time, however, the sheer number of Accessors can become a management burden. The diagnostic question is whether multiple Accessors share the same volatility profile — if two Accessors change for the same reasons, at the same rate, and are owned by the same team, consolidation is appropriate. If they change independently, they should remain separate regardless of superficial similarity. The temptation to merge Accessors for tidiness should be resisted when their volatility profiles differ, as doing so reintroduces the coupling VBD was designed to eliminate.
Observation 3: Engine Extraction
Logic that begins inside a Manager has a natural tendency to migrate into Engines over time. In early development, when the business rules are not yet fully understood, teams often embed decision logic directly in the Manager's orchestration flow because the rules seem simple or because the team is still discovering what the rules actually are. As the system matures and the rules become clearer, more complex, or more independently volatile, practitioners consistently find themselves extracting that logic into dedicated Engines. This is not a failure of initial design — it is a healthy and expected progression. VBD accommodates this migration by design: the Manager's orchestration structure remains intact while the extracted Engine absorbs the volatile logic behind a clean interface.
Observation 4: The Utility Trap
Utilities are the most frequently misused component role in VBD. Because they are accessible to all other roles and represent "shared" capabilities, there is a persistent temptation to place logic in a Utility simply because multiple consumers need it. The diagnostic is straightforward: if the Utility contains domain knowledge — if it knows about orders, customers, pricing, or any business concept — it is not a Utility. It is an Engine or an Accessor that has been misclassified for convenience. True Utilities are domain-agnostic: logging, cryptographic hashing, date formatting, correlation identifier generation, metric collection. When a Utility begins accumulating business-aware conditional logic, it should be reclassified and relocated before it becomes a hidden coupling point that undermines the entire decomposition.
Observation 5: Conway's Law Alignment
VBD interacts productively with Conway's Law — the observation that system structure tends to mirror organizational communication structure. When component boundaries are aligned with volatility axes, team ownership naturally follows. The team that owns pricing policy owns the Pricing Engine. The team that manages vendor relationships owns the relevant Resource Accessors. This alignment reduces cross-team coordination costs because changes are localized not only architecturally but organizationally. Conversely, when VBD boundaries conflict with team structure, one of two things must change: either the architecture is adjusted to reflect organizational reality, or the organization is restructured to match the architecture. In practice, the most successful outcomes occur when both are considered together.
Observation 6: Emergent Configuration-Drivenness
Systems built under VBD tend to become configuration-driven over time, even when this was not an explicit design goal. The mechanism is straightforward: because Engines encapsulate business rules behind stable interfaces, and because those rules change frequently, teams naturally begin externalizing rule parameters into configuration rather than modifying code for each change. Pricing tiers become configuration tables. Eligibility thresholds become feature flags. Validation constraints become schema-driven. This emergent behavior is a structural consequence of properly isolating volatile logic — once the volatility boundary is clean, the path of least resistance for absorbing change shifts from code modification to configuration update. Practitioners should recognize and embrace this tendency rather than treating it as accidental.
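This drift toward configuration can be sketched by externalizing the hypothetical tier rules of a pricing Engine into data. Changing policy then means editing configuration, not code; the values below are placeholders.

```python
# Pricing tiers externalized into configuration. In a real system this
# would come from a config file or service rather than a literal.
PRICING_CONFIG = {
    "tiers": [
        {"min_quantity": 100, "discount": 0.10},
        {"min_quantity": 0, "discount": 0.0},
    ]
}

class ConfigDrivenPricingEngine:
    """Engine whose volatile rules are data, not code paths."""
    def __init__(self, config: dict) -> None:
        # Order tiers high to low so the first match wins.
        self._tiers = sorted(config["tiers"],
                             key=lambda t: -t["min_quantity"])

    def price(self, quantity: int, unit_price: float) -> float:
        discount = next(t["discount"] for t in self._tiers
                        if quantity >= t["min_quantity"])
        return round(quantity * unit_price * (1 - discount), 2)
```

The Engine's interface and selection logic stay fixed; absorbing the next policy change is a configuration edit with no redeployment of business code.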
Observation 7: The Stability Paradox of Managers
Managers are designed to be the most stable component in a VBD system, yet they are often the first component written and the last to be understood correctly. Early in a system's life, Managers tend to accumulate logic that belongs elsewhere — business rules, data transformation, error handling policy — because the team has not yet identified which concerns are independently volatile. The paradox is that achieving Manager stability requires the most architectural discipline, precisely because Managers sit at the top of the invocation hierarchy and are the most convenient place to add logic quickly. Teams that enforce Manager stability from the outset consistently report lower long-term maintenance costs, while teams that allow Managers to accumulate non-orchestration logic find themselves performing costly extractions later.
Observation 8: Boundary Pressure as a Health Signal
In mature VBD systems, the points where teams feel the most pressure to violate communication rules serve as reliable indicators of architectural misalignment. When an Engine needs to call another Engine, it typically signals that the Manager above them is not properly coordinating the workflow, or that the two Engines should be merged because they share a volatility axis. When a Resource Accessor begins applying business rules, it signals that an Engine is missing from the architecture. Rather than treating boundary violations as failures of discipline alone, experienced practitioners use them as diagnostic signals — the pressure to violate a rule reveals where the current decomposition no longer matches the system's actual volatility profile.
10. Conclusion
Volatility-Based Decomposition provides a disciplined approach to software architecture that treats change as an explicit design constraint rather than an afterthought. By identifying functional, non-functional, cross-cutting, and environmental sources of volatility, architects can align system boundaries with the forces most likely to cause erosion over time.
Through established component roles, strict communication rules, and continuous validation against core use cases, Volatility-Based Decomposition supports the creation of architectures that remain adaptable as systems grow in scale, complexity, and organizational impact.
In many organizations, architectural weakness is revealed not by the initial change itself, but by the unintended side effects that follow. A seemingly localized modification triggers a cascade of downstream impacts, forcing broad testing cycles, emergency fixes, and production instability. These domino effects erode confidence, slow delivery, and increase operational risk.
By aligning architectural boundaries with volatility and validating changes against a small set of core use cases, VBD localizes change and limits its blast radius. While no approach can eliminate change, volatility-based decomposition reduces the likelihood that a single modification will destabilize unrelated parts of the system, preserving architectural integrity and lowering long-term maintenance costs.
As software systems continue to operate in increasingly dynamic environments, architectures that explicitly design for volatility will prove more resilient than those optimized solely for present-day requirements.
Appendix A: Glossary
Blast Radius — The extent of system impact caused by a single change. VBD aims to minimize blast radius by aligning boundaries with volatility axes.
Change Coupling — A condition where modifying one component forces changes in another, even when the two have no logical dependency. Indicates misaligned volatility boundaries.
Communication Rules — The explicit constraints governing which component roles may invoke which others, and through what mechanisms. Prevents dependency erosion and preserves volatility isolation.
Component Role — One of the four architectural roles assigned to a component based on the type of concern it encapsulates: Manager, Engine, Resource Accessor, or Utility.
Core Use Case — A high-level system behavior that defines primary business value and exercises all component tiers. Used to validate architectural decisions and communication rules.
Decomposition — The process of breaking a system into components along boundaries that align with anticipated sources of change.
Engine — A component that encapsulates business rules, calculations, transformations, and policy logic. Engines answer how the system performs its work and absorb functional volatility as business rules evolve.
Environmental Volatility — Change driven by infrastructure platforms, third-party services, vendor APIs, deployment models, and hosting environments.
Functional Volatility — Change driven by evolving business behavior, workflows, features, regulations, and user requirements.
Information Hiding — Parnas's principle of decomposing systems based on design decisions likely to change. VBD generalizes this from individual modules to architectural boundaries.
Manager — A component that coordinates workflow, sequencing, and intent. Managers answer what the system does and remain stable over time by containing no business logic or infrastructure awareness.
Non-Functional Volatility — Change driven by performance, scalability, reliability, and other quality-of-service requirements.
Orchestration — The coordination of workflow steps, sequencing, and intent. Separated from execution in VBD to allow workflows and business rules to evolve independently.
Resource Accessor — A component that isolates interactions with databases, external APIs, vendors, and infrastructure. Resource Accessors answer where data and services live and shield the system from environmental volatility.
Stability — The property of a component that changes infrequently relative to others. Managers are designed to be the most stable components in a VBD system.
Utility — A component that encapsulates cross-cutting concerns such as logging, monitoring, security, and observability. Utilities are orthogonal to business logic and may be called by any other component.
Volatility — The likelihood, frequency, and impact of change affecting a system responsibility or requirement. The primary organizing force in VBD.
Volatility Axis — A dimension along which change is expected to occur. The four primary axes are functional, non-functional, cross-cutting, and environmental.
Volatility Boundary — An architectural seam placed to contain a specific source of change, preventing it from propagating to unrelated components.
Volatility Migration — The phenomenon where a concern shifts from one volatility axis to another over time, requiring reassessment of component boundaries.
Appendix B: Applicability Checklist
Volatility-Based Decomposition is particularly well-suited for systems that:
- Are expected to evolve over multiple years
- Operate across changing infrastructure or regulatory environments
- Support multiple teams or organizational boundaries
- Require long-term maintainability and extensibility
Appendix C: Case Study — Multi-Tenant SaaS Billing Platform
This appendix presents a fictional but architecturally realistic case study applying Volatility-Based Decomposition to a multi-tenant SaaS billing platform. The system is responsible for generating invoices, calculating charges based on usage and subscription tiers, applying externally imposed adjustments across multiple jurisdictions, processing payments through multiple providers, and notifying tenants of billing activity.
C.1 Volatility Analysis
The first step is identifying what changes and what remains stable.
High Volatility:
- Pricing rules — Subscription tiers, volume discounts, promotional offers, and per-unit rates change continuously as business strategy evolves. Pricing data lives behind an accessor because where and how prices are stored is itself a volatility axis.
- External adjustments — Tax rates, nexus rules, exemption categories, regulatory fees, and reporting obligations change across jurisdictions independently and unpredictably. External tax services and local databases are isolated behind an accessor.
- Payment providers — New providers are added, existing providers change APIs, and regional payment methods must be supported. All provider volatility is isolated behind a single accessor.
- Invoice storage — Where and how invoices are persisted — schema, storage technology, archival strategy — is isolated behind an accessor owned by the InvoicingEngine.
- Notification channels — How tenants are notified (email, SMS, webhook, in-app) changes based on product decisions and tenant preferences.
Low Volatility:
- Billing lifecycle — The fundamental sequence of calculating charges, applying adjustments, generating an invoice, collecting payment, and notifying the tenant has remained stable across billing systems for decades.
- Audit requirements — The need to maintain a complete, immutable audit trail of all financial transactions is a permanent architectural requirement.
C.2 Component Identification
Managers:
- BillingManager — Orchestrates the end-to-end billing lifecycle: delegates pricing calculation, adjustment application, invoice creation and persistence, payment collection, and tenant notification. Contains no business rules. Expresses intent and sequence only. Fires an async event to NotificationManager after payment is confirmed.
- NotificationManager — Orchestrates tenant notification delivery. Receives a billing completion event asynchronously from BillingManager and coordinates the appropriate notification channels. Decoupled from the billing flow — notification failure does not affect billing outcome.
Engines:
- PricingEngine — Encapsulates all pricing logic: subscription tier calculations, usage-based metering aggregation, volume discounts, and promotional adjustments. Retrieves current pricing rules via PriceAccessor. Discounts are an aspect of pricing strategy and belong here.
- AdjustmentEngine — Encapsulates all externally imposed modifications to a priced transaction: tax calculations, regulatory fees, payment processing surcharges, and any other obligations not owned by the business's pricing strategy. Tax is the primary case — AdjustmentEngine retrieves rates via TaxAccessor and applies jurisdiction-specific rules — but the Engine's boundary is defined by the volatility axis, not the adjustment type. When a new regulatory fee or cross-border surcharge must be applied, AdjustmentEngine is the only component that changes.
- InvoicingEngine — Assembles the invoice from priced and adjusted line items, calculates totals, and persists the completed invoice via InventoryAccessor. Owns both creation and persistence — BillingManager receives only the invoiceId in return.
Resource Accessors:
- PriceAccessor — Retrieves pricing rules, tier definitions, and discount schedules from the pricing data store. Isolates PricingEngine from changes in pricing data structure, storage technology, and external pricing service APIs.
- TaxAccessor — Retrieves tax rates and jurisdiction rules from external tax services (Avalara, TaxJar) or local tax databases. Isolates AdjustmentEngine from changes in data sources and service provider APIs.
- InventoryAccessor — Manages persistence of generated invoices, line items, and billing history. Called by InvoicingEngine, not by BillingManager directly.
- PaymentAccessor — Manages integration with external payment providers (Stripe, PayPal, bank ACH, and future providers). Translates provider-specific protocols into a uniform PaymentConfirmation interface.
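The uniform contract mentioned for PaymentAccessor can be sketched as follows. The PaymentConfirmation fields and the simulated provider payload are illustrative assumptions, not a prescribed schema; the point is that provider-specific response shapes are translated at the accessor boundary so callers never see them:

```python
from dataclasses import dataclass

@dataclass
class PaymentConfirmation:
    """Uniform result shape; hypothetical fields for illustration."""
    success: bool
    transaction_id: str
    provider: str

class PaymentAccessor:
    def collect_payment(self, invoice_id: str, amount: float, method: str) -> PaymentConfirmation:
        # A real implementation would dispatch to the configured provider.
        # Here a provider-specific payload is simulated and then normalized.
        raw = {"id": f"txn-{invoice_id}", "status": "succeeded"}  # e.g. a Stripe-like response
        return PaymentConfirmation(
            success=(raw["status"] == "succeeded"),
            transaction_id=raw["id"],
            provider=method,
        )
```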
C.3 Component Architecture
Figure C.1 — Billing Platform Component Architecture: BillingManager orchestrates the billing lifecycle, delegating to each Engine and calling PaymentAccessor directly. Each Engine owns its own accessor boundary. NotificationManager receives an async event after payment is confirmed and operates independently of the billing flow.
graph TB
BM[BillingManager]
PE[PricingEngine]
TE[AdjustmentEngine]
IGE[InvoicingEngine]
PA[PaymentAccessor]
PRA[PriceAccessor]
TSA[TaxAccessor]
ISA[InventoryAccessor]
NM[NotificationManager]
BM --> PE
BM --> TE
BM --> IGE
BM --> PA
BM -.->|async event| NM
PE --> PRA
TE --> TSA
IGE --> ISA
style BM fill:#0053e2,color:#fff,stroke-width:0px
style NM fill:#0053e2,color:#fff,stroke-width:0px
style PE fill:#ffc220,color:#000,stroke-width:0px
style TE fill:#ffc220,color:#000,stroke-width:0px
style IGE fill:#ffc220,color:#000,stroke-width:0px
style PA fill:#2a8703,color:#fff,stroke-width:0px
style PRA fill:#2a8703,color:#fff,stroke-width:0px
style TSA fill:#2a8703,color:#fff,stroke-width:0px
style ISA fill:#2a8703,color:#fff,stroke-width:0px
C.4 Communication Rules Applied
| Source | Target | Permitted | Rationale |
|---|---|---|---|
| BillingManager | PricingEngine | Yes | Manager invokes Engine |
| BillingManager | AdjustmentEngine | Yes | Manager invokes Engine |
| BillingManager | InvoicingEngine | Yes | Manager invokes Engine |
| BillingManager | PaymentAccessor | Yes | Manager invokes Accessor — after invoice is complete |
| BillingManager | NotificationManager | Yes (async) | Manager-to-Manager via event — fire and forget |
| BillingManager | InventoryAccessor | No | Invoice persistence is InvoicingEngine's responsibility |
| PricingEngine | AdjustmentEngine | No | Engine-to-Engine communication prohibited |
| PricingEngine | PriceAccessor | Yes | Engine calls its own Accessor |
| AdjustmentEngine | TaxAccessor | Yes | Engine calls its own Accessor |
| InvoicingEngine | InventoryAccessor | Yes | Engine calls Accessor to complete its responsibility |
| PaymentAccessor | InventoryAccessor | No | Accessor-to-Accessor communication prohibited |
BillingManager does not call InventoryAccessor directly. Invoice persistence is encapsulated within InvoicingEngine, which calls InventoryAccessor as part of fulfilling its own responsibility. BillingManager receives only the invoiceId in return.
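Because the rules are explicit, they can also be checked mechanically. A minimal sketch, encoding the permitted calls from the table above as data for use in an architecture test; the function name and representation are illustrative, and async-only constraints would need richer modeling than this:

```python
# Permitted (source, target) pairs from the communication rules table.
ALLOWED_CALLS = {
    ("BillingManager", "PricingEngine"),
    ("BillingManager", "AdjustmentEngine"),
    ("BillingManager", "InvoicingEngine"),
    ("BillingManager", "PaymentAccessor"),
    ("BillingManager", "NotificationManager"),  # async event only
    ("PricingEngine", "PriceAccessor"),
    ("AdjustmentEngine", "TaxAccessor"),
    ("InvoicingEngine", "InventoryAccessor"),
}

def check_dependency(source: str, target: str) -> bool:
    """Return True if the proposed call is permitted by the rules; everything else is a violation."""
    return (source, target) in ALLOWED_CALLS
```

Wired into CI against an extracted dependency graph, a check like this turns the boundary-pressure signal of Observation 8 into a visible build failure rather than silent erosion.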
C.5 Core Use Case Walkthrough: Generate and Process Monthly Invoice
- BillingManager receives a trigger to generate the monthly invoice for a tenant.
- BillingManager calls PricingEngine with the tenant's usage data and subscription tier. PricingEngine retrieves current pricing rules via PriceAccessor, calculates line items, applies discounts and pro-rations, and returns a priced line item set.
- BillingManager calls AdjustmentEngine with the priced line items and transaction context (jurisdiction, payment method, tenant classification). AdjustmentEngine applies all externally imposed obligations — retrieving tax rates via TaxAccessor, applying jurisdiction-specific rules, and layering in any applicable fees or surcharges — and returns the adjusted line items. BillingManager has no knowledge of which adjustment types were applied or how many.
- BillingManager calls InvoicingEngine with the priced and adjusted line items. InvoicingEngine assembles the invoice document, calculates totals, and persists the completed invoice via InventoryAccessor. BillingManager receives only the invoiceId in return.
- BillingManager calls PaymentAccessor with the invoiceId, total amount, and payment method. PaymentAccessor interacts with the configured payment provider, handles provider-specific protocols, and returns a PaymentConfirmation.
- BillingManager fires an async event to NotificationManager with the billing result. NotificationManager operates independently — it coordinates notification delivery without blocking or affecting the billing outcome.
Figure C.2 — Core Use Case: Generate and Process Monthly Invoice. Steps 1–4 are sequential — pricing before adjustments, adjustments before invoice, invoice before payment. Step 5 is fire-and-forget.
graph TB
U([Tenant])
BM[BillingManager]
PE[PricingEngine]
PRA[PriceAccessor]
TE[AdjustmentEngine]
TSA[TaxAccessor]
IGE[InvoicingEngine]
ISA[InventoryAccessor]
PA[PaymentAccessor]
NM[NotificationManager]
U -->|generateInvoice| BM
BM -->|"1 · calculateCharges"| PE
PE -->|getPricingRules| PRA
PRA -->|PricingRules| PE
BM -->|"2 · applyAdjustments"| TE
TE -->|getTaxRates| TSA
TSA -->|TaxRates| TE
BM -->|"3 · createInvoice"| IGE
IGE -->|persist| ISA
ISA -->|invoiceId| IGE
BM -->|"4 · collectPayment"| PA
BM -.->|"5 · async event"| NM
style U fill:#f8f9fa,stroke:#636e72,color:#636e72,stroke-width:1px
style BM fill:#0053e2,color:#fff,stroke-width:0px
style NM fill:#0053e2,color:#fff,stroke-width:0px
style PE fill:#ffc220,color:#000,stroke-width:0px
style TE fill:#ffc220,color:#000,stroke-width:0px
style IGE fill:#ffc220,color:#000,stroke-width:0px
style PA fill:#2a8703,color:#fff,stroke-width:0px
style PRA fill:#2a8703,color:#fff,stroke-width:0px
style TSA fill:#2a8703,color:#fff,stroke-width:0px
style ISA fill:#2a8703,color:#fff,stroke-width:0px
C.6 Internal Design: PaymentAccessor and the Strategy Pattern
VBD defines the boundaries between components and the rules governing communication across those boundaries. It says nothing about how a component is organized internally. That is deliberate — once a boundary is established, the team that owns the component is free to structure its internals however best serves maintainability, testability, and extensibility, without any obligation to the rest of the system.
PaymentAccessor illustrates this clearly. Its contract to the outside world is simple: receive a payment request, return a PaymentConfirmation or failure. How it fulfills that contract is entirely its own concern. Internally, PaymentAccessor implements the Strategy pattern — a PaymentProviderFactory resolves the correct IPaymentProvider implementation based on the context of the call (payment method, region, tenant configuration, or any other dispatch criterion), and delegates execution to it.
Figure C.3 — PaymentAccessor internal structure. The Strategy pattern isolates provider-specific implementations behind IPaymentProvider. PaymentProviderFactory resolves the correct implementation at runtime based on call context. The rest of the system sees none of this.
graph TB
subgraph boundary["PaymentAccessor — internal structure"]
direction TB
F["PaymentProviderFactory"]
I["<<interface>> IPaymentProvider"]
S1["StripeProvider"]
S2["PayPalProvider"]
S3["ACHProvider"]
F -->|resolves| I
I -.->|implemented by| S1
I -.->|implemented by| S2
I -.->|implemented by| S3
end
style F fill:#e8f4fd,stroke:#0053e2,color:#0053e2,stroke-width:1px
style I fill:#f8f9fa,stroke:#636e72,color:#2d2d2d,stroke-width:1px,stroke-dasharray:4
style S1 fill:#2a8703,color:#fff,stroke-width:0px
style S2 fill:#2a8703,color:#fff,stroke-width:0px
style S3 fill:#2a8703,color:#fff,stroke-width:0px
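A minimal sketch of the structure in Figure C.3. The method signatures, registry keys, and provider return values are illustrative assumptions; the factory here dispatches on payment method alone, where a real implementation might also consider region or tenant configuration:

```python
from abc import ABC, abstractmethod

class IPaymentProvider(ABC):
    """The Strategy interface; callers depend on this, never on a concrete provider."""
    @abstractmethod
    def charge(self, amount: float) -> str: ...

class StripeProvider(IPaymentProvider):
    def charge(self, amount: float) -> str:
        return f"stripe:{amount}"  # stand-in for a real provider call

class PayPalProvider(IPaymentProvider):
    def charge(self, amount: float) -> str:
        return f"paypal:{amount}"

class PaymentProviderFactory:
    """Resolves the concrete provider from call context; the rest of the system sees none of this."""
    _registry: dict = {
        "card": StripeProvider,
        "paypal": PayPalProvider,
    }

    @classmethod
    def resolve(cls, method: str) -> IPaymentProvider:
        return cls._registry[method]()
```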
Because BillingManager holds only a reference to PaymentAccessor — not to any of its internals — the team owning PaymentAccessor can:
- Add a new provider by writing a new IPaymentProvider implementation. No other component changes.
- Refactor the factory — change dispatch logic, add caching, introduce weighted routing — without touching BillingManager or any other component.
- Replace the internal architecture entirely — swap the Strategy pattern for a plugin registry, a configuration-driven resolver, or anything else — as long as the external contract is preserved.
- Test provider implementations in isolation, mock at the IPaymentProvider boundary, and evolve internal test strategy independently of the rest of the system.
This is the compounding benefit of a well-placed boundary. VBD does not prescribe how PaymentAccessor is built — it only prescribes where its boundary is and what may cross it. Everything inside that boundary is a local decision, free from the coordination cost that would accompany any change visible to the rest of the system.
The same principle applies to every component. PricingEngine may internally maintain a chain of discount calculators. InvoicingEngine may use a template engine or a document builder. AdjustmentEngine may dispatch to multiple adjustment strategies in sequence. These are internal implementation choices. The system does not know and does not need to know.
C.7 Volatility Isolation Demonstrated: Adding Crypto Payments
The business decides to accept cryptocurrency payments. A new CryptoProvider implementation is written — handling blockchain confirmation logic, wallet address validation, exchange rate locking, and the crypto processor's API — and registered with the factory. That is the entirety of the change.
Figure C.4 — CryptoProvider added to PaymentAccessor. One new class, registered with the factory. The interface contract is unchanged. No other component in the system is modified or redeployed.
graph TB
subgraph boundary["PaymentAccessor — after adding crypto support"]
direction TB
F["PaymentProviderFactory"]
I["<<interface>> IPaymentProvider"]
S1["StripeProvider"]
S2["PayPalProvider"]
S3["ACHProvider"]
S4["CryptoProvider"]
F -->|resolves| I
I -.->|implemented by| S1
I -.->|implemented by| S2
I -.->|implemented by| S3
I -.->|implemented by| S4
end
style F fill:#e8f4fd,stroke:#0053e2,color:#0053e2,stroke-width:1px
style I fill:#f8f9fa,stroke:#636e72,color:#2d2d2d,stroke-width:1px,stroke-dasharray:4
style S1 fill:#2a8703,color:#fff,stroke-width:0px
style S2 fill:#2a8703,color:#fff,stroke-width:0px
style S3 fill:#2a8703,color:#fff,stroke-width:0px
style S4 fill:#2a8703,color:#fff,stroke:#fff,stroke-width:2px,stroke-dasharray:4
BillingManager still calls collectPayment(invoiceId, amount, paymentMethod) — the same call it has always made. It has no knowledge that a new payment type exists. PricingEngine, AdjustmentEngine, InvoicingEngine, and NotificationManager are untouched. The factory now routes crypto payment requests to CryptoProvider; everything else routes exactly as before.
One class added. Eight components untouched. The payment provider axis of volatility is fully contained within PaymentAccessor, and the boundary enforces that containment regardless of how many providers are added over time.
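Under a registry-based variant of the factory, the entire change can be sketched as one new class plus one registration; all names here are illustrative stand-ins, not the prescribed implementation:

```python
# Hypothetical provider registry; a decorator registers each implementation.
PROVIDERS = {}

def register(method):
    def wrap(cls):
        PROVIDERS[method] = cls
        return cls
    return wrap

@register("card")
class StripeProvider:
    def charge(self, amount): return f"stripe:{amount}"

# The entirety of the crypto change: one new class, one registration line.
# No Manager, Engine, or other Accessor is modified.
@register("crypto")
class CryptoProvider:
    def charge(self, amount): return f"crypto:{amount}"
```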
References and Influences
The concepts presented in this paper are informed by, and build upon, established work in software architecture, component design, and object-oriented systems. Volatility-Based Decomposition is not presented as a novel invention, but as a practitioner-oriented articulation of principles that have emerged and matured through decades of architectural thought and practice.
Juval Löwy
Löwy, Juval. Righting Software. Addison-Wesley, 2019.
Löwy, Juval. Programming .NET Components. O'Reilly Media, 2005.
Juval Löwy's work is the primary foundation for Volatility-Based Decomposition. His IDesign methodology explicitly frames volatility as the dominant architectural force and emphasizes designing systems around anticipated change rather than static functionality. The Manager, Engine, and Resource Accessor role taxonomy, along with the associated communication rules and component interaction discipline described in this paper, are derived directly from IDesign training. This paper builds upon Löwy's work by consolidating these principles into a single, cohesive decomposition process and demonstrating their application across diverse system contexts.
David L. Parnas
Parnas, David L. "On the Criteria To Be Used in Decomposing Systems into Modules." Communications of the ACM, 1972.
Parnas introduced the principle of information hiding, arguing that systems should be decomposed based on the design decisions most likely to change. This idea is foundational to Volatility-Based Decomposition. VBD can be viewed as a system-level extension of Parnas's insight, generalizing information hiding beyond individual modules to entire architectural boundaries aligned with volatility axes.
Gang of Four (Erich Gamma, Richard Helm, Ralph Johnson, John Vlissides)
Gamma, Erich; Helm, Richard; Johnson, Ralph; Vlissides, John. Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley, 1994.
The Gang of Four cataloged recurring object-oriented patterns that encapsulate and localize variation at the class and collaboration level. These patterns demonstrate how volatility can be managed through indirection, composition, and role separation. Volatility-Based Decomposition extends this pattern-based thinking from the object scale to the architectural scale, applying the same principles of variation isolation across components and services.
Robert C. Martin
Martin, Robert C. Clean Architecture. Pearson, 2017.
Martin, Robert C. Agile Software Development: Principles, Patterns, and Practices. Pearson, 2002.
Martin's work emphasizes responsibility-driven design, stable dependency direction, and the separation of policy from implementation details. These ideas complement VBD's focus on isolating volatile concerns and enforcing communication rules that keep dependency direction stable, preventing architectural boundaries from eroding over time.
Eric Evans
Evans, Eric. Domain-Driven Design: Tackling Complexity in the Heart of Software. Addison-Wesley, 2003.
Domain-Driven Design introduces bounded contexts as strategic boundaries for managing conceptual and organizational complexity. While VBD does not prescribe domain boundaries, it aligns with Evans's emphasis on explicit boundary definition. Volatility-Based Decomposition can coexist with DDD by treating bounded contexts as one possible axis of volatility among others, such as regulatory change or infrastructure churn.
Gregor Hohpe and Bobby Woolf
Hohpe, Gregor; Woolf, Bobby. Enterprise Integration Patterns. Addison-Wesley, 2003.
Hohpe and Woolf formalized integration and messaging patterns that address the volatility inherent in distributed systems. Their work informs the disciplined communication rules emphasized in VBD, particularly around asynchronous messaging, decoupling, and the containment of integration complexity within well-defined architectural boundaries.
Mary Shaw and David Garlan
Shaw, Mary; Garlan, David. Software Architecture: Perspectives on an Emerging Discipline. Pearson, 1996.
Shaw and Garlan helped establish software architecture as a first-class discipline distinct from programming and design. Their work provides the conceptual grounding for system-level decomposition approaches like Volatility-Based Decomposition, reinforcing the idea that architectural structure must be reasoned about explicitly and evaluated continuously as systems evolve.
Author's Note
Volatility-Based Decomposition (VBD), including its terminology, volatility-first orientation, component role taxonomy, and communication rules, originates from Juval Löwy's IDesign methodology. This paper does not introduce a new architectural approach. It provides a consolidated, practitioner-oriented articulation of VBD, emphasizing decomposition mechanics, validation strategies, and real-world application across long-lived software systems.
The intent of this paper is to serve as a durable reference that translates volatility-centered principles into a form suitable for consistent application, discussion, and review within modern engineering organizations.
Distribution Note
This document is provided for informational and educational purposes. It may be shared internally within organizations, used as a reference in architectural discussions, or adapted for non-commercial educational use with appropriate attribution. This paper does not represent official policy, standards, or architectural mandates of any current or former employer. All examples are generalized and abstracted to avoid disclosure of proprietary or sensitive information.