Volatility-Based Design

Volatility-Based Decomposition

Modern software systems rarely fail because of poor initial design; they fail because change accumulates faster than the architecture can absorb it. Volatility-based decomposition (VBD) addresses this by treating change as the primary organizing force in system design.


The Core Idea

Rather than decomposing systems by domain concepts or technical layers, VBD organizes architectural boundaries around anticipated sources of volatility. Components that change for the same reason — and at the same rate — belong together. Components that change for different reasons belong apart, regardless of their current functional relationship.

Business strategy evolves continuously. Markets shift. Regulations change. Organizations naturally structure themselves around functional responsibilities — Sales, Operations, Finance — and software frequently mirrors that structure. Early in a system’s life, this alignment works. Over time, tension emerges: most meaningful changes cut across functional boundaries rather than remaining contained within them. VBD addresses this mismatch by aligning architectural boundaries with change dynamics rather than organizational structure alone.

The Four Axes of Volatility

  • Functional: changes in system behavior driven by evolving business needs. Examples: new features, modified workflows, regulatory changes.
  • Non-Functional: changes to system qualities. Examples: performance, scalability, reliability, security.
  • Cross-Cutting: changes spanning multiple components. Examples: logging, monitoring, authentication, error handling.
  • Environmental: changes to infrastructure and external systems. Examples: database migrations, vendor APIs, hosting platforms.

Component Roles: What, How, Where, With What

VBD assigns each volatility axis to a dedicated component role. The mental model is four questions: what does the system do, how does it do it, where does data live, and with what shared capabilities. No single role contains all change. The four roles working together localize change across every axis.

  • Manager: "What does the system do?" Orchestration: workflow, sequencing, intent. Remains stable over time.
  • Engine: "How does it do it?" Execution: business rules, calculations, policies. Changes most frequently.
  • Resource Accessor: "Where does data live?" Integration: databases, vendors, external systems. Thin translation layer.
  • Utility: "With what support?" Cross-cutting: logging, auth, monitoring, observability. No domain knowledge.

If a unit of work decides what happens next, it belongs in a Manager. If it computes how to do it, it belongs in an Engine. If it reaches out to where data or services live, it belongs in a Resource Accessor. If it supports everything but belongs to no domain, it is a Utility.

Role Definitions

Managers

Managers coordinate operation flow and encapsulate high-level business orchestration. They represent business intent and workflow coordination. A Manager sequences steps such as validation, pricing, fulfillment, and confirmation without embedding the business rules that perform those steps. Managers should not implement business rules, perform data aggregation, or contain persistence logic. Their responsibility is to express what the system is trying to accomplish, not how it is achieved.

Engines

Engines execute business rules, transformations, and computationally intensive operations. They encapsulate the logic most likely to change due to policy shifts, experimentation, or optimization. Engines may persist data when appropriate — they are not required to delegate all data access to Resource Accessors. They are invoked by Managers and remain unaware of the broader execution context.
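As a minimal sketch of an Engine owning a volatile rule while pulling reference data through a Resource Accessor, assuming invented names (TaxEngine, TaxRateAccessor) and an in-memory stand-in for the real rate source:

```python
# Sketch of an Engine encapsulating a volatile business rule (tax policy)
# and fetching reference data through a Resource Accessor.
# All names here are illustrative, not from any framework.

class TaxRateAccessor:
    """Resource Accessor: hides where tax rates actually live (DB, vendor API)."""
    _RATES = {"US": 0.07, "DE": 0.19}  # stand-in for an external lookup

    def rate_for(self, region: str) -> float:
        return self._RATES.get(region, 0.0)

class TaxEngine:
    """Engine: owns the calculation rule, the part most likely to change."""
    def __init__(self, rates: TaxRateAccessor) -> None:
        self._rates = rates

    def tax_due(self, amount: float, region: str) -> float:
        # Policy lives here; a rule change touches only this method.
        return round(amount * self._rates.rate_for(region), 2)

engine = TaxEngine(TaxRateAccessor())
print(engine.tax_due(100.0, "DE"))  # 19.0
```

Note that the Engine is handed its Accessor rather than constructing it, so the Manager invoking the Engine stays free to decide which concrete resources back a given workflow.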

Resource Accessors

Resource Accessors manage interaction with persistence layers, external systems, vendors, and infrastructure-facing resources. They isolate environmental and integration volatility from the rest of the system. Their job is translation: convert a domain request into an external call, convert the response back. They must not apply business rules or make policy decisions.

Utilities

Utilities encapsulate cross-cutting concerns that apply broadly and evolve independently of business workflows: logging, monitoring, error classification, feature flags, and security primitives. They are orthogonal to core behavior and remain free of domain-specific knowledge.
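The four role definitions above can be sketched in one minimal workflow. All names (OrderManager, PricingEngine, OrderAccessor, log) are invented for illustration, and an in-memory dict stands in for real persistence:

```python
# Minimal sketch of the four roles cooperating in one use case.
# All names are illustrative, not taken from any real framework.

def log(message: str) -> None:
    """Utility: cross-cutting, no domain knowledge."""
    print(f"[log] {message}")

class PricingEngine:
    """Engine: business rules; the most change-prone code."""
    def price(self, items: list[float]) -> float:
        return round(sum(items) * 0.95, 2)  # current discount policy

class OrderAccessor:
    """Resource Accessor: thin translation to a store; here an in-memory dict."""
    def __init__(self) -> None:
        self._store: dict[int, float] = {}

    def save(self, order_id: int, total: float) -> None:
        self._store[order_id] = total

class OrderManager:
    """Manager: sequences the steps; expresses intent, not rules."""
    def __init__(self, engine: PricingEngine, accessor: OrderAccessor) -> None:
        self._engine, self._accessor = engine, accessor

    def place_order(self, order_id: int, items: list[float]) -> float:
        log(f"placing order {order_id}")
        total = self._engine.price(items)      # how: delegated to the Engine
        self._accessor.save(order_id, total)   # where: delegated to the Accessor
        return total

manager = OrderManager(PricingEngine(), OrderAccessor())
print(manager.place_order(1, [10.0, 20.0]))  # 28.5
```

The Manager's body reads as pure sequencing: a change to the discount policy touches only PricingEngine, a change of data store touches only OrderAccessor, and the workflow itself stays put.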

Communication Rules

The communication rules enforce clean volatility boundaries. It is the coordination between roles that captures volatility, not individual roles in isolation. The principle: state flows downward, results propagate upward, horizontal coordination is prohibited.

  • Managers invoke Engines and Resource Accessors directly
  • Managers communicate with other Managers only through asynchronous, fire-and-forget mechanisms
  • Engines must not call sibling Engines — the Manager composes instead
  • Engines may call Resource Accessors for reference data or persistence
  • Resource Accessors must not call Engines or other Accessors
  • Utilities are consumed by any role but never coordinate
  • Managers must not perform heavy computation
  • Engines must not use queued or pub/sub mechanisms
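The Manager-to-Manager rule can be sketched with Python's standard queue module as a stand-in for a real message bus; OrderManager and ShippingManager are invented names:

```python
# Sketch of Manager-to-Manager communication as fire-and-forget messaging.
# queue.Queue stands in for a real message bus.
import queue

bus: "queue.Queue[dict]" = queue.Queue()

class OrderManager:
    """Publishes an event and moves on; it never calls ShippingManager directly."""
    def place_order(self, order_id: int) -> None:
        bus.put({"event": "order_placed", "order_id": order_id})  # fire and forget

class ShippingManager:
    """Consumes events on its own schedule, typically on another thread or process."""
    def __init__(self) -> None:
        self.shipped: list[int] = []

    def drain(self) -> None:
        while not bus.empty():
            msg = bus.get()
            if msg["event"] == "order_placed":
                self.shipped.append(msg["order_id"])

OrderManager().place_order(42)
shipping = ShippingManager()
shipping.drain()
print(shipping.shipped)  # [42]
```

Because OrderManager only knows the event shape, not the consumer, either Manager can be revised or redeployed without touching the other.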

Separating Orchestration from Execution

One aspect of VBD that is often undervalued is the explicit separation of business logic into two distinct concerns: orchestration (what happens next) and execution (how to do it). When orchestration and execution are interwoven within the same unit, they become change-coupled. A modification to workflow sequencing forces changes in business rule implementation. By separating them into Managers and Engines respectively, architectures absorb these changes independently, allowing workflows and business rules to evolve at different rates.
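A minimal sketch of this decoupling, with invented names: the CheckoutManager's sequence is untouched when one discount policy replaces another. Any object with an apply() method serves as the Engine here:

```python
# Sketch: the workflow (Manager) stays fixed while the rule (Engine) is swapped.
# All names are illustrative.

class FlatDiscount:
    """Engine variant: a fixed amount off."""
    def apply(self, amount: float) -> float:
        return amount - 5.0

class PercentDiscount:
    """Engine variant: a percentage off."""
    def apply(self, amount: float) -> float:
        return amount * 0.90

class CheckoutManager:
    """Orchestration: the sequence never changes when the discount policy does."""
    def __init__(self, discount) -> None:
        self._discount = discount  # any object with apply(amount)

    def checkout(self, amount: float) -> float:
        # validate -> discount -> confirm; only the middle step's *rule* varies
        assert amount >= 0
        return round(self._discount.apply(amount), 2)

print(CheckoutManager(FlatDiscount()).checkout(50.0))     # 45.0
print(CheckoutManager(PercentDiscount()).checkout(50.0))  # 45.0
```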

Interface and Implementation

Interface and implementation do not have to live together from the start. Begin simple — a component can present its interface and contain its implementation in the same deployable unit. As deployment needs evolve, separate them. The architecture supports this progression because the communication rules already define the contracts. Premature physical separation adds complexity without benefit; the structural boundaries exist regardless of how the code is packaged.
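One way to sketch this progression, assuming a hypothetical inventory component: the contract (a typing.Protocol) and its implementation start in the same unit, and callers depend only on the contract, so the two can later be packaged or deployed separately without touching call sites:

```python
# Sketch: interface and implementation living in the same unit at first.
# InventoryAccessor defines the contract; InMemoryInventory fulfils it.
# Names are invented for illustration.
from typing import Protocol

class InventoryAccessor(Protocol):
    """The contract callers depend on, the only stable surface."""
    def stock(self, sku: str) -> int: ...

class InMemoryInventory:
    """Implementation colocated with the contract until deployment needs diverge."""
    def __init__(self, levels: dict[str, int]) -> None:
        self._levels = levels

    def stock(self, sku: str) -> int:
        return self._levels.get(sku, 0)

def can_fulfil(inventory: InventoryAccessor, sku: str, qty: int) -> bool:
    """Callers are written against the Protocol, not the concrete class."""
    return inventory.stock(sku) >= qty

inv = InMemoryInventory({"widget": 3})
print(can_fulfil(inv, "widget", 2))  # True
```

Moving InMemoryInventory behind a network boundary later means substituting another class that satisfies the same Protocol; can_fulfil never changes.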

The Decomposition Process

  1. Identify core use cases — the high-level behaviors that define the system’s purpose (typically fewer than five)
  2. Enumerate volatility axes — across functional, non-functional, cross-cutting, and environmental dimensions
  3. Classify responsibilities — based on likelihood and drivers of change
  4. Define boundaries — aligning with volatility classifications
  5. Apply component roles and communication rules — to isolate volatile responsibilities from stable ones

Validation

VBD validates structural decisions by tracing core use cases through the component hierarchy. If a scenario can be traced through Manager, Engine, and Accessor without bypassing the communication rules, the boundaries are sound. If a scenario requires an Engine to call a sibling Engine, or an Accessor to apply business logic, the boundaries need adjustment; and if a core use case cannot be satisfied at all without breaking the rules, the decomposition itself should be reconsidered.
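The tracing step can even be mechanized. A small sketch, with the ALLOWED table paraphrasing the communication rules from this document (role names lowercased for brevity):

```python
# Sketch: checking a traced call graph against the communication rules.
ALLOWED = {
    "manager": {"engine", "accessor", "utility"},  # direct manager->manager is out (async only)
    "engine": {"accessor", "utility"},             # never a sibling engine
    "accessor": {"utility"},                       # accessors only translate
    "utility": set(),                              # utilities never coordinate
}

def violations(trace: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Return every caller->callee edge that breaks the rules."""
    return [(a, b) for a, b in trace if b not in ALLOWED.get(a, set())]

# A clean trace of a core use case:
ok = [("manager", "engine"), ("engine", "accessor"), ("manager", "utility")]
print(violations(ok))  # []

# A trace where an Engine calls a sibling Engine, a boundary smell:
bad = ok + [("engine", "engine")]
print(violations(bad))  # [('engine', 'engine')]
```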

Continuous Evaluation

Volatility analysis is not a one-time activity. As systems evolve, new volatility axes emerge and existing assumptions may become invalid. Maintain architectural integrity by monitoring changes in requirements, infrastructure, and usage patterns; conducting periodic architectural reviews; updating volatility classifications and component boundaries; and communicating architectural changes to all stakeholders.

Where VBD Applies

VBD is most effective in long-lived systems, platform architectures, and integration-heavy environments where change is constant and unavoidable. It provides architects and senior engineers with a clear, practical reference for applying volatility-first architectural thinking at scale.