Project Design: The Design of Projects
A Practitioner-Oriented Articulation

Author: William Christopher Anderson
Date: February 2026 (published March 2026)

Executive Summary

Projects fail not because teams lack effort, but because they lack informed planning. Traditional project management estimates cost and schedule from requirements alone, producing plans that routinely miss reality by a factor of two to four. Project Design addresses this problem by treating the system architecture as the foundation for all planning decisions.

Rather than estimating from requirements documents or user stories, this approach derives the project plan directly from the architectural decomposition. Components become work packages. Dependencies between components become the project network. The critical path through that network determines the project duration. Compression and decompression of that path determine the feasible range of schedules. Risk is quantified objectively from the float distribution across all activities.

The result is not a single plan, but a set of viable options — typically three — spanning conservative, balanced, and aggressive approaches. Each option comes with quantified cost, schedule, risk, and staffing requirements. Management selects from these options in a structured decision meeting, committing resources or canceling the project based on objective trade-off analysis.

This methodology integrates established techniques from critical path analysis, earned value management, and risk quantification into a cohesive process that is compatible with both traditional and agile delivery frameworks. It provides architects, project managers, and engineering leaders with a practical, repeatable approach to planning software projects with confidence.

Abstract

Software project planning has historically operated independently of software architecture, resulting in plans that fail to account for the structural realities of the systems they propose to build. Estimation without decomposition produces unreliable forecasts. Scheduling without dependency analysis produces plans that collapse under execution pressure. Risk management without float analysis produces subjective assessments that fail to predict actual failures.

Project Design integrates system design with project planning by deriving the project plan directly from the architectural decomposition. This paper presents a structured methodology covering activity inventorying, network construction, critical path analysis, multi-level estimation, schedule compression, objective risk quantification, earned value validation, staffing principles, staged delivery, and structured decision-making. The approach produces multiple viable project options with quantified trade-offs, enabling rational decision-making by stakeholders.

The methodology is grounded in Juval Löwy’s IDesign method and draws on established project management techniques, adapted for modern software development contexts including agile delivery. It is applicable to projects of any scale, from small team efforts to large enterprise programs, and provides practitioners with a repeatable process for transforming architectural designs into executable project plans.

1. Introduction

Software projects exist in a state of chronic estimation failure. Industry data consistently shows that large software projects exceed their budgets by fifty to two hundred percent, miss deadlines by comparable margins, and deliver only a fraction of projected benefits. These failures are not primarily failures of execution — they are failures of planning.

The root cause is a fundamental disconnect between what is being planned and how the plan is constructed. Traditional project management treats software as a manufacturing process where effort can be estimated from requirements, parallelized across resources, and tracked against milestones defined by feature completion. This model fails because software is not manufactured — it is designed and integrated. The effort required to build a system depends not on the number of features, but on the structure of the system, the dependencies between its components, and the integration points where those components must work together.

Project Design addresses this disconnect by making the system architecture the foundation of the project plan. The premise is simple but profound: you cannot estimate what you have not designed. Without an architectural decomposition, you cannot identify the components that constitute the work. Without components, you cannot identify the dependencies that constrain sequencing. Without dependencies, you cannot identify the critical path that determines duration. Without duration, you cannot calculate cost. Without cost and risk, you cannot make informed decisions.

While the examples and terminology in this paper are drawn primarily from software engineering — where the methodology originated and where its application is most developed — the underlying techniques are not software-specific. Critical path analysis, float-based risk quantification, earned value tracking, compression analysis, and structured decision-making apply to any project with a decomposable structure and identifiable dependencies. In software, the architecture is the structure — the architectural decomposition provides the components, dependencies, and integration points from which the project plan is derived. In construction, product development, organizational transformation, or infrastructure programs, the structural decomposition takes a different form, but the planning discipline is identical: design the project before you estimate it.

This paper presents a practitioner-oriented articulation of project design, covering the complete process from architectural review through structured decision-making. It describes how to construct project networks from architectural dependencies, analyze critical paths, estimate effort at multiple levels, compress schedules within feasible limits, quantify risk objectively, validate plans using earned value curves, staff projects according to architectural boundaries, and present options to stakeholders for informed decision-making.

2. Foundations

2.1 The Core Premise

Project design rests on a single foundational premise: the system architecture is the project plan. This is not a metaphor. The components identified during architectural decomposition become the work packages of the project. The dependencies between those components become the precedence relationships in the project network. The integration points where components must work together become the milestones of the project. The critical path through those dependencies determines the duration. The staffing required to execute that path determines the cost.

This premise inverts the traditional relationship between architecture and project management. In conventional practice, project managers estimate effort from requirements, construct schedules from those estimates, and treat architecture as a technical detail handled during execution. In project design, the architecture comes first. It must be substantially complete before any meaningful estimation can begin. The investment in architectural design is not an overhead cost — it is the primary mechanism by which estimation accuracy is achieved.

2.2 The Core Triad

Figure: The Core Triad

Three roles collaborate throughout the project design process, each bringing a distinct perspective that the others cannot provide.

The Architect owns the system decomposition. They identify components, define dependencies, analyze the critical path, and provide technical risk assessment. They are responsible for the structural integrity of the plan.

The Project Manager owns resource allocation. They assess availability, cost, organizational constraints, and political feasibility. They translate the architectural plan into an executable staffing and scheduling strategy.

The Product Manager owns requirements prioritization. They serve as the customer proxy, resolving requirement conflicts, defining priorities, and managing stakeholder expectations.

In agile contexts, these roles map to the Tech Lead, Scrum Master or Delivery Lead, and Product Owner respectively. The important principle is that all three perspectives are present throughout the planning process. Removing any one of them produces plans with blind spots that manifest as failures during execution.

2.3 The Fuzzy Front End

The period between project inception and the commitment to a specific plan is the fuzzy front end. During this period, the core triad works to produce the architecture, the project design, and the options for management. This period typically consumes fifteen to twenty-five percent of the total project duration.

This investment is not wasted time. It is the mechanism by which the remaining seventy-five to eighty-five percent of the project is executed efficiently. Projects that skip or compress the front end invariably spend more total time and money than those that invest in it, because the cost of planning failures during execution far exceeds the cost of planning itself.

Staffing during the front end is minimal — only the core triad. Full team staffing begins after management commits to a plan. This means the project’s most expensive resources — the full development team — are not engaged until the plan is validated, reducing financial exposure.

3. Activity Inventory

The first step in project design is constructing a complete inventory of all activities required to deliver the system. This inventory must be comprehensive — the most common source of estimation error is not incorrect estimates for known activities, but the complete omission of activities that were never identified.

3.1 Architecture-Derived Activities

The architectural decomposition provides the primary source of activities. Each component identified during design — whether Manager, Engine, Resource Accessor, or Utility — becomes a work package with a predictable lifecycle: detailed design, implementation, unit testing, code review, and documentation. Each pair of connected components generates an integration activity. Each use case generates a system-level verification activity.

3.2 Non-Code Activities

A software project involves substantially more than writing code. The activity inventory must explicitly include activities that are routinely underestimated or omitted entirely. These fall into several categories.

Planning and design activities include requirements analysis and refinement, architectural design, project design itself, technology spikes and prototyping, and vertical slice or proof-of-concept development.

Organizational activities include stakeholder communication and alignment, change management and organizational readiness, vendor coordination and procurement, licensing, and legal or regulatory review where applicable.

Quality and compliance activities include security review and compliance audit, accessibility audit, user acceptance testing, performance and load testing, security penetration testing, and disaster recovery testing.

Operational activities include environment setup for development, staging, and production; continuous integration and deployment pipeline setup; monitoring, alerting, and observability infrastructure; runbook and playbook creation; on-call training and rotation setup; release planning and communication; go-live and cutover planning; and rollback planning.

People and knowledge activities include learning curves for new technology or domain knowledge, training and knowledge transfer, documentation of architecture, APIs, and user-facing content, and team onboarding materials.

Post-launch activities include a hypercare or stabilization period, post-mortem or retrospective sessions, and a warranty support period.

The distinction between code activities and non-code activities is critical because non-code activities often account for thirty to fifty percent of total project effort, yet are routinely excluded from initial estimates.

3.3 The Activity Lifecycle

Each component follows a lifecycle that should be estimated as a unit. A common rule of thumb suggests that total effort is on the order of six times the design effort: roughly one unit for design, three for implementation, one for unit testing, and one for integration testing. This ratio varies significantly by complexity, domain, and team experience, but provides a useful starting point for calibration.
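The rule of thumb above can be expressed as a small calculation. The phase names are illustrative, and the ratios are the text's starting-point heuristic rather than calibrated values:

```python
def lifecycle_estimate(design_days):
    """Expand a design-effort estimate into a full component lifecycle.

    Ratios follow the rule of thumb above (1 design : 3 implementation :
    1 unit testing : 1 integration testing, roughly 6x the design effort
    in total) and should be recalibrated from historical data.
    """
    ratios = {
        "design": 1,
        "implementation": 3,
        "unit_testing": 1,
        "integration_testing": 1,
    }
    return {phase: design_days * r for phase, r in ratios.items()}

estimate = lifecycle_estimate(5)      # 5 days of design effort
total_days = sum(estimate.values())   # 30 days, i.e. 6x the design effort
```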

4. Network Construction

Figure: Project Network with Critical Path

With the activity inventory complete, the next step is constructing the project network — a directed graph that models the precedence relationships between all activities. The network is the structural foundation for all subsequent analysis.

4.1 Deriving Dependencies from Architecture

In a Volatility-Based Decomposition, dependencies flow naturally from the component taxonomy. Infrastructure and utilities have no upstream dependencies and form the leaf nodes of the network. Resource Accessors depend on infrastructure. Engines depend on Resource Accessors. Managers depend on Engines and Resource Accessors. Client applications depend on Managers. This ordering provides the skeleton of the project network.
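As a sketch, the layer ordering can generate the skeleton dependencies automatically. The component names here are hypothetical, and the simplification that each component depends only on the layer immediately below it omits cases the taxonomy allows, such as Managers calling Resource Accessors directly:

```python
LAYER_ORDER = ["Utility", "ResourceAccessor", "Engine", "Manager", "Client"]

# Hypothetical components tagged with their taxonomy layer.
components = {
    "Logging": "Utility",
    "OrderAccess": "ResourceAccessor",
    "PricingEngine": "Engine",
    "OrderManager": "Manager",
    "WebPortal": "Client",
}

def skeleton_dependencies(components):
    """Derive skeleton dependencies from the layer ordering.

    Each component depends on every component in the layer directly
    below it; leaf-layer components have no upstream dependencies. The
    real network keeps only the dependencies the architecture actually
    calls for, then consolidates them.
    """
    deps = {}
    for name, layer in components.items():
        idx = LAYER_ORDER.index(layer)
        if idx == 0:
            deps[name] = []
        else:
            below = LAYER_ORDER[idx - 1]
            deps[name] = [n for n, l in components.items() if l == below]
    return deps

deps = skeleton_dependencies(components)
```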

Non-code activities are inserted into the network at appropriate positions. Preparation activities such as requirements and architecture precede all construction. Test planning runs parallel to early construction. Integration testing follows the completion of connected component pairs. System testing follows the completion of all Managers. Documentation and training activities may run parallel to late-stage construction.

4.2 Dependency Consolidation

Unconsolidated networks contain redundant dependencies that distort analysis. If activity A depends on B and B depends on C, then A implicitly depends on C and the explicit dependency should be removed. Failing to consolidate dependencies inflates the apparent complexity of the network, produces incorrect float values, and obscures the true critical path.

A well-consolidated network has three to four dependencies per activity. Networks with higher dependency counts should be evaluated for architectural restructuring. If the project network is too complex, the architecture may need to be simplified.
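Consolidation is a transitive reduction of the dependency graph. A minimal sketch, assuming an acyclic network:

```python
def consolidate(deps):
    """Remove redundant direct dependencies that are implied transitively.

    deps maps each activity to the set of activities it depends on. If A
    depends on B, and B depends (directly or indirectly) on C, then the
    explicit A -> C edge is redundant and is dropped.
    """
    def reachable(start, seen=None):
        # All activities reachable from `start` through dependency edges.
        seen = set() if seen is None else seen
        for d in deps.get(start, ()):
            if d not in seen:
                seen.add(d)
                reachable(d, seen)
        return seen

    consolidated = {}
    for act, direct in deps.items():
        redundant = set()
        for d in direct:
            # d is redundant if another direct dependency already
            # reaches it transitively.
            for other in direct:
                if other != d and d in reachable(other):
                    redundant.add(d)
        consolidated[act] = set(direct) - redundant
    return consolidated

deps = {"A": {"B", "C"}, "B": {"C"}, "C": set()}
reduced = consolidate(deps)   # the A -> C edge is implied via B and removed
```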

4.3 Milestone Placement

Infrastructure completion should be pushed to milestone nodes that separate preparation from construction. This reduces dependency fan-out and simplifies the network. The project should have a single start event and a single end event; multiple end points should be avoided.

4.4 Network Representation

Two representations are common. Node diagrams place activities on nodes with arrows representing dependencies. Arrow diagrams place activities on arrows with nodes representing events or milestones. Arrow diagrams, while less intuitive initially, provide superior visualization for project design because they can be drawn to scale, with arrow length proportional to duration, making the schedule visible at a glance.

5. Critical Path Analysis

The critical path is the longest path through the project network from start to finish. Its duration is the duration of the project. No project can be accelerated beyond its critical path regardless of resource availability.

5.1 Forward and Backward Pass

The forward pass calculates the earliest possible start and finish time for each activity by traversing the network from start to finish. The earliest start of each activity is the maximum of the earliest finish times of all its predecessors.

The backward pass calculates the latest allowable start and finish time for each activity by traversing the network from finish to start. The latest finish of each activity is the minimum of the latest start times of all its successors.

5.2 Float Analysis

The difference between the latest and earliest start times of an activity is its total float — the amount of time the activity can slip without delaying the project. Activities with zero total float are on the critical path.

Free float measures how much an activity can slip without delaying any immediate successor. Interfering float is the difference between total float and free float — consuming interfering float delays successors but not the project.

Float is the objective measure of risk. Activities with zero float are maximally risky — any delay directly extends the project. Activities with large float can absorb delays without consequence. The distribution of float across all activities determines the overall risk profile of the project.
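The forward pass, backward pass, and float calculation can be sketched in a few lines, assuming an acyclic network with durations in days:

```python
def critical_path_analysis(activities, deps):
    """Forward and backward pass over an acyclic activity network.

    activities: {name: duration}; deps: {name: set of predecessors}.
    Returns project duration, total float per activity, and the
    critical path (activities with zero total float).
    """
    # Topological order by repeatedly selecting ready activities.
    order, done = [], set()
    while len(order) < len(activities):
        for a in activities:
            if a not in done and deps[a] <= done:
                order.append(a)
                done.add(a)

    # Forward pass: earliest start is the max earliest finish of predecessors.
    es, ef = {}, {}
    for a in order:
        es[a] = max((ef[p] for p in deps[a]), default=0)
        ef[a] = es[a] + activities[a]
    project_duration = max(ef.values())

    # Backward pass: latest finish is the min latest start of successors.
    succs = {a: {b for b in activities if a in deps[b]} for a in activities}
    ls, lf = {}, {}
    for a in reversed(order):
        lf[a] = min((ls[s] for s in succs[a]), default=project_duration)
        ls[a] = lf[a] - activities[a]

    floats = {a: ls[a] - es[a] for a in activities}  # total float
    critical = [a for a in order if floats[a] == 0]
    return project_duration, floats, critical

activities = {"A": 5, "B": 10, "C": 3, "D": 5}
deps = {"A": set(), "B": {"A"}, "C": {"A"}, "D": {"B", "C"}}
duration, floats, critical = critical_path_analysis(activities, deps)
# C can slip 7 days without delaying the project; A, B, D cannot slip at all.
```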

5.3 Float Classification

Figure: Float Classification Drives Resource Assignment

Activities are classified by their float into risk categories. Critical activities have zero float and receive the highest resource priority. Red activities have small float — typically less than five days — and are near-critical. Yellow activities have moderate float and represent medium risk. Green activities have large float and can be staffed flexibly.

This classification directly drives resource assignment. The best resources are assigned to the critical path. Strong resources go to near-critical activities. Float on non-critical activities can be traded for resource flexibility.
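The classification reduces to a threshold check. The red band below follows the text's "typically less than five days"; the yellow/green boundary is an assumed illustrative value that teams should set for their own context:

```python
def classify(total_float, red_threshold=5, yellow_threshold=15):
    """Map an activity's total float (in days) to a risk color.

    Zero float means critical; the remaining thresholds are calibration
    points, of which only the red band is fixed by the text.
    """
    if total_float == 0:
        return "critical"
    if total_float < red_threshold:
        return "red"
    if total_float < yellow_threshold:
        return "yellow"
    return "green"
```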

5.4 Proactive Risk Management

The primary reason well-managed projects slip is that non-critical activities consume their float and become critical, creating a new critical path that was not planned for and does not have the best resources assigned to it. Continuous monitoring of float degradation on near-critical chains enables proactive resource reassignment before this transition occurs.

6. Estimation

Estimation in project design operates at three levels, each serving a distinct purpose. The levels validate each other — significant discrepancies between levels indicate problems that must be resolved before proceeding.

6.1 Activity-Based Estimation

The primary estimation method is bottom-up, activity-by-activity estimation. Each activity is estimated individually, ideally by the person who will perform it. Estimates are expressed in units of five days, aligning with week boundaries and reducing waste in the plan.

The estimate for each activity must include its complete lifecycle — not just implementation, but design, testing, review, and documentation. Activities that are routinely omitted from estimates include learning curves for new technology, test harness construction, deployment and installation effort, integration with adjacent components, and documentation.

Both underestimation and overestimation are harmful. Underestimation produces schedule pressure, corner-cutting, and quality degradation. Overestimation produces gold plating, scope creep, and inverted priority structures where padding becomes an entitlement. The goal is nominal estimation — the most likely duration assuming normal conditions and competent execution.

6.2 Broadband Estimation

Broadband estimation gathers input from twelve to thirty diverse participants through successive refinement iterations. The statistical advantage of group estimation is that the error of the sum is less than the sum of the errors — individual overestimates and underestimates tend to cancel.

The participant pool should include a diverse profile: veterans and newcomers, specialists and generalists, optimists and skeptics. In agile contexts, Planning Poker is a form of broadband estimation.

Broadband estimation serves as a sanity check against the detailed bottom-up estimate. It is not actionable on its own — it lacks the structural precision needed for network analysis. But significant discrepancies between broadband and detailed estimates indicate areas requiring investigation.

6.3 Overall Project Estimation

Historical data and estimation tools provide a third validation point. Comparison of the bottom-up estimate against historical performance on similar projects reveals systemic biases — teams that consistently underestimate integration effort, for example, or that fail to account for organizational overhead.

6.4 Cost Calculation

Project cost is the area under the staffing curve — the integral of staffing level over time. This calculation requires both the staffing distribution and the cost rates for each role. Cost consists of direct costs — developer salaries, equipment, tools, and licenses — which scale with team size and duration, and indirect costs — management overhead, facilities, and administrative support — which scale primarily with duration.

The total cost is a quadratic polynomial of time, combining the linear indirect costs with the nonlinear direct costs. This relationship means that both excessively compressed and excessively extended projects are more expensive than the optimal point, which typically falls near the all-normal solution.
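As an illustration with hypothetical coefficients, the least-cost duration falls at the minimum of the combined curve:

```python
def total_cost(t, a=2.0, b=-40.0, c=1000.0, indirect_rate=10.0):
    """Illustrative total cost as a function of duration t (in months).

    Direct cost is modeled as the quadratic a*t^2 + b*t + c (hypothetical
    coefficients; in practice fit from compression and decompression
    analysis). Indirect cost grows linearly with duration.
    """
    direct = a * t * t + b * t + c
    indirect = indirect_rate * t
    return direct + indirect

durations = [t / 2 for t in range(10, 41)]   # candidate schedules: 5-20 months
least_cost_duration = min(durations, key=total_cost)
```

With these coefficients the combined curve is 2t² − 30t + 1000, whose minimum lies at t = 7.5 months: both shorter and longer schedules cost more.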

6.5 Efficiency

Project efficiency measures the ratio of productive effort to total effort. For complex projects with multiple contributors, efficiency in the range of fifteen to twenty-five percent is common — the remainder is consumed by communication overhead, coordination, waiting, and context switching. An efficiency calculation significantly exceeding this range may indicate that dependencies and communication overhead have been underestimated.

7. Schedule Compression

Schedule compression reduces the project duration below the all-normal solution. There are exactly two mechanisms for compressing a schedule: making individual activities shorter, and restructuring the network to enable more parallelism.

7.1 Activity Compression

A top-performing resource can often complete an activity in less time than the normal estimate, but at a disproportionately higher cost — the relationship between time saved and cost incurred is not linear. The exact ratios depend on domain, team composition, and activity type, and should be calibrated from historical data where available. The underlying principle is consistent: compression trades cost for time, and only activities on or near the critical path should be compressed — compressing non-critical activities does not accelerate the project.

Compression has diminishing returns. Each activity compressed on the critical path may create a new critical path through activities that were previously non-critical. These new critical path activities are typically staffed with weaker resources, creating fragility. As a general guideline, the practical maximum compression tends to fall in the range of twenty-five to thirty percent of the normal duration, though this varies by project structure and resource availability.

7.2 Network Restructuring

The more profound acceleration technique is restructuring the network to enable parallel work. This can be achieved by investing in detailed design and contracts that allow implementation against interfaces rather than implementations, developing simulators that decouple Manager development from Engine and Accessor availability, and separating design activities from construction activities to enable pipelining.

Network restructuring requires additional investment — simulators must be built, contracts must be specified in detail, and coordination overhead increases. The cost is higher, but the schedule reduction can exceed what activity compression alone can achieve.

7.3 The Time-Cost Curve

Figure: Time-Cost Curve

Plotting all feasible solutions — from maximum compression to padded decompression — produces the time-cost curve. This curve is fundamental to project design. Points below the curve are infeasible — they cannot be achieved regardless of resources. Points on the curve represent efficient solutions. Points above the curve represent wasteful solutions.

The curve has several named points. The all-normal point represents all activities at nominal duration. The least-cost point is typically near the all-normal point and represents the minimum total cost. The all-crash point represents maximum compression of all critical activities. The least-duration point represents the absolute minimum achievable schedule, including network restructuring.

The region to the left of the least-duration point is the death zone — no amount of resources can achieve these schedules. When management requests a delivery date in the death zone, the project designer’s responsibility is to communicate this clearly with supporting analysis.

7.4 Compression and Complexity

The true cost of compression is not financial; it is complexity. Project execution complexity, measured as the cyclomatic complexity of the project network, increases nonlinearly with compression. A normal solution might have a complexity of five to eight, while a compressed solution can approach the practical maximum of ten to twelve, beyond which the project becomes unmanageable. A fully parallel project has complexity equal to the number of activities, which is always impractical.
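The complexity measure is the standard cyclomatic number of the network graph. For a connected arrow diagram with E dependency arrows and N event nodes:

```python
def network_complexity(num_edges, num_nodes):
    """Cyclomatic complexity of a connected project network: E - N + 2."""
    return num_edges - num_nodes + 2

# A modestly compressed network: 14 dependencies across 10 events.
normal = network_complexity(num_edges=14, num_nodes=10)        # 6
# A fully parallel project: n arrows between just two event nodes,
# so complexity = n - 2 + 2 = n, the number of activities.
fully_parallel = network_complexity(num_edges=12, num_nodes=2)  # 12
```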

8. Risk Quantification

Risk in project design is measured objectively from the float distribution. Risk represents the fragility of the plan — its sensitivity to activities slipping — not the probability of specific events occurring.

8.1 Criticality Risk Index

The criticality risk index classifies all activities by their float into four categories — critical, red, yellow, and green — and computes a weighted average. The formula assigns the highest weight to critical activities and the lowest to green activities, producing a value between 0.25 and 1.0.

The risk interpretation scale provides guidance for evaluating the result. Values below 0.30 indicate an over-decompressed project — resources are being wasted on excessive buffer. Values between 0.30 and 0.50 represent the comfortable zone. The design target is approximately 0.50, which balances schedule efficiency against fragility. Values between 0.50 and 0.75 indicate increasing fragility. Values above 0.75 should be avoided — the plan is too brittle to survive normal execution variability.

Risk is not linear. A risk index of 0.69 is substantially more dangerous than 0.50 — the relationship is more analogous to the Richter scale than to a linear percentage.
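A minimal sketch of the criticality risk index follows. The 4/3/2/1 weights are an assumption consistent with the stated range: an all-green project yields 0.25 and an all-critical project yields 1.0.

```python
def criticality_risk(counts, weights=None):
    """Weighted criticality risk index over float color classes.

    counts: {'critical': n, 'red': n, 'yellow': n, 'green': n}.
    The weighted sum is normalized by the maximum possible weight so
    the result always falls between 0.25 and 1.0.
    """
    weights = weights or {"critical": 4, "red": 3, "yellow": 2, "green": 1}
    n = sum(counts.values())
    weighted = sum(weights[c] * k for c, k in counts.items())
    return weighted / (max(weights.values()) * n)

risk = criticality_risk({"critical": 4, "red": 3, "yellow": 2, "green": 6})
```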

8.2 Activity Risk Index

The activity risk index uses actual float values rather than categorical classifications, providing finer-grained measurement. It is computed as one minus the normalized sum of float values across all activities. This measure is sensitive to outlier activities with extremely large float values, which can be addressed by capping outliers at one standard deviation above the mean.
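A sketch of the activity risk index, normalizing by the number of activities times the maximum float (one common choice; outliers should be capped before applying the formula, as noted above):

```python
def activity_risk(floats):
    """Activity risk index: one minus the normalized sum of float values.

    All floats equal to the maximum gives 0 (no fragility); all floats
    zero gives 1 (every activity is critical).
    """
    f_max = max(floats)
    if f_max == 0:
        return 1.0  # every activity is on the critical path
    return 1.0 - sum(floats) / (len(floats) * f_max)
```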

8.3 Risk Decompression

Deliberately planning for a later delivery date than the normal solution reduces risk by adding float to near-critical activities. Even small decompression — two weeks beyond the normal solution — can significantly reduce the risk index. The optimal decompression point is typically found at the second derivative zero of the risk curve, which produces a risk index of approximately 0.50.

The cost of decompression is the additional direct cost of the extended schedule. In practice, this cost is modest compared to the risk reduction achieved, making decompression one of the most cost-effective risk mitigation strategies available.

8.4 Risk Cross-Over Point

The risk cross-over point is the schedule duration at which risk begins rising faster than cost. Beyond this point, further compression becomes unwise because the fragility of the plan increases disproportionately relative to the cost savings achieved by the shorter schedule. The cross-over analysis produces an acceptable risk zone bounded by upper and lower cross-over points.

8.5 Risk During Execution

Once a plan is selected and execution begins, risk should be tracked continuously. A well-executed project shows a gradual, smooth decline in risk as activities complete and float is consumed predictably. A sudden jump in measured risk — a discontinuity in the risk curve — indicates a pending problem and warrants immediate investigation.

9. Options and Decisions

The culmination of project design is the presentation of options to management. This is not a status report or a request for permission — it is a structured decision framework that enables rational choice among viable alternatives.

9.1 The Three Options

At least three project design options should be presented, spanning the feasible zone of the time-cost curve.

The conservative option offers the longest schedule, lowest cost, and lowest risk. It uses minimal compression, provides substantial float buffers, and accommodates uncertainty in requirements, technology, and team capability.

The balanced option represents the recommended approach for most projects. It applies moderate compression to the critical path, achieves a risk index near the decompression target of 0.50, and balances schedule efficiency against plan fragility.

The aggressive option offers the shortest feasible schedule at the highest cost and risk. It applies maximum practical compression, requires top resources on all critical activities, and provides minimal buffer for absorbing delays.

Each option must include quantified values for duration, direct and indirect cost, risk index, peak staffing, critical path identification, milestone schedule, key assumptions, and the top three to five specific risks.

9.2 The Feed Me or Kill Me Decision

Management reviews the options and makes one of two decisions: commit resources to the selected option, or cancel the project. Both outcomes are valid. Killing a project that cannot be executed within acceptable cost, schedule, and risk parameters is a responsible act that preserves resources for better opportunities.

The architect recommends. Management decides. The architect’s responsibility is to illuminate the trade-off space with objective analysis. Management’s responsibility is to navigate that space based on business priorities, strategic considerations, and risk tolerance.

Never staff up before this decision. Never let management select an option outside the feasible zone. Never present a single plan — that is a demand, not a recommendation.

9.3 Post-Decision

Once management selects an option, several commitments follow. Scope is locked and changes go through formal change control. Resources are assigned according to the staffing plan for the selected option. Milestones are set. Staged delivery begins with the first stage — typically infrastructure and preparation. Earned value tracking commences against the selected option’s planned progress curve.

10. Earned Value Planning and Tracking

Earned value serves dual purposes in project design: as a validation tool during planning and as a tracking tool during execution.

10.1 Plan Validation

Figure: Earned Value S-Curve

Each activity is assigned a value representing its contribution to system completion as a percentage of total project work. Plotting cumulative earned value over time for a well-designed plan produces a shallow S-curve — slow initial progress during the core team’s front-end work, accelerating progress during the ramp-up and construction phases, and decelerating progress during integration and testing.

The shape of the S-curve validates the plan’s sanity. A steep early curve indicates unrealistic optimism. A flat early curve with a steep late section indicates unrealistic pessimism. A straight line indicates fixed team size, which is nearly always suboptimal. A very shallow, nearly straight curve indicates sub-critical staffing. A well-designed plan produces an S-curve with a coefficient of determination exceeding 0.95 when fit to a third-degree polynomial.
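A crude stand-in for the cubic-fit test is checking that planned period-over-period increments rise and then fall, the signature of a shallow S. This heuristic, sketched below, flags straight-line plans but is no substitute for the R² check:

```python
def is_shallow_s(cumulative):
    """Heuristic shape check for a planned earned value curve.

    cumulative: planned percent complete at the end of each period. A
    shallow S-curve has increments that first rise, then fall: slow
    front end, fast construction, slow integration and testing.
    """
    inc = [b - a for a, b in zip(cumulative, cumulative[1:])]
    peak = inc.index(max(inc))
    rising = all(inc[i] <= inc[i + 1] for i in range(peak))
    falling = all(inc[i] >= inc[i + 1] for i in range(peak, len(inc) - 1))
    return 0 < peak < len(inc) - 1 and rising and falling

plan = [2, 5, 10, 20, 35, 55, 72, 85, 93, 98, 100]   # shallow S: passes
line = [0, 20, 40, 60, 80, 100]                      # fixed team size: fails
```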

10.2 Execution Tracking

During execution, three lines are tracked: the planned progress curve, the actual progress curve, and the actual effort curve. The relationship between these curves provides diagnostic information.

When progress tracks the plan and effort tracks the plan, the project is healthy. When progress falls behind while effort exceeds the plan, the project is underestimating complexity. When progress leads while effort is below the plan, the project may be overestimating or the team may be sandbagging. When progress falls behind while effort dramatically exceeds the plan, there is a resource leak — effort is being consumed without proportional progress.

The critical capability of earned value tracking is projection. By extrapolating the actual progress and effort curves, the project manager can project the completion date and final cost while there is still time to take corrective action. This forward-looking analysis is the essence of project management: the ability to project is the essence of a project.
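A minimal projection sketch, assuming a simple linear extrapolation of the recent progress trend (real earned value practice uses richer estimate-at-completion formulas):

```python
def project_completion(weeks, progress, window=4):
    """Estimate the week at which cumulative progress (percent) reaches 100,
    by fitting a straight line to the last `window` observations."""
    w, p = weeks[-window:], progress[-window:]
    mean_w, mean_p = sum(w) / len(w), sum(p) / len(p)
    slope = (sum((wi - mean_w) * (pi - mean_p) for wi, pi in zip(w, p))
             / sum((wi - mean_w) ** 2 for wi in w))
    if slope <= 0:
        return None  # no measurable progress; projection undefined
    return w[-1] + (100 - p[-1]) / slope

# Illustrative data: 38% complete at week 8, gaining about 6% per week lately.
est = project_completion(list(range(1, 9)), [2, 5, 9, 14, 20, 26, 32, 38])
print(f"projected completion: week {est:.1f}")
```

The same extrapolation applied to the effort curve yields a projected final cost, giving management both numbers while corrective action is still possible.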

11. Staffing

Staffing in project design follows directly from the architectural decomposition and the selected project option.

11.1 The One-to-One Rule

Each component is assigned to one developer. This principle enables reliable estimation, clear ownership, and minimal communication overhead. Assigning multiple developers to a single component introduces coordination costs that are difficult to estimate and manage. If a component cannot be built by one developer within the project timeline, it should be decomposed further — this is an architectural problem, not a staffing problem.

This principle is a deliberate application of Conway’s Law, which observes that a system’s structure mirrors the communication structure of the organization that builds it: assign one developer per component, and the interaction between team members comes to mirror the interaction between the components they build. A well-decomposed architecture with minimized inter-component coupling therefore naturally minimizes inter-developer communication overhead.

11.2 The Staffing Curve

Figure: Staffing Curves

A properly planned project produces a staffing curve shaped like a smooth hump: a small core team during the front end, a gradual ramp-up as construction begins, a peak during maximum parallel activity, and a gradual wind-down during integration and testing.

Staffing anti-patterns include peaks and valleys, which indicate poor float utilization; erratic patterns where team members join, leave, and rejoin, which is organizationally impractical; steep ramps, which exceed teams’ absorption capacity; and fixed team sizes, which ignore critical path dynamics.

Developers cannot come and go elastically. Staffing changes have ramp-up costs, context-switching overhead, and organizational friction. The staffing curve must be realistic — smooth transitions that real organizations can execute.

11.3 The Hand-Off Point

The hand-off point is where the architect transfers design responsibility to developers. At the senior hand-off point, the architect specifies service-level contracts and interfaces, and developers handle detailed design and implementation. This is faster, cheaper, and enables pipelining of architect design with developer construction, but requires senior developers.

At the junior hand-off point, the architect specifies detailed class-level design, and developers implement to specification. This is disproportionately more work for the architect but necessary when the team lacks design experience. The hand-off point must match the actual team composition — using a senior hand-off point with a junior team produces architectural drift and quality problems.

12. Staged Delivery and Integration

Project design always uses staged delivery. The system is delivered in incremental stages, each producing a working increment that demonstrates progress and enables early feedback.

12.1 VBD-Aligned Stages

Figure: VBD-Aligned Staged Delivery

In systems built using Volatility-Based Decomposition, stages align naturally with the component taxonomy. The first stage delivers infrastructure and utilities — the foundation on which everything else builds. The second stage delivers Resource Accessors — verified data access and external integration. The third stage delivers Engines with their Accessors — working business logic. The fourth stage delivers Managers — complete workflows and orchestration. The fifth stage delivers client applications and user interfaces.

Public releases begin after Managers are complete, because only at that point do user-visible workflows function end-to-end. Earlier stages are internal releases that build confidence and enable early integration testing.

12.2 Integration Planning

The necessary and sufficient unit of integration is two connected services. Integration begins bottom-up from leaf nodes and proceeds upward through the dependency graph. Each integration point is small, focused, and localizes bugs to the newest component added.
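The bottom-up order falls out of a topological sort of the component dependency graph. A sketch, with hypothetical component names:

```python
from graphlib import TopologicalSorter

# Hypothetical dependency graph: each entry lists the components the key
# component calls, i.e. its predecessors in integration order.
depends_on = {
    "AccessorA": [],
    "AccessorB": [],
    "EngineA":   ["AccessorA"],
    "ManagerA":  ["EngineA", "AccessorB"],
    "ClientApp": ["ManagerA"],
}

# static_order() yields dependencies before dependents, so integration starts
# at the leaves and each new component meets only already-verified neighbors.
order = list(TopologicalSorter(depends_on).static_order())
print(order)
```

Any defect found at a given step is localized to the newest component added, because everything beneath it has already been integrated and verified.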

Milestones are based on integration, not features. Features are aspects of integration, not implementation — a feature only exists when the components required to produce it are integrated and functioning together. Planning around feature completion crosses component boundaries, obscures ownership, and produces milestones that are difficult to verify objectively.

13. Special Situations

13.1 God Activities

God activities are activities that are large both in absolute duration and relative to the mean activity size — typically more than one standard deviation above the mean. They are almost always on the critical path and deform all project design techniques: risk models yield misleadingly low values, float analysis is skewed, and compression analysis becomes unreliable.
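Flagging god activities is mechanical once activity durations are tabulated; the durations below are illustrative:

```python
from statistics import mean, stdev

def god_activities(durations):
    """Return activities more than one standard deviation above the mean."""
    values = list(durations.values())
    threshold = mean(values) + stdev(values)
    return [name for name, d in durations.items() if d > threshold]

durations = {"A": 5, "B": 8, "C": 6, "D": 7, "E": 40}  # days, illustrative
print(god_activities(durations))
```

Note that a single outlier inflates the standard deviation itself, so in extreme cases it can be worth re-running the check after splitting the worst offender.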

God activities should be addressed before trusting any analysis built on the network. Solutions include splitting the activity into internal phases that can overlap, developing simulators to reduce the blocking effect on dependent activities, and factoring the activity into a separate sub-project.

13.2 Very Large Projects

Projects with hundreds or thousands of activities exceed the human capacity for network comprehension, which is approximately one hundred activities. Research by Bent Flyvbjerg demonstrates that project size maps directly to poor outcomes: roughly one in six IT projects experiences a cost overrun of around two hundred percent and a schedule overrun of around seventy percent.

The solution is the network of networks approach: decompose the large project into manageable sub-projects, each with its own critical path, risk analysis, and staffing plan. A preliminary mini-project discovers the network of networks structure. The primary investment for decoupling sub-networks is architecture and interface definition — the same discipline that enables good system decomposition also enables good project decomposition.

13.3 Small Projects

Small projects are paradoxically more sensitive to project design mistakes than large ones. With a small team and a short duration, even a minor planning error is large relative to the total timeline, so almost every mistake becomes critical. Project design should be applied whenever the team size exceeds the number of network paths, which for small projects is often the case.

13.4 Sub-Critical Staffing

When resource availability constrains the project below the minimum staffing needed for the critical path, the project enters the sub-critical zone. Sub-critical staffing tends to increase cost substantially — often by twenty-five percent or more — while extending the schedule by a comparable margin and pushing the risk index into dangerous territory. There are no savings from sub-critical staffing — the combination of extended duration and increased overhead consistently exceeds the cost of adequate staffing.

The telltale sign of sub-critical staffing is an earned value curve that approximates a straight line rather than an S-curve. The staffing distribution is anemic — truncated and flat, missing the healthy ramp-up hump. In the extreme case of a single developer, all activities are serial, all are critical, risk is 1.0, and duration equals the sum of all effort.
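The single-developer extreme makes the arithmetic concrete (the effort figures are illustrative):

```python
# With one developer every activity is serial and critical: zero float
# everywhere, risk at its maximum, and duration equal to total effort.
efforts = {"A": 10, "B": 15, "C": 5, "D": 20, "E": 8}   # days, illustrative

duration = sum(efforts.values())        # serial execution: 58 days
floats = {name: 0 for name in efforts}  # no activity can slip without delay
n = len(efforts)
criticality_risk = (4 * n) / (4 * n)    # all activities critical -> risk = 1.0
print(duration, criticality_risk)
```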

14. Alignment with Agile Methodologies

Project design is not opposed to agile delivery — it is complementary to it. The distinction is between planning and execution. Project design provides the planning discipline that determines what is built, in what order, with what resources, and at what cost. Agile methods provide the execution discipline that governs how the work is performed day to day.

14.1 Agile Mappings

The Feed Me or Kill Me decision maps to Sprint Zero or PI Planning commitment. The core triad maps to the Product Owner, Scrum Master, and Tech Lead. The activity inventory maps to the refined backlog with architectural dependencies. The three options map to release plan scenarios. Broadband estimation maps to Planning Poker. Sprint velocity maps to the earned value rate. The S-curve maps to the cumulative flow diagram or burnup chart.

Staged delivery maps to Program Increments or release trains. Internal stages map to sprints with internal demos. Public releases map to production deployments.

14.2 Good Agile, Bad Agile

Good agile combines architectural discipline with iterative delivery — an effective, proven approach. Bad agile uses agile terminology as a justification for skipping architecture and project design, producing teams that are busy but not productive. The distinguishing characteristic is whether the team has a structural understanding of what they are building and a quantified plan for how they will build it.

All agile construction techniques — stand-ups, Kanban, user stories, burndown charts — are assembly techniques. They are excellent for execution. But they are not design techniques. Just as lean manufacturing depends on meticulous design of both the product and the assembly line, good agile depends on meticulous architecture and project design. Features are aspects of integration, not implementation — plucking stories off a Kanban board and coding without structural understanding is the epitome of bad agile.

15. Project Recovery

When a project has failed its plan — missing every deadline, exceeding every budget, degrading in quality to the point where fixes create more defects than they resolve — recovery is the appropriate intervention. Recovery is rescue, not rehabilitation. It is short-term, decisive, and often uncomfortable.

Recovery begins with an honest assessment of whether the project can be saved. Not all projects can. The assessment examines root causes rather than symptoms — date, budget, and quality are symptoms, not causes. Causes are typically deterministic failures in project inception, execution failures during construction, or process failures in tracking and response.

A successful recovery requires a recovery lead with the authority to make unquestioned demands, the trust of senior executives, and the willingness to make decisive changes. The recovery project design is short-term and meticulous, engineered around a 0.50 risk index rather than aspiring to be fastest or cheapest. The first one or two milestones must be delivered on schedule and on budget to rebuild trust.

The most common recovery mistakes are throwing more people at the problem, working harder doing more of the same, and the defibrillator effect — weak recovery attempts that produce a temporary improvement followed by relapse.

16. The Meta-Process

Project design itself is a project with its own activities, dependencies, and critical path. Understanding this meta-process helps plan the planning effort.

The meta-process proceeds through four phases. The preparation phase covers use case and call chain identification, architectural decomposition, non-code activity identification, and estimation at all three levels. The solutions exploration phase covers the normal solution, limited resource scenarios, sub-critical analysis, and progressive compression using top resources, parallel work, activity changes, and full crashing. The analysis phase covers throughput and efficiency analysis, time-cost curve construction, risk decompression, risk quantification and modeling, and exclusion zone identification. The decision phase covers option recommendation, staged delivery planning, milestone identification, and the Feed Me or Kill Me review.

A seasoned architect can complete a single planning option in a few hours. A full project design with multiple options and complete analysis takes a few days to a week. The return on this investment is substantial — even a few percentage points of improved estimation accuracy on a large project justifies the effort many times over.

17. Conclusion

Project design transforms project planning from a subjective exercise in estimation and hope into an objective, structured process grounded in the realities of the system being built. By deriving the project plan from the architectural decomposition, this approach ensures that the plan reflects the actual work, dependencies, and risks of the project rather than abstract estimates disconnected from structural reality.

The methodology is not complex. Construct the network from the architecture. Find the critical path. Estimate activities. Compress within feasible limits. Quantify risk from floats. Validate with earned value curves. Present options to management. Let them decide.

What makes this approach effective is not sophistication but discipline — the discipline to design before estimating, to quantify before deciding, and to present options rather than demands. Projects that apply this discipline consistently deliver more predictably than those that do not, not because the methodology eliminates uncertainty, but because it makes uncertainty visible and manageable.

The most dangerous project is not the one with high risk — it is the one where risk is unknown. Project design ensures that risk is known, quantified, and communicated. From that foundation, good decisions follow.

“Where there is no vision, the people perish.” — Proverbs 29:18

Appendix A: Glossary

Activity Network — The directed graph of work packages and their dependencies that defines the execution order and parallelism of a project. It is derived from the architectural decomposition, not from team preferences.

All-Crash Solution — The schedule produced when every activity is compressed to its minimum possible duration using maximum resources. It represents the shortest feasible project duration and the highest cost point on the time-cost curve.

All-Normal Solution — The schedule produced when every activity uses its normal (uncompressed) duration. It represents the lowest-cost feasible schedule and serves as the starting point for compression analysis.

Boundary Work Package — A work package located at an integration point with external systems or other teams. Boundary work packages carry interface risk but tend to have lower estimation uncertainty than core work packages.

Compression — Reducing an activity’s duration by adding resources, typically at increasing marginal cost. Compression is only applied to critical-path activities, and only when the cost-per-unit-time saved is justified.

Core Work Package — A work package containing execution logic — the algorithmic or business-rule heart of a component. Core work packages carry the highest estimation uncertainty in the network.

Critical Path — The longest sequence of dependent activities from project start to finish; determines minimum project duration.

Decompression — Deliberately extending the project schedule beyond the all-normal solution to reduce risk.

Earned Value — A metric tracking the value of completed work against the planned value at each point in time.

Feed Me or Kill Me — The decision point where management commits resources to a selected project option or cancels the project.

Float (Slack) — The amount of time an activity can slip without delaying the project (total float) or its immediate successors (free float).

God Activity — An activity that is disproportionately large relative to other activities, deforming project analysis techniques.

Hand-Off Point — The level of design detail at which the architect transfers work to developers.

Milestone — A zero-duration node in the activity network marking a significant integration or delivery point. Milestones anchor earned-value tracking and provide natural reporting boundaries.

Project Design — An architecture-driven methodology for project planning that derives schedule, cost, and risk estimates from the structural decomposition of the system to be built, rather than from bottom-up task guessing.

Resource Leveling — Adjusting the schedule to avoid resource over-allocation by shifting non-critical activities within their available float. Resource leveling preserves the critical-path duration while smoothing peak demand.

Risk Index — A measure of schedule risk based on the distribution of float across the activity network and the density of near-critical paths. A project with many low-float paths has a higher risk index than one with a single clear critical path.

S-Curve — The characteristic shape of cumulative earned value over time in a well-planned project.

Sub-Critical Staffing — Resource levels insufficient to staff the critical path, forcing serial execution and dramatically increasing risk.

Time-Cost Curve — A plot of all feasible project solutions showing the relationship between schedule duration and total cost.

Work Package — A unit of work derived from an architectural component, sized to be estimable by a single developer or small team. Work packages are the nodes of the activity network.

Appendix B: Applicability Checklist

Project design is applicable to any project, but is particularly valuable for projects that:

Have an identified architectural decomposition or can invest in creating one

Require accurate cost and schedule estimates for stakeholder commitment

Involve multiple developers or teams working on interconnected components

Face schedule pressure that demands understanding of compression limits

Have experienced estimation failures on prior similar projects

Must present options to management for resource commitment decisions

Operate under regulatory, compliance, or contractual obligations that require defensible planning

Appendix C: Key Formulas

Forward Pass: ES(i) = max(EF of all predecessors); EF(i) = ES(i) + Duration(i)

Backward Pass: LF(i) = min(LS of all successors); LS(i) = LF(i) – Duration(i)

Total Float: TF(i) = LS(i) – ES(i)

Free Float: FF(i) = min(ES of all successors) – EF(i)

Criticality Risk Index: Risk = (4C + 3R + 2Y + 1G) / (4N), where C, R, Y, and G count the critical, red (low-float), yellow (medium-float), and green (high-float) activities, and N is the total number of activities

Activity Risk Index: Risk = 1 – sum(Fi) / (N * M), where Fi is the float of activity i and M is the maximum float in the network

Compression Factor (typical rule of thumb for full compression): Duration = 0.7 * Normal; Cost = 1.8 * Normal

Project Cost: Cost = integral of staffing curve over time

Efficiency: Efficiency = sum of activity durations / (staffing * total duration)
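The forward and backward passes, the float calculations, and the Criticality Risk Index above can be sketched together in Python. The four-activity network and the red/yellow float thresholds are illustrative assumptions:

```python
from graphlib import TopologicalSorter

# Illustrative four-activity network.
duration = {"A": 3, "B": 5, "C": 2, "D": 4}
preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

order = list(TopologicalSorter(preds).static_order())       # predecessors first
succs = {n: [m for m in preds if n in preds[m]] for n in preds}

# Forward pass: ES(i) = max(EF of all predecessors); EF(i) = ES(i) + Duration(i)
ES, EF = {}, {}
for n in order:
    ES[n] = max((EF[p] for p in preds[n]), default=0)
    EF[n] = ES[n] + duration[n]

# Backward pass: LF(i) = min(LS of all successors); LS(i) = LF(i) - Duration(i)
end = max(EF.values())
LF, LS = {}, {}
for n in reversed(order):
    LF[n] = min((LS[s] for s in succs[n]), default=end)
    LS[n] = LF[n] - duration[n]

# Total float and free float
TF = {n: LS[n] - ES[n] for n in order}
FF = {n: min((ES[s] for s in succs[n]), default=end) - EF[n] for n in order}

def criticality_risk(floats, red=1, yellow=3):
    """Weighted float classification; the red/yellow thresholds are assumptions."""
    C = sum(1 for f in floats if f == 0)
    R = sum(1 for f in floats if 0 < f <= red)
    Y = sum(1 for f in floats if red < f <= yellow)
    G = sum(1 for f in floats if f > yellow)
    return (4 * C + 3 * R + 2 * Y + 1 * G) / (4 * len(floats))

critical_path = [n for n in order if TF[n] == 0]
print(critical_path, criticality_risk(list(TF.values())))
```

Here A, B, and D form the longest chain (3 + 5 + 4 = 12), so they carry zero float, while C carries three days of float that resource leveling can exploit.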

References and Influences

The concepts presented in this paper are grounded in established work in project management, critical path analysis, and software engineering. Project design is not presented as a novel invention, but as a practitioner-oriented articulation of principles that have emerged through decades of practice.

Juval Löwy

Löwy, Juval. Righting Software. Addison-Wesley, 2019.

Löwy’s work is the primary foundation for this articulation of project design. His IDesign methodology integrates system architecture with project planning, treating design as the prerequisite for estimation. The activity derivation from architecture, the Feed Me or Kill Me decision framework, the time-cost curve, the risk quantification models, the staffing principles, and the earned value validation approach described in this paper are derived from IDesign training. This paper consolidates these principles into a single cohesive reference.

Critical Path Method (CPM)

Kelley, James E.; Walker, Morgan R. “Critical-Path Planning and Scheduling.” Proceedings of the Eastern Joint Computer Conference, 1959.

The Critical Path Method, developed independently by DuPont and the U.S. Navy in the late 1950s, provides the mathematical foundation for network analysis used throughout this paper. The forward and backward pass algorithms, float calculations, and the concept of the critical path as the project duration constraint are direct applications of CPM.

Earned Value Management

Fleming, Quentin W.; Koppelman, Joel M. Earned Value Project Management. Fourth Edition. Project Management Institute, 2010.

Earned value management provides the tracking and projection techniques used during project execution. The S-curve validation during planning and the three-line tracking during execution are applications of EVM adapted for software project contexts.

Bent Flyvbjerg

Flyvbjerg, Bent. “Over Budget, Over Time, Over and Over Again: Managing Major Projects.” Oxford Handbook of Project Management, 2011.

Flyvbjerg’s research on megaproject performance provides the empirical basis for the discussion of large project fragility and the network of networks approach.

Frederick P. Brooks Jr.

Brooks, Frederick P. The Mythical Man-Month. Addison-Wesley, 1975.

Brooks’s observation that adding people to a late project makes it later is reflected throughout this paper’s treatment of staffing, compression limits, and the relationship between team size and communication overhead.

Nassim Nicholas Taleb

Taleb, Nassim Nicholas. Antifragile: Things That Gain from Disorder. Random House, 2012.

Taleb’s framework for antifragility informs the discussion of project design as an asymmetric investment — small capped cost with potentially large payoffs — and the principle that projects should be designed to benefit from variability rather than merely resist it.

Author’s Note

Project Design, including its integration of architectural decomposition with project planning, the multi-option decision framework, and the risk quantification techniques, originates from Juval Löwy’s IDesign methodology. This paper does not introduce a new project management approach. It provides a consolidated, practitioner-oriented articulation that integrates established project management techniques with architecture-first planning, emphasizing the complete activity inventory — including the many non-code activities that are routinely underestimated — and the structured decision-making process.

The intent of this paper is to serve as a durable reference that translates project design principles into a form suitable for consistent application, discussion, and review within modern engineering organizations.

Distribution Note

This document is provided for informational and educational purposes. It may be shared internally within organizations, used as a reference in project planning discussions, or adapted for non-commercial educational use with appropriate attribution. This paper does not represent official policy, standards, or project management mandates of any current or former employer. All examples are generalized and abstracted to avoid disclosure of proprietary or sensitive information.
