{"id":366,"date":"2026-04-05T12:42:12","date_gmt":"2026-04-05T17:42:12","guid":{"rendered":"https:\/\/dev.harmonic-framework.com\/swarm-architecture\/"},"modified":"2026-04-05T12:42:50","modified_gmt":"2026-04-05T17:42:50","slug":"swarm-architecture","status":"publish","type":"page","link":"https:\/\/dev.harmonic-framework.com\/es\/swarm-architecture\/","title":{"rendered":"Swarm Architecture: Bounded Parallel Agent Execution"},"content":{"rendered":"<h1>Swarm Architecture<\/h1>\n<h2>Bounded Parallel Agent Execution<\/h2>\n<p><strong>Author:<\/strong> William Christopher Anderson<\/p>\n<p><strong>Date:<\/strong> March 2026<\/p>\n<p><strong>Version:<\/strong> 1.0<\/p>\n<hr>\n<h2>Executive Summary<\/h2>\n<p>The Compiled Context Runtime (CCR) solves the problem of agent continuity. Process definitions codify what to do. Compiled context injection provides what to know. Memory chains preserve what was learned. Together, they produce an agent that operates with precision, consistency, and accumulating intelligence.<\/p>\n<p>But the CCR, as described in its foundational whitepaper, operates as a single sequential executor. One agent. One process. One step at a time. This is correct for most work \u2014 the majority of knowledge tasks are inherently sequential, and parallelism introduces coordination complexity that rarely justifies the overhead.<\/p>\n<p>Some work, however, is naturally parallel. A code review that spans twelve files can be decomposed into twelve independent analyses. A migration that touches eight databases can execute eight schema changes simultaneously. A research task that consults fourteen sources can dispatch fourteen retrieval operations and merge the results. In these cases, sequential execution is not merely slow \u2014 it is structurally wrong. The task&#8217;s natural shape is parallel, and forcing it through a sequential pipeline distorts the work.<\/p>\n<p>Swarm Architecture extends the CCR with a model for bounded parallel agent execution. 
It introduces three constructs:<\/p>\n<p>1. <strong>Swarms<\/strong> \u2014 bounded groups of agents executing task instances in parallel, governed by a single coordinator and a single correlation identity. A swarm is not a cluster, not a pool, not an unbounded collection of workers. It is a precisely scoped execution boundary: one process step decides to fan out, the swarm executes the parallel work, and the results converge before execution continues.<\/p>\n<p>2. <strong>Containment rules<\/strong> \u2014 structural constraints that prevent swarm workers from escaping their execution boundary. A swarm worker may spawn sub-agents within its own process boundary, but it may not join another swarm, initiate new swarms, or communicate laterally with sibling workers. These rules are not conventions. They are enforced by the runtime.<\/p>\n<p>3. <strong>Convergence protocols<\/strong> \u2014 mechanisms for collecting, merging, and validating the results of parallel execution before the parent process continues. Convergence is not implicit. It is a defined step in the process, with explicit merge strategies, conflict resolution rules, and quality gates.<\/p>\n<p>A fourth property emerges from the containment model that is not obvious from its safety-oriented design:<\/p>\n<p>4. <strong>Location independence<\/strong> \u2014 because containment rules prohibit workers from accessing shared state, communicating laterally, or reaching outside their execution boundary, workers have no requirement for co-location. A worker can execute on the coordinator&#8217;s machine, on a cloud instance, on an edge device in a factory, or on a partner organization&#8217;s infrastructure across the planet. The containment rules designed for safety become the enabling constraints for distribution. 
The compiled context boundary becomes the security boundary \u2014 workers receive only what their task requires, and cannot leak what they were never given.<\/p>\n<p>This property transforms the scope of what the architecture can do. The same swarm model that parallelizes a local code review can distribute a compliance analysis across twelve jurisdictions, a research synthesis across institutions that cannot share raw data, or a global monitoring operation across edge devices on every continent. The mechanism is identical. The topology varies.<\/p>\n<hr>\n<h2>Abstract<\/h2>\n<p>The Compiled Context Runtime provides process-driven, context-compiled agent execution with persistent memory. Its sequential execution model is correct for the majority of agent workflows, but structurally inadequate for tasks whose natural decomposition is parallel. Swarm Architecture extends the CCR with bounded parallel execution: swarms of agents that execute independent task instances simultaneously under a single coordinator, governed by containment rules that prevent execution boundary violations, and converged through explicit merge protocols before the parent process continues. This paper describes the architectural model, the containment and coordination mechanisms, the convergence protocols, the relationship to the CCR&#8217;s process definition language, the failure and recovery model, the cost implications of parallel versus sequential execution, and the distributed execution model that emerges from the containment architecture \u2014 enabling swarms whose workers span local machines, cloud regions, edge devices, and partner infrastructure without sacrificing determinism, auditability, or data security.<\/p>\n<hr>\n<h2>1. Introduction<\/h2>\n<h3>1.1 The Sequential Assumption<\/h3>\n<p>The Compiled Context Runtime, as described in its foundational paper, executes processes as ordered sequences of steps. Step one completes before step two begins. 
Each step receives compiled context scoped to its requirements. Each step&#8217;s output is captured in execution history and available to subsequent steps. The model is simple, predictable, and auditable.<\/p>\n<p>This sequential model is not a limitation \u2014 it is a design choice rooted in a structural observation: most knowledge work is inherently sequential. Writing code requires understanding the context before writing. Reviewing a pull request requires reading the changes before forming an opinion. Planning a project requires understanding the dependencies before sequencing the work. Forcing parallelism onto inherently sequential work produces coordination overhead without meaningful speedup.<\/p>\n<p>But not all work is sequential.<\/p>\n<h3>1.2 The Parallelism That Already Exists<\/h3>\n<p>Consider a process that reviews a large pull request. The process definition might specify:<\/p>\n<p>1. Retrieve the PR metadata and changed file list<\/p>\n<p>2. For each changed file, analyze the diff against the relevant architectural standards<\/p>\n<p>3. Synthesize individual file analyses into a coherent review<\/p>\n<p>4. Post the review<\/p>\n<p>Steps 1, 3, and 4 are inherently sequential. Step 2 is inherently parallel \u2014 the analysis of <code class=\"\" data-line=\"\">billing_engine.py<\/code> does not depend on the analysis of <code class=\"\" data-line=\"\">payment_accessor.py<\/code>. They share no state. They require no coordination. They can execute simultaneously without affecting each other&#8217;s results.<\/p>\n<p>Today, the CCR executes step 2 as a loop: analyze file one, then file two, then file three. Each analysis is independent, but they execute sequentially because the runtime has no mechanism to express or execute parallel work. The result is correct but slow \u2014 a twelve-file review takes twelve sequential analysis cycles instead of one parallel cycle.<\/p>\n<p>This is not a theoretical concern. 
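<\/p>
<p>As a minimal sketch of that difference (the analysis function and file names below are illustrative placeholders, not the CCR&#8217;s API), the sequential loop and a bounded fan-out over the same independent items:<\/p>

```python
from concurrent.futures import ThreadPoolExecutor

def analyze_file(file_path: str) -> dict:
    # Placeholder for one independent analysis; each call shares
    # no state with any other call.
    return {"file": file_path, "comments": []}

files = ["billing_engine.py", "payment_accessor.py", "models.py"]

# Sequential: N analysis cycles, one after another.
sequential_results = [analyze_file(f) for f in files]

# Bounded parallel: the same N analyses, at most 8 in flight at once.
with ThreadPoolExecutor(max_workers=8) as pool:
    parallel_results = list(pool.map(analyze_file, files))

assert parallel_results == sequential_results
```

<p>The outputs are identical either way; only the wall-clock time changes.<\/p>
<p>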
It is a concrete performance penalty applied to every naturally parallel task the agent encounters.<\/p>\n<h3>1.3 Why Not General Multi-Agent Systems?<\/h3>\n<p>The obvious response is: deploy multiple agents. Let them coordinate. Let them discover work, distribute it, and merge results dynamically.<\/p>\n<p>This is the approach taken by most multi-agent frameworks, and it fails for predictable reasons.<\/p>\n<p><strong>Containment failure.<\/strong> When agents can spawn other agents without structural constraints, the system&#8217;s execution boundary becomes unbounded. An agent debugging a test failure spawns an agent to read the source code, which spawns an agent to check the git history, which spawns an agent to analyze the CI configuration. Each spawn is locally reasonable. The aggregate is an uncontrolled expansion of execution scope, token consumption, and coordination complexity.<\/p>\n<p><strong>Coordination overhead.<\/strong> General multi-agent coordination requires consensus mechanisms, shared state management, conflict resolution, and deadlock detection. These mechanisms are well-understood in distributed systems, but they introduce complexity that is disproportionate to the problem. The CCR&#8217;s value proposition is deterministic, auditable execution. Adding distributed coordination undermines that proposition.<\/p>\n<p><strong>Emergent behavior.<\/strong> When multiple agents operate with overlapping scope and lateral communication, the system&#8217;s behavior becomes emergent rather than specified. The process definition says what should happen; the agents decide what actually happens. This is the opposite of the CCR&#8217;s design philosophy, where the process definition is the single source of truth for execution.<\/p>\n<p>Swarm Architecture avoids all three failure modes by constraining parallelism to a specific, bounded pattern: fan out, execute independently, converge. No lateral communication. No dynamic scope expansion. 
No emergent coordination.<\/p>\n<h3>1.4 Scope of This Paper<\/h3>\n<p>This paper describes Swarm Architecture as an extension to the Compiled Context Runtime. It assumes familiarity with the CCR&#8217;s process definitions, compiled context injection, memory chains, and execution model. Readers unfamiliar with these concepts should consult the CCR whitepaper before proceeding.<\/p>\n<p>The paper covers the swarm execution model, containment rules, convergence protocols, process definition extensions, failure and recovery, cost analysis, and the relationship to VBD component architecture. It does not cover general-purpose multi-agent orchestration, distributed consensus algorithms, or agent-to-agent communication protocols \u2014 these are explicitly out of scope.<\/p>\n<hr>\n<h2>2. The Swarm Model<\/h2>\n<h3>2.1 Definition<\/h3>\n<p>A <strong>swarm<\/strong> is a bounded group of agents executing independent task instances in parallel, governed by a single coordinator, identified by a single correlation ID, and converged through an explicit merge step before the parent process continues.<\/p>\n<p>Every swarm has exactly five properties:<\/p>\n<p>1. <strong>A parent step<\/strong> \u2014 the process step that initiated the fan-out. The parent step is suspended until convergence completes.<\/p>\n<p>2. <strong>A task blueprint<\/strong> \u2014 the process definition (or task definition) that each worker executes. All workers in a swarm execute the same blueprint against different inputs.<\/p>\n<p>3. <strong>An input set<\/strong> \u2014 the collection of independent work items to be processed. Each item becomes the input to one worker instance.<\/p>\n<p>4. <strong>A convergence strategy<\/strong> \u2014 the mechanism for collecting, merging, and validating worker outputs before returning control to the parent process.<\/p>\n<p>5. 
<strong>A correlation ID<\/strong> \u2014 a unique identifier that links every worker instance, every log entry, every memory record, and every artifact produced by the swarm back to the parent step that initiated it.<\/p>\n<h3>2.2 The Fan-Out \/ Converge Pattern<\/h3>\n<p>Swarm execution follows a single pattern:<\/p>\n<pre><code class=\"\" data-line=\"\">Parent Process\n  \u2502\n  \u251c\u2500 Step N: Sequential work\n  \u2502\n  \u251c\u2500 Step N+1: Fan-out (swarm)\n  \u2502     \u251c\u2500 Worker 1: Task(input_1) \u2500\u2500\u2192 result_1\n  \u2502     \u251c\u2500 Worker 2: Task(input_2) \u2500\u2500\u2192 result_2\n  \u2502     \u251c\u2500 Worker 3: Task(input_3) \u2500\u2500\u2192 result_3\n  \u2502     \u2514\u2500 Worker K: Task(input_k) \u2500\u2500\u2192 result_k\n  \u2502\n  \u251c\u2500 Step N+2: Converge(result_1..k) \u2500\u2500\u2192 merged_result\n  \u2502\n  \u251c\u2500 Step N+3: Sequential work (uses merged_result)\n  \u2502<\/code><\/pre>\n<p>The pattern is deliberately simple. There is no nesting of swarms within swarms. There is no lateral communication between workers. There is no dynamic addition of work items after fan-out begins. The swarm is a structural primitive \u2014 a single level of parallelism \u2014 not a recursive coordination framework.<\/p>\n<h3>2.3 What a Swarm Is Not<\/h3>\n<p><strong>A swarm is not a thread pool.<\/strong> Thread pools are infrastructure-level concurrency mechanisms. Swarms are architecture-level execution patterns. A swarm might be implemented using threads, processes, API calls, or distributed workers \u2014 the implementation is invisible to the process definition.<\/p>\n<p><strong>A swarm is not a MapReduce job.<\/strong> MapReduce operates on data partitions with a fixed reduce function. Swarms operate on task instances with configurable convergence strategies. 
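<\/p>
<p>A host-language sketch of the fan-out \/ converge pattern from Section 2.2 (the <code class=\"\" data-line=\"\">run_task<\/code> helper and result shapes are assumptions for illustration, not the CCR&#8217;s API):<\/p>

```python
import uuid
from concurrent.futures import ThreadPoolExecutor

def run_task(blueprint, item, correlation_id):
    # One worker: executes the shared blueprint against one input item.
    return {"input": item, "output": blueprint(item), "corr": correlation_id}

def fan_out_converge(blueprint, input_set, concurrency, converge):
    correlation_id = "swarm-" + uuid.uuid4().hex[:8]
    # Fan-out: one worker per input item; workers never see each other.
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        results = list(pool.map(
            lambda item: run_task(blueprint, item, correlation_id),
            input_set))
    # Converge: one explicit merge step before the parent continues.
    return converge(results)

merged = fan_out_converge(
    blueprint=str.upper,
    input_set=["a", "b", "c"],
    concurrency=2,
    converge=lambda rs: [r["output"] for r in rs],  # the "collect" strategy
)
assert merged == ["A", "B", "C"]
```

<p>Note that the parent sees only <code class=\"\" data-line=\"\">merged<\/code>; the individual workers exist only between fan-out and convergence.<\/p>
<p>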
The workers are agents executing process steps, not functions applied to data shards.<\/p>\n<p><strong>A swarm is not an agent swarm in the multi-agent literature.<\/strong> The term &#8220;swarm&#8221; in multi-agent systems typically implies emergent coordination, stigmergic communication, and self-organizing behavior. None of these properties apply here. A CCR swarm is deterministic, bounded, and fully specified by the process definition. The term is used for its intuitive meaning \u2014 a group working in parallel \u2014 not for its academic connotations.<\/p>\n<hr>\n<h2>3. Containment<\/h2>\n<h3>3.1 The Containment Problem<\/h3>\n<p>Parallelism without containment is the defining failure mode of multi-agent systems. When an agent can spawn other agents without constraint, three problems emerge:<\/p>\n<p>1. <strong>Scope creep<\/strong> \u2014 Each spawned agent may itself spawn agents, producing an expanding tree of execution that no single process definition governs.<\/p>\n<p>2. <strong>Resource exhaustion<\/strong> \u2014 Each agent consumes context window tokens, API calls, and memory. Unbounded spawning produces unbounded cost.<\/p>\n<p>3. <strong>Audit failure<\/strong> \u2014 When the execution tree is dynamic and unbounded, tracing what happened and why becomes intractable.<\/p>\n<p>Swarm Architecture prevents all three through structural containment rules enforced by the runtime.<\/p>\n<h3>3.2 The Three Containment Rules<\/h3>\n<h4>Rule 1: A swarm worker may not join another swarm.<\/h4>\n<p>A worker is executing a task instance within a specific swarm boundary. It may not register itself as a worker in a different swarm, even if that swarm is executing the same task blueprint. 
This prevents cross-swarm contamination and ensures that each swarm&#8217;s execution boundary is closed.<\/p>\n<h4>Rule 2: A swarm worker may not initiate a new swarm.<\/h4>\n<p>If a worker&#8217;s task requires further parallelism, it must express that need through its process definition, and the parent process must orchestrate it as a separate swarm step. Workers do not have the authority to create swarms. Only the process coordinator does. This prevents recursive fan-out and bounds the total parallelism to what the process definition explicitly specifies.<\/p>\n<h4>Rule 3: A swarm worker may not communicate laterally with sibling workers.<\/h4>\n<p>Workers in the same swarm share a correlation ID, but they do not share state, messages, or coordination signals. Worker 3 cannot read Worker 7&#8217;s intermediate results. Worker 7 cannot signal Worker 3 to change its approach. The only communication path is vertical: worker to coordinator (via result submission) and coordinator to worker (via task input and compiled context).<\/p>\n<h3>3.3 What Workers Can Do<\/h3>\n<p>The containment rules constrain inter-swarm and inter-worker behavior. Within its own execution boundary, a worker has full CCR capabilities:<\/p>\n<ul>\n<li><strong>Execute process steps<\/strong> \u2014 The worker runs its assigned task blueprint as a normal CCR process.<\/li>\n<li><strong>Use compiled context<\/strong> \u2014 The worker receives context compiled for its specific input, just as any CCR process step would.<\/li>\n<li><strong>Record memory<\/strong> \u2014 The worker writes to memory chains, tagged with the swarm&#8217;s correlation ID.<\/li>\n<li><strong>Spawn sub-agents<\/strong> \u2014 The worker may use the CCR&#8217;s standard sub-agent mechanism (tool calls, delegate steps) within its own process boundary. 
These sub-agents are scoped to the worker&#8217;s process and do not constitute a new swarm.<\/li>\n<li><strong>Produce artifacts<\/strong> \u2014 The worker generates output artifacts that are collected during convergence.<\/li>\n<\/ul>\n<p>The distinction is precise: a worker is a full CCR agent within its boundary, but it cannot extend its boundary or interact with agents outside it.<\/p>\n<h3>3.4 Runtime Enforcement<\/h3>\n<p>Containment rules are not guidelines. They are enforced by the runtime through structural checks:<\/p>\n<ul>\n<li><strong>Swarm registration<\/strong> \u2014 When a swarm is created, each worker receives a swarm scope token. API calls that would create or join a swarm are rejected if the calling context already holds a swarm scope token.<\/li>\n<li><strong>Communication isolation<\/strong> \u2014 Workers receive isolated memory chain namespaces. Cross-worker memory queries are structurally impossible because the namespace scoping prevents it.<\/li>\n<li><strong>Execution boundary tracking<\/strong> \u2014 The runtime maintains an execution tree with strict parent-child relationships. Any attempt to create a lateral edge (worker-to-worker) or an upward edge (worker initiating a new swarm) is rejected.<\/li>\n<\/ul>\n<hr>\n<h2>4. Convergence<\/h2>\n<h3>4.1 The Convergence Step<\/h3>\n<p>When all workers in a swarm complete (or when a timeout or failure threshold is reached), the swarm enters convergence. 
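<\/p>
<p>The decision to leave the fan-out phase can be sketched as a simple predicate (the worker states and parameter names here are assumptions, not the runtime&#8217;s actual bookkeeping):<\/p>

```python
import time

def should_converge(statuses, min_success_ratio, started_at, timeout_s, now=None):
    # statuses: per-worker state, one of "running", "succeeded", "failed".
    now = now if now is not None else time.monotonic()
    total = len(statuses)
    done = [s for s in statuses if s != "running"]
    succeeded = statuses.count("succeeded")

    if len(done) == total:
        return True                      # all workers finished
    if now - started_at >= timeout_s:
        return True                      # timeout reached
    # Failure threshold: success can no longer reach min_success_ratio.
    max_possible = succeeded + statuses.count("running")
    return max_possible / total < min_success_ratio

assert should_converge(["succeeded"] * 3, 1.0, 0.0, 60.0, now=1.0)
assert not should_converge(["succeeded", "running"], 1.0, 0.0, 60.0, now=1.0)
assert should_converge(["failed", "failed", "running"], 0.8, 0.0, 60.0, now=1.0)
```

<p>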
Convergence is not implicit \u2014 it is a defined step in the parent process with its own context, logic, and quality gates.<\/p>\n<p>The convergence step receives:<\/p>\n<ul>\n<li>The ordered list of worker results<\/li>\n<li>The original input set (for correlation)<\/li>\n<li>Metadata about each worker&#8217;s execution (duration, token usage, success\/failure status)<\/li>\n<li>The swarm&#8217;s correlation ID (for memory chain queries)<\/li>\n<\/ul>\n<p>The convergence step produces:<\/p>\n<ul>\n<li>A merged result that the parent process uses in subsequent steps<\/li>\n<li>A convergence report (which workers succeeded, which failed, what conflicts were resolved)<\/li>\n<li>Memory chain entries recording the swarm&#8217;s execution for future reference<\/li>\n<\/ul>\n<h3>4.2 Convergence Strategies<\/h3>\n<p>The process definition specifies which convergence strategy applies. The CCR provides four built-in strategies and supports custom strategies:<\/p>\n<p><strong>Collect<\/strong> \u2014 The simplest strategy. Worker results are collected into an ordered list and passed to the next step without transformation. The parent process is responsible for interpretation. Appropriate when results are independent observations that don&#8217;t need merging (e.g., file-level code review comments).<\/p>\n<p><strong>Merge<\/strong> \u2014 Worker results are combined into a single output using a merge function specified in the process definition. Conflicts are resolved by the merge function. Appropriate when results contribute to a single deliverable (e.g., parallel document sections assembled into a complete document).<\/p>\n<p><strong>Vote<\/strong> \u2014 Worker results are treated as votes. The convergence step tallies results and selects the majority or highest-confidence output. 
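<\/p>
<p>As an illustrative sketch of the vote strategy, assuming each worker returns a <code class=\"\" data-line=\"\">(label, confidence)<\/code> pair (an assumed shape, not the CCR&#8217;s result schema):<\/p>

```python
from collections import Counter

def vote_converge(results):
    # Tally labels across workers; prefer a clear majority.
    tally = Counter(label for label, _ in results)
    top, count = tally.most_common(1)[0]
    if count * 2 > len(results):
        return top
    # No majority: fall back to the single highest-confidence result.
    return max(results, key=lambda r: r[1])[0]

assert vote_converge([("spam", 0.9), ("spam", 0.7), ("ham", 0.8)]) == "spam"
assert vote_converge([("spam", 0.6), ("ham", 0.9)]) == "ham"
```

<p>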
Appropriate when multiple workers analyze the same input from different perspectives and the system needs a consensus decision (e.g., parallel classification with confidence scoring).<\/p>\n<p><strong>Reduce<\/strong> \u2014 Worker results are processed sequentially through a reduction function, producing a single accumulated result. Appropriate when results need ordered integration (e.g., parallel test results reduced into a pass\/fail summary with aggregated metrics).<\/p>\n<p><strong>Custom<\/strong> \u2014 The process definition specifies a convergence process (itself a CCR process definition) that receives the worker results and produces the merged output. This allows arbitrary convergence logic, including multi-step convergence with its own compiled context and quality gates.<\/p>\n<h3>4.3 Partial Convergence and Failure Thresholds<\/h3>\n<p>Not all workers may succeed. A swarm of twelve workers analyzing twelve files may have one worker fail due to a token limit, a model error, or an input that cannot be processed. The convergence strategy must handle partial results.<\/p>\n<p>The process definition specifies failure behavior through two parameters:<\/p>\n<ul>\n<li><strong><code class=\"\" data-line=\"\">min_success_ratio<\/code><\/strong> \u2014 The minimum fraction of workers that must succeed for convergence to proceed. Default is 1.0 (all workers must succeed). 
Setting this to 0.8 means convergence proceeds if at least 80% of workers succeed; failed workers&#8217; inputs are recorded for retry or manual review.<\/li>\n<\/ul>\n<ul>\n<li><strong><code class=\"\" data-line=\"\">failure_action<\/code><\/strong> \u2014 What happens when a worker fails: <code class=\"\" data-line=\"\">retry<\/code> (resubmit the failed input to a new worker), <code class=\"\" data-line=\"\">skip<\/code> (proceed without the failed input), <code class=\"\" data-line=\"\">abort<\/code> (fail the entire swarm and return control to the parent process&#8217;s error handler).<\/li>\n<\/ul>\n<p>These parameters allow the process definition to express the task&#8217;s tolerance for partial results without embedding retry logic in every worker.<\/p>\n<hr>\n<h2>5. Process Definition Extensions<\/h2>\n<h3>5.1 Swarm-Eligible Steps<\/h3>\n<p>A process step becomes swarm-eligible through a <code class=\"\" data-line=\"\">fan_out<\/code> declaration in the process definition:<\/p>\n<pre><code class=\"\" data-line=\"\">process: review_pull_request\nversion: 1\n\nsteps:\n  - name: retrieve_pr\n    action: &quot;Fetch PR metadata and changed file list&quot;\n    output: pr_files\n\n  - name: analyze_files\n    action: &quot;Analyze each changed file against architectural standards&quot;\n    fan_out:\n      over: pr_files\n      task: analyze_single_file\n      concurrency: 8\n      convergence:\n        strategy: collect\n        min_success_ratio: 0.9\n        failure_action: skip\n    output: file_analyses\n\n  - name: synthesize_review\n    action: &quot;Combine file analyses into a coherent review&quot;\n    input: file_analyses\n    output: review\n\n  - name: post_review\n    action: &quot;Post the review to the pull request&quot;\n    input: review<\/code><\/pre>\n<p>The <code class=\"\" data-line=\"\">fan_out<\/code> block specifies:<\/p>\n<ul>\n<li><strong><code class=\"\" data-line=\"\">over<\/code><\/strong> \u2014 The collection to iterate. 
Each element becomes the input to one worker.<\/li>\n<li><strong><code class=\"\" data-line=\"\">task<\/code><\/strong> \u2014 The task blueprint each worker executes. This is a reference to a task definition.<\/li>\n<li><strong><code class=\"\" data-line=\"\">concurrency<\/code><\/strong> \u2014 The maximum number of workers executing simultaneously. This is a resource constraint, not a parallelism constraint \u2014 all items will be processed, but at most <code class=\"\" data-line=\"\">concurrency<\/code> workers run at any time.<\/li>\n<li><strong><code class=\"\" data-line=\"\">convergence<\/code><\/strong> \u2014 The convergence strategy and failure parameters.<\/li>\n<\/ul>\n<h3>5.2 Task Blueprints<\/h3>\n<p>A task blueprint is a process definition designed to be executed by a swarm worker. It is a standard CCR process with one constraint: it must accept a single input item and produce a single output result.<\/p>\n<pre><code class=\"\" data-line=\"\">task: analyze_single_file\nversion: 1\n\nknowledge:\n  - architecture\/patterns\/vbd-component-taxonomy\n  - coding\/python\/style-guide\n\ninput:\n  file_path: string\n  diff_content: string\n  pr_context: string\n\nsteps:\n  - name: analyze\n    action: &quot;Analyze the diff against VBD standards and coding conventions&quot;\n    output: analysis\n\n  - name: format\n    action: &quot;Format the analysis as a structured review comment&quot;\n    input: analysis\n    output: review_comment\n\noutput: review_comment<\/code><\/pre>\n<p>Task blueprints inherit the full CCR capability set: compiled context injection, knowledge references, gates, and memory recording. 
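<\/p>
<p>In host-language terms, the structural contract of a task blueprint reduces to a single-input, single-output callable. A sketch mirroring the <code class=\"\" data-line=\"\">analyze_single_file<\/code> definition above (the dataclass shape is an assumption for illustration):<\/p>

```python
from dataclasses import dataclass

@dataclass
class FileInput:
    # Mirrors the input schema declared in the analyze_single_file blueprint.
    file_path: str
    diff_content: str
    pr_context: str

def analyze_single_file(item: FileInput) -> str:
    # One input item in, one output result out: the only structural
    # constraint a swarm places on a task blueprint.
    n = len(item.diff_content.splitlines())
    return f"{item.file_path}: {n} diff line(s) reviewed"

comment = analyze_single_file(
    FileInput("billing_engine.py", "+x = 1\n-y = 2", "refactor billing"))
assert comment == "billing_engine.py: 2 diff line(s) reviewed"
```

<p>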
They are not reduced-capability processes \u2014 they are full processes executing within a containment boundary.<\/p>\n<h3>5.3 Behavior Hints<\/h3>\n<p>Process and task definitions may include behavior hints that inform the runtime&#8217;s scheduling and resource allocation decisions:<\/p>\n<pre><code class=\"\" data-line=\"\">behavior:\n  swarm_eligible: true\n  containment: strict\n  event_topics:\n    - task.lifecycle\n    - artifact.produced\n  estimated_duration: short\n  model_requirements:\n    reasoning_depth: moderate\n    code_generation: true<\/code><\/pre>\n<p>Behavior hints are advisory, not prescriptive. The runtime uses them for optimization \u2014 routing short tasks to faster models, pre-allocating resources for large fan-outs, selecting appropriate event channels \u2014 but the process definition&#8217;s semantic meaning does not depend on them.<\/p>\n<hr>\n<h2>6. Coordination<\/h2>\n<h3>6.1 Correlation IDs<\/h3>\n<p>Every entity created during a swarm&#8217;s execution \u2014 worker instances, memory records, artifacts, log entries, execution records \u2014 carries the swarm&#8217;s correlation ID. This produces a complete, traceable execution graph:<\/p>\n<pre><code class=\"\" data-line=\"\">Swarm: swarm-a1b2c3d4\n  \u251c\u2500 Worker: swarm-a1b2c3d4\/worker-001\n  \u2502    \u251c\u2500 Memory: mem-xxx (chain: review, corr: swarm-a1b2c3d4)\n  \u2502    \u2514\u2500 Artifact: art-xxx (corr: swarm-a1b2c3d4)\n  \u251c\u2500 Worker: swarm-a1b2c3d4\/worker-002\n  \u2502    \u251c\u2500 Memory: mem-yyy (chain: review, corr: swarm-a1b2c3d4)\n  \u2502    \u2514\u2500 Artifact: art-yyy (corr: swarm-a1b2c3d4)\n  \u2514\u2500 Convergence: swarm-a1b2c3d4\/converge\n       \u2514\u2500 Artifact: art-zzz (merged result, corr: swarm-a1b2c3d4)<\/code><\/pre>\n<p>Correlation IDs enable three capabilities:<\/p>\n<p>1. 
<strong>Audit<\/strong> \u2014 Given a swarm&#8217;s correlation ID, the runtime can reconstruct the complete execution history: which workers ran, what each produced, how results were merged, what the final output was.<\/p>\n<p>2. <strong>Cost attribution<\/strong> \u2014 Token usage, API calls, and execution time are attributed to the swarm and, through the correlation ID, to the parent process step that initiated it.<\/p>\n<p>3. <strong>Memory scoping<\/strong> \u2014 Memory chain queries can be scoped to a swarm&#8217;s correlation ID, allowing the convergence step to access the collective observations of all workers without pollution from unrelated memory.<\/p>\n<h3>6.2 Event-Driven Lifecycle<\/h3>\n<p>Swarm lifecycle events are published to event topics, allowing the parent process, monitoring systems, and the learning loop to observe swarm execution without polling:<\/p>\n<p><em>[Table: swarm lifecycle events and their publication topics (content not recovered)]<\/em><\/p>\n<p>Events are published to the topics declared in the task blueprint&#8217;s <code class=\"\" data-line=\"\">behavior.event_topics<\/code>. The parent process may subscribe to these events for progress reporting, but the events are informational \u2014 they do not affect execution flow.<\/p>\n<hr>\n<h2>7. Failure and Recovery<\/h2>\n<h3>7.1 Worker Failure Modes<\/h3>\n<p>Swarm workers can fail in three categories:<\/p>\n<p><strong>Transient failures<\/strong> \u2014 API rate limits, network timeouts, model overload. These are retryable. The runtime resubmits the failed input to a new worker instance, up to a configurable retry limit.<\/p>\n<p><strong>Input failures<\/strong> \u2014 The input item is malformed, too large for the context window, or references content that doesn&#8217;t exist. 
These are not retryable with the same input. The convergence strategy&#8217;s <code class=\"\" data-line=\"\">failure_action<\/code> determines the response: skip the item, abort the swarm, or flag for manual review.<\/p>\n<p><strong>Structural failures<\/strong> \u2014 The task blueprint itself is flawed: a step references nonexistent knowledge, a gate condition is unsatisfiable, or the output schema doesn&#8217;t match the convergence strategy&#8217;s expectations. These indicate a process definition error and always abort the swarm. The error is recorded in the execution history for the learning loop to analyze.<\/p>\n<h3>7.2 Swarm-Level Recovery<\/h3>\n<p>When a swarm is aborted, the parent process&#8217;s error handler receives:<\/p>\n<ul>\n<li>The partial results from workers that succeeded<\/li>\n<li>The error details from workers that failed<\/li>\n<li>The swarm&#8217;s execution metadata (duration, token usage, worker count)<\/li>\n<\/ul>\n<p>The parent process may retry the entire swarm, proceed with partial results, fall back to sequential execution, or escalate to the user. The decision logic is expressed in the process definition&#8217;s error handling steps \u2014 not in the swarm infrastructure.<\/p>\n<h3>7.3 Idempotency Requirement<\/h3>\n<p>Task blueprints used in swarms must be idempotent \u2014 executing the same input twice must produce the same result without side effects. 
This is required because the retry mechanism may resubmit inputs, and the system must guarantee that retried work does not corrupt state.<\/p>\n<p>In practice, this means swarm tasks should:<\/p>\n<ul>\n<li>Read from compiled context and input parameters only<\/li>\n<li>Write to memory chains (which are append-only and thus naturally idempotent)<\/li>\n<li>Produce artifacts as output (which are captured by the convergence step, not written to external systems)<\/li>\n<li>Defer side effects (file writes, API calls, notifications) to the parent process&#8217;s post-convergence steps<\/li>\n<\/ul>\n<hr>\n<h2>8. Cost Model<\/h2>\n<h3>8.1 Sequential Baseline<\/h3>\n<p>For a task with N independent items, sequential execution requires:<\/p>\n<ul>\n<li>N \u00d7 (context compilation cost + inference cost + memory recording cost)<\/li>\n<li>Total wall-clock time: N \u00d7 average_step_duration<\/li>\n<li>Total tokens: N \u00d7 average_tokens_per_step<\/li>\n<\/ul>\n<h3>8.2 Swarm Execution<\/h3>\n<p>The same task with swarm execution requires:<\/p>\n<ul>\n<li>N \u00d7 (context compilation cost + inference cost + memory recording cost) \u2014 the total token cost is identical<\/li>\n<li>1 \u00d7 convergence cost \u2014 an additional inference call to merge results<\/li>\n<li>Total wall-clock time: max(worker_durations) + convergence_duration \u2248 average_step_duration + convergence_duration<\/li>\n<\/ul>\n<p>The critical insight: <strong>swarms do not reduce token cost. They reduce wall-clock time.<\/strong><\/p>\n<p>For a twelve-file code review, the token cost is approximately the same whether the files are reviewed sequentially or in parallel. 
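<\/p>
<p>The wall-clock comparison for the twelve-file example works out as follows (the per-step and convergence durations are assumed figures, chosen only to make the arithmetic concrete):<\/p>

```python
n_items = 12
avg_step_s = 30.0      # assumed average duration of one file analysis
convergence_s = 15.0   # assumed duration of the merge inference call

sequential_s = n_items * avg_step_s       # 12 sequential analysis cycles
swarm_s = avg_step_s + convergence_s      # max(worker durations) + convergence
assert sequential_s == 360.0              # six minutes
assert swarm_s == 45.0                    # forty-five seconds

# Token cost is unchanged: N worker calls either way, plus one convergence call.
tokens_per_step = 4_000                   # assumed
sequential_tokens = n_items * tokens_per_step
swarm_tokens = n_items * tokens_per_step + tokens_per_step
```

<p>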
The difference is time: twelve sequential reviews might take six minutes; twelve parallel reviews with convergence might take forty-five seconds.<\/p>\n<h3>8.3 When Swarms Are Worth It<\/h3>\n<p>Swarm execution adds overhead: worker lifecycle management, convergence processing, correlation tracking, and the convergence inference call. This overhead is justified when:<\/p>\n<p>1. <strong>N is large enough<\/strong> \u2014 Below approximately four items, the coordination overhead exceeds the time savings. The exact threshold depends on the task duration and the convergence strategy.<\/p>\n<p>2. <strong>Items are truly independent<\/strong> \u2014 If worker outputs depend on each other (worker 3 needs worker 1&#8217;s result), the task is not suitable for swarm execution. Dependencies require sequential execution or a more complex coordination model that is outside this architecture&#8217;s scope.<\/p>\n<p>3. <strong>Wall-clock time matters<\/strong> \u2014 If the parent process is executing autonomously and the user is not waiting, sequential execution may be acceptable. Swarms are most valuable in interactive workflows where latency directly impacts the user experience.<\/p>\n<hr>\n<h2>9. Distributed Execution<\/h2>\n<h3>9.1 Containment Enables Distribution<\/h3>\n<p>The three containment rules \u2014 no joining other swarms, no initiating new swarms, no lateral communication \u2014 were introduced in Section 3 as safety constraints. They prevent the unbounded execution expansion that makes multi-agent systems unreliable. 
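One way to picture the three containment rules is as a capability whitelist at the worker boundary. This is a sketch of the idea, not the runtime's actual enforcement mechanism:

```python
class ContainmentError(Exception):
    pass

class WorkerScope:
    """A worker may read its context, execute its step, and emit its
    result -- nothing else crosses the boundary."""
    ALLOWED = {"read_context", "execute_step", "emit_result"}

    def invoke(self, capability: str) -> str:
        if capability not in self.ALLOWED:
            raise ContainmentError(f"{capability} is outside the worker boundary")
        return f"ok:{capability}"

scope = WorkerScope()
scope.invoke("execute_step")        # permitted
# scope.invoke("initiate_swarm")    # raises ContainmentError
# scope.invoke("message_sibling")   # raises ContainmentError
```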
But these same constraints produce a second, more consequential property: they make workers location-independent.<\/p>\n<p>Consider what a worker requires to execute:<\/p>\n<ul>\n<li>A task blueprint (a YAML document)<\/li>\n<li>A compiled context package (a text payload)<\/li>\n<li>An input item (a data structure)<\/li>\n<\/ul>\n<p>Consider what a worker does not require:<\/p>\n<ul>\n<li>Access to the coordinator&#8217;s memory chains<\/li>\n<li>Knowledge of other workers&#8217; existence<\/li>\n<li>A shared filesystem, database, or message bus with sibling workers<\/li>\n<li>Physical proximity to the coordinator or to other workers<\/li>\n<\/ul>\n<p>The containment rules guarantee that a worker&#8217;s execution boundary is closed. It reads its input, executes its task, and produces its output. It does not reach outside its boundary for anything. This means the boundary can be located anywhere \u2014 on the same machine as the coordinator, on a server across the network, on a cloud instance across the continent, or on a device on the other side of the planet.<\/p>\n<p>This is not a deployment convenience. It is an architectural property that emerges from the containment model. Distribution is not something added to swarms \u2014 it is something the containment rules make structurally possible by eliminating every requirement for co-location.<\/p>\n<h3>9.2 Execution Topologies<\/h3>\n<p>A swarm&#8217;s workers can be distributed across any combination of execution environments. The coordinator dispatches task instances to workers based on a placement strategy; the workers execute and return results; the convergence step collects results regardless of origin. The coordinator does not care where a worker runs. It cares that the worker started, that the worker finished, and what the worker produced.<\/p>\n<p>This produces several natural topologies:<\/p>\n<p><strong>Local swarm.<\/strong> All workers execute on the same machine as the coordinator. 
This is the simplest topology \u2014 workers are threads or processes on the local runtime. Appropriate for development, testing, and workloads where the machine has sufficient resources.<\/p>\n<p><strong>Cloud-burst swarm.<\/strong> The coordinator runs locally; workers execute on cloud instances. When a fan-out is large \u2014 fifty files to review, a hundred records to process \u2014 the local machine may not have the compute, memory, or API rate limits to run fifty workers simultaneously. Cloud-burst swarms dispatch workers to cloud instances that spin up for the duration of the swarm and shut down after convergence. The coordinator manages the lifecycle; the workers are ephemeral.<\/p>\n<p><strong>Edge swarm.<\/strong> Workers execute on edge devices or remote machines. A swarm analyzing sensor data from twelve factory floors dispatches one worker per floor, executing on local infrastructure close to the data. The compiled context package travels to the edge; the result travels back. The raw data never leaves the floor.<\/p>\n<p><strong>Federated swarm.<\/strong> Workers execute on machines owned by different participants. A research swarm analyzing datasets held by different institutions dispatches workers to each institution&#8217;s infrastructure. Each worker sees only its local dataset through the compiled context scoping. No institution&#8217;s data leaves its network. The convergence step operates on results \u2014 summaries, classifications, extracted features \u2014 not on raw data.<\/p>\n<p><strong>Hybrid swarm.<\/strong> Workers execute across a mixture of local, cloud, and edge environments based on input characteristics. A worker processing a small text file runs locally. A worker processing a large image dataset routes to a cloud GPU instance. A worker processing sensitive financial data routes to an on-premises secure enclave. 
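The hybrid routing described above can be sketched as a pure function of input characteristics. The environment names and item fields are illustrative assumptions, not part of the architecture's specification:

```python
def place_worker(item: dict) -> str:
    """Route a task instance to an execution environment based on
    the characteristics of its input item."""
    if item.get("sensitivity") == "financial":
        return "on_prem_enclave"
    if item.get("kind") == "image" and item.get("bytes", 0) > 100_000_000:
        return "cloud_gpu"
    return "local"

assert place_worker({"kind": "text", "bytes": 2_000}) == "local"
assert place_worker({"kind": "image", "bytes": 500_000_000}) == "cloud_gpu"
assert place_worker({"sensitivity": "financial"}) == "on_prem_enclave"
```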
The placement strategy makes the routing decision; the worker executes identically regardless of location.<\/p>\n<h3>9.3 The Compiled Context Boundary as Security Boundary<\/h3>\n<p>In a distributed swarm, the compiled context package is the only information that crosses a network boundary on the way in. The worker&#8217;s result is the only information that crosses on the way out. This is not a coincidence \u2014 it is a direct consequence of the CCR&#8217;s compilation model.<\/p>\n<p>The compiled context package is precision-scoped to the current task step. It does not contain the coordinator&#8217;s full memory. It does not contain other workers&#8217; inputs. It does not contain the process definition&#8217;s internal metadata. It contains exactly what the worker needs to execute its task \u2014 nothing more.<\/p>\n<p>This scoping produces a security property that is absent from most distributed agent systems: <strong>the worker cannot leak what it was never given.<\/strong> A worker dispatched to a remote environment to analyze a single file receives the compiled context for that file. It does not receive the contents of other files, the PR&#8217;s broader context, or the organizational knowledge that informed the process definition. If the remote environment is compromised, the exposure is limited to one compiled context package and one input item.<\/p>\n<p>In the federated topology, this property becomes essential. When workers execute on infrastructure controlled by different parties, each party must trust that the dispatched work does not carry unauthorized information. The compiled context boundary provides that guarantee structurally \u2014 not through access control lists, not through encryption alone, but through the architecture&#8217;s fundamental design: the worker receives a minimal, scoped payload because the CCR&#8217;s compilation pipeline produces minimal, scoped payloads. The security property is not bolted on. 
It is intrinsic.<\/p>\n<h3>9.4 Model Selection Across Geographies<\/h3>\n<p>The CCR&#8217;s dynamic model selection, described in the CCR whitepaper, takes on new dimensions in distributed swarms. When workers can execute anywhere, the model selection decision becomes a joint optimization across three variables:<\/p>\n<p><strong>Capability.<\/strong> The task requires a specific level of reasoning depth, code generation ability, or domain knowledge. Not all models satisfy the requirement.<\/p>\n<p><strong>Locality.<\/strong> The input data may have residency requirements. Financial data must be processed in-jurisdiction. Healthcare data must remain within HIPAA-compliant infrastructure. A model running in the right geography may be preferable to a more capable model running in the wrong one.<\/p>\n<p><strong>Cost.<\/strong> Cloud GPU instances in different regions have different pricing. Local models have zero marginal inference cost but limited capability. The optimal routing minimizes total cost while satisfying capability and locality constraints.<\/p>\n<p>In a distributed swarm, these three variables are evaluated per worker, not per swarm. Worker 1, processing a small text file, routes to a local model at zero marginal cost. Worker 2, processing a complex architectural analysis, routes to a cloud-hosted reasoning model. Worker 3, processing data subject to EU data residency rules, routes to a model hosted in the EU region. All three participate in the same swarm. All three produce results that converge through the same merge step. The convergence step does not know or care which model each worker used \u2014 it operates on results, not on execution metadata.<\/p>\n<h3>9.5 Latency and the Geography of Work<\/h3>\n<p>Sequential execution has a fixed latency profile: total time equals the sum of individual step durations. 
Distributed swarm execution introduces a different profile: total time equals the maximum worker duration plus network round-trip time plus convergence duration.<\/p>\n<p>For local swarms, network time is negligible. For cloud-burst swarms, network time is measurable but small relative to inference time \u2014 a compiled context package is kilobytes, not gigabytes. For edge and federated swarms, network time can be significant, particularly when workers are geographically distant.<\/p>\n<p>The architecture handles this through deadline-aware scheduling. The process definition&#8217;s <code class=\"\" data-line=\"\">fan_out<\/code> block may specify a deadline:<\/p>\n<pre><code class=\"\" data-line=\"\">fan_out:\n  over: input_items\n  task: analyze_item\n  concurrency: 20\n  deadline: 30s\n  convergence:\n    strategy: collect\n    min_success_ratio: 0.8\n    failure_action: skip<\/code><\/pre>\n<p>The runtime uses the deadline to make placement decisions. If a worker dispatched to a remote location is unlikely to complete within the deadline (based on historical latency data), the runtime places it closer \u2014 on a cloud instance in a nearer region, or on the local machine \u2014 even if that placement is suboptimal on other dimensions. The deadline constrains the placement strategy, ensuring that distribution does not sacrifice responsiveness beyond the process definition&#8217;s tolerance.<\/p>\n<h3>9.6 The Implications of Location-Independent Execution<\/h3>\n<p>The architectural consequence of location-independent workers extends beyond performance optimization. It changes what swarms can be used for.<\/p>\n<p><strong>Global-scale analysis.<\/strong> A swarm can dispatch workers to every continent simultaneously. A compliance review that must evaluate operations under twelve different regulatory frameworks dispatches twelve workers, each executing in the relevant jurisdiction, each using models trained on or fine-tuned for local regulatory language. 
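The deadline constraint in the fan_out block above drives placement. A sketch of that decision, with assumed site names and latency fields; the runtime's real policy weighs more dimensions, but this shows how a deadline prunes the candidate set:

```python
def place_with_deadline(candidates: dict, deadline_s: float) -> str:
    """Pick the cheapest placement whose historical latency fits the
    deadline; fall back to the fastest option if none fits."""
    feasible = {site: v for site, v in candidates.items()
                if v["p95_latency_s"] <= deadline_s}
    if feasible:
        return min(feasible, key=lambda s: feasible[s]["cost"])
    return min(candidates, key=lambda s: candidates[s]["p95_latency_s"])

sites = {
    "edge-apac":  {"p95_latency_s": 42.0, "cost": 1.0},
    "cloud-west": {"p95_latency_s": 18.0, "cost": 3.0},
    "local":      {"p95_latency_s": 25.0, "cost": 0.0},
}
assert place_with_deadline(sites, deadline_s=30.0) == "local"       # cheapest feasible
assert place_with_deadline(sites, deadline_s=10.0) == "cloud-west"  # none fits; fastest
```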
The convergence step produces a unified compliance report from twelve jurisdiction-specific analyses, none of which required data to cross jurisdictional boundaries.<\/p>\n<p><strong>Collaborative execution without shared infrastructure.<\/strong> Two organizations working on a joint project can participate in the same swarm without sharing infrastructure, credentials, or raw data. Organization A runs workers on its infrastructure; Organization B runs workers on its infrastructure. The coordinator (running on either side, or on neutral infrastructure) dispatches inputs and collects results. The containment rules guarantee that neither organization&#8217;s workers access the other&#8217;s data or systems.<\/p>\n<p><strong>Hardware-aware routing.<\/strong> Some tasks benefit from specific hardware. A worker analyzing a large codebase benefits from fast local storage. A worker generating images benefits from GPU acceleration. A worker performing symbolic reasoning benefits from high-memory CPU instances. The placement strategy routes workers to hardware that matches their task profile, turning a homogeneous swarm (same task blueprint) into a hardware-heterogeneous execution with performance characteristics optimized per worker.<\/p>\n<p><strong>Resilience through geographic distribution.<\/strong> A swarm distributed across three cloud regions survives the failure of any single region. When workers are location-independent and the task is idempotent, the retry mechanism can resubmit failed inputs to workers in surviving regions. The swarm completes \u2014 slower, perhaps, but completely \u2014 even under partial infrastructure failure.<\/p>\n<p><strong>Progressive capability deployment.<\/strong> When a new model is deployed in one region but not yet available globally, distributed swarms can route specific workers to the new model while others continue using the existing model. The convergence step does not distinguish between results from different models. 
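A deterministic routing rule makes such partial rollouts reproducible: the same item always lands on the same model. This is a sketch under assumed model names, not a prescribed mechanism:

```python
import hashlib

def select_model(item_id: str, rollout_fraction: float) -> str:
    """Deterministically route a fraction of workers to the newly
    deployed model; the rest stay on the current one."""
    bucket = int(hashlib.sha256(item_id.encode()).hexdigest(), 16) % 100
    return "model-next" if bucket < rollout_fraction * 100 else "model-current"

# Both models participate in the same swarm; convergence sees only results.
models = {select_model(f"item-{i}", 0.2) for i in range(50)}
```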
This enables gradual rollout of model upgrades without requiring global synchronization.<\/p>\n<h3>9.7 Trust and Verification in Distributed Swarms<\/h3>\n<p>When workers execute on infrastructure you do not control, the question of trust becomes concrete. Can you trust a worker&#8217;s result? Can you verify that the worker executed the task faithfully?<\/p>\n<p>Swarm Architecture addresses this through three mechanisms:<\/p>\n<p><strong>Result validation.<\/strong> The convergence strategy can include validation logic that checks worker results against expected schemas, value ranges, or consistency conditions. A worker that returns a result outside expected bounds is flagged \u2014 its result can be excluded from the merge, retried on trusted infrastructure, or escalated for review.<\/p>\n<p><strong>Redundant execution.<\/strong> For high-stakes tasks, the same input can be dispatched to multiple workers on different infrastructure. If two workers produce consistent results, confidence is high. If they diverge, the convergence step can apply a tiebreaker (third worker, human review, or conservative default). This is the same principle as consensus in distributed systems, applied at the task level rather than the protocol level.<\/p>\n<p><strong>Execution attestation.<\/strong> Workers can produce signed execution records \u2014 cryptographic attestations of what input they received, what model they used, what output they produced, and what timestamp they completed. These attestations are collected during convergence and stored with the swarm&#8217;s execution history. They do not prevent a compromised worker from producing a false result, but they provide an audit trail that makes falsification detectable after the fact.<\/p>\n<p>These mechanisms are not required for all swarms. A local swarm on trusted infrastructure needs none of them. A federated swarm across organizational boundaries may require all three. 
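The redundant-execution tiebreak can be sketched as a small convergence helper; the names and the escalation sentinel are illustrative:

```python
from collections import Counter

def converge_redundant(results: list, min_agreement: int = 2):
    """Accept a result only when enough independent workers agree;
    otherwise escalate for a tiebreaker or human review."""
    value, count = Counter(results).most_common(1)[0]
    return value if count >= min_agreement else "ESCALATE"

assert converge_redundant(["pass", "pass", "fail"]) == "pass"
assert converge_redundant(["pass", "fail"]) == "ESCALATE"
```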
The process definition specifies the appropriate level of verification for each swarm, matching the trust model to the deployment topology.<\/p>\n<hr>\n<h2>10. Limitations and Future Work<\/h2>\n<h3>10.1 Deliberate Constraints<\/h3>\n<p>Swarm Architecture deliberately excludes several capabilities that might seem natural extensions:<\/p>\n<p><strong>No nested swarms.<\/strong> A swarm worker cannot initiate a sub-swarm. If a task requires nested parallelism, the process definition must express it as sequential swarm steps in the parent process. This constraint preserves containment and bounds the total parallelism to what is explicitly specified.<\/p>\n<p><strong>No inter-worker communication.<\/strong> Workers cannot share intermediate results, coordinate strategies, or negotiate resource allocation. If a task requires coordination between parallel workers, it is not suitable for swarm execution \u2014 it requires a different architectural pattern.<\/p>\n<p><strong>No dynamic work distribution.<\/strong> The input set is fixed at fan-out time. Workers cannot discover additional work items during execution. If the total work is not known at fan-out time, the process must use a different pattern (e.g., a loop with dynamic termination conditions).<\/p>\n<h3>10.2 Areas for Future Investigation<\/h3>\n<p><strong>Hierarchical swarms.<\/strong> Some tasks have natural two-level parallelism: fan out across files, and within each file fan out across functions. The current architecture handles this through sequential swarm steps, but a hierarchical model could express it more naturally while maintaining containment guarantees.<\/p>\n<p><strong>Adaptive concurrency.<\/strong> The current model uses a fixed <code class=\"\" data-line=\"\">concurrency<\/code> parameter. 
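A fixed concurrency parameter amounts to a bounded worker pool. A minimal sketch, using a thread pool in place of the runtime's worker lifecycle:

```python
import concurrent.futures

def fan_out(items, task, concurrency: int):
    """Dispatch one worker per input item, never more than
    `concurrency` in flight; collect results for convergence."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=concurrency) as pool:
        return list(pool.map(task, items))

results = fan_out(range(8), lambda n: n * n, concurrency=4)
# results == [0, 1, 4, 9, 16, 25, 36, 49]
```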
An adaptive model could monitor worker performance and adjust concurrency dynamically \u2014 scaling up when workers complete quickly, scaling down when API rate limits are hit.<\/p>\n<p><strong>Cross-swarm learning.<\/strong> Currently, each swarm&#8217;s execution is independent. A learning mechanism that analyzes patterns across swarms \u2014 which tasks benefit from parallelism, what concurrency levels produce the best cost\/latency trade-offs, which convergence strategies produce the highest-quality results \u2014 could inform future process definitions.<\/p>\n<p><strong>Heterogeneous workers.<\/strong> The current model requires all workers to execute the same task blueprint. A heterogeneous model could assign different blueprints to different workers based on input characteristics, enabling specialization within a swarm.<\/p>\n<hr>\n<h2>11. Conclusion<\/h2>\n<p>Swarm Architecture extends the Compiled Context Runtime with a model for bounded parallel agent execution. It solves one problem precisely: naturally parallel work should execute in parallel. It does not attempt to solve general multi-agent coordination, emergent agent collaboration, or distributed consensus.<\/p>\n<p>The architecture&#8217;s value lies in its constraints as much as its capabilities. Containment rules prevent the unbounded execution expansion that plagues multi-agent systems. Convergence protocols make parallel result merging explicit and auditable. Correlation IDs preserve the traceability that makes the CCR trustworthy. And \u2014 perhaps most significantly \u2014 the same containment rules that make swarms safe also make them distributable. A worker that cannot reach outside its boundary can execute anywhere without risk.<\/p>\n<p>This is the paper&#8217;s central architectural insight. Constraints designed for safety produce a property \u2014 location independence \u2014 that transforms the scope of what agent systems can do. 
A local developer parallelizing a code review and a multinational organization distributing compliance analysis across twelve jurisdictions use the same architecture, the same containment model, the same convergence protocols. The difference is topology, not mechanism.<\/p>\n<p>The compiled context boundary reinforces this at the security layer. Workers receive precisely what they need and nothing more \u2014 not because of access control lists or network segmentation, but because the CCR&#8217;s compilation pipeline produces minimal, scoped payloads by construction. Security is not a feature added to distribution. It is a property inherited from the context model.<\/p>\n<p>One process definition. One execution model. Workers that can run anywhere \u2014 on your laptop, in your cloud, on your partner&#8217;s infrastructure, on a device across the planet \u2014 because the architecture guarantees they need nothing from each other and can leak nothing they were never given.<\/p>\n<hr>\n<h2>References<\/h2>\n<p>1. Anderson, W.C. (2026). <em>Compiled Context Runtime: Process-Driven Agent Execution with Unbounded Local Memory.<\/em> Version 1.0.<\/p>\n<p>2. Anderson, W.C. (2026). <em>Volatility-Based Decomposition in Software Architecture: A Practitioner-Oriented Articulation.<\/em> Version 1.0.<\/p>\n<p>3. Anderson, W.C. (2026). <em>Harmonic Design: A Unified Software Engineering Framework.<\/em> Version 1.0.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Swarm Architecture extends the Compiled Context Runtime with bounded parallel agent execution. 
Workers execute independently under containment rules that prevent scope creep \u2014 and the same containment that ensures safety makes workers location-independent, enabling distribution across cloud, edge, and federated infrastructure.<\/p>","protected":false},"author":1,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"_uag_custom_page_level_css":"","footnotes":""},"methodology":[],"class_list":["post-366","page","type-page","status-publish"],"uagb_featured_image_src":{"full":false,"thumbnail":false,"medium":false,"medium_large":false,"large":false,"1536x1536":false,"2048x2048":false,"trp-custom-language-flag":false,"post-thumbnail":false,"hf-card":false,"hf-hero":false},"uagb_author_info":{"display_name":"admin","author_link":"https:\/\/dev.harmonic-framework.com\/es\/author\/admin\/"},"uagb_comment_info":0,"uagb_excerpt":"Swarm Architecture extends the Compiled Context Runtime with bounded parallel agent execution. 
Workers execute independently under containment rules that prevent scope creep \u2014 and the same containment that ensures safety makes workers location-independent, enabling distribution across cloud, edge, and federated infrastructure.","_links":{"self":[{"href":"https:\/\/dev.harmonic-framework.com\/es\/wp-json\/wp\/v2\/pages\/366","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dev.harmonic-framework.com\/es\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/dev.harmonic-framework.com\/es\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/dev.harmonic-framework.com\/es\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/dev.harmonic-framework.com\/es\/wp-json\/wp\/v2\/comments?post=366"}],"version-history":[{"count":2,"href":"https:\/\/dev.harmonic-framework.com\/es\/wp-json\/wp\/v2\/pages\/366\/revisions"}],"predecessor-version":[{"id":368,"href":"https:\/\/dev.harmonic-framework.com\/es\/wp-json\/wp\/v2\/pages\/366\/revisions\/368"}],"wp:attachment":[{"href":"https:\/\/dev.harmonic-framework.com\/es\/wp-json\/wp\/v2\/media?parent=366"}],"wp:term":[{"taxonomy":"methodology","embeddable":true,"href":"https:\/\/dev.harmonic-framework.com\/es\/wp-json\/wp\/v2\/methodology?post=366"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}