The ampersand is not a logo. It's a composition operator.
A language-agnostic protocol specification where &memory,
&time, &space, and &reason
compose into verified, deployable agent systems — defined by a formal
grammar, validated against a JSON schema, and implemented in any language.
Part A defines the [&] protocol independent of any programming language or specific capability provider. Any conforming implementation must satisfy the grammar, schema, algebra, and constraint rules described in this section.
The current agent protocol landscape has three established layers: MCP (agent-to-tool, vertical), A2A (agent-to-agent, horizontal), and emerging UI protocols (AG-UI, A2UI). There is a missing layer: capability composition — how an agent declares, validates, and binds the cognitive capabilities it needs before it ever touches MCP or A2A.
The [&] protocol occupies that layer. It does not replace MCP or A2A — it generates configurations for them.
The key insight: MCP defines how agents call tools. A2A defines how agents call agents. Neither defines how capabilities compose into a coherent cognitive system — what memory, reasoning, temporal, and spatial capabilities an agent needs, whether they're compatible, and how context flows between them with provenance. [&] fills that gap.
The [&] protocol defines a formal grammar for capability composition. Any conforming implementation MUST be able to parse and validate expressions that satisfy this grammar. The grammar is intentionally small — four primitive types, two operators, and namespaced subtypes.
# [&] Capability Grammar — BNF

AgentSpec       := "agent" Identifier "{" CapabilityBlock GovernanceBlock? "}"
CapabilityBlock := "capabilities" "[" CapabilityList "]"
CapabilityList  := Capability ("," Capability)*
Capability      := "&" PrimitiveType ("." Subtype)? "(" ProviderExpr? ("," Config)? ")"
PrimitiveType   := "memory" | "reason" | "time" | "space"

# Capability namespaces
Subtype         := Identifier   # e.g. &memory.graph, &memory.vector, &memory.episodic
                                #      &time.forecast, &time.anomaly, &time.pattern
                                #      &space.fleet, &space.geofence, &space.route
                                #      &reason.argument, &reason.vote, &reason.plan

ProviderExpr    := ":" Identifier   # explicit provider
                 | ":" "auto"       # registry-resolved
Config          := KeyValue ("," KeyValue)*
KeyValue        := Identifier ":" Value
Value           := String | Number | Boolean | List | Map

# Composition operators
CapabilitySet   := Capability ("&" Capability)*
Pipeline        := Expression ("|>" CapabilityOp)*
CapabilityOp    := "&" PrimitiveType ("." Subtype)? "." Operation "(" Args? ")"

# Governance (protocol-level, not language-specific)
GovernanceBlock := "governance" "{" Constraint* "}"
Constraint      := HardConstraint | SoftConstraint | EscalationRule
HardConstraint  := "hard" String
SoftConstraint  := "soft" String
EscalationRule  := "escalate_when" Condition
The grammar defines two operators: & (composition — combines capabilities into a set) and |> (pipeline — flows data through capability operations). These are the only two primitives needed to express any agent cognitive architecture.
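To make the grammar concrete, here is an illustrative declaration and pipeline that parse against the productions above. The agent name, providers, and the escalate_when condition syntax (which the grammar leaves open via Condition) are hypothetical:

```
agent DemoAgent {
  capabilities [
    &memory.graph(:graphonomous, instance: "demo"),
    &time.anomaly(:auto, streams: ["cpu"])
  ]
  governance {
    hard "Never delete data without human approval"
    soft "Prefer gradual scaling over spikes"
    escalate_when confidence_below: 0.7
  }
}

# A pipeline flowing data through the composed set:
stream |> &time.anomaly.detect(window: "5m") |> &memory.graph.learn(tag: "demo")
```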
The [&] protocol defines four fundamental capability primitives. Each primitive has a namespace of subtypes that specialize its behavior. Primitives map to cognitive science models of intelligence: what (memory), how (reasoning), when (time), where (space).
&memory
What the agent knows.
&reason
How the agent decides.
&time
When things happen.
&space
Where things are.
Each primitive expands into a namespace of specialized capabilities.
The flat form (&memory) is shorthand for the default
subtype. Explicit subtypes enable fine-grained provider selection:
| Primitive | Subtypes | Description |
|---|---|---|
| &memory | .graph · .vector · .episodic · .semantic | Knowledge graphs, vector stores, episode recall, semantic search |
| &reason | .argument · .vote · .plan · .chain | Structured argumentation, consensus, planning, chain-of-thought |
| &time | .anomaly · .forecast · .pattern · .baseline | Anomaly detection, forecasting, pattern learning, baseline drift |
| &space | .fleet · .geofence · .route · .region | Fleet tracking, geofencing, route optimization, regional state |
Custom subtypes are permitted. A provider MAY register any subtype under a primitive as long as it satisfies the primitive's capability contract (§9). The registry (§5) tracks all known subtypes and their providers.
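As an illustration, a hypothetical provider registering a custom &memory.columnar subtype might publish a registry entry like the following. The provider id and operation names here are invented for the example:

```
{
  "&memory": {
    "subtypes": {
      "columnar": { "ops": ["scan", "aggregate", "enrich"] }
    },
    "providers": [
      { "id": "acme-columnar", "subtypes": ["columnar"], "protocol": "mcp_v1" }
    ]
  }
}
```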
The [&] protocol defines a three-layer architecture. Layer 1 (Canonical Schema) is the normative specification. Layers 2 and 3 are implementation concerns.
Layer 1 ships as ampersand.schema.json; Layers 2 and 3 are distributed as language packages on hex / npm / crates.io.
Any agent can declare its capabilities in ampersand.json,
regardless of implementation language. This is the canonical form:
{
"$schema": "https://protocol.ampersandboxdesign.com/v0.1/schema.json",
"agent": "InfraOperator",
"version": "1.0.0",
"capabilities": {
"&memory.graph": { "provider": "graphonomous", "config": { "instance": "infra-ops" } },
"&time.anomaly": { "provider": "ticktickclock", "config": { "streams": ["cpu", "mem"] } },
"&space.fleet": { "provider": "geofleetic", "config": { "regions": ["us-east"] } },
"&reason.argument": { "provider": "deliberatic", "config": { "governance": "constitutional" } }
},
"governance": {
"hard": ["Never scale beyond 3x in a single action"],
"soft": ["Prefer gradual scaling over spikes"],
"escalate_when": { "confidence_below": 0.7, "cost_exceeds_usd": 1000 }
},
"provenance": true
}
Capabilities are interfaces, not products. Any MCP-compatible service that satisfies a capability contract (§9) can be used as a provider:
&memory.graph → Graphonomous, Neo4j
&memory.vector → pgvector, Weaviate, Pinecone
&memory.episodic → Graphonomous, custom
&time.anomaly → TickTickClock, InfluxDB
&time.forecast → TickTickClock, Prophet
&time.pattern → TickTickClock, custom
Providers register themselves in a discoverable catalog. The registry is the equivalent of A2A Agent Cards but for cognitive capabilities rather than agents.
// GET registry.ampersandboxdesign.com/v1/capabilities
{
  "&memory": {
    "subtypes": {
      "graph":    { "ops": ["recall", "learn", "consolidate", "enrich"] },
      "vector":   { "ops": ["search", "upsert", "enrich"] },
      "episodic": { "ops": ["recall", "store", "replay", "enrich"] }
    },
    "providers": [
      { "id": "graphonomous", "subtypes": ["graph", "episodic"], "protocol": "mcp_v1" },
      { "id": "neo4j-memory", "subtypes": ["graph"], "protocol": "mcp_v1" },
      { "id": "pgvector", "subtypes": ["vector"], "protocol": "mcp_v1" },
      { "id": "weaviate", "subtypes": ["vector"], "protocol": "mcp_v1" }
    ]
  },
  "&time":   { /* ... anomaly, forecast, pattern providers ... */ },
  "&space":  { /* ... fleet, geofence, route providers ... */ },
  "&reason": { /* ... argument, vote, plan providers ... */ }
}
Capability composition MUST satisfy the following algebraic properties. Conforming implementations MUST produce equivalent results regardless of declaration order, grouping, or duplication:
| Property | Rule | Implication |
|---|---|---|
| Commutative | &memory & &time ≡ &time & &memory | Order of declaration has no effect |
| Associative | (&memory & &time) & &space ≡ &memory & (&time & &space) | Grouping has no effect |
| Idempotent | &memory & &memory ≡ &memory | Duplicate declarations collapse |
| Identity | &none & &memory ≡ &memory | Empty set composes cleanly |
| Compatibility | &memory & &reason ≡ :ok | Checked at validation time |
Why ACI properties: These are the same algebraic properties that CRDTs use for conflict-free convergence. Capability sets converge to the same result regardless of order, duplication, or grouping — just as delta-CRDTs converge distributed state. This means multiple agents or processes can independently compose capabilities and arrive at the same validated set.
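The ACI laws can be checked mechanically. A minimal sketch, assuming capability sets are represented as canonical sorted lists (the `ACI` module name and tuple encoding are illustrative, not part of the protocol):

```elixir
# Minimal sketch: capability sets as deduplicated, sorted lists, so that
# composition is commutative, associative, and idempotent by construction.
defmodule ACI do
  # A capability is identified here by a {primitive, subtype} tuple.
  def compose(caps) do
    caps
    |> Enum.uniq()   # idempotent: duplicates collapse
    |> Enum.sort()   # commutative/associative: order and grouping vanish
  end
end

a = {:memory, :graph}
b = {:time, :anomaly}
c = {:space, :fleet}

# Commutative: declaration order has no effect
true = ACI.compose([a, b]) == ACI.compose([b, a])

# Associative: grouping has no effect
true = ACI.compose(ACI.compose([a, b]) ++ [c]) ==
         ACI.compose([a] ++ ACI.compose([b, c]))

# Idempotent: duplicate declarations collapse
true = ACI.compose([a, a, b]) == ACI.compose([a, b])

# Identity: the empty set composes cleanly
true = ACI.compose([] ++ [a]) == ACI.compose([a])
```

Because the canonical form is order-free and duplicate-free, two processes composing the same capabilities independently converge on the same set, mirroring CRDT convergence.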
MCP does not track where context came from. A2A does not track it. The [&] protocol requires that every capability operation append a provenance record to the pipeline context:
// Provenance record (protocol-level, language-agnostic)
{
  "source": "&time.anomaly",
  "provider": "ticktickclock",
  "operation": "detect",
  "timestamp": "2026-03-14T14:23:07Z",
  "input_hash": "sha256:a3f8...",
  "output_hash": "sha256:7b2c...",
  "parent_hash": "sha256:0000...",
  "mcp_trace_id": "ttc-inv-9f3a..."
}
Provenance records form a hash-linked chain. Each
record's parent_hash points to the previous record in
the pipeline. The full chain is available for audit after any
decision:
// "Why did the agent scale us-east?"
// Provenance trace returns:
[
  { "source": "&time.anomaly", "summary": "3 anomalies in cpu_load" },
  { "source": "&memory.graph", "summary": "2 similar incidents in 48h" },
  { "source": "&space.fleet", "summary": "us-east at 87% capacity" },
  {
    "source": "&reason.argument",
    "summary": "scale_up, confidence 0.91",
    "evidence": ["sha256:a3f8...", "sha256:7b2c...", "sha256:e1d0..."]
  }
]
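The chaining rule itself is small. A minimal sketch of appending and verifying hash-linked records, where field names follow the record format above but the hashing and serialization choices are illustrative:

```elixir
# Minimal sketch of a hash-linked provenance chain. Each record's parent_hash
# must equal the previous record's output_hash; the genesis parent is all zeros.
defmodule Provenance do
  @genesis "sha256:" <> String.duplicate("0", 64)

  def append(chain, source, operation, output) do
    parent =
      case chain do
        [] -> @genesis
        [last | _] -> last.output_hash
      end

    record = %{
      source: source,
      operation: operation,
      parent_hash: parent,
      output_hash: hash(output)
    }

    [record | chain]
  end

  # Walk the chain oldest-to-newest and check every parent/output link.
  def valid?(chain) do
    chain
    |> Enum.reverse()
    |> Enum.reduce_while(@genesis, fn rec, expected ->
      if rec.parent_hash == expected,
        do: {:cont, rec.output_hash},
        else: {:halt, :broken}
    end) != :broken
  end

  defp hash(term) do
    digest = :crypto.hash(:sha256, :erlang.term_to_binary(term))
    "sha256:" <> Base.encode16(digest, case: :lower)
  end
end

chain =
  []
  |> Provenance.append("&time.anomaly", "detect", %{anomalies: 3})
  |> Provenance.append("&memory.graph", "enrich", %{incidents: 2})

true = Provenance.valid?(chain)
```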
The [&] protocol defines governance as declarative constraints in the canonical schema — not as language-specific syntax. Any conforming implementation MUST enforce these constraints at validation and/or runtime:
// In ampersand.json — governance is protocol-level, not Elixir-specific
{
  "governance": {
    "hard": [
      "Never scale beyond 3x current capacity in a single action",
      "Never delete data without human approval",
      "Always preserve audit trail for scaling decisions"
    ],
    "soft": [
      "Prefer gradual scaling over sudden spikes",
      "Prefer cost-aware decisions during off-peak hours"
    ],
    "escalate_when": {
      "confidence_below": 0.7,
      "cost_exceeds_usd": 1000,
      "hard_boundary_approached": true
    }
  }
}
Hard constraints are inviolable — conforming implementations MUST prevent execution paths that violate them. Soft constraints are preferences passed to reasoning capabilities that MAY be overridden with evidence. Escalation rules define when the agent MUST defer to a human operator.
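Escalation checks reduce to comparing a decision's metadata against the declared thresholds. A minimal sketch, where the rule keys follow the governance block above but the decision-context shape is an assumption of this example:

```elixir
# Minimal sketch of escalation-rule evaluation. A decision escalates to a
# human when any declared threshold is crossed; absent rules never fire.
defmodule Governance do
  def escalate?(rules, decision) do
    below = Map.get(rules, "confidence_below")
    cost = Map.get(rules, "cost_exceeds_usd")

    (below != nil and decision.confidence < below) or
      (cost != nil and decision.cost_usd > cost)
  end
end

rules = %{"confidence_below" => 0.7, "cost_exceeds_usd" => 1000}

true = Governance.escalate?(rules, %{confidence: 0.65, cost_usd: 50})
false = Governance.escalate?(rules, %{confidence: 0.91, cost_usd: 40})
true = Governance.escalate?(rules, %{confidence: 0.95, cost_usd: 2500})
```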
Each capability provider declares a typed contract: the operations it supports, the inputs it accepts, the outputs it produces, and what it can follow or feed into in a pipeline. Contracts are defined in the registry and validated at composition time:
// Contract for &time.anomaly (protocol-level, language-agnostic)
{
  "capability": "&time.anomaly",
  "operations": {
    "detect": { "in": "stream_data", "out": "anomaly_set" },
    "enrich": { "in": "context", "out": "enriched_context" },
    "learn":  { "in": "observation", "out": "ack" }
  },
  "accepts_from": ["&memory.*", "&space.*", "raw_data"],
  "feeds_into": ["&memory.*", "&reason.*", "&space.*", "output"],
  "a2a_skills": ["temporal-anomaly-detection"]
}
The accepts_from and feeds_into
declarations enable pipeline validation. A conforming
implementation MUST reject pipelines where a capability's
input type does not match the preceding capability's output type.
Wildcard .* matches any subtype of a primitive.
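The wildcard check is a prefix match. A minimal sketch of feeds_into validation, where the contract shape follows the example above and the specific contracts are illustrative:

```elixir
# Minimal sketch of feeds_into checking with ".*" wildcards.
# "&memory.*" matches any subtype of &memory; other strings match exactly.
defmodule PipelineCheck do
  def matches?(pattern, capability) do
    case String.split(pattern, ".*") do
      [prefix, ""] -> String.starts_with?(capability, prefix <> ".")
      _ -> pattern == capability
    end
  end

  # May the `from` capability's output flow into `to_capability`?
  def can_feed?(from_contract, to_capability) do
    Enum.any?(from_contract["feeds_into"], &matches?(&1, to_capability))
  end
end

anomaly = %{"feeds_into" => ["&memory.*", "&reason.*", "output"]}

true = PipelineCheck.can_feed?(anomaly, "&reason.argument")
true = PipelineCheck.can_feed?(anomaly, "&memory.graph")
false = PipelineCheck.can_feed?(anomaly, "&space.fleet")
```

A conforming validator would run this check for every adjacent pair in a pipeline and reject the pipeline on the first mismatch.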
The [&] protocol is designed for both human developers and autonomous agents. The JSON schema is strict enough for machine validation but semantic enough for LLM reasoning. An agent MAY generate, discover, validate, and evolve its own compositions:
// Natural language goal → valid ampersand.json
// Input: "Monitor fleet, predict demand, remember routes, audit decisions"
{
  "agent": "FleetManager",
  "capabilities": {
    "&memory.episodic": { "provider": "auto", "need": "route performance history" },
    "&time.forecast":   { "provider": "auto", "need": "demand spike prediction" },
    "&space.fleet":     { "provider": "auto", "need": "US regional fleet tracking" },
    "&reason.argument": { "provider": "auto", "need": "auditable scaling decisions" }
  },
  "governance": { "infer_from_goal": true }
}
"provider": "auto" delegates resolution to the
registry. "infer_from_goal": true directs the
runtime to generate governance constraints from the goal description
and agent context. The LLM does not need to know which providers
exist — it declares needs, the protocol resolves bindings.
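Resolution of "auto" is a registry lookup: find a provider whose declared subtypes cover the requested capability. A minimal sketch, where the registry shape follows §5 and the provider ids are illustrative:

```elixir
# Minimal sketch of registry-based resolution for "provider": "auto".
# Picks the first registered provider that serves the requested subtype.
defmodule Resolver do
  def resolve("&" <> rest, registry) do
    [primitive, subtype] = String.split(rest, ".", parts: 2)

    registry
    |> Map.get("&" <> primitive, %{})
    |> Map.get("providers", [])
    |> Enum.find(fn p -> subtype in p["subtypes"] end)
    |> case do
      nil -> {:error, :no_provider}
      p -> {:ok, p["id"]}
    end
  end
end

registry = %{
  "&memory" => %{
    "providers" => [
      %{"id" => "graphonomous", "subtypes" => ["graph", "episodic"]},
      %{"id" => "pgvector", "subtypes" => ["vector"]}
    ]
  }
}

{:ok, "graphonomous"} = Resolver.resolve("&memory.graph", registry)
{:ok, "pgvector"} = Resolver.resolve("&memory.vector", registry)
{:error, :no_provider} = Resolver.resolve("&memory.semantic", registry)
```

A production resolver would rank candidates (latency, cost, trust) rather than take the first match; the protocol leaves that policy to implementations.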
An agent MAY propose composition mutations based on its provenance history. Mutations are subject to governance constraints and MAY require human approval:
// After 30 days, the agent's memory contains:
// "84% of incidents correlate with latency spikes not in current streams"
{
  "mutation": "add_capability",
  "add": {
    "&time.baseline": { "provider": "auto", "need": "latency baseline drift" }
  },
  "evidence": "prov-chain:sha256:2a8f...→sha256:9c3d...",
  "requires_approval": true
}
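Applying an approved mutation is a merge into the existing capability map, gated on the approval flag. A minimal sketch, where the mutation shape follows the example above and the approval mechanism is an assumption of this example:

```elixir
# Minimal sketch of applying an "add_capability" mutation under a
# human-approval gate. Unapproved mutations are held, not applied.
defmodule Mutation do
  def apply_mutation(caps, %{"mutation" => "add_capability", "add" => add} = m, approved?) do
    if m["requires_approval"] and not approved? do
      {:pending_approval, caps}
    else
      {:ok, Map.merge(caps, add)}
    end
  end
end

caps = %{"&time.anomaly" => %{"provider" => "ticktickclock"}}

mutation = %{
  "mutation" => "add_capability",
  "add" => %{"&time.baseline" => %{"provider" => "auto"}},
  "requires_approval" => true
}

{:pending_approval, ^caps} = Mutation.apply_mutation(caps, mutation, false)
{:ok, merged} = Mutation.apply_mutation(caps, mutation, true)
true = Map.has_key?(merged, "&time.baseline")
```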
Part B describes the Elixir reference implementation of the [&] protocol. Other language bindings (TypeScript, Rust) implement the same canonical schema from Part A using language-appropriate validation mechanisms.
The [&] protocol can be implemented in approximately 30 lines of Elixir. This minimal implementation supports capability declaration, composition, and validation — enough to evaluate the protocol and build on it:
defmodule Ampersand do
  defstruct [:type, :subtype, :provider, :config]

  @primitives [:memory, :reason, :time, :space]

  def cap(type, subtype \\ :default, provider \\ :auto, config \\ %{}) do
    %Ampersand{type: type, subtype: subtype, provider: provider, config: config}
  end

  def compose(caps) when is_list(caps) do
    caps
    |> Enum.uniq_by(&{&1.type, &1.subtype})    # idempotent
    |> Enum.sort_by(&{&1.type, &1.subtype})    # commutative
    |> validate_compatibility()
  end

  def validate_compatibility(caps) do
    if Enum.all?(caps, &(&1.type in @primitives)),
      do: {:ok, caps},
      else: {:error, "unknown primitive type"}
  end

  def to_json({:ok, caps}) do
    map =
      for c <- caps, into: %{} do
        key = "&#{c.type}" <> if(c.subtype != :default, do: ".#{c.subtype}", else: "")
        {key, %{"provider" => c.provider, "config" => c.config}}
      end

    Jason.encode!(%{"capabilities" => map}, pretty: true)
  end
end
# In iex:
alias Ampersand, as: A

[
  A.cap(:memory, :graph, "graphonomous"),
  A.cap(:time, :anomaly, "ticktickclock", %{streams: ["cpu"]}),
  A.cap(:space, :fleet),
  A.cap(:reason, :argument, "deliberatic")
]
|> A.compose()
|> A.to_json()

# Output: valid ampersand.json with all four capabilities
That's the entire core. 30 lines, no dependencies
beyond Jason. Capability declaration, ACI-compliant composition,
validation, and JSON serialization. Everything else — provenance,
governance, MCP binding, OTP supervision — layers on top of this
foundation. If you can run this in iex, you can
evaluate the [&] protocol.
The full Elixir reference implementation extends the minimal core with macros for compile-time validation, pipeline composition, and OTP supervision tree generation:
defmodule InfraOperator do
  use Ampersand.Agent

  capabilities [
    &memory.graph(:graphonomous, instance: "infra-ops"),
    &time.anomaly(:ticktickclock, streams: [:cpu, :mem, :net]),
    &space.fleet(:geofleetic, regions: [:us_east, :eu_west]),
    &reason.argument(:deliberatic, governance: :constitutional)
  ]
end
handle :health_check, %{servers: servers} do
  servers
  |> &time.anomaly.detect(window: minutes(5))
  |> &memory.graph.enrich(recall: "similar-incidents")
  |> &space.fleet.enrich(query: :affected_regions)
  |> &reason.argument.deliberate(governance: :constitutional)
  |> &memory.graph.learn(tag: "incident-response")
end
ampersand.json (canonical schema)
↓
[Schema Validation] ← grammar + contract check
↓
[Provider Resolution] ← registry lookup for "auto" providers
↓
[Governance Compilation] ← constraint rules generated
↓
[Provenance Injection] ← hash-linked context tracking
↓
[MCP Binding Generation] ← tool endpoints resolved
↓
[A2A Agent Card] ← capability discovery published
↓
[Language Binding] ← Elixir OTP / TS runtime / Rust binary
↓
[Deployment] ← any conforming host
↓
[Signed Telemetry] ← metrics with provenance
# Validate an ampersand.json against the schema
$ mix ampersand.check

# Resolve "auto" providers from the registry
$ mix ampersand.resolve

# Generate A2A Agent Card
$ mix ampersand.card

# Deploy to a conforming host
$ mix ampersand.deploy --target webhost.systems --runtime cloudflare
Part C describes the default capability providers maintained by Ampersand Box Design. These are not required by the protocol — any MCP-compatible service that satisfies a capability contract from Part A can be used as a provider. The [&] ecosystem is one implementation of the protocol, not the only one.
| Capability | Default Provider | Subtypes | Hex Package |
|---|---|---|---|
| &memory | Graphonomous | .graph · .episodic | ampersand_memory |
| &reason | Deliberatic | .argument · .vote | ampersand_reason |
| &time | TickTickClock | .anomaly · .forecast · .pattern | ampersand_time |
| &space | GeoFleetic | .fleet · .geofence · .route | ampersand_space |
| Product | Role | Protocol Interaction |
|---|---|---|
| Delegatic | Orchestration | Uses &delegate extension primitive for multi-agent coordination |
| AgenTroMatic | Workflow engine | Executes pipelines composed via [&] protocol |
| FleetPrompt | Skill marketplace | Publishes packaged ampersand.json as installable skills |
| SpecPrompt | Spec generator | Generates ampersand.json from natural language goals |
| Agentelic | Visual editor | Drag-and-drop UI that outputs valid ampersand.json |
| WebHost.Systems | Deployment host | Conforming host for deployed agents — metering, secrets, telemetry |
| OpenSentience | Research | Explores cognitive architecture foundations of the primitive model |
The protocol is the product.
&memory & &time & &space & &reason is a capability
declaration that any conforming runtime can validate, bind, deploy,
and meter. The [&] ecosystem provides default providers. The protocol
enables anyone to provide alternatives. Capabilities are interfaces,
not products. The ampersand is a composition operator, not a brand.