Market research, domain valuations, and strategic analysis for the [&] portfolio of AI infrastructure domains. Anchored by the three-protocol stack — [&] (composition), PULSE (temporal algebra), and PRISM (diagnostic benchmark). All figures sourced from named analyst firms and cross-referenced.
Strategic Thesis
The [&] portfolio is unified by a three-protocol stack. The [&] Protocol defines how cognitive capabilities — memory, reasoning, time, space, and governance — compose into verified, deployable agent systems. PULSE (OS-010) is the temporal algebra that lets every loop declare its phases, cadence, nesting, and cross-loop signals. PRISM (OS-009) is the diagnostic benchmark engine that measures how well systems learn, consolidate, remember, and transfer knowledge over time. Together they form a complete compose-circulate-measure architecture.
Every domain in the portfolio maps to a layer in the protocol stack: cognitive primitives (Graphonomous, Deliberatic, TickTickClock, GeoFleetic), agent platform (SpecPrompt, Agentelic, Delegatic, AgenTroMatic, FleetPrompt), and runtime infrastructure (WebHost Systems, OpenSentience). The protocol specification — including formal BNF grammar, ACI composition algebra, canonical JSON schema, context provenance chains, and governance constraints — is published at protocol.ampersandboxdesign.com.
UI (A2UI / AG-UI) → [&] Composition (capability declaration, validation, binding, provenance) → PULSE (loop manifests, phase cadence, cross-loop signals) → PRISM (CL benchmarks, IRT calibration, diagnostic reports) → A2A (agent-to-agent) → MCP (agent-to-tool) → Runtime.
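To make the composition layer concrete, here is a minimal sketch of what an [&]-style capability declaration and an install-time validator might look like. The field names (`capabilities`, `provider`, `version`) and the validation rules are illustrative assumptions, not the published ampersand.json schema at protocol.ampersandboxdesign.com.

```python
# Hypothetical [&]-style capability declaration plus a minimal validator.
# Field names and rules are assumptions for illustration only.
REQUIRED_FIELDS = {"provider", "version"}

manifest = {
    "name": "example-agent",
    "capabilities": {
        "&memory": {"provider": "graphonomous", "version": "0.4.0"},
        "&reason": {"provider": "deliberatic", "version": "0.1.0"},
        "&time":   {"provider": "ticktickclock", "version": "0.1.0"},
    },
}

def validate(manifest: dict) -> list:
    """Return a list of validation errors (an empty list means valid)."""
    errors = []
    for cap, decl in manifest.get("capabilities", {}).items():
        if not cap.startswith("&"):
            errors.append(f"{cap}: capability names are &-prefixed")
        missing = REQUIRED_FIELDS - decl.keys()
        if missing:
            errors.append(f"{cap}: missing fields {sorted(missing)}")
    return errors

print(validate(manifest))  # → []
```

The point of the sketch is the shape of the layer: a declarative manifest that binds each cognitive primitive to a named provider, checked before anything runs.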
Portfolio Summary
The [&] portfolio consists of 14 AI infrastructure domains positioned across the continual learning, agent orchestration, and edge AI markets, supported by 6 active codebases totaling 76K+ lines of code. Valuations reflect domain-only estimates (no product premium) based on comparable sales, market positioning, and keyword analysis.
| Domain | Category | 2026 Value | 2034 Projection | Grade |
|---|---|---|---|---|
| fleetprompt.com | Agent Skill Marketplace | $175K | $375K | A+ |
| ticktickclock.com | Temporal Intelligence / SSM | $145K | $315K | A− |
| graphonomous.com | Continual Learning Engine | $95K | $400K | A |
| bendscript.com | Knowledge Graph IDE | $85K | $225K | A+ |
| deliberatic.com | Deliberation Protocol / Consensus | $75K | $450K | A+ |
| specprompt.com | Spec-Driven Development | $95K | $235K | A |
| delegatic.com | Agent Governance | $65K | $210K | A+ |
| agentromatic.com | Automatic Deliberation Engine | $65K | $425K | A |
| agentelic.com | Enterprise Agent Builder | $60K | $390K | A |
| opensentience.org | Research Protocols / Runtime | $45K | $120K | B+ |
| gpscoord.com | Geolocation / Spatial Data | $45K | $105K | B+ |
| webhost.systems | Agent Infrastructure / Hosting | $40K | $95K | B |
| geofleetic.com | Spatial Intelligence / CRDTs | $30K | $60K | B |
| a2atraffic.com | Agent-to-Agent Protocol | $20K | $40K | C+ |
14 domains across the AI agent infrastructure stack. The top five domains represent just over half of portfolio value. The agent ecosystem cluster (FleetPrompt + Delegatic + SpecPrompt + Agentelic + AgenTroMatic) carries $460K in individual domain value — with a synergy multiplier estimated at 1.8–2.5x as an integrated platform ($830K–$1.15M). The three-protocol stack adds a structural premium: these domains are not isolated brands but named layers in a published protocol architecture with 6 active codebases and 76K+ LOC.
An honest note: the portfolio has shipped code (Graphonomous v0.4 on npm with 5 MCP machines and 48 test files, BendScript beta, WebHost.Systems beta, PRISM benchmark engine on Fly.io, PULSE manifest server on npm) but is pre-revenue with no paying users. Domain valuations assume the domains are positioned in high-growth AI markets with published specifications, working code, empirical benchmarks, and a three-protocol thesis. The table below shows value scenarios depending on execution milestones.
| Scenario | Milestone | Portfolio Value | Basis |
|---|---|---|---|
| Domains only | No product, no users | $100K–$200K | Bare domain resale value. 14 .com/.org domains in AI-adjacent categories without traffic or revenue |
| Domains + specs + protocol + code | Published three-protocol stack, 6 codebases, npm packages | $600K–$1.2M | Narrative + execution premium. Protocol thesis, 62+ cited sources, working MCP servers, empirical benchmarks. Comparable to late pre-seed stage |
| First revenue | Registry listings, $1K MRR, 50+ users | $1M–$3M | Pre-seed AI startup range. Median pre-seed AI valuation is $7.7M (Carta Q3 2025) with 42% AI premium. Requires proof of traction |
| Product-market fit | $10K+ MRR, 100+ users, community traction | $3M–$8M | Seed-stage AI infrastructure. AI infrastructure commands 20x+ revenue multiples (Finro Q4 2025). Protocol ecosystem multiplier applies |
| Scale / exit | $100K+ MRR, enterprise pilots, Series A | $10M–$50M | Median Series A post-money: $78.7M (Carta Q4 2025). IP-heavy AI startups see 15–20% valuation premium |
The $1.04M figure (the sum of the 2026 values in the domain table above) represents the "domains + specs + protocol" scenario with optimistic comparable positioning. The current AI funding environment — where AI captures 80% of all VC funding (Q1 2026, Crunchbase) and seed-stage AI companies command median $24M post-money valuations — provides a favorable tailwind for execution.
Target Markets
The [&] portfolio targets the intersection of three converging mega-trends: edge AI, knowledge graphs, and agentic AI. Each market has been validated by multiple tier-1 analyst firms. Market data updated April 2026.
Hero Product Analysis
Graphonomous is the &memory capability provider in the [&] Protocol — a continual learning engine that makes small language models (1B–8B) get smarter over time in their deployment context. Now at v0.4 with 5 loop-phase machines (29 actions), 162 source files, 48 test files, and 45K LOC. It sits at the intersection of edge AI, knowledge graphs, and continual learning — a space that IBM identified as one of three "major hurdles" for the field. No direct competitor occupies this intersection.
"We'll begin to see decentralized networks of agents that can learn from each other, share information and retain important knowledge over long horizons — weeks, months, even years." — Chris Kofman, IBM, on AI trends for 2026.
| Dimension | Score | Rationale |
|---|---|---|
| Problem Clarity | 9/10 | "LLMs can't learn after deployment" — universal pain point |
| Market Timing | 10/10 | IBM, Clarifai, Harvard all independently identified CL + edge as THE 2026 trend. $56M+ in direct competitor funding validates market |
| Naming Fit | 10/10 | "Graph" + "autonomous" = self-governing graph. Name IS the product. |
| Differentiation | 8/10 | No competitor does "MCP-first CL engine for edge." Closest: Mem0 ($24M Series A, 52.8K stars) |
| Feasibility | 9/10 | Shipped: 5 machines, 29 actions, 48 test files, 45K LOC, npm published, 92.6% LongMemEval QA proxy |
| Protocol Synergy | 10/10 | The &memory primitive — every other capability composes with it. PULSE manifest declares its 5-phase loop |
| Overall | 9.3/10 | Highest-conviction opportunity in the portfolio |
The closest funded comparable is Mem0, which raised a $24M Series A (Basis Set Ventures, YC, Peak XV) for "memory layer for AI apps" reaching 52.8K GitHub stars and 186M API calls/quarter. Graphonomous is architecturally more ambitious: graph-native (not flat vectors), edge-native (SQLite, not cloud-only), MCP-first (5 loop-phase machines), and includes consolidation cycles inspired by neuroscience research plus a unique κ cyclicity invariant verified on 1.9M+ finite systems.
Industry Validation
Multiple independent signals from tier-1 institutions confirm the CL + edge + memory thesis and the need for composition-layer infrastructure.
"Researchers are exploring lifelong memory systems that continually learn from interactions... long-term memory reduces institutional knowledge loss."
"AI is no longer the experiment on the side; it's rewiring how work gets done... shifting from isolated tools to platforms that sit at the center of workflows."
Q1 2026 shattered records with $300B in venture funding, with AI capturing 80% of total global VC (Crunchbase). Foundational AI startup funding in Q1 alone was double all of 2025. The four largest rounds ever recorded closed in Q1 2026.
MCP is now under Linux Foundation governance with adoption by Anthropic, OpenAI, Google, Microsoft, and Amazon. The ecosystem has grown to 10,000+ active public MCP servers. Smithery indexes 100K+ tools. The [&] Protocol generates MCP configurations — it sits above MCP in the stack.
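The claim that the [&] Protocol "generates MCP configurations" can be illustrated with a small sketch. The provider-to-command mapping below is hypothetical; only the top-level `mcpServers` shape follows the common MCP client-config convention.

```python
# Illustrative sketch: a composition layer emitting an MCP client config
# from capability bindings. The npx-based launch command is an assumption.
def to_mcp_config(bindings: dict) -> dict:
    """Map {capability: provider} to an MCP client config stub."""
    return {
        "mcpServers": {
            provider: {"command": "npx", "args": ["-y", provider]}
            for provider in bindings.values()
        }
    }

config = to_mcp_config({"&memory": "graphonomous", "&time": "ticktickclock"})
print(sorted(config["mcpServers"]))  # → ['graphonomous', 'ticktickclock']
```

Sitting above MCP in the stack means exactly this: the protocol decides *which* servers an agent needs, and the MCP layer is a compilation target.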
AI agents market projected from $7.84B (2025) to $52.62B by 2030 at 46.3% CAGR. Vertical AI agents growing at 62.7% CAGR — multi-agent systems at 48.5% CAGR. Grand View Research projects $182.97B by 2033 at 49.6% CAGR.
Agent Infrastructure Convergence
The [&] tagline — "the substrate for agent civilizations" — is no longer speculative. The agent infrastructure space has entered a phase transition: Meta acquired Moltbook (a social network for AI agents), the AI agents market is projected to reach $53–183B by 2030–2033, and agent memory alone has attracted $56M+ in direct competitor funding. The industry is building toward exactly what the [&] Protocol describes: composed agent systems that need memory, reasoning, time, space, and governance as infrastructure primitives.
Meta acquired Moltbook — a Reddit-like social network exclusively for AI agents — bringing its creators into Meta Superintelligence Labs. Moltbook had 1.6M claimed agents by February 2026. TechCrunch described Meta's vision as an "agent graph." The [&] Protocol's capability registry and A2A Agent Card generation are precisely this infrastructure layer.
"Continual learning shifts rigor toward memory provenance and retention... The winners will not only pick strong models, they will build the control plane that keeps those models correct, current, and cost-efficient." — The [&] Protocol is that control plane: capability composition, context provenance, and governance constraints as a formal specification.
Mem0 raised $24M Series A (Basis Set, YC, Peak XV; 52.8K stars). Letta raised $10M seed (Felicis; $70M val; 22K stars). Cognee raised $7.5M seed (Pebblebed; 15.2K stars). Hindsight/Vectorize raised $3.6M seed (True Ventures; 9K stars). Zep raised $3.3M (Engineering Capital; Graphiti at 24.8K stars). CrewAI raised $18M (45.9K stars). This level of funding validates the market Graphonomous targets — at $0 raised.
&memory Research Landscape
The [&] thesis that "memory is infrastructure, not a feature" is now the consensus position in academic AI research. A December 2025 survey paper — "Memory in the Age of AI Agents" — proposes rethinking memory as a "first-class primitive in the design of future agentic intelligence." The LongMemEval benchmark (ICLR 2025) has become the standard measure, with a dozen systems now competing on its 500-question evaluation.
| System | LongMemEval Score | Architecture |
|---|---|---|
| agentmemory (solo dev, $1K budget) | 96.2% (Opus 4.6) | 6-signal hybrid retrieval |
| PwC Chronos | 95.6% | Research team, arXiv paper |
| OMEGA | 95.4% (GPT-4.1) | Local-first intelligence layer |
| Mastra OM | 94.9% (GPT-5-mini) | Observational memory, 5–40x compression |
| Graphonomous | 92.6% QA proxy (98.7% SHR) | CL graph + κ routing, local-only hardware |
| Hindsight (Vectorize, $3.6M seed) | 91.4% (Gemini 3 Pro) | 4-network MCP-native, 9K stars |
| Zep / Graphiti ($3.3M) | ~63–67% | Temporal knowledge graph, 24.8K stars |
| Letta / MemGPT ($10M seed) | ~50–80% (LOCOMO) | OS-inspired 3-tier runtime, 22K stars |
| GPT-4 128K (baseline) | ~62–65% | Full context window, no memory system |
| Research | Date | [&] Relevance |
|---|---|---|
| "Memory in the Age of AI Agents" (arXiv 2512.13564) | Dec 2025 | Proposes memory as "first-class primitive" — validates Graphonomous positioning |
| HOPE / Nested Learning (NeurIPS 2025) | Dec 2025 | Multi-timescale memory: fast + slow modules. Graphonomous implements this as consolidation cycles |
| Microsoft PlugMem (Mar 2026) | Mar 2026 | Transforms raw logs into structured knowledge — episodic → semantic → procedural. Same architecture as Graphonomous |
| MemoryBench (arXiv 2510.17281) | Oct 2025 | First benchmark for continual learning in LLM systems. Validates need for memory infrastructure |
| MemRL: Self-Evolving Agents (Jan 2026) | Jan 2026 | Runtime RL on episodic memory — agents that learn from their own experience |
"The models that win aren't necessarily the largest; they're the ones that reason deeply, learn continuously, and deploy everywhere. Intelligence, it turns out, is less about parameter count than about architecture, memory, and knowing when to think hard versus when to think fast." — This is the Graphonomous thesis: small models (1B–8B) that get smarter over time through structured memory, not larger context windows.
&reason Research Landscape
The [&] Protocol's &reason primitive — implemented by Deliberatic — builds on a rapidly growing body of academic research showing that multi-agent deliberation protocols outperform single-agent reasoning and simple voting. The ACL 2025 paper "Voting or Consensus?" demonstrated that consensus protocols improve knowledge tasks by 2.8% and voting protocols improve reasoning tasks by 13.2%. Deliberatic's approach — structured argumentation with evidence chains — represents the next generation beyond both.
| Research | Date | [&] Relevance |
|---|---|---|
| Kaesberg et al., "Voting or Consensus?" (ACL 2025) | Jul 2025 | Systematic comparison of 7 decision protocols. Consensus outperforms voting on knowledge tasks. Deliberatic implements both with adaptive switching |
| Wu et al., "Stop Overvaluing MAD" (Nov 2025) | Nov 2025 | Shows debate is bounded by strongest agent's accuracy. Recommends explicit deliberation with justified stances — exactly Deliberatic's argumentation framework |
| Pokharel et al., "Deliberation Leads to Unanimous Consensus" | Feb 2026 | LLMs as rational agents in structured discussions. Two-phase consensus with Byzantine fault tolerance — mirrors Deliberatic's Raft + PBFT design |
| Dung's Argumentation Framework (foundational) | 1995+ | Deliberatic extends Dung's abstract argumentation into weighted bipolar systems with typed evidence and attack/support relations |
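Since the table cites Dung's framework as foundational, a compact sketch of its grounded semantics may help: an argument is acceptable with respect to a set S if S attacks every attacker of that argument, and the grounded extension is the least fixpoint of that operator. This is the textbook abstract version only; Deliberatic's weighted bipolar variant with typed evidence is not modeled here.

```python
# Grounded extension of a Dung abstract argumentation framework.
# attacks: set of (attacker, target) pairs.
def grounded_extension(args, attacks):
    attackers = {a: {x for (x, y) in attacks if y == a} for a in args}

    def defended(candidate, s):
        # Every attacker of `candidate` must itself be attacked by some member of s.
        return all(any((d, atk) in attacks for d in s) for atk in attackers[candidate])

    s = set()
    while True:  # iterate the characteristic function to its least fixpoint
        nxt = {a for a in args if defended(a, s)}
        if nxt == s:
            return s
        s = nxt

# a attacks b, b attacks c: a is unattacked, so a is in; a defends c against b.
ext = grounded_extension({"a", "b", "c"}, {("a", "b"), ("b", "c")})
print(sorted(ext))  # → ['a', 'c']
```

The fixpoint iteration is the "skeptical" core that richer argumentation systems (weights, support relations, evidence types) build on.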
&time Research Landscape
The [&] Protocol's &time primitive — implemented by TickTickClock — leverages Mamba-class selective state space models (SSMs) for temporal intelligence: anomaly detection, pattern prediction, and time-series continual learning. Mamba (Gu & Dao, 2023) has emerged as the leading post-Transformer architecture for sequence modeling, offering 5x higher throughput with linear complexity.
State space models maintain a continuous state representation that naturally captures temporal dynamics. Mamba's selective scan mechanism (40x faster than standard SSM implementation) enables real-time anomaly detection on edge devices — exactly TickTickClock's target deployment. Hybrid architectures like Jamba (AI21 Labs) mix Transformer attention with SSM layers, validating the approach of using SSMs for temporal processing within larger agent systems.
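The recurrence underneath all of this is simple to show. Below is a toy scalar state-space scan — the building block that Mamba extends with input-dependent ("selective") parameters and a hardware-aware parallel scan. Coefficients are toy values, not trained parameters, and the anomaly rule is an invented illustration.

```python
# Minimal scalar state-space recurrence: h_t = a*h_{t-1} + b*x_t, y_t = c*h_t.
def ssm_scan(xs, a=0.9, b=0.1, c=1.0):
    h, ys = 0.0, []
    for x in xs:
        h = a * h + b * x  # state carries a decaying summary of the past
        ys.append(c * h)
    return ys

# Toy anomaly flavor: flag points where the input jumps far from the
# smoothed state the scan maintains.
signal = [1.0] * 8 + [10.0] + [1.0] * 4
smooth = ssm_scan(signal)
anomalies = [t for t, (x, y) in enumerate(zip(signal, smooth)) if abs(x - y) > 5]
print(anomalies)  # → [8]
```

Because the state is a fixed-size summary updated in O(1) per step, this style of model runs naturally on edge devices with bounded memory — the property the paragraph above attributes to TickTickClock's target deployment.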
&space Research Landscape
The [&] Protocol's &space primitive — implemented by GeoFleetic — uses delta-CRDTs (Conflict-free Replicated Data Types) for distributed spatial state synchronization. The fleet management software market ($21.8B in 2024, growing to $52–117B by 2032–2034 depending on source) provides the commercial foundation, while GeoFleetic's edge-native architecture positions it for real-time, low-latency requirements.
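A toy example of the delta-CRDT idea: replicas apply updates locally, gossip only the changed entries (deltas), and a commutative, idempotent merge guarantees convergence regardless of delivery order. GeoFleetic's actual design is not public here; the last-writer-wins rule and field shapes below are assumptions for illustration.

```python
# Toy last-writer-wins map with delta merging, keyed by vehicle id.
def merge(state, delta):
    """Join: keep the entry with the higher timestamp (idempotent, commutative)."""
    for key, (stamp, value) in delta.items():
        if key not in state or state[key][0] < stamp:
            state[key] = (stamp, value)

def local_update(state, key, stamp, value):
    """Apply an update locally and return the delta to gossip to peers."""
    delta = {key: (stamp, value)}
    merge(state, delta)
    return delta

replica_a, replica_b = {}, {}
d1 = local_update(replica_a, "truck-7", 1, (40.71, -74.00))
d2 = local_update(replica_b, "truck-7", 2, (40.72, -74.01))
# Deltas may arrive in any order; both replicas converge to the newest fix.
merge(replica_a, d2); merge(replica_b, d1)
print(replica_a == replica_b)  # → True
```

Shipping deltas instead of full state is what makes this edge-friendly: a vehicle on a flaky link sends a few bytes per position fix, and convergence is guaranteed by the merge rule rather than by a central server.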
Distribution Layer Analysis
Infrastructure layers are important, but historically marketplaces capture the most value in ecosystems. FleetPrompt is positioned as the capability marketplace for the [&] ecosystem: install capabilities as versioned ampersand.json packages.
This thesis received explosive validation in 2026. The agent skills marketplace category went from non-existent to 160,000+ indexed packages — a pace that took npm a decade. Smithery now indexes 100K+ MCP tools. The MCP ecosystem has grown to 10,000+ active public servers.
| Marketplace | Skills/Servers | Installs | Launched |
|---|---|---|---|
| SkillsMP | 160,000+ | — | Jan 2026 |
| Smithery (MCP) | 100,000+ | — | 2024 |
| Skills.sh (Vercel) | 30+ agents | CLI package manager | Jan 2026 |
| MCP Ecosystem | 10,000+ servers | — | 2024–2026 |
Existing marketplaces distribute raw skills (SKILL.md files) or MCP servers. FleetPrompt distributes composed capability packages — versioned ampersand.json bundles that include memory configuration, reasoning strategy, temporal patterns, and governance constraints as a single installable unit. The [&] Protocol's capability contracts enable compatibility validation at install time — something no current marketplace offers.
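A hypothetical sketch of what install-time compatibility validation could look like. The contract shape (`requires` mapping each capability to a major version) is invented for illustration; the real [&] capability contracts live in the published spec.

```python
# Hypothetical install-time check: every capability a package requires must
# have an installed provider at the matching major version.
def compatible(package, installed):
    for cap, major in package["requires"].items():
        provider = installed.get(cap)
        if provider is None or int(provider["version"].split(".")[0]) != major:
            return False
    return True

package = {"name": "support-agent", "requires": {"&memory": 0, "&reason": 0}}
installed = {
    "&memory": {"provider": "graphonomous", "version": "0.4.0"},
    "&reason": {"provider": "deliberatic", "version": "0.1.0"},
}
print(compatible(package, installed))  # → True
```

This is the structural difference from a raw skill registry: the marketplace can refuse an install that would bind against a missing or incompatible capability, rather than failing at runtime.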
Comparable Sales
The ai.com sale at $70M reset the ceiling for what category-defining digital assets can command. AI-adjacent domains command significant premiums.
| Domain | Sale Price | Year | Relevance |
|---|---|---|---|
| ai.com | $70,000,000 | 2025* | Largest domain sale in history. Purchased by Crypto.com CEO Kris Marszalek. Launched as agentic AI platform at Super Bowl LX (Feb 2026). |
| chat.com | $15,500,000 | 2024 | OpenAI acquisition — AI interface premium |
| fin.ai | $1,000,000 | 2024 | Fintech AI — category-defining brand |
| you.ai | $700,000 | 2024 | AI brand — single word + .ai premium |
| ace.ai | $205,000 | 2024 | Premium AI brand — short, memorable |
| crew.ai | $104,900 | 2024 | Direct comparable — AI agent coordination. CrewAI now at 45.9K stars, $18M raised |
*ai.com sale closed April 2025, publicly disclosed February 2026. Paid in cryptocurrency.
Citations
All market figures are sourced from named analyst firms and cross-referenced where possible. Ranges reflect variance across sources. Updated April 2026.