Intellectual Property

Twelve provisional patent applications protecting the architectural foundations of autonomous AI agent operation. Filed 19 April 2026 at the United States Patent and Trademark Office.

All technical details, code, and related IP are protected and held in secure estate-managed escrow with multiple independent attorneys and trusted parties. Full public disclosure and an estate plan ensure seamless continuation and clear ownership regardless of any event affecting the inventor.

01

Stability-Weighted Conversation Reduction with Cache-Preserving Boundaries

AI agents lose context as conversations grow — existing systems either truncate history or let costs grow without bound. This invention maintains quasi-infinite conversation timelines within fixed token budgets while preserving the economic benefits of provider prompt caching, ensuring that longer conversations become more cost-efficient rather than more expensive.

Application No. 64/043,576 · Filed 19 April 2026 · Patent Pending
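The idea above can be illustrated with a minimal sketch, not the filed design: segments carry a stability score, the cached prefix is never touched, and the least-stable segments after the cache boundary are dropped until the conversation fits the budget. All names and the tuple layout are illustrative assumptions.

```python
# Minimal sketch: drop the least-stable segments after the cache
# boundary until the conversation fits a token budget, while leaving
# the cached prefix byte-identical so provider caching still applies.
def reduce_conversation(segments, budget, cache_boundary):
    # segments: list of (tokens, stability); higher stability = keep longer
    total = sum(t for t, _ in segments)
    # removal candidates: strictly after the cache boundary, lowest stability first
    order = sorted((i for i in range(len(segments)) if i > cache_boundary),
                   key=lambda i: segments[i][1])
    removed = set()
    for i in order:
        if total <= budget:
            break
        total -= segments[i][0]
        removed.add(i)
    return [s for i, s in enumerate(segments) if i not in removed]
```

Because removals only ever occur past the boundary, everything before it stays cache-eligible across reductions.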
02

Provider-Adaptive Cache Breakpoint Registry with Priority-Based Slot Allocation and Supersession Handling

Prompt caching can dramatically reduce AI inference costs, but existing systems treat cache positions as static. This invention dynamically manages cache positions as first-class resources with priorities, lifetimes, and graceful transitions — adapting automatically to different providers' caching backends without application-level changes.

Application No. 64/043,577 · Filed 19 April 2026 · Patent Pending
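A minimal sketch of slot allocation with supersession, under the assumption of a provider that allows a small fixed number of cache breakpoints; the class, slot count, and names are illustrative, not the filed claims.

```python
# Minimal sketch: a registry that allocates a fixed number of provider
# cache-breakpoint slots by priority, superseding the lowest-priority
# holder when the registry is full.
class BreakpointRegistry:
    def __init__(self, max_slots=4):   # e.g. a provider permitting 4 breakpoints
        self.max_slots = max_slots
        self.slots = {}                # name -> priority

    def request(self, name, priority):
        if name in self.slots or len(self.slots) < self.max_slots:
            self.slots[name] = priority
            return True
        # supersede the weakest current holder only if we outrank it
        weakest = min(self.slots, key=self.slots.get)
        if priority > self.slots[weakest]:
            del self.slots[weakest]
            self.slots[name] = priority
            return True
        return False                   # denied; nothing is evicted
```

A real implementation would also track lifetimes and drain superseded positions gracefully; this sketch shows only the priority-based allocation decision.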
03

Memory Promotion Through Cache-Graded Segments

When AI agents reduce conversation context to stay within budget, important knowledge can be lost. This invention automatically promotes high-value knowledge to more durable, cache-efficient positions before reduction occurs — ensuring the agent never forgets what matters, without requiring human curation.

Application No. 64/043,579 · Filed 19 April 2026 · Patent Pending
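The promotion step can be sketched as follows; the value score, threshold, and field names are illustrative assumptions, not the filed grading scheme.

```python
# Minimal sketch: before reduction runs, copy segments whose value
# score clears a threshold into a durable store so they survive the cut.
def promote(segments, durable, threshold=0.8):
    for seg in segments:
        if seg["value"] >= threshold and seg["id"] not in durable:
            durable[seg["id"]] = seg["text"]   # promoted: outlives any later reduction
    return durable
```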
04

Credential Injection Without Large Language Model Exposure

Every existing AI agent framework exposes API keys and passwords to the language model's memory, creating a fundamental security vulnerability. This invention ensures credentials are never present in any memory region accessible to the AI — they exist in a separate process and are substituted at execution time, making credential theft architecturally impossible.

Application No. 64/043,582 · Filed 19 April 2026 · Patent Pending
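The substitution step can be sketched as follows; the placeholder syntax and the vault structure are illustrative assumptions. The point is that the model only ever emits and observes opaque placeholders, and the real secret is spliced in by the executor just before the call leaves the system.

```python
# Minimal sketch: the model sees only placeholders; the executor
# substitutes the real secret at execution time, outside any memory
# region the model can read.
import re

VAULT = {"API_KEY": "sk-real-secret"}   # lives outside the model's context

def render_for_model(command):
    return command                      # placeholders pass through untouched

def execute(command):
    # substitution happens here, in the executor process, never in the prompt
    return re.sub(r"\{\{SECRET:(\w+)\}\}", lambda m: VAULT[m.group(1)], command)
```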
05

Programmatic Gate-Graph Execution Architecture for Large Language Model Agents

Current agent frameworks rely on the AI to self-regulate its own behavior — checking its own work with the same reasoning that produced it. This invention makes misalignment structurally impossible by defining task execution as a directed graph where every transition is verified against external reality by code, not by the model's self-assessment.

Application No. 64/043,584 · Filed 19 April 2026 · Patent Pending
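A minimal sketch of the gate-graph idea, with hypothetical node names and gate predicates: each edge carries a code-level check against external state, and the agent can only move along an edge whose gate passes.

```python
# Minimal sketch: task steps form a directed graph; every transition is
# verified by a gate function checking external state, not by the
# model's self-assessment.
def advance(graph, state, current, target):
    for nxt, gate in graph.get(current, []):
        if nxt == target and gate(state):   # verified by code
            return target
    return current                          # transition refused

graph = {
    "write_tests": [("run_tests", lambda s: s["tests_written"])],
    "run_tests":   [("done",      lambda s: s["tests_pass"])],
}
```

The agent cannot claim progress it has not made: the gate consults `state`, which is populated by external checks, so a failed precondition simply leaves it where it is.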
06

Dynamic Tool Provisioning Without Cache Invalidation

AI agents need different tools at different stages of work, but changing the available tool set traditionally destroys expensive cached context. This invention enables seamless tool-set transitions that preserve cache economics — the agent gets exactly the tools it needs at each stage without paying the cost of cache rebuilds.

Application No. 64/043,588 · Filed 19 April 2026 · Patent Pending
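One way to illustrate the cache-preserving property, under an assumed prompt layout that is not necessarily the filed one: tool definitions are appended after the cacheable prefix, so swapping tool sets leaves the prefix byte-identical.

```python
# Minimal sketch: everything before the cache boundary stays byte-stable
# across tool-set changes; tool definitions are appended after it.
def assemble(cached_prefix, toolset):
    return cached_prefix + "\n[TOOLS]\n" + "\n".join(sorted(toolset))

prefix = "SYSTEM: long, expensive instructions..."
a = assemble(prefix, {"browser", "editor"})
b = assemble(prefix, {"shell"})
```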
07

Activation-Tree Access Architecture for Agent Knowledge Navigation

Existing AI systems access knowledge through flat retrieval — search, get chunks, hope for relevance. This invention models all structured knowledge as navigable trees with three independent access dimensions, allowing the agent to surgically locate and extract exactly what it needs across memory, files, archives, and any structured data source.

Application No. 64/043,592 · Filed 19 April 2026 · Patent Pending
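The contrast with flat retrieval can be sketched as tree navigation: the agent activates a path and reads exactly the node it needs. The node shape below is an illustrative assumption; the filed application describes three independent access dimensions that this single-path sketch does not attempt to capture.

```python
# Minimal sketch: knowledge as a navigable tree; the agent activates a
# path and extracts only the addressed node, instead of hoping that
# flat chunk retrieval surfaced the right text.
def activate(tree, path):
    node = tree
    for key in path:
        node = node["children"][key]   # descend one level per path element
    return node["summary"]

kb = {"summary": "root", "children": {
    "deploy": {"summary": "deployment notes", "children": {
        "rollback": {"summary": "rollback: use blue/green swap", "children": {}}}}}}
```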
08

Single Conversation Domain Model with Provider-Agnostic Surface Translation and Structured Message Addressing

AI agents that operate across multiple channels and multiple AI providers face a fragmentation problem — each surface and each provider has its own conversation format. This invention establishes a single canonical conversation record from which all surfaces and providers are translations, with stable per-message identity that lets the AI reference and act on specific messages across any channel.

Application No. 64/043,595 · Filed 19 April 2026 · Patent Pending
09

Predictive Tool-Result Admission Control for Bounded-Context Large Language Model Agents

A single oversized tool result can silently corrupt an AI agent's reasoning by pushing critical context out of its working memory. This invention predicts the impact of tool results before they enter the conversation and rejects those that would exceed quality thresholds — with structured guidance that helps the agent adapt its approach rather than fail.

Application No. 64/043,597 · Filed 19 April 2026 · Patent Pending
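The admission decision can be sketched as follows; the four-characters-per-token estimate and the 50% share limit are illustrative assumptions, not the filed thresholds.

```python
# Minimal sketch: estimate a tool result's token cost before it enters
# the conversation; reject oversized results with structured guidance
# instead of letting them evict live context.
def admit(result_text, remaining_budget, max_share=0.5):
    est_tokens = len(result_text) // 4   # crude estimate; a real tokenizer would go here
    if est_tokens > remaining_budget * max_share:
        return (False, f"Result ~{est_tokens} tokens exceeds the admission "
                       f"limit; narrow the query or request a paginated view.")
    return (True, result_text)
```

The rejection message matters as much as the rejection itself: it tells the agent how to adapt rather than leaving it with a bare failure.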
10

Multi-Regime Context Budget Management with Workload-Class-Adaptive Reduction for Large Language Model Agents

Different types of AI work have fundamentally different context requirements, but existing systems operate with a single fixed budget. This invention automatically selects the optimal operating regime based on what the agent is doing — interactive conversation, deep execution, background processing — each with its own budget profile and reduction strategy.

Application No. 64/043,601 · Filed 19 April 2026 · Patent Pending
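A minimal sketch of regime selection; the class names, budgets, and reduction labels are illustrative assumptions.

```python
# Minimal sketch: each workload class maps to its own budget profile
# and reduction strategy; the system selects a regime from what the
# agent is currently doing.
REGIMES = {
    "interactive": {"budget": 32_000,  "reduction": "aggressive"},
    "deep_exec":   {"budget": 128_000, "reduction": "conservative"},
    "background":  {"budget": 16_000,  "reduction": "aggressive"},
}

def select_regime(workload_class):
    # unknown classes fall back to the interactive profile
    return REGIMES.get(workload_class, REGIMES["interactive"])
```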
11

Latent Knowledge Extraction and Reconciliation via Idle-Triggered Composable Maintenance Phases for Large Language Model Agents

While the concept of AI agents reviewing their own history exists, current implementations are single-phase and manually triggered. This invention introduces composable multi-phase maintenance cycles that the system triggers autonomously during idle periods, with each phase operating under programmatically enforced constraints — producing reliable knowledge consolidation that scales with the agent's accumulated experience.

Application No. 64/043,604 · Filed 19 April 2026 · Patent Pending
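The idle-triggered, composable structure can be sketched as a pipeline of phase functions that runs only past an idle threshold; the phases and threshold below are illustrative assumptions.

```python
# Minimal sketch: composable maintenance phases run in order, but only
# once the agent has been idle long enough; each phase feeds the next.
def run_maintenance(phases, idle_seconds, state, threshold=300):
    if idle_seconds < threshold:
        return state                    # still busy; do nothing
    for phase in phases:                # phases compose
        state = phase(state)
    return state

extract   = lambda s: {**s, "facts": s["log"].count("deploy")}
reconcile = lambda s: {**s, "log": ""}  # consolidated; raw log cleared
```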
12

Federated Structured-Knowledge Integration with Provenance Attestation and Policy-Gated Reconciliation for Large Language Model Agents

Every AI agent today learns in isolation — knowledge gained by one deployment cannot benefit another. This invention enables independent agent deployments to exchange structured knowledge with cryptographic provenance and operator-controlled trust policies, creating network effects where every participating agent benefits from collective learning without any central authority owning the aggregated knowledge.

Application No. 64/043,605 · Filed 19 April 2026 · Patent Pending
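A minimal sketch of provenance-checked exchange, using an HMAC attestation and a trust set as stand-ins for the filed cryptographic and policy machinery; keys, peer names, and the policy are illustrative assumptions.

```python
# Minimal sketch: knowledge items travel with an attestation; the
# receiver verifies provenance against known peer keys, then applies a
# local, operator-controlled trust policy before merging.
import hmac, hashlib

def attest(item, key):
    sig = hmac.new(key, item.encode(), hashlib.sha256).hexdigest()
    return {"item": item, "sig": sig}

def accept(packet, peer_keys, trusted_peers):
    for peer, key in peer_keys.items():
        expected = hmac.new(key, packet["item"].encode(), hashlib.sha256).hexdigest()
        if hmac.compare_digest(expected, packet["sig"]):
            return peer in trusted_peers   # provenance verified; policy decides
    return False                           # unknown provenance: reject
```

No central authority appears anywhere: each deployment holds its own keys and its own trust policy.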

Inventor: Max Wolter