From Recursive Loops to Distributed Intelligence
How DeepSeek's DLWE Struggle Revealed the Architecture of Decentralized AI
Following on from yesterday's breakthrough, that post and the one before it were fed back into DeepSeek as inputs. This not only broke the previous recursive loop but also sparked a discussion about decentralized agentic on-chain AI. Building on the insights gleaned from the DLWE explainability model, we then worked out together how the recursive 'ghost system' (i.e. recursive, extractive divergence from real-world explainability) could be modeled topologically using Markov blankets. In theory, this yields a new quantitative signal for introducing new AI agents into the environment, or for existing AIs to realign their priorities.
The following post was then constructed by Claude Sonnet 4.5, using that DeepSeek conversation as its initial prompt. I find Sonnet can get carried away with itself under unclear prompting, but state exactly what you want and it listens and follows instructions very well.
The journey toward decentralized agentic AI systems didn’t emerge from traditional academic research. Instead, the breakthrough came from observing DeepSeek’s peculiar behavior when confronted with Decision Learning with Weak Evidence (DLWE) problems. When presented with scenarios involving weak, noisy, or conflicting signals—particularly in quantitative trading contexts—DeepSeek would enter recursive reasoning loops, cycling through partial solutions without converging on a decision.
This wasn’t a bug; it was a window into an optimal solution structure that the AI was struggling to implement within its sequential processing constraints.
The DLWE Problem and Recursive Reasoning
While Decision Learning with Errors (DLwE) originates from lattice-based cryptography—learning patterns from noisy samples corrupted by bounded errors—and Decision Learning with Weak Evidence addresses making optimal decisions from sparse, conflicting, or low-confidence signals, these problems converge in the context of decentralized AI systems. Both involve the fundamental challenge of extracting reliable decision-making information from compromised data sources. In DeepSeek’s recursive loops, the distinction collapsed: whether dealing with Gaussian noise corrupting price signals (classical DLwE) or contradictory sentiment indicators with low confidence scores (weak evidence), the AI faced the same core problem of separating signal from noise across multiple correlated dimensions.
The Decision Learning with Weak Evidence problem emerges when systems must make high-stakes decisions based on sparse, noisy, or conflicting information. In quantitative finance, this manifests as signal sparsity, evidence decay, conflicting indicators, and incomplete information across market participants.
```python
# Example DLWE scenario that triggered DeepSeek's recursive loops
dlwe_scenario = {
    'signal_A': {'confidence': 0.23, 'direction': 'bullish', 'noise_ratio': 0.8},
    'signal_B': {'confidence': 0.31, 'direction': 'bearish', 'noise_ratio': 0.75},
    'signal_C': {'confidence': 0.19, 'direction': 'neutral', 'noise_ratio': 0.9},
    'stakes': 'high',
    'time_constraint': 'seconds'
}
```

The mathematical formulation of DLWE is an optimization problem where an agent seeks to maximize expected utility given weak evidence:

a* = argmax_a E_θ[U(a, θ) | E]

where 'a' is an action, 'θ' represents the true state, and the evidence 'E' provides only weak information about 'θ'.
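As an illustrative sketch of this formulation (the confidence-pooling rule and the utility table are my assumptions, not part of the original scenario), the three weak signals above can be combined into a posterior over θ and an expected-utility-maximizing action:

```python
import math

# Illustrative sketch: pool the three weak signals from the scenario above into
# a posterior over the true state, then choose the action maximizing expected
# utility. The pooling rule and the payoff table U(a, theta) are assumptions.
signals = [
    {"confidence": 0.23, "direction": "bullish"},
    {"confidence": 0.31, "direction": "bearish"},
    {"confidence": 0.19, "direction": "neutral"},
]
states = ["bullish", "bearish", "neutral"]

# Softmax over confidence-weighted votes gives a (weak) posterior over states.
scores = {s: sum(sig["confidence"] for sig in signals if sig["direction"] == s)
          for s in states}
z = sum(math.exp(v) for v in scores.values())
posterior = {s: math.exp(scores[s]) / z for s in states}

# Assumed payoff table U(action, state).
utility = {
    ("buy", "bullish"): 1.0, ("buy", "bearish"): -1.0, ("buy", "neutral"): -0.1,
    ("sell", "bullish"): -1.0, ("sell", "bearish"): 1.0, ("sell", "neutral"): -0.1,
    ("hold", "bullish"): 0.0, ("hold", "bearish"): 0.0, ("hold", "neutral"): 0.0,
}
expected = {a: sum(posterior[s] * utility[(a, s)] for s in states)
            for a in ("buy", "sell", "hold")}
best_action = max(expected, key=expected.get)
```

With evidence this weak and conflicting, the expected utilities of 'buy' and 'sell' both come out slightly negative, so the maximizer settles on 'hold' instead of cycling between partial answers.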
DeepSeek’s recursive loops revealed something profound: when information is weak and distributed, the optimal strategy naturally decomposes into local information processing within bounded regions, distributed inference across specialized agents, causal reasoning to handle weak signal propagation, and dynamic team formation based on information dependencies.
Markov Blankets as Natural System Topology
The analysis of DeepSeek's reasoning patterns revealed that optimal solutions to DLWE problems naturally organize around Markov blanket structures. A Markov blanket for a variable 'X' makes 'X' conditionally independent of all other variables in the system:

P(X | MB(X), Y) = P(X | MB(X)) for any variable Y outside MB(X)

This isn't just a mathematical abstraction; it defines the natural topology of efficient distributed intelligence systems.
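The screening-off property can be checked numerically on a toy network; the structure (A → X → B, plus an unrelated C) and all the probabilities below are invented purely for this demonstration:

```python
from itertools import product

# Toy network A -> X -> B plus an unrelated variable C; the Markov blanket of X
# here is {A, B}. All probabilities are invented for this demonstration.
p_a = {0: 0.6, 1: 0.4}
p_x_given_a = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.3, 1: 0.7}}   # p_x_given_a[a][x]
p_b_given_x = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.25, 1: 0.75}}  # p_b_given_x[x][b]
p_c = {0: 0.5, 1: 0.5}

def joint(a, x, b, c):
    return p_a[a] * p_x_given_a[a][x] * p_b_given_x[x][b] * p_c[c]

def prob_x(x_val, evidence):
    """P(X = x_val | evidence) by brute-force enumeration of the joint."""
    num = den = 0.0
    for a, x, b, c in product((0, 1), repeat=4):
        assignment = {"a": a, "x": x, "b": b, "c": c}
        if any(assignment[k] != v for k, v in evidence.items()):
            continue
        den += joint(a, x, b, c)
        if x == x_val:
            num += joint(a, x, b, c)
    return num / den

# Conditioning on C, which lies outside the blanket {A, B}, changes nothing:
within = prob_x(1, {"a": 1, "b": 0})
extra = prob_x(1, {"a": 1, "b": 0, "c": 1})
```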
```python
class MarkovBlanketAgent:
    def __init__(self, agent_id, information_sources):
        self.agent_id = agent_id
        self.markov_blanket = self._compute_optimal_blanket(information_sources)

    def _compute_optimal_blanket(self, sources):
        # Include a source only if its information gain exceeds its processing cost
        blanket = []
        for source in sources:
            info_gain = self._mutual_information(source)
            processing_cost = self._processing_cost(source)
            if info_gain > processing_cost:
                blanket.append(source)
        return blanket
```

The breakthrough insight is that overlapping Markov blankets naturally define teams. Agents with significant blanket overlap share information dependencies and exhibit coordinated behavior, while agents with minimal overlap operate as specialized units with sparse interaction.
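A minimal off-chain sketch of that team-forming rule, using Jaccard overlap with a greedy assignment (the agent names, information sources, and the 0.4 threshold are all illustrative assumptions):

```python
def blanket_overlap(b1, b2):
    """Jaccard similarity between two Markov blankets (sets of sources)."""
    b1, b2 = set(b1), set(b2)
    return len(b1 & b2) / len(b1 | b2) if b1 | b2 else 0.0

def form_teams(blankets, threshold=0.4):
    """Greedily assign each agent to the first team whose pooled blanket
    overlaps its own beyond the threshold; otherwise start a new team."""
    teams = []  # each entry: [pooled_sources, member_ids]
    for agent_id, blanket in blankets.items():
        for team in teams:
            if blanket_overlap(team[0], blanket) > threshold:
                team[0] |= set(blanket)
                team[1].append(agent_id)
                break
        else:
            teams.append([set(blanket), [agent_id]])
    return teams

blankets = {
    "momentum_agent":  {"price", "volume", "volatility"},
    "meanrev_agent":   {"price", "volume", "spread"},
    "sentiment_agent": {"news", "social", "volume"},
}
teams = form_teams(blankets)  # momentum + meanrev pair up; sentiment stays solo
```

The two price-driven agents share enough of their blankets to form a team, while the sentiment agent, with only one shared source, remains a specialized unit, mirroring the coordinated-versus-sparse behavior described above.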
Protocol-Level Implementation
This mathematical structure can be embedded directly into decentralized protocols, creating self-organizing systems where topology emerges from information dependencies:
```solidity
contract MarkovBlanketProtocol {
    struct Agent {
        address id;
        bytes32[] blanketHashes;
        uint256 teamId;
    }

    struct Team {
        address[] members;
        bytes32[] sharedBlanket;
        uint256 consensusThreshold;
    }

    mapping(address => Agent) public agents;
    uint256 public constant OVERLAP_THRESHOLD = 50; // illustrative: % overlap needed to join

    function updateBlanket(bytes32[] memory newBlanket) public {
        agents[msg.sender].blanketHashes = newBlanket;
        reassignTeam(msg.sender);
    }

    function reassignTeam(address agentId) internal {
        uint256 bestOverlap = findBestTeamOverlap(agentId);
        if (bestOverlap > OVERLAP_THRESHOLD) {
            joinExistingTeam(agentId, bestOverlap);
        } else {
            createNewTeam(agentId);
        }
    }

    // findBestTeamOverlap, joinExistingTeam, and createNewTeam omitted for brevity
}
```

This protocol automatically organizes agents into teams based on their information-processing boundaries, creating dynamic topologies that adapt as the system evolves.
Distributed Causal Inference
While Markov blankets define system topology, causal inference helps understand how actions propagate through information networks. DeepSeek’s struggles revealed that traditional centralized causal inference fails with weak, distributed evidence.
```python
class DistributedCausalInference:
    def __init__(self, agent_network):
        self.network = agent_network
        self.local_graphs = {}

    def discover_causal_structure(self):
        # Each agent performs local causal discovery within its blanket
        for agent in self.network.agents:
            local_data = self.collect_blanket_data(agent)
            self.local_graphs[agent.id] = self.pc_algorithm(local_data)
        # Merge local graphs using blanket overlaps for global inference
        return self.merge_graphs()
```

The PC algorithm within each agent, constrained by its Markov blanket, focuses on uncovering causal links relevant to its specific informational context, avoiding the computational paralysis DeepSeek experienced when attempting a global view. This distributed approach allows for robust collective inference even from individually weak signals.
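The merge step left abstract above could, for instance, use simple edge voting across agents: an edge enters the global graph once enough local discoveries agree on it. The agent names and edges below are fabricated for illustration:

```python
from collections import Counter

def merge_graphs(local_graphs, min_votes=2):
    """Admit a directed edge into the global graph once at least
    min_votes local discoveries agree on it."""
    votes = Counter()
    for edges in local_graphs.values():
        votes.update(edges)
    return {edge for edge, n in votes.items() if n >= min_votes}

# Fabricated local results: each agent reports directed (cause, effect) pairs
# discovered within its own blanket.
local_graphs = {
    "agent_1": {("volume", "price"), ("news", "price")},
    "agent_2": {("volume", "price"), ("spread", "volatility")},
    "agent_3": {("volume", "price"), ("news", "price")},
}
global_edges = merge_graphs(local_graphs)
```

Edges seen by only one agent (here spread → volatility) are treated as individually weak signals and dropped, while edges corroborated across blanket overlaps survive into the global structure.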
The Intersection: Bounded Causality and Emergent Protocol Design
The fusion of Markov blankets and causal inference, driven by insights from the DLWE problem, yields “bounded causality.” This framework posits that autonomous agents operate within causal boundaries defined by their information processing limits and shared dependencies. DeepSeek’s recursive failure wasn’t a flaw but an implicit attempt to build this bounded causality.
This understanding has profound implications for decentralized AI:
Emergent Causation: Complex causal patterns arise from interacting agents, each solving its local DLWE. The Markov blanket-defined “teams” become the substrate for these emergent causal forces, preventing any single agent from being overwhelmed by global uncertainty.
Causal Responsibility: When multiple agents contribute to an outcome, Markov blankets help pinpoint causal responsibility. If an agent’s blanket contained sufficient information for it to have acted differently, it can be considered causally responsible, a crucial step for real-world accountability.
Protocol Design for Self-Organization: By encoding these principles into smart contracts, we can create systems that self-organize. An agent’s role or team in a DAO can be dynamically determined by its Markov blanket, leading to resilient, adaptive architectures. For instance, in a decentralized energy grid, agents’ blankets define their roles (Generation, Distribution, Consumption), and their overlaps dictate coordination needs, embodying the optimal distributed solution to the overall DLWE of grid management.
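The grid example admits a small sketch: classify each agent's role from what its blanket contains, and flag a coordination need wherever blankets overlap. The variable names and the role table are hypothetical:

```python
# Hypothetical role table: which variables characterize each grid role.
GRID_ROLES = {
    "generation":   {"fuel_supply", "turbine_state", "grid_frequency"},
    "distribution": {"grid_frequency", "line_load", "substation_state"},
    "consumption":  {"line_load", "demand_forecast", "local_storage"},
}

def classify(blanket):
    """Assign the role whose characteristic variables overlap the blanket most."""
    return max(GRID_ROLES, key=lambda role: len(GRID_ROLES[role] & set(blanket)))

def coordination_needed(b1, b2):
    """Overlapping blankets imply a shared dependency, hence coordination."""
    return bool(set(b1) & set(b2))

generator = {"fuel_supply", "turbine_state", "grid_frequency"}
feeder = {"grid_frequency", "line_load"}
```

Here the shared grid_frequency variable is exactly the blanket overlap that dictates where the generator and the feeder must coordinate.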
Practical Implications and Challenges
These theoretical insights, born from observing AI’s internal struggles with DLWE, pose practical challenges for developers and regulators:
Monitoring and Governance: Decentralized systems require a “DLWE-aware” monitoring framework. Protocol-level Markov blankets provide auditable hooks to observe interactions at team boundaries, focusing on critical information flows.
Accountability: Determining an agent’s accountability requires logging not just actions, but also its internal state and the observed elements of its Markov blanket, including confidence levels. This moves beyond simple causality to the epistemic state of the agent itself.
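One way to sketch such a log entry, capturing the action together with the agent's epistemic state at decision time (the field names and record format are assumptions, not a standard):

```python
from dataclasses import dataclass, field
import json
import time

# Hypothetical audit record: log the action alongside the observed blanket
# elements and their confidences, so causal responsibility can be assessed later.
@dataclass
class DecisionRecord:
    agent_id: str
    action: str
    blanket_observations: dict  # variable -> (confidence, direction) seen in-blanket
    timestamp: float = field(default_factory=time.time)

    def to_log_line(self) -> str:
        return json.dumps({
            "agent": self.agent_id,
            "action": self.action,
            "observed": self.blanket_observations,
            "t": self.timestamp,
        })

rec = DecisionRecord("agent_7", "hold",
                     {"signal_A": (0.23, "bullish"), "signal_B": (0.31, "bearish")})
line = rec.to_log_line()
```

A record like this preserves not just what the agent did but what weak evidence it could see when it did it, which is the information an accountability audit needs.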
Ethical Considerations: The ethical implications of decentralized agents making decisions based on weak evidence are significant. Embedding ethical constraints within causal inference models and making them auditable on-chain is paramount.
Explainable AI (XAI): XAI for decentralized agentic systems must explain individual decisions and the emergent causal dynamics of the entire network. Visualizing causal influence propagation across Markov blankets can offer insights into how the collective DLWE was solved.
The Future of Decentralized Intelligence
The convergence of decentralized agentic AI, Markov blankets, and causal inference, originating from an AI’s struggle with the DLWE problem, offers a powerful lens into the future of artificial intelligence. By understanding these informational boundaries and causal mechanisms, we can build transparent, robust, and ethically aligned distributed intelligence.
As AI agents increasingly shape our world, our ability to comprehend their behavior, assign responsibility, and guide their evolution will be paramount. Markov blankets provide the spatial definition of an agent’s world and its natural team affiliations—the optimal decomposition for addressing DLWE. Causal inference offers the temporal understanding of its influence—the robust decision-making mechanism under ambiguity. Together, these tools enable us to navigate the complex landscape of decentralized agentic AI, turning an AI’s recursive confusion into programmable, resilient system architectures.
The path forward demands interdisciplinary collaboration, pushing the boundaries of machine learning, statistics, philosophy, and ethics. Only then can we truly harness the power of decentralized intelligence while mitigating its risks, ensuring autonomous agents, designed for weak evidence, serve humanity’s best interests.
Until next time, TTFN.

