🤖 Chapter 4 of 7 — Phase 2: Operating

Beyond Reactive Chat

Proactive features and autonomous capabilities

Video Lesson Coming Soon

A video walkthrough for this chapter is in production. For now, dive into the written content below.

System Architecture — Chapter 4 View

This diagram reveals more of the OpenClaw architecture as you progress through chapters. The components visible at this stage:

- Mission Control Dashboard (localhost:3001): TRUST Framework skill vetting active; ClawHavoc Defense (1,184 threats blocked); Project > Task > Model hierarchy (Haiku 85%, Sonnet 13%, Opus 2%)
- OpenClaw Gateway (port 18789, Node.js v22, 190K+ GitHub stars): Message → Context → LLM → Tools → Response → Memory
- Messaging Channels: WhatsApp, Telegram, Slack, Discord, Signal, iMessage
- Memory System: SOUL.md, USER.md, IDENTITY.md, MEMORY.md
- Heartbeat Daemon: proactive monitoring every 30 min; can offload to Ollama ($0/mo)
- 5-Layer Feedback Loop: Execute → Observe → Remember → Analyze → Adapt
- Cron Scheduler: at, every, and cron expressions
- Cost: $1,500 → $30/mo (97% cost reduction)
- Skills: ClawHub (5,700+ skills), MCP tool protocol, custom skills (30 min to build)

Chapter 4 of 7 — 57% Architecture Revealed

What You'll Learn

  • Proactive agent behavior
  • Scheduled tasks
  • Event-driven actions
  • Autonomous workflows
  • Multi-agent coordination
  • Human-in-the-loop patterns

The Heartbeat Daemon

Understand the proactive monitoring cycle of the heartbeat daemon.

Proactive Agent Cycle: How the Heartbeat Daemon Works

1. Timer Triggers: heartbeat fires every 30 minutes (configurable)
2. Agent Wakes: loads HEARTBEAT.md for monitoring instructions
3. Condition Check: scans for signals such as new emails, price changes, and deadlines
4. Decision: act if conditions are met; sleep if there is nothing to do
5. Action Taken: sends an alert, updates data, or triggers a workflow
6. Memory Logged: records what happened in the daily memory file

Reactive chat means waiting for users to message first. OpenClaw's Heartbeat Daemon enables proactive agents that wake up on schedule and take independent action.

By default, the daemon checks in every 30 minutes—the agent wakes, observes conditions, and decides if action is needed. Maybe it checks a weather API and alerts users to storms, reviews metrics and escalates if something's wrong, or follows up on pending tasks.

The beauty is that these proactive actions use the same memory and reasoning systems as reactive messages. The Heartbeat Daemon is configurable: you can set it to 5-minute intervals for high-urgency monitoring or 24-hour intervals for daily summaries.

Importantly, heartbeat checks use only the LLM tokens needed to review conditions and decide on action—most heartbeats won't generate any output, saving costs. The daemon creates a new paradigm: your agent isn't a passive tool, but an active team member watching your interests.

Heartbeat Daemon

A background process that keeps agents alive between user interactions. It runs periodic checks, refreshes memory, and lets agents proactively take action (e.g., monitoring systems, sending alerts) without waiting for external triggers.
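The six-step cycle above can be sketched as a simple loop. This is an illustrative sketch, not OpenClaw's actual implementation; `check_conditions`, `act`, and `log_memory` are hypothetical hooks standing in for whatever your agent runtime provides:

```python
import time

def heartbeat(check_conditions, act, log_memory,
              interval_minutes=30, max_beats=None, clock=time.sleep):
    """Wake on a timer, check conditions, and act only when needed."""
    beats = 0
    while max_beats is None or beats < max_beats:
        signals = check_conditions()            # e.g. new emails, deadlines
        if signals:
            act(signals)                        # alert / update / workflow
        log_memory({"beat": beats, "signals": signals})  # daily memory log
        beats += 1
        clock(interval_minutes * 60)            # sleep until the next beat
```

Note that the agent sleeps between beats rather than polling continuously, which is what keeps idle heartbeats nearly free.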

Offloading Heartbeat to Local Models

Compare cloud-only vs local pre-filtering for heartbeat optimization.

💰 Cloud vs Local Filtering: Two-Stage Heartbeat Cost Optimization

- ☁️ Cloud Only ($0.20/day): 48 checks × cloud model pricing, with full reasoning on every check
- 🏠 Local Pre-filter ($0.02/day): Ollama does a quick triage and only escalates to the cloud when needed (90% savings)

Heartbeat daemon checks can get expensive if you run them frequently: 30-minute intervals add up to 48 checks daily. OpenClaw offers a cost optimization: offload heartbeat decision-making to Ollama, a free local LLM runner.

You can run a small open-source model (Mistral, Llama 2, or similar) locally using just 2-4GB of RAM. The heartbeat daemon uses this local model to do a quick preliminary analysis—'does this situation need attention?'—and only calls your primary cloud model if the local assessment says yes. This two-stage filtering can reduce heartbeat token costs by 90%.

For example, 48 daily heartbeat checks might cost $0.20 with cloud models but only $0.02 if you pre-filter with a local model. The local model runs on your same hardware, adding minimal latency (usually 0.5-1 second). This pattern—local cheap filtering feeding into cloud expensive reasoning—is a powerful cost optimization throughout OpenClaw.
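The two-stage pattern can be sketched as follows. `local_triage` and `cloud_reason` are hypothetical callables, and the per-check cloud cost simply divides the chapter's $0.20/day figure across 48 checks:

```python
def run_heartbeats(observations, local_triage, cloud_reason,
                   local_cost=0.0, cloud_cost=0.20 / 48):
    """Return (results, total_cost) for a batch of heartbeat checks."""
    results, cost = [], 0.0
    for obs in observations:
        cost += local_cost                 # local model runs free ($0/mo)
        if local_triage(obs):              # "does this need attention?"
            cost += cloud_cost             # full cloud reasoning, only when escalated
            results.append(cloud_reason(obs))
    return results, cost
```

If the local model escalates only ~10% of checks (5 of 48 in this toy run), the daily spend lands near the $0.02 figure cited above.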

Cron Scheduling: Three Scheduling Modes

Choose the scheduling mode that matches your task cadence.

Three Scheduling Modes: Choose the Right Mode for Your Task

- 📌 At Mode (one-time): run once at a specific time, e.g. 'send report at 9am Tuesday'
- 🔁 Every Mode (interval): regular intervals such as every 6 hours or every 3 days
- 📋 Cron Mode (expression): 5-field cron syntax for complex schedules, e.g. '0 9 * * 1-5' = weekdays at 9am

While Heartbeat provides continuous monitoring, Cron scheduling handles specific-time automation. OpenClaw's scheduler supports three modes, each suited to different use cases.

Each mode is useful for different scenarios: 'at' mode for one-off events, 'every' mode for regular intervals, and 'cron' for complex recurring schedules. A Slack bot might use 'at' to announce a special event, 'every' to summarize daily logs, and 'cron' to archive old data weekly.

Scheduled tasks behave like heartbeat wakeups—they load the same memory, run the same reasoning loop, and can take any action. You can have dozens of scheduled tasks running on the same agent without coordination problems. The scheduler respects your agent's resource budget; if the agent is busy, tasks queue and run in order.

Scheduled tasks can be triggered in three ways:

- Fixed schedule: run at specific times (e.g., 9am daily)
- Event-triggered: run when specific conditions occur (file created, API response received)
- Manual trigger: run on demand via CLI or API call

The Five-Layer Feedback Loop

Follow the five layers of continuous feedback and improvement.

🔄 Feedback Loop: Five Layers of Continuous Improvement

Execute → Observe → Remember → Analyze → Adapt

Real autonomy requires more than single-shot decisions; it requires learning from outcomes. OpenClaw's five-layer feedback loop captures this. Layer 1 is Execute: the agent takes action. Layer 2 is Observe: the agent measures results (did the email send? how many users engaged?). Layer 3 is Remember: the agent updates its knowledge with what it learned. Layer 4 is Analyze: the agent revises its mental models (maybe certain message types work better in mornings). Layer 5 is Adapt: the agent adjusts its future behavior accordingly.

These five layers cycle continuously, both within a single session and across days. This is how your agent improves: not from batch training, but from continuous real-world interaction.

The feedback loop requires accurate measurement—your skills and tools should return detailed results, not just 'success/failure'. Rich feedback data enables rich learning.
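The five layers can be sketched as one iteration of a simple loop. Every callable here is a hypothetical hook, not OpenClaw's actual API, and `memory` stands in for whatever store your agent uses:

```python
def feedback_cycle(memory, execute, observe, analyze, adapt):
    """Run one pass of the five-layer loop and return the adapted strategy."""
    action = execute(memory)       # 1. Execute: take action
    outcome = observe(action)      # 2. Observe: measure results
    memory.append(outcome)         # 3. Remember: update knowledge
    model = analyze(memory)        # 4. Analyze: revise mental models
    return adapt(model)            # 5. Adapt: adjust future behavior
```

Notice that the observe step returns structured data, not a bare success flag; that is what gives the analyze layer something to learn from.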


Feedback Lag Timing and Expectations

Learn appropriate feedback loop timing for different domains.

Feedback Timing: Match Your Feedback Loop to the Domain

- 📧 Cold Outreach (24-72 hours): let campaigns run before measuring; optimizing after 2 hours measures noise
- 📝 Content (7 days): blog posts need a week for engagement metrics to stabilize
- 🎯 Lead Qualification (30 days): sales cycles take time; monthly agent adjustments prevent over-optimization

Different types of feedback have different timing requirements. Cold outreach (like customer acquisition emails) has a feedback lag of 24-72 hours—you can't evaluate success immediately, you need to let the campaign run and measure responses.

Content feedback (like blog posts) has a 7-day lag—you need a week to see engagement metrics. Lead feedback (qualifying prospects) has a 30-day lag—the sales cycle takes time. Understanding these lags prevents premature optimization.

If you change your cold-email strategy and measure after 2 hours, you're measuring noise, not signal. Set your feedback loop intervals to match the feedback lag: cold outreach agents might adjust daily, content agents adjust weekly, lead agents adjust monthly. This alignment prevents whiplash-style over-optimization.

Your agent should be patient, collecting data across the appropriate time window before revising strategies. This maturity—understanding timing—separates novice agents from experienced ones.
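One way to enforce this patience is to gate adaptation on the domain's feedback window. The lag values below mirror the figures in this section; the helper itself is a hypothetical sketch:

```python
# Hours to wait before feedback in each domain counts as signal, not noise.
FEEDBACK_LAG_HOURS = {
    "cold_outreach": 24,            # 24-72 h before responses are meaningful
    "content": 7 * 24,              # blog metrics need ~a week to stabilize
    "lead_qualification": 30 * 24,  # sales cycles run ~a month
}

def may_adapt(domain, hours_since_last_change):
    """Allow a strategy revision only once the domain's window has elapsed."""
    return hours_since_last_change >= FEEDBACK_LAG_HOURS[domain]
```

An agent that calls this check before every Adapt step cannot over-optimize on a two-hour sample, no matter how often its heartbeat fires.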

Truly autonomous agents don't ask permission; they ask for forgiveness. But they also report every action for audit.

Multi-Agent Coordination Patterns

Visualize multi-agent coordination with routing and specialists.

🌳 Multi-Agent Architecture: How Multiple Agents Coordinate

- 🎯 Primary Agent (coordinator): routes incoming work to specialists
  - 💰 Billing Agent (specialist): domain-specific memory + tools
  - 🔧 Tech Support Agent (specialist): technical docs + debug tools
  - 🔥 Escalation Agent (specialist): priority handling + human handoff

Complex problems sometimes require orchestrating multiple agents. OpenClaw supports multi-agent deployments with a clear hierarchy: a primary agent coordinates, sub-agents handle specialized tasks.

Imagine a customer support system: the primary agent triages incoming messages, routing billing questions to a Billing Agent, technical issues to a Tech Agent, and complaints to an Escalation Agent. Each sub-agent specializes in its domain, using domain-specific memory and tools.

The primary agent waits for responses from sub-agents, synthesizes them, and responds to the user. Sub-agents can have their own heartbeat schedules and cron tasks.

Coordination happens through message queues (typically Redis or similar) and a Mission Control dashboard (at localhost:3001) where you can monitor all agents' activities in real-time. Multi-agent setups are more complex to debug but handle sophisticated workflows that single agents struggle with. You typically start with one agent and only split into multi-agent setups when you outgrow a single agent's scope.
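A minimal sketch of the coordinator's triage might look like this. The keyword classifier and specialist callables are deliberate simplifications; a real deployment routes through message queues and lets the LLM itself do the classification:

```python
def route(message, specialists):
    """Forward a message to the right specialist and relay its reply."""
    text = message.lower()
    if any(w in text for w in ("invoice", "refund", "charge")):
        return specialists["billing"](message)       # billing questions
    if any(w in text for w in ("error", "crash", "bug")):
        return specialists["tech"](message)          # technical issues
    return specialists["escalation"](message)        # default: human handoff
```

The design point is the default branch: anything the coordinator cannot confidently classify falls through to the escalation path rather than being guessed at.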
