Beyond Reactive Chat
Proactive features and autonomous capabilities
Video Lesson Coming Soon
A video walkthrough for this chapter is in production. For now, dive into the written content below.
System Architecture — Chapter 4 View
This diagram reveals more of the OpenClaw architecture as you progress through chapters.
What You'll Learn
- ✓ Proactive agent behavior
- ✓ Scheduled tasks
- ✓ Event-driven actions
- ✓ Autonomous workflows
- ✓ Multi-agent coordination
- ✓ Human-in-the-loop patterns
In this chapter: 6 sections
The Heartbeat Daemon
Understand the proactive monitoring cycle of the heartbeat daemon.
Reactive chat means waiting for users to message first. OpenClaw's Heartbeat Daemon enables proactive agents that wake up on schedule and take independent action.
By default, the daemon checks in every 30 minutes—the agent wakes, observes conditions, and decides if action is needed. Maybe it checks a weather API and alerts users to storms, reviews metrics and escalates if something's wrong, or follows up on pending tasks.
The beauty is that these proactive actions use the same memory and reasoning systems as reactive messages. The Heartbeat Daemon is configurable: you can set it to 5-minute intervals for high-urgency monitoring or 24-hour intervals for daily summaries.
Importantly, heartbeat checks use only the LLM tokens needed to review conditions and decide on action—most heartbeats won't generate any output, saving costs. The daemon creates a new paradigm: your agent isn't a passive tool, but an active team member watching your interests.
A background process that keeps agents alive between user interactions. It runs periodic checks, refreshes memory, and lets agents proactively take action (e.g., monitoring systems, sending alerts) without waiting for external triggers.
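A heartbeat cycle like the one described above can be sketched in a few lines. This is an illustrative sketch only: the function names (`check_conditions`, `needs_action`, `heartbeat_once`) and the observation fields are hypothetical, not OpenClaw's actual API.

```python
import time

# 30-minute default interval described above, in seconds.
HEARTBEAT_INTERVAL = 30 * 60

def check_conditions():
    """Observe the world (APIs, metrics, pending tasks). Stubbed for the sketch."""
    return {"storm_warning": False, "pending_tasks": 2}

def needs_action(observations):
    """Decide whether anything warrants waking the full reasoning loop."""
    return observations["storm_warning"] or observations["pending_tasks"] > 5

def heartbeat_once():
    obs = check_conditions()
    if needs_action(obs):
        return "act"    # hand off to the agent's normal reasoning loop
    return "sleep"      # most heartbeats end here, generating no output

# A real daemon would loop:
#   while True: heartbeat_once(); time.sleep(HEARTBEAT_INTERVAL)
```

The key property is that the cheap observe-and-decide step runs every cycle, while the expensive reasoning loop only runs when `needs_action` fires.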
Offloading Heartbeat to Local Models
Compare cloud-only vs local pre-filtering for heartbeat optimization.
Heartbeat daemon checks can get expensive if you run them frequently—30-minute intervals add up to 48 checks daily. OpenClaw offers a cost optimization: offload heartbeat decision-making to Ollama, a free local LLM runner.
You can run a small open-source model (Mistral, Llama 2, or similar) locally using just 2-4GB of RAM. The heartbeat daemon uses this local model to do a quick preliminary analysis—'does this situation need attention?'—and only calls your primary cloud model if the local assessment says yes. This two-stage filtering can reduce heartbeat token costs by 90%.
For example, 48 daily heartbeat checks might cost $0.20 with cloud models but only $0.02 if you pre-filter with a local model. The local model runs on your same hardware, adding minimal latency (usually 0.5-1 second). This pattern—local cheap filtering feeding into cloud expensive reasoning—is a powerful cost optimization throughout OpenClaw.
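The two-stage filter can be sketched as follows, using the cost figures from the text. The `local_prefilter` function stands in for a small local model's "does this need attention?" call; the keyword check and cost constants are illustrative assumptions, not OpenClaw behavior.

```python
# Illustrative costs from the text: ~$0.20 across 48 daily cloud checks,
# while the local model runs free on your own hardware.
CLOUD_COST_PER_CHECK = 0.20 / 48

def local_prefilter(observation: str) -> bool:
    """Placeholder for the local model's preliminary 'needs attention?' judgment."""
    return "alert" in observation

def daily_cost(observations, escalate):
    """Only observations the local filter escalates incur cloud-model cost."""
    cloud_calls = sum(1 for o in observations if escalate(o))
    return cloud_calls * CLOUD_COST_PER_CHECK

checks = ["ok"] * 44 + ["alert: disk"] * 4   # 4 of 48 checks escalate
print(round(daily_cost(checks, local_prefilter), 4))   # prints 0.0167, vs 0.20 cloud-only
```

With only a handful of escalations per day, the cloud model is invoked rarely, which is where the roughly 90% savings comes from.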
Cron Scheduling: Three Scheduling Modes
Choose the scheduling mode that matches your task cadence.
While Heartbeat provides continuous monitoring, Cron scheduling handles specific-time automation. OpenClaw's scheduler supports three modes, each suited to different use cases.
Each mode is useful for different scenarios: 'at' mode for one-off events, 'every' mode for regular intervals, and 'cron' for complex recurring schedules. A Slack bot might use 'at' to announce a special event, 'every' to summarize daily logs, and 'cron' to archive old data weekly.
Scheduled tasks behave like heartbeat wakeups—they load the same memory, run the same reasoning loop, and can take any action. You can have dozens of scheduled tasks running on the same agent without coordination problems. The scheduler respects your agent's resource budget; if the agent is busy, tasks queue and run in order.
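The three modes from the Slack-bot example can be represented roughly like this. The dict shapes and job names are illustrative placeholders, not OpenClaw's actual scheduler schema.

```python
from datetime import datetime, timedelta

# One task per mode: 'at' for one-off events, 'every' for regular
# intervals, 'cron' for complex recurring schedules.
tasks = [
    {"mode": "at",    "when": datetime(2025, 6, 1, 9, 0), "job": "announce_event"},
    {"mode": "every", "interval": timedelta(hours=24),    "job": "summarize_logs"},
    {"mode": "cron",  "expr": "0 3 * * 0",                "job": "archive_data"},  # 03:00 every Sunday
]

def describe(task):
    if task["mode"] == "at":
        return f"{task['job']}: once at {task['when']:%Y-%m-%d %H:%M}"
    if task["mode"] == "every":
        return f"{task['job']}: every {task['interval']}"
    return f"{task['job']}: cron '{task['expr']}'"

for t in tasks:
    print(describe(t))
```

Standard five-field cron expressions (minute, hour, day-of-month, month, day-of-week) cover the "complex recurring" cases that fixed intervals can't, such as "every Sunday at 3 AM".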
The Five-Layer Feedback Loop
Follow the five layers of continuous feedback and improvement.
Real autonomy requires more than single-shot decisions; it requires learning from outcomes. OpenClaw's five-layer feedback loop captures this. Layer 1 is Execution—the agent takes action.
Layer 2 is Observation—the agent measures results (did the email send? how many users engaged?). Layer 3 is Memory—the agent updates its knowledge with what it learned.
Layer 4 is Analyze—the agent revises its mental models (maybe certain message types work better in mornings). Layer 5 is Adapt—the agent adjusts its future behavior accordingly.
These five layers cycle continuously, both within a single session and across days. This is how your agent improves: not from batch training, but from continuous real-world interaction.
The feedback loop requires accurate measurement—your skills and tools should return detailed results, not just 'success/failure'. Rich feedback data enables rich learning.
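One pass through the five layers can be sketched as a pipeline. The function names mirror the layers above; their bodies (and the engagement numbers) are made up for illustration and are not OpenClaw's API.

```python
def execute():                      # Layer 1: Execution - take action
    return {"action": "send_email", "sent": True}

def observe(result):                # Layer 2: Observation - measure results
    return {"sent": result["sent"], "opens": 12, "clicks": 3}

def remember(memory, metrics):      # Layer 3: Memory - record what was learned
    memory.append(metrics)
    return memory

def analyze(memory):                # Layer 4: Analyze - revise mental models
    avg_clicks = sum(m["clicks"] for m in memory) / len(memory)
    return {"avg_clicks": avg_clicks}

def adapt(model):                   # Layer 5: Adapt - adjust future behavior
    return "try_morning_sends" if model["avg_clicks"] < 5 else "keep_strategy"

memory = []
metrics = observe(execute())
memory = remember(memory, metrics)
strategy = adapt(analyze(memory))
print(strategy)
```

Note that Layer 2 returns structured metrics (`opens`, `clicks`) rather than a bare success flag; that richness is exactly what makes Layers 4 and 5 possible.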
Feedback Lag Timing and Expectations
Learn appropriate feedback loop timing for different domains.
Different types of feedback have different timing requirements. Cold outreach (like customer acquisition emails) has a feedback lag of 24-72 hours—you can't evaluate success immediately; you need to let the campaign run and measure responses.
Content feedback (like blog posts) has a 7-day lag—you need a week to see engagement metrics. Lead feedback (qualifying prospects) has a 30-day lag—the sales cycle takes time. Understanding these lags prevents premature optimization.
If you change your cold-email strategy and measure after 2 hours, you're measuring noise, not signal. Set your feedback loop intervals to match the feedback lag: cold outreach agents might adjust daily, content agents adjust weekly, lead agents adjust monthly. This alignment prevents whiplash-style over-optimization.
Your agent should be patient, collecting data across the appropriate time window before revising strategies. This maturity—understanding timing—separates novice agents from experienced ones.
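A simple way to enforce this patience is to gate strategy revisions behind each domain's feedback window. The lag values come from the text; the domain names and function are illustrative.

```python
from datetime import datetime, timedelta

# Feedback lags from the text: outreach 24-72h (use the upper bound),
# content ~7 days, leads ~30 days.
FEEDBACK_LAG = {
    "cold_outreach": timedelta(hours=72),
    "content": timedelta(days=7),
    "leads": timedelta(days=30),
}

def ready_to_evaluate(domain, started_at, now):
    """Only revise strategy once the domain's full feedback window has passed."""
    return now - started_at >= FEEDBACK_LAG[domain]

start = datetime(2025, 6, 1)
print(ready_to_evaluate("cold_outreach", start, start + timedelta(hours=2)))  # False: measuring noise
print(ready_to_evaluate("cold_outreach", start, start + timedelta(days=4)))   # True: window elapsed
```

An agent that checks this gate before each analyze/adapt pass naturally adjusts daily for outreach, weekly for content, and monthly for leads.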
Truly autonomous agents don't ask permission; they ask for forgiveness. But they also report every action for audit.
Multi-Agent Coordination Patterns
Visualize multi-agent coordination with routing and specialists.
Complex problems sometimes require orchestrating multiple agents. OpenClaw supports multi-agent deployments with a clear hierarchy: a primary agent coordinates, sub-agents handle specialized tasks.
Imagine a customer support system: the primary agent triages incoming messages, routing billing questions to a Billing Agent, technical issues to a Tech Agent, and complaints to an Escalation Agent. Each sub-agent specializes in its domain, using domain-specific memory and tools.
The primary agent waits for responses from sub-agents, synthesizes them, and responds to the user. Sub-agents can have their own heartbeat schedules and cron tasks.
Coordination happens through message queues (typically Redis or similar) and a Mission Control dashboard (at localhost:3001) where you can monitor all agents' activities in real-time. Multi-agent setups are more complex to debug but handle sophisticated workflows that single agents struggle with. You typically start with one agent and only split into multi-agent setups when you outgrow a single agent's scope.