Temporal Agent Runtime and Profiling

Chronologue is designed as a temporal compiler for agent behavior. Inspired by CUDA’s host-device execution and profiling stack, the system allows users and agents to define structured, time-aware plans, coordinate action execution, and record performance over time using trace-based profiling.

The runtime turns the calendar into an execution surface. It doesn’t just track when things happen—it controls how agents behave, learn, and adapt across time.


1. Introduction

Chronologue’s runtime operates as the execution layer of the Agent DSL. It takes in structured agent plans, schedules them via a planner queue, dispatches execution through an agent interface, and logs performance metadata for auditing, learning, and optimization.

This enables:

  • Reliable delegation of tasks to agents
  • Auditability of execution timelines
  • Feedback-driven planning through time-aware profiling

2. Host-Device Runtime Model

Chronologue mirrors CUDA’s execution architecture:

Chronologue Role   CUDA Analogy    Description
----------------   -------------   --------------------------------------------
agent_plan         Kernel Launch   Defines task intent and timing
executor           GPU threads     Executes plan as structured action
scheduler          Grid/block      Maps time-based scheduling to planner queue
tempo_token        Stream config   Encodes timing granularity and alignment
profiler           CUPTI/Nsight    Logs execution metadata and feedback
MemPort            cudaMemcpy      Transfers memory context between agents

The user or LLM planner acts as the host, while the executor represents the agent-side runtime.
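
To make the mapping concrete, here is a minimal sketch of the host-side launch path, assuming a simple in-process planner queue. The names AgentPlan, PlannerQueue, submit, and next_plan are illustrative placeholders, not the actual Chronologue API.

from collections import deque
from dataclasses import dataclass
from typing import Optional

@dataclass
class AgentPlan:
    # "Kernel launch": declares intent and timing for the agent-side runtime.
    uid: str
    content: str
    scheduled_for: str              # ISO-8601 timestamp
    tempo_token: str = ""           # e.g. "<tempo:EveningReview>"

class PlannerQueue:
    # "Grid/block": maps time-based scheduling onto a simple FIFO queue.
    def __init__(self):
        self._plans = deque()

    def submit(self, plan: AgentPlan) -> None:
        # Host side (user or LLM planner) launches a plan.
        self._plans.append(plan)

    def next_plan(self) -> Optional[AgentPlan]:
        # Device side (executor) pulls the next plan to run.
        return self._plans.popleft() if self._plans else None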


3. Compiler and Runtime Phases

Chronologue’s runtime architecture mirrors a DSL compiler stack:

  • Plan: DSL input via prompt or structured agent_plan schema
  • Schedule: Resolve timing constraints (scheduled_for, tempo_token)
  • Execute: Run agent behavior and persist results
  • Profile: Capture runtime data for performance modeling
  • Reflect: Optionally summarize, tag, or revise based on outcome

Each phase is implemented as a module in the runtime/ directory.
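
Read end to end, the five phases compose like a small function pipeline. The sketch below is a rough orientation only; the stub functions stand in for the real runtime/ modules, and their names and return shapes are illustrative assumptions.

from datetime import datetime, timezone

def schedule(plan):
    # Schedule: resolve timing constraints (scheduled_for, tempo_token).
    return {**plan, "queued_at": datetime.now(timezone.utc).isoformat()}

def execute(plan):
    # Execute: dispatch the agent behavior; stubbed as an immediate success.
    return {"status": "completed", "output": f"ran: {plan['content']}"}

def profile(plan, result):
    # Profile: capture runtime metadata for performance modeling.
    return {"executed_at": datetime.now(timezone.utc).isoformat(),
            "status": result["status"]}

def reflect(plan, profiling):
    # Reflect: optionally summarize, tag, or revise based on the outcome.
    return {"note": f"'{plan['content']}' finished with status {profiling['status']}"}

plan = {"content": "Reflect on the day", "scheduled_for": "2025-05-11T20:00:00Z"}
print(reflect(plan, profile(plan, execute(schedule(plan)))))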


4. Agent Execution Lifecycle

The runtime pipeline consists of the following steps:

  1. Queueing
    The agent_plan enters the planner queue (scheduler.py) with intent, constraints, and optional conditions.

  2. Execution
    The executor (executor.py) dispatches the action, logging executed_at, duration_ms, and status. Memory traces are created if applicable.

  3. Profiling
    The profiler (profiler.py) compares the actual execution against the plan. It logs:

    • Tempo alignment
    • User feedback
    • Latency and deviation

  4. Reflection or Revision
    The system may generate a reflection, suggest a follow-up, or annotate misalignment.
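
A minimal sketch of this loop, assuming a synchronous executor and reusing the illustrative AgentPlan and PlannerQueue from the sketch above; run_next and the fields it logs mirror the step names here rather than the real executor.py/profiler.py interfaces.

from datetime import datetime, timezone
import time

def run_next(queue, action):
    # 1. Queueing: pull the next agent_plan from the planner queue.
    plan = queue.next_plan()
    if plan is None:
        return None

    # 2. Execution: dispatch the action and log executed_at, duration_ms, status.
    executed_at = datetime.now(timezone.utc)
    started = time.monotonic()
    try:
        output = action(plan)
        status = "completed"
    except Exception as exc:
        output, status = str(exc), "failed"
    duration_ms = int((time.monotonic() - started) * 1000)

    # 3. Profiling: compare the actual start time against scheduled_for.
    scheduled = datetime.fromisoformat(plan.scheduled_for.replace("Z", "+00:00"))
    deviation_ms = int((executed_at - scheduled).total_seconds() * 1000)

    # 4. Reflection or revision would consume this record downstream.
    return {"uid": plan.uid, "status": status,
            "executed_at": executed_at.isoformat(),
            "duration_ms": duration_ms, "deviation_ms": deviation_ms,
            "output": output}

queue = PlannerQueue()
queue.submit(AgentPlan(uid="agent-plan-2025-05-11-reflect",
                       content="Reflect on the day",
                       scheduled_for="2025-05-11T20:00:00Z"))
print(run_next(queue, lambda plan: f"ran: {plan.content}"))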


5. Profiling Metadata

Each executed trace includes a nested profiling object:

Example:

profiling: {
  schema_version: "v1",
  executed_at: "2025-05-11T20:03:12Z",
  duration_ms: 182000,
  deviation_ms: 192000,
  tempo_token: "<tempo:EveningReview>",
  tempo_alignment: "partial",
  feedback_score: 4,
  agent_latency_ms: 2300,
  reward_signal: 0.7
}

The runtime writes this block once execution completes and optionally includes additional fields like:

  • attempts[]: fallback or retry logs
  • execution_result: return_type, status, output trace

See the Trace Profiling Schema for the full specification.
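
For orientation, the block above maps naturally onto a typed schema. The dataclass below is a hedged sketch of what schemas/profiling.py might declare; the field names follow the example, the value set for tempo_alignment is an assumption, and the actual schema file remains the source of truth.

from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional

@dataclass
class ProfilingRecord:
    schema_version: str                  # e.g. "v1"
    executed_at: str                     # ISO-8601 timestamp of actual start
    duration_ms: int                     # wall-clock execution time
    deviation_ms: int                    # offset from scheduled_for
    tempo_token: str                     # e.g. "<tempo:EveningReview>"
    tempo_alignment: str                 # e.g. "full" | "partial" | "missed" (illustrative values)
    feedback_score: Optional[int] = None
    agent_latency_ms: Optional[int] = None
    reward_signal: Optional[float] = None
    attempts: List[Dict[str, Any]] = field(default_factory=list)   # fallback or retry logs
    execution_result: Optional[Dict[str, Any]] = None               # return_type, status, output trace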


6. Filesystem and Module Structure

Chronologue’s runtime architecture is modular and testable.

chronologue/
├── schemas/
│   ├── agent_plan.py
│   ├── feedback_trace.py
│   └── profiling.py
├── runtime/
│   ├── scheduler.py          # queues and time resolution
│   ├── executor.py           # triggers agent actions
│   ├── profiler.py           # computes tempo and latency metrics
│   └── queue.py              # interfaces with planner queue
├── tempo/
│   └── tokens.py             # standardizes tempo parsing and alignment
├── api/
│   ├── routes_agent.py
│   └── routes_feedback.py
└── frontend/
    ├── AgentQueuePanel.tsx
    ├── FeedbackModal.tsx
    └── ProfilerTimeline.tsx


7. Developer Guidelines for Extending Runtime

  • Use schema-bound inputs and outputs (agent_plan, profiling)
  • Wrap all agent execution in a traceable event with UID
  • Log profiling data in a structured, versioned format
  • Store failed and successful attempts with audit reasons
  • Include trace_id and planned_by metadata for every run
  • Validate tempo alignment using registered token windows
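
The sketch below shows one way these guidelines can fit together in a single execution wrapper, reusing the illustrative ProfilingRecord idea above. The uuid-based trace IDs, the log_trace helper, and the traces.jsonl sink are placeholders, not the real runtime API.

import json
import uuid
from datetime import datetime, timezone

def traceable_run(plan, action, planned_by="user"):
    # Wrap agent execution in a traceable event with a UID and trace_id.
    record = {
        "uid": plan["uid"],
        "trace_id": f"trace-{uuid.uuid4().hex[:8]}",
        "planned_by": planned_by,
        "profiling": {"schema_version": "v1",
                      "executed_at": datetime.now(timezone.utc).isoformat()},
        "attempts": [],
    }
    try:
        record["execution_result"] = {"status": "completed", "output": action(plan)}
    except Exception as exc:
        # Store failed attempts with an audit reason rather than discarding them.
        record["attempts"].append({"status": "failed", "reason": str(exc)})
        record["execution_result"] = {"status": "failed"}
    log_trace(record)
    return record

def log_trace(record):
    # Persist profiling data in a structured, versioned format (JSON Lines here).
    with open("traces.jsonl", "a") as fh:
        fh.write(json.dumps(record) + "\n")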

8. Execution Trace Example

{
  uid: "agent-plan-2025-05-11-reflect",
  type: "agent_plan",
  scheduled_for: "2025-05-11T20:00:00Z",
  content: "Reflect on the day",
  profiling: {
    executed_at: "2025-05-11T20:03:12Z",
    duration_ms: 182000,
    deviation_ms: 192000,
    tempo_token: "<tempo:EveningReview>",
    tempo_alignment: "partial",
    feedback_score: 4,
    reward_signal: 0.7,
    agent_latency_ms: 2300,
    execution_result: {
      status: "completed",
      trace_id: "trace-456",
      return_type: "trace"
    }
  }
}

This execution log enables system tuning, plan evolution, and agent self-improvement.
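
Note how the numbers in this trace line up: the plan was scheduled for 20:00:00Z and actually started at 20:03:12Z, so deviation_ms is 192 seconds (192,000 ms), while duration_ms records the 182-second run itself. A small sketch of that arithmetic follows; the timestamp parsing is only an assumption about how profiler.py might derive the value.

from datetime import datetime

def deviation_ms(scheduled_for, executed_at):
    # Millisecond offset between the planned start and the actual start.
    fmt = "%Y-%m-%dT%H:%M:%S%z"
    planned = datetime.strptime(scheduled_for.replace("Z", "+0000"), fmt)
    actual = datetime.strptime(executed_at.replace("Z", "+0000"), fmt)
    return int((actual - planned).total_seconds() * 1000)

print(deviation_ms("2025-05-11T20:00:00Z", "2025-05-11T20:03:12Z"))  # 192000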


9. Related Documents

  • Agent DSL and Execution Model
  • Trace Profiling Schema
  • Tempo Token Specification
  • Memory Trace Schema
  • MemPort Context Export