Getting Started with Agent-Based Simulation
This post is a high-level roadmap for building your first agent-based simulation with Prorok. We’ll walk through the main concepts and where to look in the platform so you can define entities, attach behavior, and run a simulation.
Concepts you need
Agent-based simulation in Prorok rests on a few ideas:
- World (multiverse) — A simulation run lives in a world: a single instance of a model running on a cluster. The cluster is the runtime (e.g. a local node started by the Portal or a remote node you connect to).
- Model / blueprint — The model (or blueprint) defines what exists in the world: entities (e.g. “Boid”, “Generator”, “Order”), components and variables on those entities, and behaviors that run each step or on events.
- Entities — The “agents”: individual objects with identity, state (variables), and optional behavior. They can be created and destroyed; they can interact via shared state or messages.
- Behaviors — Code or logic that runs in response to triggers (e.g. every step, on a timer, on an entity or data change). Behaviors read and write world state; the engine advances time and applies them.
- Step and time — Simulation advances in steps. Each step can run triggers (e.g. step events), update entity state, and produce a new snapshot. You can query state at any step and replay runs later.
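As a mental model (not Prorok's actual API), the step-and-snapshot idea can be sketched in a few lines of Python: the world advances step by step, each step runs behaviors and records a snapshot, and past state stays queryable for replay. All names here are illustrative.

```python
import copy

class World:
    """Toy step/snapshot loop; names are illustrative, not Prorok's API."""
    def __init__(self, entities, behaviors):
        self.entities = entities      # entity id -> state dict
        self.behaviors = behaviors    # callables triggered every step
        self.history = [copy.deepcopy(entities)]  # snapshot at step 0

    def step(self):
        for behavior in self.behaviors:
            behavior(self.entities)   # behaviors read/write world state
        self.history.append(copy.deepcopy(self.entities))  # new snapshot

    def state_at(self, step):
        return self.history[step]     # query any past step (replay)

# A "move right" behavior on one Boid-like entity.
def drift(entities):
    for state in entities.values():
        state["x"] += state["vx"]

world = World({"boid-1": {"x": 0.0, "vx": 1.0}}, [drift])
for _ in range(3):
    world.step()

print(world.state_at(0)["boid-1"]["x"])  # 0.0: initial snapshot preserved
print(world.state_at(3)["boid-1"]["x"])  # 3.0: state after three steps
```

The point of the snapshot-per-step shape is the last two lines: because every step produces a new snapshot rather than mutating the only copy, you can query state at any step and replay runs later.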
This is similar in spirit to multimethod tools like AnyLogic (where agent-based models have active objects with state and behavior), but implemented in Prorok’s stack: a multiverse runtime, gRPC/QUIC for clusters and clients, and a Portal UI to load examples, run nodes, and inspect worlds.
Where to start in the repo
- Portal — The desktop app that starts a local multiverse node, loads examples, and lets you open worlds and view state. Running the Portal and loading an example (e.g. Boids, Factory, Infrastructure) is the fastest way to see a simulation in action.

- Examples — Under prorok/examples/ you'll find:
  - Boids — Flocking with predators and obstacles; good for understanding entities, variables, and a step-driven loop.
  - Factory — Manufacturing-style flows.
  - Retail — Demand and operations.
  - Infrastructure — Grid simulation with generators, lines, loads, and repair teams.
- Model structure — Each example has a model (entities, components, behaviors) and often a driver and/or viewer. The model defines the blueprint; the driver or Portal steps the simulation; the viewer visualizes state (e.g. 3D or charts).
- Multiverse API — Worlds are created and controlled via the multiverse RPCs: create cluster, replace model, initialize, step. The protobuf definitions live under multiverse/protobuf/; clients use them to connect to a node and drive a world.
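That lifecycle — create cluster, replace model, initialize, step — is just a call sequence, and a client in any language walks it in order. Here is a rough Python sketch against a stand-in client; the method names mirror the RPC list above but are assumptions for illustration, not the generated protobuf stubs:

```python
class FakeMultiverseClient:
    """Stand-in for a generated gRPC client. Method names are assumptions
    based on the RPC list above (create cluster, replace model,
    initialize, step), not real Prorok stubs."""
    def __init__(self):
        self.log = []

    def create_cluster(self):
        self.log.append("create_cluster")
        return "cluster-1"                     # hypothetical cluster handle

    def replace_model(self, cluster, model):
        self.log.append(f"replace_model:{model}")

    def initialize(self, cluster):
        self.log.append("initialize")

    def step(self, cluster):
        self.log.append("step")

def run(client, model, steps):
    """Drive a world through the documented lifecycle, in order."""
    cluster = client.create_cluster()
    client.replace_model(cluster, model)
    client.initialize(cluster)
    for _ in range(steps):
        client.step(cluster)
    return client.log

print(run(FakeMultiverseClient(), "boids", 2))
# ['create_cluster', 'replace_model:boids', 'initialize', 'step', 'step']
```

Swap the fake for a real client generated from multiverse/protobuf/ and the driver loop stays the same shape: set up once, then step repeatedly.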
Defining entities and behavior
In general you will:
- Define entity types — What kinds of agents exist (e.g. Boid, Generator, Order).
- Add variables (state) — Per-entity or global: position, capacity, status, etc.
- Attach behaviors — Logic that runs on a trigger (e.g. “every step” or “when this entity is accessed”). Behaviors can read/write entity and global state; they implement the “rules” of your system.
- Initialize the world — Set initial entity count and state (or load from data).
- Step — Advance the simulation; optionally query state each step or at the end for analysis or visualization.
The exact APIs depend on whether you’re authoring a model in the multiverse format (e.g. blueprint + dynlib or scripted behaviors) or driving from an external client. The Boids and Infrastructure examples are the best reference for “entity + variable + step loop” in this codebase.
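The five steps above can be condensed into a toy model — again a conceptual sketch in plain Python, not the multiverse authoring format. A "Generator" entity type carries a capacity variable, an every-step behavior accumulates output, and initialization sets the entity count and state:

```python
# Toy model illustrating the five steps; nothing here is Prorok's real API.

# 1. Entity type: a dict factory standing in for a "Generator" blueprint.
def make_generator(capacity):
    # 2. Variables (per-entity state).
    return {"capacity": capacity, "output": 0.0, "status": "on"}

# 3. Behavior: runs every step, reads and writes entity state.
def produce(entities):
    for gen in entities:
        if gen["status"] == "on":
            gen["output"] += gen["capacity"]

# 4. Initialize the world: three generators with different capacities.
entities = [make_generator(c) for c in (10.0, 20.0, 5.0)]

# 5. Step: advance the simulation, then query state for analysis.
for _ in range(4):
    produce(entities)

total = sum(gen["output"] for gen in entities)
print(total)  # 140.0: (10 + 20 + 5) * 4 steps
```

In a real Prorok model the same roles are filled by the blueprint (entity types and variables) and by behaviors attached to triggers, with the engine rather than a bare `for` loop advancing time.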
Running your first simulation
- Build and run the Portal (see repo README / crate docs for your setup).
- Start the local node (Portal does this when it launches).
- Load an example (e.g. Boids or Infrastructure) so the Portal creates a cluster, loads the model, and initializes the world.
- Step or run the simulation from the UI (or via a script that calls the multiverse step RPC).
- Inspect state in the viewport(s) and any data views; use queries or replays to analyze outcomes.
From here you can modify an existing example (add entities, variables, or behaviors) or define a new model that matches your domain. For digital twins, you’ll eventually bind initial or ongoing state from your data sources; for AI grounding, you’ll feed scenario parameters from an LLM and compare simulation results to the model’s predictions.
For more on the “why” behind simulation-based prediction and where Prorok sits in the landscape, see Introducing Prorok AI. For grounding LLM outputs in simulation, see Grounding the Hallucinations.