Enable model-based training acceleration


If you experience slow performance or limited simulator availability, you can accelerate concept training in Bonsai by enabling model-based training. With model-based training, Bonsai uses a neural-network model to learn the system dynamics of your training simulator. Data generated from the learned model and data from the simulator are then used in combination to improve the concept training.

The fidelity of the neural-network model improves over time as Bonsai trains with the connected simulators and generates more data. During training, Bonsai automatically adjusts the mix of data generated by the learned model and your simulation, based on the accuracy of the learned model and its contribution to assessment performance. Eventually, the trained model becomes the main source of concept-training data, and the original simulator no longer needs to be used outside of assessments.

A learned model can be shared across different concepts and brains if the targeted components use the same managed simulator within the same workspace and the simulator state, simulator action, and simulator config type definitions are consistent. Bonsai can use the simulator data generated across brains and concepts to continually improve the accuracy of the learned model.


Model-based training acceleration is only supported for online training with simulators. Model-based training is not currently supported for dataset training. Additionally, sharing data and learned models across brains and concepts for training acceleration is only supported for training with managed simulators.

Enable model-based training acceleration

To enable model-based training acceleration, set the Acceleration parameter in the ModelBasedTraining section of the algorithm clause:

algorithm {
  ModelBasedTraining: {
    Acceleration: "on"
  }
}
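
For context, the following minimal sketch shows where the clause sits inside a curriculum. The type, concept, and simulator field names here are illustrative assumptions, not required names:

inkling "2.0"

# Hypothetical state and action types for a generic control problem.
type SimState {
  position: number,
  velocity: number
}

type SimAction {
  force: number<-1 .. 1>
}

graph (input: SimState) {
  concept Control(input): SimAction {
    curriculum {
      source simulator (Action: SimAction): SimState {
      }
      algorithm {
        ModelBasedTraining: {
          Acceleration: "on"
        }
      }
    }
  }
}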

Supported modes of model-based training

Model-based training supports three modes: off, auto, and on.

Value   Description
off     DEFAULT. Acceleration is turned off and training uses only the simulation.
auto    Acceleration is enabled but used only when needed.
on      Acceleration is enabled and always used.

When acceleration is set to auto, Bonsai decides whether to use model-based training for a given concept at the beginning of that concept's first training session. The decision is based on whether the simulation runs slower than 10 seconds per iteration per simulator instance and whether the system can predict with high confidence that a learned model will improve training time. The decision to use (or not use) acceleration does not change if you stop and resume training.
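
To let Bonsai make this decision for you, set the parameter to "auto" instead of "on":

algorithm {
  ModelBasedTraining: {
    Acceleration: "auto"
  }
}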

When the acceleration feature is enabled, Bonsai displays the following message at the beginning of training:

"Model-based training acceleration is enabled for this concept. This functionality is currently in beta testing."

Best practices

To ensure better training stability and acceleration, consider the following:

  • Maintain a consistent simulator schema: The neural network model, and the simulator data used to train it, can be shared across different brains using the same managed simulator when the simulator state, action, and config types are consistent across the different brain specifications. Every time Bonsai encounters variations of the same simulator schema, the preexisting simulator data is segmented to create a new learned model. Maintaining a consistent simulator schema enhances the applicable scope of model-based training.
  • Decouple reward and termination: Decouple your reward and terminal functions from the simulation dynamics by defining the functions in Inkling or using Inkling goals. Decoupling reward and termination increases the reusability of the simulator and the generalizability of the learned model.
  • Use accurate type definitions: Using accurate types to define the state, action, and config spaces is important for effectively guiding the learned model in making state predictions. For example, if a particular state field is an integer inside the simulator, defining it as an integer in the brain specification as well forces the accelerator model to predict integer outcomes. Accuracy is particularly important when matching data across discrete (ordinal or nominal) and continuous types.
  • Constrain ranges appropriately: Use tight ranges for your state, action, and config types. Limiting the set of possible values improves normalization for the model input and increases model accuracy.
  • Use Inkling to define your sim config ranges: Defining your config ranges in Inkling (rather than inside the simulator) helps the fidelity of the accelerator model. For example, in a Moab scenario, you might randomize the configured ball size to improve training and generalize to a broader range of real-world scenarios. Expressing the ball-size range in the Inkling config type constraint, rather than having the simulator do the randomization, guides the accelerator model more effectively.
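
The practices above can be combined in a single hedged Inkling sketch. The field names, range bounds, and goal thresholds are illustrative assumptions loosely modeled on the Moab sample, not values from the original:

inkling "2.0"

using Math
using Goal

# Tight, accurate ranges improve normalization for the model input.
type SimState {
  ball_x: number<-0.1 .. 0.1>,      # meters
  ball_y: number<-0.1 .. 0.1>,
  ball_vel_x: number<-1 .. 1>,      # meters/second
  ball_vel_y: number<-1 .. 1>
}

type SimAction {
  input_pitch: number<-1 .. 1>,
  input_roll: number<-1 .. 1>
}

# Randomizing ball size in Inkling, not inside the simulator, lets the
# accelerator model see the configured range directly.
type SimConfig {
  ball_radius: number<0.015 .. 0.025>   # meters
}

graph (input: SimState) {
  concept MoveToCenter(input): SimAction {
    curriculum {
      source simulator (Action: SimAction, Config: SimConfig): SimState {
      }
      algorithm {
        ModelBasedTraining: {
          Acceleration: "auto"
        }
      }
      # Goals keep reward and termination out of the simulator,
      # increasing the reusability of the sim and the learned model.
      goal (State: SimState) {
        avoid FallOffPlate:
          Math.Hypot(State.ball_x, State.ball_y) in Goal.RangeAbove(0.08)
        drive CenterOfPlate:
          [State.ball_x, State.ball_y] in Goal.Sphere([0, 0], 0.02)
      }
      lesson RandomizeBallSize {
        scenario {
          ball_radius: number<0.015 .. 0.025>
        }
      }
    }
  }
}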