Keyword: concept
The concept keyword defines a component of a brain that must be learned, imported, or programmed.
For example:
- Control a heater based on sensor inputs, to keep a room comfortable.
- Decide how much raw material to order to meet anticipated demand.
- Balance a pole on a moving cart.
- Pick the appropriate skill to use to accomplish a complex task with a robotic arm.
Each concept represents a translation from one or more inputs to an output.
Usage
The concept statement specifies the input sources and an output type. The input data stream from the simulator always acts as the initial input to the concept graph.
concept AbstractConceptName(Antecedent1, Antecedent2): OutputType {
# Concept definition omitted
}
Input types are not required because they are already specified elsewhere: the input stream type is provided in the graph definition, and the output types of other concepts are provided in their own definitions.
If your Inkling code includes more than one concept, you must specify which concept will generate the output of the trained brain. To mark the output concept, add the output keyword to the concept definition. For example:
output concept ControlTheSystem(PreviousConcept): ActionType
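Putting this together, the following sketch (with hypothetical type and concept names) shows a two-concept graph: SimState is declared once in the graph definition, SecondStep names FirstStep as its antecedent without restating its type, and the output keyword marks the concept that produces the brain's output.
graph (input: SimState) {
    concept FirstStep(input): IntermediateType {
        # Concept definition omitted
    }
    output concept SecondStep(FirstStep): ActionType {
        # Concept definition omitted
    }
}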
Concepts can be imported, learned, or programmed. The following sections give more details on each.
Learned concepts
Learned concepts specify a curriculum that defines how the AI should be taught. Standard concepts learn to produce an action based on their inputs. To define a learned concept, use the curriculum statement as follows:
concept Bar (input): Move {
curriculum {
# Curriculum definition goes here
}
}
Selector concepts
Selectors learn to pick a concept from a list of options. The output of the chosen concept becomes the output of the selector. The options represent skills or strategies that should be applied in different situations, and the selector learns to choose the most applicable one. Define a selector concept using the select keyword as follows:
concept PickOne(input): Action {
select GoLeft
select GoRight
curriculum {
# Curriculum definition goes here
}
}
The output types of the options must match the output type of the selector concept.
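For instance, in the following sketch (the state and action type names are hypothetical), both options and the selector produce the same Action type:
graph (input: SensorState) {
    concept GoLeft(input): Action {
        curriculum {
            # Curriculum definition omitted
        }
    }
    concept GoRight(input): Action {
        curriculum {
            # Curriculum definition omitted
        }
    }
    output concept PickOne(input): Action {
        select GoLeft
        select GoRight
        curriculum {
            # Curriculum definition omitted
        }
    }
}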
Important
Do not include the concepts being selected as inputs to the selector concept unless their proposed action values should be part of the decision about which one to select. In most cases, the state of the environment is used to make the selection.
Masking selector options
You can specify a condition to enable or disable selector options based on the state. Preventing the system from choosing skills you know are inappropriate in certain situations can speed up learning and make the result more robust. Specify a mask function as a property of the select statement as follows:
concept PickOne(input): Action {
select GoLeft {
mask function(s: BrainInputState): number {
# return 1 if this option should be prohibited, 0 otherwise
return s.is_going_left_clearly_a_bad_idea_now
}
}
select GoStraight {
# The mask function can be inline as above, or global like this:
mask MaskGoingStraight
}
select GoRight # if no mask function is specified, the option is always available.
curriculum {
# Curriculum definition goes here
}
}
The input to mask functions is the brain input state. The output is a number that is interpreted as a boolean: a value of 0 means the option is allowed, and any other value means the option is not allowed. These conditions are used during training and assessment as well as in exported brains.
Important
There must always be at least one unmasked option — if all options are masked out, the system will report a runtime error.
Programmed concepts
Programmed concepts are defined using the programmed keyword followed by a function definition or reference. The function parameters must match the order and types of the concept inputs, and the function output type must match the concept output type.
The function can be specified inline, or defined and named as a top-level function. A top-level function:
type State {x: number, y: number}
type Move {speed: number<0..10>}
function computeMove(state: State): Move {
var distance = Math.Hypot(state.x, state.y)
return {speed: distance * 0.2}
}
concept Bar (input): Move {
programmed computeMove
}
An inline function:
concept Bar (input): Move {
programmed function(state: State): Move {
var distance = Math.Hypot(state.x, state.y)
return {speed: distance * 0.2}
}
}
Imported concepts
Imported concepts let you use TensorFlow v1.15.2 compatible models trained on other platforms to train Bonsai brains. Bonsai supports only specific TensorFlow-compatible model formats for imported concepts.
To use imported concepts, import the model and use the import keyword:
concept ImportedConcept(input): Action {
import { Model: "MLModelName" }
}
Important
Imported concepts:
- can only have one input.
- cannot use image inputs.
- must have an input state with the same dimensions as the Inkling object it maps to.
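As an illustrative sketch (the types and model name are hypothetical), a graph input type with three numeric fields requires an imported model that expects a three-dimensional input:
type SimState {x: number, y: number, z: number}
type Action {command: number<0..10>}

graph (input: SimState): Action {
    concept Predict(input): Action {
        # "ThreeInputModel" is a placeholder; the imported model must accept a
        # 3-dimensional input to match SimState
        import { Model: "ThreeInputModel" }
    }
    output Predict
}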
If you prefer not to name the model explicitly in your concept definition, you can define the ML model as a constant and import the constant instead:
const ExternalModel = {
import { Model: "MLModelName" }
}
graph (input: SimState): Action {
concept ConceptA(input): Action {
import ExternalModel
}
output ConceptA
}
Tip
Using named functions allows you to debug them using the Inkling Debug Console in the Bonsai UI.
Input and output validation and interpretation
The Inkling compiler validates that the graph, concept, and simulator types in a concept graph are all consistent. In addition, the system will check that states sent to the brain during training or after export match the graph input type. If fields are missing, or have values that are not compatible with the specified type, the system will report an error.
For learned concepts, the type of the output determines whether the learning problem is treated as a regression or a classification, as well as how the output value may need to be rounded. Additionally, the output of a learned concept cannot mix nominal and non-nominal types: if the output type contains a nominal (categorical) variable, then all variables in the output structure must be nominal.
| number constraint | Example | Interpretation | Input use | Output use |
|---|---|---|---|---|
| none | number | Continuous | Floating-point scalar | Regression. Round to nearest int |
| range | number<1..7> | Continuous | Floating-point scalar | Regression. Round to nearest value in range |
| step range | number<1..7 step 1> | Ordinal | Floating-point scalar | Regression. Round to nearest value (honoring step) |
| unnamed enumeration | number<1,3,5> | Ordinal | Floating-point scalar | Regression. Round to nearest enumerated value |
| named enumeration | number<Left=0, Right=1> | Nominal | Categorical | Classification. |
Important
Choosing appropriate variable types is an important part of modeling your problem that affects the statistical operations performed by a brain as it learns:
- Use continuous variables in actions and states that can change smoothly within a range of values. For example, power_consumption: number<0..2500> can be any floating-point value in the range between 0 and 2500.
- Use ordinal variables in actions and states that have discrete values with a clear ordering. For example, transmission_gear: number<1, 2, 3, 4> can be one of four discrete choices (1st, 2nd, 3rd, or 4th gear) for increasing speeds.
- Use nominal (categorical) variables in actions and states that are separate options with no clear ordering. For example, paint_color: number<Blue=0, Red=1, Green=2, Yellow=3> can be any of the provided color choices, but the numerical values assigned to each choice are arbitrary. The fact that yellow is 3 and blue is 0 does not mean that yellow is "greater than" blue.
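For example, the following sketch shows two hypothetical output types for a learned concept: the mixed type would be rejected, while the all-nominal type is allowed.
# Hypothetical output types for a learned concept
type MixedAction {
    gear: number<Low=0, High=1>,   # nominal (categorical) field ...
    speed: number<0..10>           # ... mixed with a continuous field: not allowed
}
type CategoricalAction {
    gear: number<Low=0, High=1>,   # every field is nominal: allowed
    mode: number<Eco=0, Sport=1>
}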
Examples
Assume you have an AI you are training to get a high score in a game. To get a high score the AI has to act on the current state of the game and decide on an appropriate move.
The following examples define a concept GetHighScore, which takes in the current state of the game as input (input) and outputs a valid move (Move). The first example is learned; the second implements a simple programmed heuristic.
Example 1: learned concept
# A learned concept that will learn to play the game
type State {x: number, y: number}
type Move {speed: number<0..10>}
graph(input: State) {
concept GetHighScore (input): Move {
curriculum {
source simulator GameSim(action: Move): State {}
goal(s: State) { ... } # Goal details omitted
}
}
}
Example 2: programmed concept
# A programmed concept that implements a simple heuristic
type State {x: number, y: number}
type Move {speed: number<0..10>}
function computeMove(state: State): Move {
var distance = Math.Hypot(state.x, state.y)
return {speed: distance * 0.2}
}
graph(input: State) {
concept GetHighScore(input): Move {
programmed computeMove
}
}
Example 3: transform brain input before learning
inkling "2.0"
using Goal
type State {x: number, y: number}
type Action {speed: number<0..10>}
function transformInput(state: State): number {
var distance = Math.Hypot(state.x, state.y)
return distance
}
graph(input: State) {
concept TransformInput(input): number {
programmed transformInput
}
output concept ControlTheSystem(TransformInput): Action {
curriculum {
# Note that the simulator state output must match the overall graph input, not the
# concept input.
source simulator SystemSim(action: Action): State {}
goal(s: State) { ... } # Goal details omitted
}
}
}
Example 4: Programmed heuristic to select one of several proposed actions
# Control a system one way when particles are small, and differently when they're large
inkling "2.0"
using Goal
type State {
particleSizes: number[3],
conveyorSpeed: number,
throughput: number,
}
type Action {
conveyorSpeed: number<1..10>,
gap: number<45..145>
}
type SimConfig {
bigParticles: number,
smallParticles: number
}
simulator EnvSim(action: Action, config: SimConfig): State {
}
function selectionStrategy(state: State, smallAction: Action, largeAction: Action): Action {
# a programmed rule specifying when to use each strategy.
var meanSize = (state.particleSizes[0] + state.particleSizes[1] + state.particleSizes[2])/3
# If particles are small, use strategy one. Otherwise, use strategy two.
if meanSize < 10 {
return smallAction
}
return largeAction
}
graph (input: State) {
concept SmallParticleStrategy(input): Action {
curriculum {
source EnvSim
goal (s: State) {
maximize Throughput: s.throughput in Goal.RangeAbove(20)
}
lesson One {
scenario {
# Learn how to act when most particles are small
bigParticles: 10,
smallParticles: 100
}
}
}
}
concept LargeParticleStrategy(input): Action {
curriculum {
source EnvSim
goal (s: State) {
maximize Throughput: s.throughput in Goal.RangeAbove(20)
}
lesson One {
scenario {
# Learn how to act when most particles are large
bigParticles: 100,
smallParticles: 10
}
}
}
}
output concept ChooseStrategy(input, SmallParticleStrategy, LargeParticleStrategy): Action {
programmed selectionStrategy
}
}
Example 5: Learned selector choosing one of several proposed actions
# Control a system one way when particles are small, and differently when they're large
inkling "2.0"
using Goal
type State {
particleSizes: number[3],
conveyorSpeed: number,
throughput: number,
}
type Action {
conveyorSpeed: number<1..10>,
gap: number<45..145>
}
type SimConfig {
bigParticles: number,
smallParticles: number
}
simulator EnvSim(action: Action, config: SimConfig): State {
}
graph (input: State) {
concept SmallParticleStrategy(input): Action {
curriculum {
source EnvSim
goal (s: State) {
maximize Throughput: s.throughput in Goal.RangeAbove(20)
}
lesson One {
# Learn how to act when most particles are small
scenario {
bigParticles: 10,
smallParticles: 100
}
}
}
}
concept LargeParticleStrategy(input): Action {
curriculum {
source EnvSim
goal (s: State) {
maximize Throughput: s.throughput in Goal.RangeAbove(20)
}
lesson One {
# Learn how to act when most particles are big
scenario {
bigParticles: 100,
smallParticles: 10
}
}
}
}
output concept selectStrategy(input): Action {
select SmallParticleStrategy
select LargeParticleStrategy
curriculum {
source EnvSim
goal (s: State) {
maximize Throughput: s.throughput in Goal.RangeAbove(20)
}
lesson One {
scenario {
# The selector needs to learn to act across any mix of particles
bigParticles: number<10..100>,
smallParticles: number<10..100>
}
}
}
}
}
Example 6: Learned selector choosing one of several actions, with masking
# Control a system one way when particles are small, and differently when they're large
# Use selector concept masking to learn the fuzzy boundary in between.
inkling "2.0"
using Goal
type State {
particleSizes: number[3],
conveyorSpeed: number,
throughput: number,
}
# smaller than this: definitely small
const smallSizeLimit = 10
# larger than this: definitely big
const largeSizeLimit = 40
type Action {
conveyorSpeed: number<1..10>,
gap: number<45..145>
}
type SimConfig {
bigParticles: number,
smallParticles: number
}
simulator EnvSim(action: Action, config: SimConfig): State {
}
function DisallowSmallParticleStrategy(s: State): number {
# If particles are too big, don't use this strategy
var particlesAreTooBig = (s.particleSizes[0] > largeSizeLimit)
return particlesAreTooBig
}
graph (input: State) {
concept SmallParticleStrategy(input): Action {
curriculum {
source EnvSim
goal (s: State) {
maximize Throughput: s.throughput in Goal.RangeAbove(20)
}
lesson One {
# Learn how to act when most particles are small
scenario {
bigParticles: 10,
smallParticles: 100
}
}
}
}
concept LargeParticleStrategy(input): Action {
curriculum {
source EnvSim
goal (s: State) {
maximize Throughput: s.throughput in Goal.RangeAbove(20)
}
lesson One {
# Learn how to act when most particles are big
scenario {
bigParticles: 100,
smallParticles: 10
}
}
}
}
output concept selectStrategy(input): Action {
select SmallParticleStrategy {
# reference a global mask function
mask DisallowSmallParticleStrategy
}
select LargeParticleStrategy {
# define an inline mask function
mask function(s: State): number {
# If particles are too small, don't use this strategy
var particlesAreTooSmall = (s.particleSizes[0] < smallSizeLimit)
return particlesAreTooSmall
}
}
curriculum {
source EnvSim
goal (s: State) {
maximize Throughput: s.throughput in Goal.RangeAbove(20)
}
lesson One {
scenario {
# The selector needs to learn to act across any mix of particles
bigParticles: number<10..100>,
smallParticles: number<10..100>
}
}
}
}
}