BeamNG.tech provides various positional sensors (Camera, LiDAR, Ultrasonic, RADAR, IMU, and GPS). The properties for each of these sensors are editable. They will be discussed in turn below.
To open the Edit Window for a sensor, select the Edit button beside that sensor in the list box of the Attached Sensors Window.
Position Properties:
For all positional sensors, the corresponding Edit Window has a section which sets and displays the sensor position. This position is relative to the center of the vehicle (the vehicle’s first node, to be specific). Note that the position is not in the global world space of the map, although the axes are aligned with world space (the only difference is a translation, as the sketch below illustrates). The user can change the value for each axis by inputting it manually or with the +/- buttons beside each input box.
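As a rough illustration of this coordinate relationship, the following Python snippet (hypothetical names, not part of BeamNG.tech) converts a world-space point into the vehicle-relative frame by applying the translation:

```python
# Illustrative only: the sensor frame shares its axes with world space,
# so the conversion is a pure translation by the first node's position.
def world_to_vehicle_relative(sensor_world_pos, first_node_world_pos):
    return tuple(s - n for s, n in zip(sensor_world_pos, first_node_world_pos))

# Example: first node at (100.0, 50.0, 0.5) in world space and a sensor at
# (100.0, 47.5, 1.5) give a vehicle-relative position of (0.0, -2.5, 1.0).
print(world_to_vehicle_relative((100.0, 47.5, 1.5), (100.0, 50.0, 0.5)))
```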
Directly below the input boxes for each axis, there are various preset positions which can be chosen by the user. These positions are common vehicle-related positions and are provided for convenience. Selecting one of these preset positions will directly move the sensor to the location indicated by the preset button. These positions are as follows:
Center Of Gravity (Wheels Included) is the vehicle’s center of gravity, including the wheels in the computation. This is a common location for IMU sensors.
Center Of Gravity (Without Wheels Included) is the vehicle’s center of gravity, without including the wheels in the computation.
Front Axle Midpoint is the lateral midpoint of the vehicle’s front axle.
Rear Axle Midpoint is the lateral midpoint of the vehicle’s rear axle.
Vehicle Front Bumper Midpoint is the lateral midpoint of the vehicle’s front bumper (this should be the front-most point on the vehicle).
Vehicle Rear Bumper Midpoint is the lateral midpoint of the vehicle’s rear bumper (this should be the rear-most point on the vehicle).
Update Properties:
The Sensor Refresh Rate or Sensor Update Time (the name depends on the sensor) sets how often the sensor will be polled by BeamNG to fetch the latest reading. Small values provide data more regularly, but are more computationally expensive (this becomes noticeable when many sensors are used). The user should take this into consideration when creating ADAS sensor configurations, and use the largest value which still gives a reasonable rate of readings.
Data Collect Time is used by some sensors as the rate at which all readings taken so far are actually sent and made available. Used in conjunction with the Sensor Update Time, these two properties determine the size of the data packet which will be available to the user when they collect the data. For example, if the update time is set to 0.1 seconds and the collect time is set to 1 second, a batch of 10 readings will arrive to the user every second. The user should be aware of the dependence between these two rates.
Note that if the user requires the readings as soon as possible, the collect time should be set equal to the update time. In some cases, the user will not require readings to be made available immediately - perhaps they only need to be collected so that post-processing can be performed with them later (once the simulation has finished).
The following image highlights two cases for the two rate values.
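The relationship between the two rates can also be summarised in a short Python sketch (illustrative only; the function name is hypothetical):

```python
# Illustrative only: the number of readings per delivered batch follows
# directly from the two rates described above.
def readings_per_batch(update_time_s, collect_time_s):
    return round(collect_time_s / update_time_s)

print(readings_per_batch(0.1, 1.0))  # -> 10, matching the example above
```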
Other Common Properties:
The Visualise On Map checkbox toggles whether or not to render a small sphere at the sensor location while the sensor is live. The image below shows how this appears (while in Live Mode).
The Snap To Vehicle checkbox toggles whether or not to force the sensor onto the vehicle mesh when the sensor is live. The sensor will be moved to the closest point on the vehicle mesh, giving it a position different from the one which was selected. This is only relevant for sensors which are placed at positions away from the vehicle mesh.
The Update Priority value is in the range [0, 1] and appears for some sensors (those which run on the GPU - Camera, LiDAR, Ultrasonic, and RADAR). The update priority is a suggestion to the GPU scheduling algorithm used to prioritise sensor updates, where smaller values mean higher priority. This is done because the GPU loads are balanced so as to avoid spikes in the simulation; as a result, a sensor’s update can sometimes be delayed by a frame or two. On average, the sensor will still update at the rate which the user has set.
Each positional sensor also has editable properties which are specific to that sensor type. These are also discussed below.
Camera Sensors:
Camera Resolution sets the horizontal and vertical resolution of the camera images.
The camera frustum parameters are as follows:
Field Of View (FOV) sets the vertical field of view angle in degrees.
Note: the horizontal field of view is determined from this and the camera resolution, which together define the aspect ratio (a sketch of this computation follows below).
Near/Far Plane Distances are cutoff distances for the camera images. If the near plane is set to 1.0 meter, no information closer to the camera than 1.0 meter will appear in the images. Likewise, if the far plane is set to 100.0 meters, anything more distant than this (from the camera position) will not appear in the image.
The image below defines these parameters geometrically, by showing how they relate to the frustum of the camera.
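As a rough sketch of the aspect-ratio relationship mentioned above, the standard pinhole-camera relation can be used (illustrative only; the function name is hypothetical):

```python
import math

# Illustrative only: derive the horizontal FOV from the vertical FOV and
# the aspect ratio given by the camera resolution.
def horizontal_fov_deg(vertical_fov_deg, width_px, height_px):
    aspect = width_px / height_px
    half_v = math.radians(vertical_fov_deg) / 2.0
    return math.degrees(2.0 * math.atan(math.tan(half_v) * aspect))

# Example: a 70-degree vertical FOV at 640x480 gives roughly 86 degrees.
print(horizontal_fov_deg(70.0, 640, 480))
```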
The camera data acquisition parameters are as follows:
Render Color Image will provide the standard color image to the user.
Render Class Annotations will provide the user with an additional image, containing segmentations (annotated by class, using a color mapping which can be set by the user).
Render Instance Annotations will provide the user with an additional image, containing segmentations (annotated by unique instance rather than class this time).
Render Depth Image will provide the user with an additional image, containing depth data. Instead of an RGBA color per pixel, a single floating-point value is returned, representing the depth of the detection along that camera-world ray. This is a very useful feature for various applications.
Note: The user should not (currently) attempt to render both types of annotations; choose either one or the other. This is a known problem which is awaiting a fix. Please be aware of this for now.
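These camera properties can also be set programmatically. Below is a minimal sketch using the BeamNGpy Python API; it assumes an existing connection bng and a spawned vehicle, and the exact parameter names may differ between BeamNGpy versions:

```python
from beamngpy.sensors import Camera

# Sketch only: assumes an existing BeamNGpy connection `bng` and a spawned
# `vehicle`; parameter names may differ between BeamNGpy versions.
camera = Camera(
    'front_cam', bng, vehicle,
    pos=(0.0, -2.5, 1.0),          # vehicle-relative position
    resolution=(640, 480),         # horizontal x vertical, in pixels
    field_of_view_y=70,            # vertical FOV, in degrees
    near_far_planes=(0.1, 100.0),  # cutoff distances, in meters
    is_render_colours=True,        # standard color image
    is_render_annotations=True,    # class annotations
    is_render_instance=False,      # not mixed with class annotations (see note)
    is_render_depth=True)          # per-pixel depth values

images = camera.poll()             # fetch the latest reading(s)
```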
LiDAR Sensors:
At the top of the LiDAR’s Edit Window, there is a choice of modes:
Full 360 Degrees Mode will return LiDAR data from the full 360 degree horizontal range. This is the default mode for LiDAR sensors.
LFO Mode will slowly rotate the LiDAR (at a low frequency).
Static Mode will set the LiDAR to a fixed horizontal aperture, which will not rotate.
Vertical Resolution sets the number of vertical LiDAR layers (inside the vertical aperture) at which rays are cast.
Vertical Field Of View sets the vertical aperture, in degrees. The center is along the sensor’s forward direction, with half-angle limits vertically on either side.
Horizontal Field Of View sets the horizontal aperture, in degrees. This is not used with the Full 360 Degrees Mode.
Rotation Frequency sets the frequency of rotation, when the LiDAR is operating in LFO mode (it is not applicable for the other modes).
Max Detection Range sets the maximum distance limit, after which the LiDAR will not detect any objects.
The Include Segmentation Data checkbox toggles whether to also return segmentation (annotation) information. This is a separate array of readings.
The LiDAR visualisation will appear if the sensor is put into Live Mode. Iterating between Edit Mode and Live Mode may help the user with positioning, since it will reveal unforeseen problems which cannot be seen from the property values alone, such as when the LiDAR apertures clip the vehicle.
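These LiDAR properties can also be set programmatically. Below is a minimal sketch using the BeamNGpy Python API; it assumes an existing connection bng and a spawned vehicle, and the exact parameter names may differ between BeamNGpy versions:

```python
from beamngpy.sensors import Lidar

# Sketch only: assumes an existing BeamNGpy connection `bng` and a spawned
# `vehicle`; parameter names may differ between BeamNGpy versions.
lidar = Lidar(
    'lidar1', bng, vehicle,
    vertical_resolution=64,  # number of vertical layers
    vertical_angle=26.9,     # vertical aperture, in degrees
    frequency=20,            # rotation frequency (used in LFO mode)
    max_distance=120,        # maximum detection range, in meters
    is_360_mode=True,        # full 360-degree horizontal range
    is_annotated=False)      # no segmentation data in this sketch

point_cloud = lidar.poll()   # fetch the latest reading(s)
```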
Ultrasonic Sensors:
Ultrasonic Resolution sets the horizontal and vertical resolution of the sensor. The sensor uses a depth image (similar to what is available in the Camera sensor) to collect its initial data, so this resolution can be thought of in a similar way.
Since the depth camera is used, frustum parameters are also available for Ultrasonic sensors:
Field Of View sets the vertical field of view angle in degrees. The horizontal field of view is computed from this and the sensor resolution (produces an aspect ratio).
Near/Far Plane Distances are cutoff distances for the depth values. If the near plane is set to 1.0 meters, no information closer to the sensor than 1.0 meter will appear on the images. Likewise, if the far plane is set to 100.0 meters, anything more distant than this (from the sensor position) will not appear in the image.
The Ultrasonic sensor contains six parameters which are used to shape its beam: Range Roundness, Range Cutoff Sensitivity, Range Shape, Range Focus, Range Min Cutoff, and Range Direct Max Cutoff. These correspond directly to variables in the formula used to compute the sensor beam shape. The effect of these parameters is linked to the visualisation of the Ultrasonic sensor, so that if one of them is changed, the on-screen visualisation changes to accommodate the adjusted beam shape. The visualisation presents itself as a grey bulb-like shape, and these parameters can be used to widen it, make it longer or shorter, and so on. We recommend that the user experiments with the values if they require a specific shape, or otherwise chooses one of the preset values.
There are various buttons directly below the Ultrasonic Beam Properties section of the window. These are preset beam shapes. Selecting one of these will update the six beam shape parameters. Note that any changes the user has made will be lost by this process.
The following image shows some examples of different Ultrasonic beam shapes which can be generated by adjusting the six parameters.
The Ultrasonic sensor also contains two parameters related to its detection properties. The Sensitivity parameter provides a general sensitivity threshold for the sensor, where smaller values describe a more sensitive sensor and larger values a less sensitive one. The Window Width parameter relates to how large an area an object must present in order to be detected by the sensor. The user should experiment with these two parameters if they wish to achieve a specific sensor behaviour, or otherwise use the default values.
The Ultrasonic visualisation will appear if the sensor is put into Live Mode. This will replace the grey bulb-like beam shape visualisation which is used in Edit Mode.
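The beam shape and detection parameters can also be set programmatically. Below is a minimal sketch using the BeamNGpy Python API; it assumes an existing connection bng and a spawned vehicle, and the exact parameter names may differ between BeamNGpy versions:

```python
from beamngpy.sensors import Ultrasonic

# Sketch only: assumes an existing BeamNGpy connection `bng` and a spawned
# `vehicle`; parameter names may differ between BeamNGpy versions.
ultrasonic = Ultrasonic(
    'us_front', bng, vehicle,
    range_roundness=-1.15,         # the six beam shape parameters...
    range_cutoff_sensitivity=0.0,
    range_shape=0.3,
    range_focus=0.376,
    range_min_cutoff=0.1,
    range_direct_max_cutoff=5.0,   # ...described above
    sensitivity=3.0,               # general detection threshold
    fixed_window_size=10)          # "Window Width" detection parameter

reading = ultrasonic.poll()        # fetch the latest reading(s)
```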
RADAR Sensors:
RADAR Resolution sets the horizontal and vertical resolution of the sensor. The sensor uses a depth image (similar to what is available in the Camera sensor) to collect its initial data, so this resolution can be thought of in a similar way.
Since the depth camera is used, frustum parameters are also available for RADAR sensors:
Field Of View sets the vertical field of view angle in degrees. The horizontal field of view is computed from this and the sensor resolution (produces an aspect ratio).
Near/Far Plane Distances are cutoff distances for the depth values. If the near plane is set to 1.0 meters, no information closer to the sensor than 1.0 meter will appear on the images. Likewise, if the far plane is set to 100.0 meters, anything more distant than this (from the sensor position) will not appear in the image.
The RADAR sensor contains six parameters which are used to shape its beam: Range Roundness, Range Cutoff Sensitivity, Range Shape, Range Focus, Range Min Cutoff, and Range Direct Max Cutoff. These correspond directly to variables in the formula used to compute the sensor beam shape. The effect of these parameters is linked to the visualisation of the RADAR sensor, so that if one of them is changed, the on-screen visualisation changes to accommodate the adjusted beam shape. The visualisation presents itself as a grey bulb-like shape, and these parameters can be used to widen it, make it longer or shorter, and so on. We recommend that the user experiments with the values if they require a specific shape, or otherwise chooses one of the preset values.
Note that the six beam shape parameters for the RADAR sensor are the same as those used with the Ultrasonic sensor, but the beams used with RADAR are typically much larger in volume. The formula used to compute the beam is the same for both sensor types.
There are various buttons directly below the RADAR Beam Properties section of the window. These are preset beam shapes. Selecting one of these will update the six beam shape parameters. Note that any changes the user has made will be lost by this process.
The following image shows some examples of different RADAR beam shapes which can be generated by adjusting the six parameters.
The RADAR sensor is distinguished from other sensors (such as the Ultrasonic or LiDAR) by its post-processing. This is done to organise the raw returns from the simulator into structures which behave more like the way true RADAR sensors present themselves to the operator, such as through Range-Doppler or Plan-Position-Indicator plots. To control the post-processing, there are various parameters which can be set in the RADAR’s Edit Window. These are as follows:
Range Bins, Azimuth Bins and Velocity Bins describe the resolution of the final data structures. In short, the returns are placed into bins in either 2D or 3D space, and the final sizes of the bins are then represented with a size/colour on the plot (a sketch of this binning logic follows this list). Range is typically shown along the Y-Axis, although the axes can vary depending on the plot.
Min Range and Max Range describe the range interval which will be considered for post-processing. Anything outside this interval will be omitted from any plots or data structures, so it will not appear in the final data available to the user.
Similarly, Min Velocity and Max Velocity provide an interval for the velocity values. Any velocities outside of this interval will be snapped to its lower or upper limit respectively (unlike range values outside the range interval, which are omitted).
Azimuth Half Angle describes the horizontal (azimuthal) aperture half-angle, from the sensor direction to the left and right edges. This is mostly used when scope plots are to be produced.
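As a rough illustration of the binning logic described above (a hypothetical sketch, not the engine’s implementation):

```python
import numpy as np

# Illustrative only: returns outside the range interval are omitted,
# out-of-interval velocities are clamped, and the rest are binned in 2D.
def bin_radar_returns(ranges, velocities, range_bins=128, velocity_bins=64,
                      min_range=0.1, max_range=100.0,
                      min_velocity=-50.0, max_velocity=50.0):
    ranges = np.asarray(ranges)
    velocities = np.asarray(velocities)
    keep = (ranges >= min_range) & (ranges <= max_range)
    velocities = np.clip(velocities[keep], min_velocity, max_velocity)
    hist, _, _ = np.histogram2d(
        ranges[keep], velocities,
        bins=(range_bins, velocity_bins),
        range=((min_range, max_range), (min_velocity, max_velocity)))
    return hist  # bin counts, ready to be mapped to sizes/colours on a plot

hist = bin_radar_returns([5.0, 20.0, 150.0], [3.0, -80.0, 10.0])
```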
The RADAR visualisation will appear if the sensor is put into Live Mode. This will replace the grey bulb-like beam shape visualisation which is used in Edit Mode.
IMU Sensors:
The IMU sensor contains two parameters for smoothing the output: Acceleration Smoothing and Gyroscopic Smoothing. The former relates only to the accelerometer readings and the latter only to the gyroscopic (angular velocity) readings. These values represent the window size used in the smoothing; larger values smooth the signals more, as the sketch below illustrates.
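The effect of the window size can be shown with a simple moving average (illustrative only; the engine’s exact smoothing implementation may differ):

```python
import numpy as np

# Illustrative only: a moving-average smoother; a larger window size
# averages over more samples and therefore smooths the signal more.
def smooth(readings, window_size):
    kernel = np.ones(window_size) / window_size
    return np.convolve(readings, kernel, mode='valid')

noisy = np.sin(np.linspace(0, 10, 200)) + 0.2 * np.random.randn(200)
light = smooth(noisy, 5)    # small window: light smoothing
heavy = smooth(noisy, 25)   # large window: heavy smoothing
```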
The Include Gravity checkbox toggles whether or not to include acceleration due to gravity in the accelerometer computations. This will add an acceleration vector of magnitude 9.81 m/s² pointing downwards (in world space).
The Allow Wheel Nodes (On Snap) checkbox is only considered if the user has set the Snap To Vehicle checkbox. In this case, the wheel mesh will also be considered. This allows the IMU sensor to be attached to the wheels. If it is left unchecked, it will not be possible to snap the sensor to the wheel.
GPS Sensors:
The GPS sensor contains two parameters with which the user supplies an origin point (in Longitude, Latitude). This point maps to the world-space origin (point (0, 0, 0) on the map); a sketch of one possible form of this mapping follows below.
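One possible form of such a mapping is an equirectangular approximation (illustrative only, with hypothetical names; the engine’s exact projection may differ):

```python
import math

EARTH_RADIUS_M = 6378137.0  # WGS-84 equatorial radius

# Illustrative only: map a world-space (x, y) offset, in meters, onto
# (longitude, latitude), with (0, 0) landing exactly on the origin point.
def world_to_lon_lat(x_m, y_m, origin_lon_deg, origin_lat_deg):
    lat = origin_lat_deg + math.degrees(y_m / EARTH_RADIUS_M)
    lon = origin_lon_deg + math.degrees(
        x_m / (EARTH_RADIUS_M * math.cos(math.radians(origin_lat_deg))))
    return lon, lat

print(world_to_lon_lat(0.0, 0.0, -122.4, 37.8))  # -> (-122.4, 37.8)
```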
The Allow Wheel Nodes (On Snap) checkbox is only considered if the user has set the Snap To Vehicle checkbox. In this case, the wheel mesh will also be considered. This allows the GPS sensor to be attached to the wheels. If it is left unchecked, it will not be possible to snap the sensor to the wheel.