Spatial Memory

Spatial memory refers to human memory for spatial information, such as the geographical layout of a town or the interior of a house. As we navigate the world, we store information about our surroundings that forms a coherent spatial representation of the environment in memory. Similarly, the ZED builds and updates a spatial representation of its surroundings as it discovers an environment.

This representation is stored in the form of a heuristic map called an Area file. Area files are compact and efficient representations of an environment that are not meant to be visualized. They are useful for creating a consistent and repeatable experience in a specific area. They can also be used to localize multiple devices in the same area.


Spatial Memory is enabled by default in TrackingParameters. While the ZED moves through its environment, key images and contextual information are saved. When the tracking detects an already-visited zone, it uses this memory to compute a robust position estimate that corrects any accumulated drift. This technique is known as loop closure.

When a loop closure is detected, the drift is immediately corrected so that the most accurate estimate is always returned. However, since this correction can cause a sudden jump in the returned positions, TrackingParameters::enable_pose_smoothing can be used to spread the drift correction across the upcoming estimations.
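For instance, pose smoothing can be enabled when configuring tracking. A minimal sketch, assuming the ZED SDK 2 C++ API (check the parameter name against your SDK version):

```cpp
#include <sl/Camera.hpp>

int main() {
    sl::Camera zed;
    if (zed.open() != sl::SUCCESS) return 1;

    sl::TrackingParameters tracking_params;
    // Spread loop-closure corrections over several frames instead of
    // applying them as a single jump in the returned positions.
    tracking_params.enable_pose_smoothing = true;
    zed.enableTracking(tracking_params);

    // ... grab frames and query positions ...

    zed.close();
    return 0;
}
```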

Memorizing an Area

During tracking, the ZED will incrementally build an internal representation describing the viewed area. To save this representation as an Area file, use saveCurrentArea() during tracking, or end the tracking session with disableTracking(), passing a file path as an argument.

// Disable positional tracking and save an Area file
zed.disableTracking("filename.area");

The Area file contains the position of the stationary World Frame created at the beginning of the session. Saving and loading an Area file allows you to maintain and share a fixed reference frame between different sessions.
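The area can also be exported while tracking keeps running. A sketch, assuming the ZED SDK 2 C++ API, where zed is an opened sl::Camera with tracking enabled (saveCurrentArea() is asynchronous, so the export state is polled):

```cpp
// Start the export; this call returns immediately.
zed.saveCurrentArea("filename.area");

// Wait until the Area file has actually been written to disk.
while (zed.getAreaExportState() == sl::AREA_EXPORT_STATE_RUNNING)
    sl::sleep_ms(5);
```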

Loading an Area

To load an area file, simply add the path of the file to TrackingParameters and enable tracking.

TrackingParameters tracking_params;
tracking_params.area_file_path = "filename.area";
zed.enableTracking(tracking_params);

When tracking is enabled, the ZED will start searching for spatial similarities between the current and recorded area. Once it recognizes the space, TRACKING_STATE will switch from SEARCHING to OK and positional tracking will start using the previously recorded World Frame as reference frame.
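In practice, relocalization can be detected by polling the tracking state. A sketch, assuming the ZED SDK 2 C++ API, where zed is an opened sl::Camera with tracking enabled and an Area file loaded:

```cpp
sl::Pose pose;
sl::TRACKING_STATE state = sl::TRACKING_STATE_SEARCHING;

// The state stays SEARCHING until the recorded area is recognized.
while (state != sl::TRACKING_STATE_OK) {
    if (zed.grab() == sl::SUCCESS)
        state = zed.getPosition(pose, sl::REFERENCE_FRAME_WORLD);
}
// From here on, pose is expressed in the recorded World Frame.
```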

In certain cases, the camera may not be able to recognize its surroundings. You will need to create a new Area file if:

  • The spatial layout of the scene has changed or significant objects have moved
  • The area was covered very quickly or with specific camera angles
  • The area has many blank walls or identical surfaces
  • The starting position is far from the trajectory used during spatial memorization

Use Cases

Spatial memory has three common use cases:

  • Ensure a consistent experience in a specific area
  • Improve positional tracking by correcting drift
  • Localize multiple devices in the same space

Ensure a consistent experience

Some applications need to map an area before running multiple times in that specific area. This is the case for mixed-reality applications or robots that navigate a home or a warehouse. In this case, Spatial Memory enables the application to store a spatial representation of the environment for future use.

During new runs, the device can load the Area file previously recorded and recover the World Frame position and 3D data attached to it. Every virtual object or navigation map will appear in the same physical location, enabling a consistent experience between runs.

Improve positional tracking

As with IMUs, motion estimation works well locally but small estimation errors accumulate slowly over time. Spatial Memory mode enables the ZED to learn and recognize its surroundings during use. When the device recognizes an area, it corrects its position in space and eliminates the positional drift that may have accumulated.

Drift correction can sometimes cause jumps in the absolute pose; the relative pose is not corrected. If your application requires a smooth pose, use getPosition() with the Camera Frame as reference frame. To distribute the correction over multiple frames instead, you can build a correction buffer from the difference between the corrected camera path (obtained with getPosition() in the World Frame) and the relative pose.

Localize multiple devices in space

Loading an area file also allows multiple ZED cameras to share a common reference frame during a session.

The devices localize themselves with respect to the World Frame recorded in the Area file. A user can specify the position of this reference frame using an Initial World Transform; otherwise the World Frame is located at the initial position of the camera when tracking with spatial memory was started. Without spatial memory, different devices cannot determine their absolute position in the world and will track their position with respect to their own starting point.
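As a sketch of this setup (ZED SDK 2 C++ API assumed; shared_map.area is an example filename), each device loads the same Area file and may set the initial world transform:

```cpp
// Run on every device so all cameras share one World Frame.
sl::TrackingParameters tracking_params;
tracking_params.area_file_path = "shared_map.area";

// Optional: choose where the shared World Frame is placed.
// A default-constructed sl::Transform is assumed to be identity here.
sl::Transform initial_pose;
tracking_params.initial_world_transform = initial_pose;

zed.enableTracking(tracking_params);
```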