Prerecorded Configuration

To use a prerecorded sequence of frames in the capturer, the camera_type field in cameraconfig.json must be set to recorder.
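
A minimal sketch of the relevant entry (assuming camera_type sits at the top level of cameraconfig.json, which may differ in your setup):

```json
{
  "camera_type": "recorder"
}
```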

Device

In the case of datasets, the device configuration is read only from the general section and applied to all "cameras" in the same way.

  • input_path: Path to the main folder containing the recorded frames, with per-camera subfolders in the same format as produced by the RGBPMRecorder. This field is required.
  • loop_frames: Decides whether to loop back to the first frame of the prerecorded sequence once all frames have been played back.
  • allow_frame_loss: Indicates whether frames may be dropped while playing back the prerecorded sequence.
  • ensure_first: Forces the internal allow_frame_loss to false until the first frame has been consumed, then restores the value set by the user. Needed for audio synchronization.
  • input_config: [metadata | cameraconfig] Decides whether reconstruction parameters (postprocessing) are taken from capture_metadata.json or cameraconfig.json.
  • input_calib: [metadata | cameraconfig] Decides whether the transformations (trafos) are taken from capture_metadata.json or cameraconfig.json.
    • If cameraconfig is selected but contains no calibration, all "cameras" are assumed with identity transformations. This is useful for recalibrating the dataset, which is only possible if the calibration board is present.
    • If cameraconfig is selected and a list of serials is given, only those cameras are used, with the trafos taken from the metadata.
  • framerate_config: [metadata | cameraconfig | timestamps | none] Indicates where to read the playback speed configuration from.
    • cameraconfig: use the framerate set in cameraconfig.
    • metadata: use the framerate stored in capture_metadata.
    • timestamps: follow the frames' capture timestamps stored in timestamps.txt.
    • none: deliver frames as soon as they are available.
  • first_frame and last_frame: Restrict playback of the prerecorded sequence to the frames between the given indices.

Take a look at cameraconfig-prerecorded.json for an example.
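
For orientation, a hedged sketch of the device fields for a prerecorded dataset (the placement under a general section follows the note above, but the exact layout and the example values are assumptions; treat cameraconfig-prerecorded.json as the authoritative reference):

```json
{
  "general": {
    "input_path": "/data/recordings/session_01",
    "loop_frames": true,
    "allow_frame_loss": false,
    "ensure_first": true,
    "input_config": "metadata",
    "input_calib": "metadata",
    "framerate_config": "timestamps",
    "first_frame": 0,
    "last_frame": 500
  }
}
```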

Camera Config

The camera configuration can be omitted or left at its default values, since the resolution depends solely on the resolution the sequence was recorded at. The only field that may be read is fps, and only if framerate_config in the device configuration is set to cameraconfig.
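
If framerate_config is set to cameraconfig, the playback rate would come from the fps field, roughly as in this sketch (the surrounding section name and structure are assumptions):

```json
{
  "camera": {
    "fps": 30
  }
}
```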

Dataset types

Depending on the source of the geometry data, there are two main dataset types: depth and XYZ.

Depth datasets use color and depth images and process the full pipeline to output the pointcloud data. XYZ datasets read color and PM data directly to produce the pointcloud data.

Depth

This dataset type is the rawest pipeline: it is intended to use the prerecorded sequence as if its frames came directly from a camera, computing the geometry from depth images and camera parameters.

The required data in the Pre-Recorded-Dataset is /color and /depth inside each camera's folder, plus all the camera and capture metadata stored in capture_metadata.json.
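
A possible on-disk layout for a depth dataset (the camera folder names are placeholders, and the exact location of capture_metadata.json is an assumption):

```text
<input_path>/
├── capture_metadata.json
├── <camera_serial_A>/
│   ├── color/
│   └── depth/
└── <camera_serial_B>/
    ├── color/
    └── depth/
```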

XYZ

XYZ datasets work together with the XYZEncoder to generate general-purpose RGBPMs from the given positions. The required data in the Pre-Recorded-Dataset is /color and /pm.

The required fields in capture_metadata.json are:

  • dScale: The scale of the positions to encode, in meters.
  • bounding_box: The box containing the complete pointcloud, used to increase encoding and decoding precision. It has to be in the same units as the input depth or pm data.
  • image_shape: The image shape of the largest position file in /pm.
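
A hedged sketch of how these fields might look in capture_metadata.json (the example values, the min/max ordering of bounding_box, and the height/width ordering of image_shape are all assumptions):

```json
{
  "dScale": 0.001,
  "bounding_box": [-1.0, 1.0, -1.0, 1.0, 0.0, 2.5],
  "image_shape": [1080, 1920]
}
```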

Considerations

There are some key points to be aware of when using a prerecorded sequence in the capturer, to avoid possible issues:

  • Missing frames in the Pre-Recorded-Dataset. If allow_frame_loss is enabled they are simply dropped; otherwise, be aware that the frame indices of the input dataset and the output will not match.
  • Incompatibilities between RepresentationType and dataset type. The input manager will modify its output to be compatible with the given input, so make sure to check the representation type after creating the manager object.