In the previous article, I set up a multi-camera rig with FLIR Blackfly S cameras and a hardware trigger. The next step is to configure the cameras via the Spinnaker API so that synchronized capture works correctly.
The first section gives a very basic example of how to acquire a set of synchronized images using the PySpin API. The section after that describes how to use the multi_pyspin app to collect images.
A simple PySpin example
The first thing to do is to download and install the Spinnaker SDK and the Python package. In this article, I’ll be using Ubuntu 18.04 with Spinnaker version 1.23.0.27 and firmware version 1804.0.113.3.
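For reference, the install is roughly as follows: the SDK is installed from the archive downloaded from FLIR’s site, and PySpin is installed from the matching wheel. The exact archive, script, and wheel names below are assumptions and will depend on the version and platform you download, so treat this as a sketch rather than exact commands:
# Install the Spinnaker SDK (exact archive/script names vary by version)
tar xzf spinnaker-1.23.0.27-amd64-pkg.tar.gz
cd spinnaker-1.23.0.27-amd64
sudo sh install_spinnaker.sh

# Install the PySpin wheel matching your Python version
pip install spinnaker_python-1.23.0.27-cp36-cp36m-linux_x86_64.whl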
The documentation for synchronized capture for the Blackfly S states you must:
- For the primary camera: 1) enable the 3.3V output for line 2
- For each secondary camera: 1) set the trigger source to line 3, 2) set the trigger overlap to “read out”, and 3) then set trigger mode to “on”.
Let’s go ahead and start coding this up. The first step is to find the cameras and initialize them:
import PySpin

# Set camera serial numbers
serial_1 = '19061245'
serial_2 = '16276941'
serial_3 = '16276942'

# Get system
system = PySpin.System.GetInstance()

# Get camera list
cam_list = system.GetCameras()

# Get cameras by serial
cam_1 = cam_list.GetBySerial(serial_1)
cam_2 = cam_list.GetBySerial(serial_2)
cam_3 = cam_list.GetBySerial(serial_3)

# Initialize cameras
cam_1.Init()
cam_2.Init()
cam_3.Init()
Next, we follow the documentation and set up the hardware trigger:
# Set up primary camera trigger
cam_1.LineSelector.SetValue(PySpin.LineSelector_Line2)
cam_1.V3_3Enable.SetValue(True)

# Set up secondary camera trigger
cam_2.TriggerMode.SetValue(PySpin.TriggerMode_Off)
cam_2.TriggerSource.SetValue(PySpin.TriggerSource_Line3)
cam_2.TriggerOverlap.SetValue(PySpin.TriggerOverlap_ReadOut)
cam_2.TriggerMode.SetValue(PySpin.TriggerMode_On)

# Set up secondary camera trigger
cam_3.TriggerMode.SetValue(PySpin.TriggerMode_Off)
cam_3.TriggerSource.SetValue(PySpin.TriggerSource_Line3)
cam_3.TriggerOverlap.SetValue(PySpin.TriggerOverlap_ReadOut)
cam_3.TriggerMode.SetValue(PySpin.TriggerMode_On)
The final step is to acquire the synchronized images:
# Set acquisition mode to acquire a single frame; this ensures acquired images
# are sync'd since cameras 2 and 3 are set up to be triggered
cam_1.AcquisitionMode.SetValue(PySpin.AcquisitionMode_SingleFrame)
cam_2.AcquisitionMode.SetValue(PySpin.AcquisitionMode_SingleFrame)
cam_3.AcquisitionMode.SetValue(PySpin.AcquisitionMode_SingleFrame)

# Start acquisition; note that the secondary cameras have to be started first
# so acquisition of the primary camera triggers the secondary cameras
cam_2.BeginAcquisition()
cam_3.BeginAcquisition()
cam_1.BeginAcquisition()

# Acquire images
image_1 = cam_1.GetNextImage()
image_2 = cam_2.GetNextImage()
image_3 = cam_3.GetNextImage()

# Save images
image_1.Save('cam_1.png')
image_2.Save('cam_2.png')
image_3.Save('cam_3.png')

# Release images
image_1.Release()
image_2.Release()
image_3.Release()

# End acquisition
cam_1.EndAcquisition()
cam_2.EndAcquisition()
cam_3.EndAcquisition()
If the hardware trigger is set up properly, there should be three images in the current working directory: cam_1.png, cam_2.png, and cam_3.png.
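Once you’re done, it’s also good practice to release everything before the script exits; otherwise PySpin may complain that the system is being released while cameras are still in use. A minimal cleanup sketch (not part of the original snippet) continuing from the code above:
# De-initialize cameras
cam_1.DeInit()
cam_2.DeInit()
cam_3.DeInit()

# Drop camera references, then clear the camera list and release the system
del cam_1, cam_2, cam_3
cam_list.Clear()
system.ReleaseInstance()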
A multi_pyspin example
multi_pyspin is a simple library/GUI for acquiring images from multiple cameras with the Python Spinnaker API (PySpin):
https://github.com/justinblaber/multi_pyspin
My recommendation is to create a separate folder where you wish to acquire the images and copy the yaml configuration files into it:
mkdir -p ~/Desktop/multi_camera_test
cd ~/Desktop/multi_camera_test
cp ~/multi_pyspin/*.yaml .
The yaml files in the repo reflect how I set my cameras up. Each yaml file contains the camera serial number and the PySpin node commands used to configure that camera. For the primary camera, the configuration contains:
---
serial: 19061245
init:
  - UserSetSelector:
      value: PySpin.UserSetDefault_Default
  - UserSetLoad:
  - LineSelector:
      value: PySpin.LineSelector_Line2
  - V3_3Enable:
      value: True
  - AcquisitionFrameRateEnable:
      value: True
  - AcquisitionFrameRate:
      value: 5
  - ExposureMode:
      value: PySpin.ExposureMode_Timed
  - ExposureAuto:
      value: PySpin.ExposureAuto_Off
  - ExposureTime:
      value: 60000
  - GainSelector:
      value: PySpin.GainSelector_All
  - GainAuto:
      value: PySpin.GainAuto_Off
  - Gain:
      value: 6
  - BlackLevelSelector:
      value: PySpin.BlackLevelSelector_All
  - BlackLevel:
      value: 0
  - GammaEnable:
      value: False
  - PixelFormat:
      value: PySpin.PixelFormat_Mono16
  - AdcBitDepth:
      value: PySpin.AdcBitDepth_Bit12
This sets up the hardware trigger on the primary camera (described in the previous section), sets the frame rate, exposure, and gain, disables some on-board image processing, and sets the output to 16 bit. It is meant to be a “high quality” image configuration. The secondary camera(s) configuration looks similar, except it sets up the secondary hardware trigger:
---
serial: 16276941
init:
  - UserSetSelector:
      value: PySpin.UserSetDefault_Default
  - UserSetLoad:
  - TriggerMode:
      value: PySpin.TriggerMode_Off
  - TriggerSource:
      value: PySpin.TriggerSource_Line3
  - TriggerOverlap:
      value: PySpin.TriggerOverlap_ReadOut
  - TriggerMode:
      value: PySpin.TriggerMode_On
  - AcquisitionFrameRateEnable:
      value: True
  - AcquisitionFrameRate:
      value: 5
  - ExposureMode:
      value: PySpin.ExposureMode_Timed
  - ExposureAuto:
      value: PySpin.ExposureAuto_Off
  - ExposureTime:
      value: 60000
  - GainSelector:
      value: PySpin.GainSelector_All
  - GainAuto:
      value: PySpin.GainAuto_Off
  - Gain:
      value: 8
  - BlackLevelSelector:
      value: PySpin.BlackLevelSelector_All
  - BlackLevel:
      value: 0
  - GammaEnable:
      value: False
  - PixelFormat:
      value: PySpin.PixelFormat_Mono16
  - AdcBitDepth:
      value: PySpin.AdcBitDepth_Bit12
This secondary camera configuration has the same exposure and frame rate as the primary camera. This ensures frames are collected together and without lag (i.e. any time-dependent features should be the same across all cameras). The only difference is the gain, which is adjusted on a per-camera basis to make the overall image intensities match. Even though the cameras and lenses are the same, the image intensities may not be. I believe the main reason for this is that I’ve set a very small aperture (for a large depth of field); even slight differences in aperture between cameras can have a large effect on how much light hits the sensor. Suffice it to say that I’m simply using the gain to compensate for differences between camera/lens setups.
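To make the mapping between these yaml entries and PySpin calls concrete, here is a rough sketch (my own illustration, not the actual multi_pyspin code) of how an init list like the ones above could be applied to an already-initialized camera. The apply_init helper and the string-to-enum handling are assumptions for illustration:
import yaml
import PySpin

def apply_init(cam, yaml_path):
    # Hypothetical helper: apply the 'init' list from a yaml config to a camera
    with open(yaml_path) as f:
        config = yaml.safe_load(f)
    for entry in config['init']:
        for node_name, params in entry.items():
            node = getattr(cam, node_name)
            if params is None:
                # Entries without a value (e.g. UserSetLoad) are command nodes
                node.Execute()
            else:
                value = params['value']
                # Strings like 'PySpin.LineSelector_Line2' map to PySpin constants
                if isinstance(value, str) and value.startswith('PySpin.'):
                    value = getattr(PySpin, value.split('.', 1)[1])
                node.SetValue(value)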
Anyway, I’d recommend starting the GUI, streaming, finding the optimal settings, and then writing these updated values to the yaml files. This allows you to keep track of which settings were used for the images that were acquired.
If you followed the instructions on the GitHub page, you can start the GUI with:
~/multi_pyspin/multi_pyspin.simg
My recommendation is to leave the terminal open next to the GUI while acquiring images, as it will display useful debugging info. Then, enter the names of the yaml files and click the “setup” button(s). When you start the streams, begin with the primary camera. The multi_pyspin GUI assumes the leftmost camera is the primary camera, so starting this stream first allows the primary camera to trigger the secondary cameras; otherwise, the secondary cameras will simply wait for a trigger and won’t stream anything.
If successful, the following should appear:
You can acquire single camera images and synchronized multi camera images using the “save” buttons.
We now have our multi-camera rig and an app for acquiring synchronized images. The next step is to perform camera calibration, which is discussed in the next article.