How to control a power outlet with a Raspberry Pi

Doing this is actually surprisingly easy, but I couldn’t find a simple guide online on how to do this. In order to do this, you’ll need the following:

#   Part
1   Raspberry Pi
1   IoT Relay

The “IoT relay” is the simplest (and safest) way to do this. Basically, you just need to supply power to the signal connector (the green block with wires plugged into it on the right), and you can do this directly with a GPIO pin (which supplies 3.3V) on the Raspberry Pi.
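
Just to make this concrete (this snippet is mine, not from the post), switching the relay from Python with the gpiozero library looks roughly like the following; the BCM pin number 17 is an assumption and depends on which GPIO pin you wire to the signal connector.

# Minimal sketch: drive the IoT Relay's signal input from a GPIO pin.
# Assumes the "+" signal wire is on GPIO 17 (BCM numbering) and "-" goes to
# a ground pin; adjust for your wiring.
from time import sleep
from gpiozero import OutputDevice

relay = OutputDevice(17)  # 3.3V on this pin switches the relay's outlets

relay.on()    # switched outlets power up
sleep(5)
relay.off()   # and power back down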

Continue reading “How to control a power outlet with a Raspberry Pi”

Ellipse Detection/Localization

In this article I’ll discuss multiple ways to localize an ellipse in an image.

“DUAL CONIC” method

This method is from Hebert09. I think it’s akin to the “opencv” checker localization algorithm in that it’s a linear algorithm that operates on the image gradients.

Anyway, to understand this method, you need to understand what a conic section is. A conic section is a curve obtained as the intersection of the surface of a cone with a plane. The possible conic sections are a hyperbola, parabola, and ellipse. It turns out that a conic can be represented as a matrix:

[Aq] = \begin{bmatrix} A & B/2 & D/2 \\ B/2 & C & E/2 \\ D/2 & E/2 & F \end{bmatrix}

and points, represented in homogeneous coordinates as \vec{x} = [x\  y\  z]^T , lie on the conic if:

\vec{x}^T [Aq]\vec{x} = 0
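
To make the condition concrete (this little check is mine, not from Hebert09), take the unit circle x^2 + y^2 - 1 = 0, i.e. A = C = 1, F = -1, and the other coefficients zero:

# Sketch: test whether homogeneous points lie on a conic [Aq].
# Example conic: the unit circle x^2 + y^2 - 1 = 0 (A = C = 1, F = -1).
import numpy as np

A, B, C, D, E, F = 1.0, 0.0, 1.0, 0.0, 0.0, -1.0
Aq = np.array([[A,   B/2, D/2],
               [B/2, C,   E/2],
               [D/2, E/2, F  ]])

x_on  = np.array([1.0, 0.0, 1.0])   # the point (1, 0) lies on the circle
x_off = np.array([2.0, 0.0, 1.0])   # the point (2, 0) does not

print(x_on  @ Aq @ x_on)    # ~0  -> on the conic
print(x_off @ Aq @ x_off)   # 3.0 -> off the conic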

Continue reading “Ellipse Detection/Localization”

Checker Detection/Localization

In this article I’ll discuss multiple ways to localize a checker in an image.

“opencv” method

The opencv method is the de facto standard for checker localization. It’s fast, robust, and accurate, and it’s the checker localization algorithm used in Bouguet’s camera calibration toolbox. It is based on the observation that a vector with its tail at the center of a checker and its tip in a region around the checker should always have a zero dot product with the intensity gradient located at the tip of the vector:

Note that the example figures in this section are for a corner, but the same holds for a checker. Anyway:

  • in “flat” regions: <\nabla I(\vec{p}),\vec{p}-\vec{q}> = 0, because the gradient \nabla I(\vec{p}) is itself (nearly) zero
  • in edge regions: <\nabla I(\vec{p}),\vec{p}-\vec{q}> = 0, because the gradient is perpendicular to the edge while \vec{p}-\vec{q} lies along it (a small numerical sketch of the resulting refinement follows this list)
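
Requiring that dot product to vanish for every point \vec{p} in a window around an initial guess gives a small linear system for the center \vec{q}. Here’s my own rough numpy sketch of that refinement step (this is the idea behind OpenCV’s cornerSubPix, not its actual source; the window size and inputs are assumptions):

# Rough sketch of checker/corner center refinement: solve
#   sum_p [grad(p) grad(p)^T] q = sum_p [grad(p) grad(p)^T] p
# over a window around an initial center guess q0 = (row, col).
import numpy as np

def refine_center(img, q0, half_win=10):
    # image gradients (rows, cols); assumes the window stays inside the image
    gy, gx = np.gradient(img.astype(float))
    r0, c0 = int(round(q0[0])), int(round(q0[1]))
    G = np.zeros((2, 2))
    b = np.zeros(2)
    for r in range(r0 - half_win, r0 + half_win + 1):
        for c in range(c0 - half_win, c0 + half_win + 1):
            g = np.array([gy[r, c], gx[r, c]])   # gradient at p = (r, c)
            ggT = np.outer(g, g)
            G += ggT
            b += ggT @ np.array([r, c], dtype=float)
    return np.linalg.solve(G, b)                 # refined (row, col) center

In practice this is iterated, re-centering the window on each new estimate until it converges.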

Continue reading “Checker Detection/Localization”

b0 to T1 atlas coregistration with FSL and ANTS

In this post, I’ll demonstrate how to coregister a b0 (non-diffusion-weighted EPI image) to a T1 weighted atlas.

Tools and files used in this article:

b0_atlas_coreg_inputs.zip contains a T1 and a b0 from the same subject, as well as a T1 weighted MNI atlas, all in NIfTI format.
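
The full FSL + ANTS pipeline is in the post itself; as a rough illustration of the first step (rigidly registering the b0 to the subject’s T1 with FSL’s flirt), something like the following nipype snippet would do it. This is my sketch, not the post’s code, and the file names are assumptions based on the zip contents described above.

# Sketch only: rigid (6 DOF) registration of the b0 to the subject T1 using
# FSL FLIRT through nipype. File names are assumed, not taken from the zip.
from nipype.interfaces import fsl

flt = fsl.FLIRT()
flt.inputs.in_file = 'b0.nii.gz'             # moving image (assumed name)
flt.inputs.reference = 'T1.nii.gz'           # fixed image (assumed name)
flt.inputs.dof = 6                           # rigid-body registration
flt.inputs.out_file = 'b0_in_T1.nii.gz'
flt.inputs.out_matrix_file = 'b0_to_T1.mat'
flt.run()

The T1-to-atlas step (the ANTS part) is covered in the linked post.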

Continue reading “b0 to T1 atlas coregistration with FSL and ANTS”

T1 intensity normalization with FreeSurfer

In this post, I’ll demonstrate how to perform a minimal FreeSurfer based T1 intensity normalization pipeline. It’s very simple, but can be daunting if you’ve never used FreeSurfer before.

Tools and files used in this article:

T1.nii.gz is an HCP T1 weighted image, and the FreeSurfer version used is freesurfer-x86_64-unknown-linux-gnu-stable6-20170118.
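
The post’s exact commands are behind the link; just as a rough sketch (mine, not necessarily the post’s pipeline), FreeSurfer’s early recon-all stages, which include the non-uniformity correction and intensity normalization, can be run through nipype like this. The subject name and subjects directory are assumptions.

# Sketch only: run FreeSurfer's early stages (nu correction, intensity
# normalization, skull strip) via nipype's ReconAll wrapper. This is not
# necessarily the exact pipeline from the post.
from nipype.interfaces.freesurfer import ReconAll

recon = ReconAll()
recon.inputs.subject_id = 'subj'                     # assumed subject name
recon.inputs.directive = 'autorecon1'                # early stages only
recon.inputs.T1_files = 'T1.nii.gz'
recon.inputs.subjects_dir = './freesurfer_subjects'  # assumed output dir
recon.run()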

Continue reading “T1 intensity normalization with FreeSurfer”

Camera Calibration Theory

Single Camera Model

We assume the camera adheres to the “pin-hole” model, where points in space project along straight lines through the camera aperture (the origin of the “scene” coordinate system) and intersect an image plane at “image points”. This image plane represents the idealized location of the image sensor and contains a 2D projection of the 3D scene points.
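
In equation form (my own summary of the standard formulation, with symbols I’m introducing here rather than taking from the post), a scene point projects to image coordinates as:

\lambda \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R & \vec{t} \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}

where (X, Y, Z) is a scene point, R and \vec{t} map it into the camera coordinate system, f_x and f_y are the focal lengths in pixels, (c_x, c_y) is the principal point, and \lambda is the projective depth.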

The diagram below describes the model:

Continue reading “Camera Calibration Theory”

Multiple Camera Calibration

In this article, I’m going to calibrate a multi-camera setup in a way that’s automated and reproducible.

Tools and files used in this article:

First, install the camera_calib toolbox:

git clone https://github.com/justinblaber/camera_calib.git ~/camera_calib

Next, download the example data (warning: very large file…):

mkdir -p ~/Desktop/multi_camera_calib
cd ~/Desktop/multi_camera_calib
wget https://justinblaber.org/downloads/articles/multi_camera_calib/multi_camera_calib.zip
unzip multi_camera_calib.zip

Continue reading “Multiple Camera Calibration”

Acquiring Synchronized Multiple Camera Images with Spinnaker Python API + Hardware Trigger

In the previous article, I set up a multi-camera rig with Flir Blackfly S cameras and a hardware trigger setup. The next step is to configure the cameras via the Spinnaker API so that synchronized capture works correctly.

The first section gives a very basic example of how to acquire a set of synchronized images using the PySpin API. The section after that describes how to use the multi_pyspin app to collect images.

A simple PySpin example

The first thing to do is to download and install the Spinnaker SDK and Python package. In this article, I’ll be using Ubuntu 18.04 with Spinnaker version 1.23.0.27 and firmware version 1804.0.113.3.

The documentation for synchronized capture for the Blackfly S states you must:

  • For the primary camera: 1) enable the 3.3V output for line 2
  • For each secondary camera: 1) set the trigger source to line 3, 2) set the trigger overlap to “read out”, and 3) set the trigger mode to “on” (a minimal PySpin sketch of both configurations follows)
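
Here’s a bare-bones sketch of those settings using PySpin. It’s my own condensed illustration rather than the multi_pyspin code; the serial numbers are placeholders and error handling is omitted.

# Bare-bones sketch of the primary/secondary trigger settings with PySpin.
# Serial numbers are placeholders; error handling is omitted.
import PySpin

system = PySpin.System.GetInstance()
cams = system.GetCameras()
primary = cams.GetBySerial('00000000')      # placeholder serial
secondary = cams.GetBySerial('11111111')    # placeholder serial
primary.Init()
secondary.Init()

# Primary: enable the 3.3V output on line 2 (this drives the trigger line)
primary.LineSelector.SetValue(PySpin.LineSelector_Line2)
v3_3 = PySpin.CBooleanPtr(primary.GetNodeMap().GetNode('V3_3Enable'))
v3_3.SetValue(True)

# Secondary: trigger on line 3 with "read out" overlap, then enable triggering
secondary.TriggerMode.SetValue(PySpin.TriggerMode_Off)
secondary.TriggerSource.SetValue(PySpin.TriggerSource_Line3)
secondary.TriggerOverlap.SetValue(PySpin.TriggerOverlap_ReadOut)
secondary.TriggerMode.SetValue(PySpin.TriggerMode_On)

# ... synchronized acquisition would go here ...

primary.DeInit()
secondary.DeInit()
del primary, secondary
cams.Clear()
system.ReleaseInstance()
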
Continue reading “Acquiring Synchronized Multiple Camera Images with Spinnaker Python API + Hardware Trigger”

Multiple Camera Setup with Blackfly S Mono USB3 Vision Cameras

The Cameras

#   Part
N   Blackfly S IMX 252 Mono 3.2 MP USB3 Vision Camera
N   6 pins, 1m GPIO Cable, Hirose HR10 Circular Connector
N   USB 3, 1m, Type-A to Micro-B Cable
1   1 kiloohm resistor

I decided to go with Flir Blackfly S IMX252 Mono USB3 vision cameras; I chose them because they offer a good overall balance of image quality, resolution, and price, and they use Flir’s Spinnaker SDK.

The setup I’m using for synchronized capture is a primary/secondary setup. One camera (in my case, the left-most camera) is set as the “primary” camera. When it begins exposing an image, this camera sends a strobe signal to the secondary camera(s), triggering them to acquire images at the same(ish) time.

Continue reading “Multiple Camera Setup with Blackfly S Mono USB3 Vision Cameras”

bedpostx with Docker and Singularity!

In the previous article, I discussed the preprocessing of diffusion data. In this article, I’ll demonstrate how to use bedpostx with Docker and Singularity!

Tools and files used in this article:

The version of FSL used in the Docker image is 5.0.10, and it has the CUDA 8 version of bedpostx_gpu from here. PREPROCESSED.zip contains diffusion data preprocessed with my dtiQA Singularity/Docker image.

Continue reading “bedpostx with Docker and Singularity!”