Photogrammetry of an Object in Motion - An attempt to confound the model
By
Brandon Mattox, Applied Science Group, NCS SubSea
 
I’ve always had a fascination with creating three-dimensional models, whether in CAD software or by doodling isometric drawings on paper. Creating these helps me visualize and understand objects better than just having a few photos. Take, for instance, shopping for a truck online: you can get an OK idea of what it looks like, but you never really know how much headroom you have until you sit in it. Enter the age of photogrammetry: now we can take those few photos, extract geometry, and calculate positions for generating 3D models. We can take measurements, estimate tolerances, and just get a better idea of what we’re looking at.

Recently I stumbled across a company called Pix4D that makes photogrammetry software and offers a free version called Pix4Dmapper Discovery. I decided to give it a test at the NCS office with a DJI quadcopter. The results were impressive despite my inexperience piloting the quadcopter. I was surprised to see buildings rendered as far as 350 meters away, even though the quadcopter never left the property.
 


Figure 1. The initial results from 30 georeferenced images taken from a DJI Phantom 2 Vision+ quadcopter.
 
After the initial test I started looking for ways to incorporate the technology into our usual work of marine seismic and survey. Typically we don’t use photography in our work, and even if we did, the marine environment is quite dynamic: nothing is ever stationary on the water. I decided the best use of the technology would be capturing a vessel at the dock as an external check on measurements and offsets. Of course the vessel would still likely be in motion at the dock, so I needed to find out whether the software would resolve a good model despite that motion. As NCS is located in wonderful Stafford, TX, I was quite a ways from the nearest potential real-world vessel test, so I tried the experiment with the only mobile object within arm’s reach: a company truck.

Simply put, I pulled a Ford F-150 into the NCS parking lot and snapped two sets of pictures with my handheld camera. The first set was two trips around the vehicle: one holding the camera near my chest, the other over my head to capture different angles (Pix4D Support Site, Designing the Image Acquisition Plan). Before shooting the second set I moved the truck about 12 inches backwards, then took an equal number of images in the same way: two trips around, one at chest height and the other overhead. Both datasets, after processing in the Pix4D software, produced impressively high-resolution models of the truck.
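The acquisition geometry above — evenly spaced shots circling the subject at two heights with high overlap — can be sketched with a little trigonometry. The specific numbers below (a 5-meter radius, 20 shots per loop, two loop heights) are my own illustrative assumptions, not values from this experiment or from Pix4D:

```python
import math

def orbit_positions(radius_m, shots_per_loop, heights_m):
    """Generate (x, y, z) camera positions for circular loops around a subject
    centered at the origin. Each loop is one trip around at a fixed height."""
    positions = []
    for z in heights_m:
        for i in range(shots_per_loop):
            theta = 2 * math.pi * i / shots_per_loop
            positions.append((radius_m * math.cos(theta),
                              radius_m * math.sin(theta),
                              z))
    return positions

# Two loops, mirroring the chest-height and overhead passes (assumed heights).
cams = orbit_positions(radius_m=5.0, shots_per_loop=20, heights_m=[1.4, 2.1])
print(len(cams))   # 40 images total
print(360 / 20)    # 18.0 degrees between consecutive shots in a loop
```

Tighter angular spacing (more shots per loop) increases overlap between neighboring images, which generally gives the matcher more to work with.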




Figure 2. Animated model of truck in its original position.





Figure 3. Animated model of truck in its second position.


Once both sets of results were processed and showed similar resolution, I merged the two datasets in the software. Combining projects is an option in Pix4D, likely intended for adding additional locations to existing models. The results of my merge were more or less what I expected, as shown in Figure 4. I’ve circled several of the discrepancies that were noticeable in the model. The software clearly favored the first project, where the truck was originally placed, and tossed in bits of the truck from its second position that didn’t fit with the surroundings.

 

Figure 4. The merged project showing duplicated parts circled in green.

Overall I’m surprised how well the software handled my problem. While the model obviously has some discrepancies, it still holds a lot of value for relative positions. For instance, I can still use the model to determine the length of the truck or the spacing between the wheels with a reasonable degree of confidence.

There are a few things I think could be done to improve the model. The software has an option to create manual tie points, linking similar pixels across overlapping images. If enough manual tie points are created on the truck, less confidence is placed in image matches from the background (Pix4D Support Site, Shooting a Moving Target). The other, more extreme solution is to edit the images to remove most of the background, making the surroundings irrelevant and forcing the software to model only the truck, though I don’t know what ramifications this might have in the software — not to mention the time cost of editing upwards of 100 images.

My conclusion is that while a ship at dock may be moving, if we take enough images — say, double the number we would for a stationary object — we should be able to construct a reasonably good model of a vessel at dock regardless of the conditions of movement. The apparent limitations of modeling a moving object with photogrammetry are the lack of georeferencing in the final product and the noisy data to contend with, whether in the background or in the model itself.
 
References
2016, Pix4D Website, Pix4Dmapper Discovery Software. Pix4D 2016 [cited March 28, 2016]. Available from https://www.pix4d.com/product/pix4dmapper-discovery/
2016, Pix4D Support Site, Designing the Image Acquisition Plan. Pix4D 2016 [cited March 28, 2016]. Available from https://support.pix4d.com/hc/en-us/articles/202557459-Step-1-Before-Starting-a-Project-1-Designing-the-Image-Acquisition-Plan-a-Selecting-the-Images-Acquisition-Plan-Type#gsc.tab=0
2016, Pix4D Support Site, Shooting a Moving Target. Pix4D 2016 [cited March 28, 2016]. Available from https://support.pix4d.com/hc/en-us/community/posts/202342319-Shooting-a-Moving-Target#gsc.tab=0