This page provides an overview of the problem that motivates this project and the strategy to solve it. It also provides a project timeline; selecting a stage in the timeline shows more details about that stage.
The techniques currently available for identifying operational vibration modes with video cameras rely on a single viewpoint. The problem is that the results can be misinterpreted from only one point of view, since certain vibration modes look similar to others when projected onto the camera's image plane, causing "optical illusions." The following GIF is a section of a MotionScope output video.
(The GIF disappeared from Notion; it will be re-uploaded.)
In this image, if we didn't know what to expect, it would be tough to guess whether the drone's arm is moving left and right or back and forth relative to the image plane.
The main goal of this project is to develop a system, running alongside or integrated into MotionScope, that combines two or more viewpoints to obtain the 3D displacement of each point, so that the vibration mode can be determined with certainty. To do this, it is necessary to accomplish the following objectives:
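The core multi-view step described above, recovering a point's 3D position from two calibrated views, can be sketched with linear triangulation (the direct linear transform). This is a minimal illustration under assumed camera matrices, not MotionScope's actual API; function and variable names here are hypothetical:

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Triangulate a 3D point from two views via linear DLT.

    P1, P2: 3x4 camera projection matrices (one per viewpoint).
    x1, x2: (u, v) pixel coordinates of the same point in each view.
    Returns the 3D point in world coordinates.
    """
    # Each view contributes two linear constraints on the homogeneous point X.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize

# Illustrative setup: two cameras sharing intrinsics K, second shifted along x.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.2, -0.1, 4.0])
x1 = (P1 @ np.append(X_true, 1.0)); x1 = x1[:2] / x1[2]
x2 = (P2 @ np.append(X_true, 1.0)); x2 = x2[:2] / x2[2]

X_est = triangulate_point(P1, P2, x1, x2)
print(X_est)  # recovers approximately [0.2, -0.1, 4.0]
```

Tracking each point's pixel motion per frame in both views and triangulating frame by frame yields a 3D displacement time series, which resolves the in-plane vs. out-of-plane ambiguity a single camera cannot. In practice, production systems also refine the linear estimate nonlinearly and account for lens distortion.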