Works on 3D motion magnification
3D Motion Magnification: Visualizing Subtle Motions with Time-Varying Neural Fields
Example video: fork_3.mp4
This approach is based on NeRF and already supports the use of a single camera.
Research gaps:
- NeRF requires lots of viewpoints.
- NeRF is computationally heavy: I can only use the authors' pre-trained weights on their examples, since the laptop does not have enough GPU memory. Even at the inference stage, the compute time is around 30 seconds per frame.
- NeRF needs the camera pose and orientation for each image at the training stage. Some authors use the COLMAP tool to retrieve this information from the images (COLMAP works similarly to the method I used to retrieve the camera's pose); a minimal pose-retrieval sketch is given after this list. The authors of 3D Motion Magnification: Visualizing Subtle Motions with Time-Varying Neural Fields claim that, for real-world environments, the accuracy of these methods may not be sufficient for motion magnification, since "inaccurate pose estimation would exacerbate the ambiguity between camera motion and scene motion, which could hinder magnifying subtle scene motions or lead to false motion magnification. Therefore, real-world data should be captured under conditions where accurate camera intrinsic and extrinsic parameters are accessible, either from reliable RGB-based SfM with textured surfaces in the scene, or from cameras that support 6-DoF tracking during capture".
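For reference, a minimal sketch of the COLMAP-based pose retrieval mentioned above, assuming the `colmap` CLI is installed and that the frames live in an `images/` folder (the paths and directory names here are placeholders, not from any of the cited works):

```python
# Hedged sketch: recovering per-image camera poses with COLMAP's SfM pipeline.
import subprocess
from pathlib import Path

image_dir = Path("images")          # input frames (assumed layout)
work_dir = Path("colmap_out")
work_dir.mkdir(exist_ok=True)
db = work_dir / "database.db"
sparse = work_dir / "sparse"
sparse.mkdir(exist_ok=True)

# 1) detect features, 2) match them across images, 3) run incremental SfM
subprocess.run(["colmap", "feature_extractor",
                "--database_path", str(db), "--image_path", str(image_dir)], check=True)
subprocess.run(["colmap", "exhaustive_matcher", "--database_path", str(db)], check=True)
subprocess.run(["colmap", "mapper", "--database_path", str(db),
                "--image_path", str(image_dir), "--output_path", str(sparse)], check=True)

# Convert the first reconstructed model to text format so the poses are easy to parse.
subprocess.run(["colmap", "model_converter", "--input_path", str(sparse / "0"),
                "--output_path", str(sparse / "0"), "--output_type", "TXT"], check=True)

# images.txt uses two lines per image; the first is
# "IMAGE_ID QW QX QY QZ TX TY TZ CAMERA_ID NAME" (world-to-camera rotation + translation).
raw = (sparse / "0" / "images.txt").read_text().splitlines()
data = [l for l in raw if not l.startswith("#")]
poses = {}
for header in data[0::2]:           # skip the 2D-point lines
    if not header.strip():
        continue
    fields = header.split()
    name = fields[9]
    qw, qx, qy, qz, tx, ty, tz = map(float, fields[1:8])
    poses[name] = ((qw, qx, qy, qz), (tx, ty, tz))
print(f"Recovered poses for {len(poses)} images")
```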
Since MotionScope performs the motion magnification in 2D, the methodology would differ by avoiding motion magnification on the fly, which may tolerate lower pose accuracy.
The following methods do not provide a 3D graphical representation of the magnified motion. They present quantitative displacement results at convenient points (where the points can be matched reliably between the two viewpoints). All of them require camera calibration.
Feasibility of extracting operating shapes using phase-based motion magnification technique and stereo-photogrammetry
- Improvement of data quality
- 3D information
3D mode shapes characterisation using phase-based motion magnification in large structures using stereoscopic DIC
- Reduction of noise, allowing the capture of low-amplitude displacements
Target-free 3D tiny structural vibration measurement based on deep learning and motion magnification
- This method uses the SuperGlue feature-matching algorithm to find correspondences and then triangulates them to obtain the 3D displacement (a sketch of the triangulation step is given below).
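A minimal sketch of that triangulation step, assuming pixel correspondences already produced by a matcher such as SuperGlue and calibrated 3x4 projection matrices; the variable names are illustrative, not taken from the paper:

```python
# Hedged sketch: matched pixels from two calibrated views -> 3D points -> displacement.
import numpy as np
import cv2

def triangulate(P1, P2, pts1, pts2):
    """P1, P2: 3x4 projection matrices; pts1, pts2: Nx2 matched pixel coordinates."""
    X_h = cv2.triangulatePoints(P1, P2,
                                pts1.T.astype(np.float64),
                                pts2.T.astype(np.float64))   # 4xN homogeneous points
    return (X_h[:3] / X_h[3]).T                              # Nx3 Euclidean points

# Example use (pts*_ref / pts*_t would come from the matcher, P1/P2 from calibration):
# X_ref = triangulate(P1, P2, pts1_ref, pts2_ref)   # reference frame
# X_t   = triangulate(P1, P2, pts1_t, pts2_t)       # frame t
# disp  = X_t - X_ref                               # per-point 3D displacement
```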
Research gaps:
- Uncalibrated cameras
- Interactive presentation of the results - the idea is to generate a mesh/point cloud that can be observed from any perspective (see the visualization sketch below)
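A minimal sketch of that interactive presentation idea, assuming triangulated points and per-point displacement magnitudes are already available; Open3D is used here only as one possible viewer, and the data below is a placeholder:

```python
# Hedged sketch: show reconstructed points in an interactive viewer, color-coded
# by displacement magnitude (red = large motion, blue = small motion).
import numpy as np
import open3d as o3d

points = np.random.rand(1000, 3)                 # placeholder Nx3 points
disp_magnitude = np.random.rand(len(points))     # placeholder per-point motion

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points)
norm = (disp_magnitude - disp_magnitude.min()) / (np.ptp(disp_magnitude) + 1e-9)
pcd.colors = o3d.utility.Vector3dVector(np.c_[norm, np.zeros_like(norm), 1.0 - norm])

o3d.visualization.draw_geometries([pcd])         # opens an interactive 3D viewer
```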