3D Motion Magnification: Visualizing Subtle Motions with Time-Varying Neural Fields
(Video: fork_3.mp4)
This approach is based on NeRF and already supports single-camera capture. However, there is room for improvement:
- NeRF requires lots of viewpoints.
- NeRF is computationally heavy: I can only run the authors’ pre-trained weights on their examples, since my laptop does not have enough graphics memory, and even at the inference stage the compute time is around 30 seconds per frame.
- NeRF needs the camera pose (position and orientation) for each image at the training stage. Some authors use the COLMAP tool to retrieve this information from the images (COLMAP works similarly to the method I used to retrieve the camera’s pose). The authors of 3D Motion Magnification: Visualizing Subtle Motions with Time-Varying Neural Fields claim that, for real-world environments, the accuracy of these methods may not be enough for motion magnification, since “inaccurate pose estimation would exacerbate the ambiguity between camera motion and scene motion, which could hinder magnifying subtle scene motions or lead to false motion magnification. Therefore, real-world data should be captured under conditions where accurate camera intrinsic and extrinsic parameters are accessible, either from reliable RGB-based SfM with textured surfaces in the scene, or from cameras that support 6-DoF tracking during capture”. A sketch of this pose-recovery step is shown below.
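As a reference for that pose-recovery step, here is a minimal sketch that drives COLMAP’s command-line tools from Python. The dataset paths are placeholders, and on video frames the exhaustive matcher could be replaced by COLMAP’s sequential matcher:

```python
import subprocess
from pathlib import Path

# Placeholder paths; adjust to the actual dataset layout.
IMAGES = Path("dataset/images")
WORK = Path("dataset/colmap")
DB = WORK / "database.db"
SPARSE = WORK / "sparse"

SPARSE.mkdir(parents=True, exist_ok=True)  # mapper requires an existing output dir

# 1. Detect SIFT features in every frame.
subprocess.run(["colmap", "feature_extractor",
                "--database_path", str(DB),
                "--image_path", str(IMAGES)], check=True)

# 2. Match features across all image pairs.
subprocess.run(["colmap", "exhaustive_matcher",
                "--database_path", str(DB)], check=True)

# 3. Incremental SfM: recovers the per-image extrinsics and the
#    camera intrinsics that NeRF-style training expects.
subprocess.run(["colmap", "mapper",
                "--database_path", str(DB),
                "--image_path", str(IMAGES),
                "--output_path", str(SPARSE)], check=True)
```

Consistent with the quote above, the recovered poses are only as good as the scene texture allows, which is exactly the failure mode the authors warn about.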
Since MotionScope performs the motion magnification in 2D, the methodology would be different: the magnification itself would not be done on the fly, which may tolerate less accurate pose estimation.
Other approaches use multiple cameras to recover the 3D displacement at a few points rather than over the entire image. In addition, they require prior camera calibration; a minimal two-camera triangulation sketch is shown below.
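To illustrate what these approaches compute, here is a minimal sketch assuming two calibrated cameras with known 3x4 projection matrices and a single point tracked in both views at two time instants. The function name and arguments are hypothetical; the triangulation itself is OpenCV’s triangulatePoints:

```python
import numpy as np
import cv2

def triangulate_displacement(P1, P2, pt1_t0, pt2_t0, pt1_t1, pt2_t1):
    """3D displacement of one tracked point seen by two calibrated cameras.
    P1, P2: 3x4 projection matrices (K @ [R|t]) from a prior calibration.
    pt*_t0 / pt*_t1: pixel coordinates (u, v) of the point in each view
    at two time instants."""
    def triangulate(x1, x2):
        x1 = np.asarray(x1, dtype=np.float64).reshape(2, 1)
        x2 = np.asarray(x2, dtype=np.float64).reshape(2, 1)
        Xh = cv2.triangulatePoints(P1, P2, x1, x2)  # homogeneous 4x1
        return (Xh[:3] / Xh[3]).ravel()             # to Euclidean 3-vector

    return triangulate(pt1_t1, pt2_t1) - triangulate(pt1_t0, pt2_t0)
```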
Possible review paper title: The use of multi-view systems in phase-based motion magnification
Possible paper structure:
- Introduction
- Phase-based motion magnification (see the sketch after this outline)
- Multi-view systems
- Integration of multi-view systems and phase-based motion magnification
- Tests and results
- Conclusion
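To make the phase-based part concrete, below is a minimal 1D sketch of the core principle, assuming a pure sub-sample shift between two frames: the shift appears as a per-frequency phase difference in the Fourier domain, and scaling that difference magnifies the apparent motion. Real pipelines (and, presumably, MotionScope) use localized complex steerable pyramids rather than a global FFT, so this is an illustration only:

```python
import numpy as np

def magnify_motion_1d(signal_ref, signal_t, alpha):
    """Phase-based magnification of the motion between two 1D signals.
    A small shift delta shows up as a phase difference -omega*delta at
    each frequency; scaling it by (1 + alpha) yields a shift of
    (1 + alpha) * delta in the reconstructed signal."""
    F_ref = np.fft.rfft(signal_ref)
    F_t = np.fft.rfft(signal_t)
    dphi = np.angle(F_t) - np.angle(F_ref)
    dphi = (dphi + np.pi) % (2 * np.pi) - np.pi   # wrap to [-pi, pi)
    F_mag = np.abs(F_t) * np.exp(1j * (np.angle(F_ref) + (1 + alpha) * dphi))
    return np.fft.irfft(F_mag, n=len(signal_ref))

# Usage: a Gaussian bump shifted by 0.002 of the domain reappears
# shifted by roughly (1 + 20) * 0.002 = 0.042 after magnification.
x = np.linspace(0.0, 1.0, 512, endpoint=False)
ref = np.exp(-((x - 0.5) / 0.05) ** 2)
shifted = np.exp(-((x - 0.502) / 0.05) ** 2)
magnified = magnify_motion_1d(ref, shifted, alpha=20)
```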