Typically, creating fake slow-motion video is a difficult process: software has to stretch a clip out by generating hundreds of non-existent frames to place in between the existing ones. The results are often disappointing, with stuttering and unconvincing visual quality. Nvidia aims to take advantage of the incredible image-processing potential of deep learning to produce high-quality fake slow motion from a regular clip.
To slow down a 30 frames per second (FPS) clip so it plays back as if it were shot at 240 FPS, you'd need to manufacture 210 additional frames per second of footage, or seven in-betweens for every frame captured in the original clip. The usual approach has been to blend or morph the before and after frames into new interstitials, but the finished product looks rough and jittery compared to real slow motion (like what you'd see in instant replays in sports). Software does exist to improve the quality of faked slow motion, but it requires complex analysis of the original video's motion, and that can take HOURS to render.
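To see why simple blending looks ghosted and jittery, here is a minimal sketch of the naive cross-fade approach the paragraph describes (this is illustrative only, not Nvidia's neural method; the function name and use of NumPy arrays as stand-in frames are my own assumptions):

```python
import numpy as np

def naive_inbetweens(frame_a, frame_b, n=7):
    """Generate n interstitial frames by linearly cross-fading
    between two neighboring frames. Moving objects end up as
    semi-transparent ghosts rather than at intermediate positions,
    which is why this looks rough compared to real slow motion."""
    steps = []
    for i in range(1, n + 1):
        t = i / (n + 1)  # blend weight: 0 = frame_a, 1 = frame_b
        blended = (1 - t) * frame_a.astype(np.float32) + t * frame_b.astype(np.float32)
        steps.append(blended.astype(np.uint8))
    return steps

# Two dummy 4x4 grayscale "frames": one all-black, one bright.
a = np.zeros((4, 4), dtype=np.uint8)
b = np.full((4, 4), 240, dtype=np.uint8)

# Seven in-betweens per original frame pair turns 30 FPS into 240 FPS.
mids = naive_inbetweens(a, b, n=7)
```

With `n=7`, each original frame gains seven synthetic neighbors, matching the 30-to-240 FPS arithmetic above.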
Nvidia is taking a different route, utilizing a variant of deep-learning AI trained on over 11,000 reference videos of sports natively filmed at 240 FPS, allowing the neural network to predict how the missing in-between frames should look based solely on the before and after frames. Nvidia is also accounting for cases where increased frame rates can result in lower resolution, due to the high data bandwidth being produced on the fly. Additionally, Nvidia's process is MUCH CHEAPER than spending tens of thousands of dollars on high-speed cameras (like the Phantom), since the slow-motion processing happens AFTER the video is recorded. The main trade-off is that the slo-mo results are not instant, as Nvidia's high-end graphics processors powering the AI need time to process the footage.