In 2016 Adobe demonstrated an idea called VoCo. They took a small library of recordings of a person’s public speaking and gave it a text-editing interface. Suddenly, anyone who could cut and paste text could create convincing statements in that person’s voice – statements the person never made. (As of early 2018, VoCo has not been released as a product.)
Through 2018 the open source community has been rapidly improving the “deepfake” AI algorithm, which performs face swapping in images – and, by extension, in the frames of a video. The current output passes a cursory glance; at the present rate of progress it may soon pass anything short of serious forensic analysis.
Combine VoCo and deepfakes and you get video that could easily pass as real on many internet and social media outlets.
Now add in the 2017 Face2Face research, which uses face tracking of one person to manipulate video of the same or another person. The published results have already demonstrated passable video material.
This is a scary thought in today’s reality of cyber espionage and the manipulation of social media to sow discord and polarization.