A deepfake is an artificial intelligence technique used to manipulate an image or what a person appears to be saying. Deep fakes (Deepfakes) are used by scammers and bullies to steal identities and manipulate others.
What is Deep Fake (DeepFake)?
A deep fake is a process of changing a picture or video by artificially replacing one person with another so that it appears the inserted person was always there. Artificial Intelligence has made this possible in an almost unnoticeable manner.
As deepfakes have become more and more realistic, it has become clear that this technology is dangerous in criminal hands. You can actually find some fairly capable deepfake apps on the Google Play Store that replace a face or body in a video or image. They have been available for years, and it is absolutely amazing what some of these apps are capable of.
Deepfakes rely on a type of neural network called an autoencoder. These consist of an encoder, which reduces an image to a lower-dimensional latent space, and a decoder, which reconstructs the image from the latent representation. Deepfakes utilize this architecture by having a universal encoder which encodes a person into the latent space.
Primary aspects of the person's facial features and body posture are captured in that latent representation. A decoder trained specifically on the target person then reconstructs the image from it. The result is that the target's appearance is superimposed on the underlying facial expression and body posture from the original video.
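The shared-encoder, per-person-decoder idea above can be sketched in a few lines of NumPy. This is a toy illustration with randomly initialized, untrained weights and made-up dimensions, not a real deepfake model; the point is only the wiring: one encoder, two decoders, and the swap happening at decode time.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: a flattened 8x8 "face" image and a small latent space.
IMG_DIM, LATENT_DIM = 64, 8

# One shared (universal) encoder: projects any face into the latent space.
W_enc = rng.normal(scale=0.1, size=(LATENT_DIM, IMG_DIM))

# One decoder per person: reconstructs a face from the shared latent code.
W_dec_a = rng.normal(scale=0.1, size=(IMG_DIM, LATENT_DIM))  # person A
W_dec_b = rng.normal(scale=0.1, size=(IMG_DIM, LATENT_DIM))  # person B

def encode(img):
    # The latent code captures pose/expression shared across people.
    return np.tanh(W_enc @ img)

def decode(latent, W_dec):
    # A person-specific decoder renders that person's appearance.
    return W_dec @ latent

face_a = rng.normal(size=IMG_DIM)  # a frame showing person A

# Normal reconstruction: A's frame through A's decoder.
recon_a = decode(encode(face_a), W_dec_a)

# The face swap: A's latent code (pose, expression) through B's decoder,
# which renders person B's appearance with person A's expression.
swap = decode(encode(face_a), W_dec_b)

assert recon_a.shape == swap.shape == (IMG_DIM,)
```

In a trained system each decoder would have learned to reconstruct its own person from the shared latent space, which is what makes the swap look natural frame by frame.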
Deepfake, a combination of "deep learning" and "fake", is a term for videos and presentations altered by artificial intelligence to present falsified content.
One of the best-known examples of deepfakes involves videos of celebrities, politicians or others saying or doing things that they never actually said or did.
The term is named for a Reddit user known as deepfakes who, in December 2017, used deep learning technology to edit the faces of celebrities onto people in pornographic video clips.
How do DeepFakes (Deep fakes) work?
A Deepfake (Deep Fake) video is created using two competing AI systems: one called the generator and the other called the discriminator. Basically, the generator creates a fake video clip and then asks the discriminator to determine whether the clip is real or fake.
Each time the discriminator accurately identifies a video clip as being fake, it gives the generator a clue about what not to do when creating the next clip.
Together, the generator and discriminator form something called a generative adversarial network (GAN).
The first step in establishing a GAN is to identify the desired output and create a training dataset for the generator. Once the generator begins creating an acceptable level of output, video clips can be fed to the discriminator.
As the generator gets better at creating fake video clips, the discriminator gets better at spotting them. Conversely, as the discriminator gets better at spotting fake video, the generator gets better at creating them.
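The adversarial loop described above can be sketched on 1-D toy data instead of video frames. This is a minimal illustration under simplifying assumptions: the "generator" and "discriminator" are single-parameter-pair models, the hyperparameters are made up, and the hand-derived gradients are only for this toy setup, not a real deepfake pipeline.

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data the generator must learn to imitate: samples around 3.0.
def real_data(n):
    return rng.normal(3.0, 0.5, n)

# Generator G(z) = w*z + b; discriminator D(x) = sigmoid(a*x + c).
w, b = 1.0, 0.0   # generator parameters (starts producing samples near 0)
a, c = 0.1, 0.0   # discriminator parameters
lr, batch = 0.05, 64

for step in range(500):
    # --- discriminator update: label real as 1, fake as 0 ---
    x_real = real_data(batch)
    z = rng.normal(size=batch)
    x_fake = w * z + b
    d_real = sigmoid(a * x_real + c)
    d_fake = sigmoid(a * x_fake + c)
    # gradients of the loss -log D(real) - log(1 - D(fake)) w.r.t. a, c
    grad_a = np.mean(-(1 - d_real) * x_real + d_fake * x_fake)
    grad_c = np.mean(-(1 - d_real) + d_fake)
    a -= lr * grad_a
    c -= lr * grad_c

    # --- generator update: try to make D classify fakes as real ---
    z = rng.normal(size=batch)
    x_fake = w * z + b
    d_fake = sigmoid(a * x_fake + c)
    # gradient of -log D(fake) w.r.t. each fake sample, then chain rule
    g_x = -(1 - d_fake) * a
    w -= lr * np.mean(g_x * z)
    b -= lr * np.mean(g_x)

# After training, generated samples should drift toward the real mean of 3.0.
samples = w * rng.normal(size=1000) + b
```

Each discriminator update is the "clue about what not to do" from the text: it shifts the decision boundary, and the generator's next gradient step moves its output toward whatever the discriminator currently accepts as real.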
Until recently, video content has been more difficult to alter in any substantial way. Because deepfakes are created through AI, however, they don’t require the considerable skill that it would take to create a realistic video otherwise.
Unfortunately, this means that just about anyone can create a deepfake to promote their chosen agenda. One danger is that people will take such videos at face value; another is that people will stop trusting in the validity of any video content at all.