Priyanka P. Pattnaik

In between Real or Fake: Generative Adversarial Networks (GANs)


As you started reading this post, you looked at the two faces above. Can you guess whether these two people actually exist somewhere in the world?

If your answer is yes, and you thought these were two real people somewhere on Earth, then you are mistaken. These faces were created by the power of Artificial Intelligence, specifically by Generative Adversarial Networks (GANs).


Generative adversarial networks (GANs) are algorithmic architectures that use two neural networks, pitting one against the other (thus the “adversarial”) in order to generate new, synthetic instances of data that can pass for real data. They are used widely in image generation, video generation, and voice generation.



GANs were introduced in a 2014 paper by Ian Goodfellow and other researchers at the University of Montreal, including Yoshua Bengio. Referring to GANs, Facebook’s AI research director Yann LeCun called adversarial training “the most interesting idea in the last 10 years in ML.”

GANs’ potential for both good and evil is huge because they can learn to mimic any distribution of data. That is, GANs can be taught to create worlds eerily similar to our own in any domain: images, music, speech, prose. They are robot artists in a sense, and their output is impressive – poignant even. But they can also be used to generate fake media content, and are the technology underpinning Deepfakes.

In a surreal turn, Christie’s sold a portrait for $432,000 that had been generated by a GAN, based on open-source code written by Robbie Barrat of Stanford. Like most true artists, he didn’t see any of the money, which instead went to the French company, Obvious.




A generative adversarial network (GAN) has two parts:

  • The generator learns to generate plausible data. The generated instances become negative training examples for the discriminator.

  • The discriminator learns to distinguish the generator's fake data from real data. The discriminator penalizes the generator for producing implausible results.

The Discriminator

The discriminator in a GAN is simply a classifier. It tries to distinguish real data from the data created by the generator. It could use any network architecture appropriate to the type of data it's classifying.


The discriminator's training data comes from two sources:

  • Real data instances, such as real pictures of people. The discriminator uses these instances as positive examples during training.

  • Fake data instances created by the generator. The discriminator uses these instances as negative examples during training.
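To make this concrete, here is a minimal sketch of such a classifier in PyTorch. The framework choice, the layer sizes, and the assumption of flattened 28×28 grayscale images are illustrative, not part of the original description; any architecture appropriate to the data would do.

```python
import torch
import torch.nn as nn

# A minimal discriminator sketch: a plain feed-forward classifier that maps a
# flattened image to a single estimated probability that the input is real.
class Discriminator(nn.Module):
    def __init__(self, img_dim=28 * 28):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(img_dim, 256),
            nn.LeakyReLU(0.2),
            nn.Linear(256, 1),
            nn.Sigmoid(),  # output in (0, 1): probability the input is real
        )

    def forward(self, x):
        # Flatten whatever image shape comes in to (batch, img_dim).
        return self.net(x.view(x.size(0), -1))
```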

The Generator

The generator part of a GAN learns to create fake data by incorporating feedback from the discriminator. It learns to make the discriminator classify its output as real. Generator training requires tighter integration between the generator and the discriminator than discriminator training requires. The portion of the GAN that trains the generator includes:

  • random input

  • generator network, which transforms the random input into a data instance

  • discriminator network, which classifies the generated data

  • discriminator output

  • generator loss, which penalizes the generator for failing to fool the discriminator
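A matching generator sketch, again in PyTorch with illustrative sizes (the 100-dimensional noise vector and the layer widths are assumptions), shows how the pieces in the list above fit together end to end:

```python
import torch
import torch.nn as nn

# A minimal generator sketch: random noise in, a fake data instance
# (a flattened 28x28 image) out.
class Generator(nn.Module):
    def __init__(self, noise_dim=100, img_dim=28 * 28):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim, 256),
            nn.ReLU(),
            nn.Linear(256, img_dim),
            nn.Tanh(),  # pixel values scaled to [-1, 1]
        )

    def forward(self, z):
        return self.net(z)

# The pipeline described in the list above:
# random input -> generator -> discriminator -> output -> generator loss
# z = torch.randn(batch_size, 100)
# fake = Generator()(z)
# decision = Discriminator()(fake)
# the generator loss then penalizes decisions the discriminator labels as fake
```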



GAN Training

Because a GAN contains two separately trained networks, its training algorithm must address two complications:

  • GANs must juggle two different kinds of training (generator and discriminator).

  • GAN convergence is hard to identify.


Loss Functions

GANs try to replicate a probability distribution. They should, therefore, use loss functions that reflect the distance between the distribution of the data generated by the GAN and the distribution of the real data. How do you capture the difference between two distributions in GAN loss functions? This question is an area of active research, and many approaches have been proposed. We'll address two common GAN loss functions here. A GAN can have two loss functions: one for generator training and one for discriminator training. How can two loss functions work together to reflect a distance measure between probability distributions?

In the loss schemes we'll look at here, the generator and discriminator losses derive from a single measure of the distance between probability distributions. In both of these schemes, however, the generator can only affect one term in the distance measure: the term that reflects the distribution of the fake data. So during generator training, we drop the other term, which reflects the distribution of the real data.

The generator and discriminator losses look different in the end, even though they derive from a single formula.


Minimax Loss

In the paper that introduced GANs, the generator tries to minimize the following function while the discriminator tries to maximize it:

Ex[log(D(x))]+Ez[log(1−D(G(z)))]

In this function:

  • D(x) is the discriminator's estimate of the probability that real data instance x is real.

  • Ex is the expected value over all real data instances.

  • G(z) is the generator's output when given noise z.

  • D(G(z)) is the discriminator's estimate of the probability that a fake instance is real.

  • Ez is the expected value over all random inputs to the generator (in effect, the expected value over all generated fake instances G(z)).

  • The formula derives from the cross-entropy between the real and generated distributions.

The generator can't directly affect the log(D(x)) term in the function, so, for the generator, minimizing the loss is equivalent to minimizing log(1 - D(G(z))).
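Expressed in code, the same minimax objective could look like the following sketch. It assumes a discriminator that outputs probabilities, as in the earlier sketches; the small epsilon is only for numerical stability and is not part of the formula.

```python
import torch

# Minimax loss sketch. d_real = D(x) and d_fake = D(G(z)) are tensors of
# probabilities with shape (batch, 1).
def discriminator_minimax_loss(d_real, d_fake, eps=1e-8):
    # The discriminator maximizes Ex[log(D(x))] + Ez[log(1 - D(G(z)))],
    # so as a loss we minimize the negative of that sum.
    return -(torch.log(d_real + eps) + torch.log(1 - d_fake + eps)).mean()

def generator_minimax_loss(d_fake, eps=1e-8):
    # The generator cannot affect log(D(x)), so it simply minimizes
    # log(1 - D(G(z))).
    return torch.log(1 - d_fake + eps).mean()
```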


Wasserstein Loss

By default, TF-GAN uses Wasserstein loss.

This loss function depends on a modification of the GAN scheme (called "Wasserstein GAN" or "WGAN") in which the discriminator does not actually classify instances. For each instance, it outputs a number. This number does not have to lie between 0 and 1, so we can't use 0.5 as a threshold to decide whether an instance is real or fake. Discriminator training just tries to make the output bigger for real instances than for fake instances.


Because it can't really discriminate between real and fake, the WGAN discriminator is actually called a "critic" instead of a "discriminator". This distinction has theoretical importance, but for practical purposes, we can treat it as an acknowledgment that the inputs to the loss functions don't have to be probabilities.


The loss functions themselves are deceptively simple:

Critic Loss: D(x) - D(G(z))

The discriminator tries to maximize this function. In other words, it tries to maximize the difference between its output on real instances and its output on fake instances.

Generator Loss: D(G(z))

The generator tries to maximize this function. In other words, it tries to maximize the discriminator's output for its fake instances.

In these functions:

  • D(x) is the critic's output for a real instance.

  • G(z) is the generator's output when given noise z.

  • D(G(z)) is the critic's output for a fake instance.

  • The output of the critic D does not have to be between 0 and 1.

  • The formulas derive from the earth mover's distance between the real and generated distributions.
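A sketch of both losses in code, using the same PyTorch convention as above (an optimizer minimizes, so each maximization objective is negated):

```python
import torch

# Wasserstein loss sketch. Here D is a critic: its outputs are unbounded
# scores, not probabilities.
def critic_loss(d_real, d_fake):
    # The critic maximizes D(x) - D(G(z)), so we minimize the negative.
    return -(d_real.mean() - d_fake.mean())

def generator_wasserstein_loss(d_fake):
    # The generator maximizes D(G(z)), so we minimize -D(G(z)).
    return -d_fake.mean()
```

Note that the full WGAN scheme also constrains the critic's weights (the original paper clips them) so that the score difference actually approximates the earth mover's distance; that detail is omitted from this sketch.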


"When you train the discriminator, hold the generator values constant; and when you train the generator, hold the discriminator constant. Each should train against a static adversary. For example, this gives the generator a better read on the gradient it must learn by. "


You can think of a GAN as the opposition of a counterfeiter and a cop in a game of cat and mouse, where the counterfeiter is learning to pass false notes, and the cop is learning to detect them. Both are dynamic; i.e. the cop is in training, too (to extend the analogy, maybe the central bank is flagging bills that slipped through), and each side comes to learn the other’s methods in a constant escalation.

For MNIST, the discriminator network is a standard convolutional network that can categorize the images fed to it, a binary classifier labeling images as real or fake. The generator is an inverse convolutional network, in a sense: while a standard convolutional classifier takes an image and downsamples it to produce a probability, the generator takes a vector of random noise and upsamples it to an image. The first throws away data through downsampling techniques like max-pooling, and the second generates new data.
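An illustrative pair of such networks for 1×28×28 MNIST images, with assumed layer sizes (downsampling convolutions in the discriminator, upsampling transposed convolutions in the generator):

```python
import torch.nn as nn

# Convolutional sketch for MNIST. The discriminator downsamples an image to a
# single real/fake probability; the generator upsamples a noise vector back to
# image shape.
conv_discriminator = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=4, stride=2, padding=1),   # 28x28 -> 14x14
    nn.LeakyReLU(0.2),
    nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1),  # 14x14 -> 7x7
    nn.LeakyReLU(0.2),
    nn.Flatten(),
    nn.Linear(64 * 7 * 7, 1),
    nn.Sigmoid(),
)

conv_generator = nn.Sequential(
    nn.Linear(100, 64 * 7 * 7),
    nn.ReLU(),
    nn.Unflatten(1, (64, 7, 7)),
    nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),  # 7x7 -> 14x14
    nn.ReLU(),
    nn.ConvTranspose2d(32, 1, kernel_size=4, stride=2, padding=1),   # 14x14 -> 28x28
    nn.Tanh(),
)
```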

Both nets are trying to optimize a different and opposing objective function, or loss function, in a zero-sum game. This is essentially an actor-critic model. As the discriminator changes its behavior, so does the generator, and vice versa. Their losses push against each other.




