Disentangling VAEs, KL Divergence, and Mutual Information
September 24, 2020
I’ve recently been reading about the JointVAE model proposed by Emilien Dupont in the paper Learning Disentangled Joint Continuous and Discrete Representations. The paper builds on the development of variational autoencoders (VAEs). As a quick overview, autoencoders are neural networks that take an input, generate some “secret” representation of it, then try to use that “secret” code to reconstruct the input. If it can do that well, then we have found a secret code that summarizes the important features of the original input, but is likely easier to work with.
In variational autoencoders, instead of directly generating the secret code from the input, we generate the parameters of a probability distribution (like the mean and variance) and then pick a random vector from this distribution that will be our secret vector. This could be useful if we are interested in the random variation that could be present in the secret code, or if we want a generative model which can give us fake versions of the original input based on a secret code we manually pick. Thus the basic idea is: encode the input into distribution parameters, sample a secret code from that distribution, and decode the sample back into a reconstruction of the input.
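To make that concrete, here is a minimal sketch of the encode–sample–decode loop in PyTorch. The class name, layer sizes, and the `reparameterize` helper are my own illustrative choices, not anything from the JointVAE paper.

```python
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    """A minimal VAE: encode to (mean, log-variance), sample, decode."""
    def __init__(self, input_dim=784, latent_dim=10):
        super().__init__()
        self.encoder = nn.Linear(input_dim, 2 * latent_dim)  # outputs mean and log-variance
        self.decoder = nn.Linear(latent_dim, input_dim)

    def reparameterize(self, mu, logvar):
        # Sample z = mu + sigma * epsilon so gradients can flow through mu and logvar.
        std = torch.exp(0.5 * logvar)
        eps = torch.randn_like(std)
        return mu + std * eps

    def forward(self, x):
        params = self.encoder(x)
        mu, logvar = params.chunk(2, dim=-1)  # split into mean and log-variance
        z = self.reparameterize(mu, logvar)   # the "secret code"
        return self.decoder(z), mu, logvar

x = torch.rand(32, 784)                  # a fake batch of flattened images
recon, mu, logvar = TinyVAE()(x)
```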
In basic VAEs, there’s no constraint on how the network learns the parameters of the secret code distribution. In fact, we could end up with a distribution in which the individual secret code values are entangled or dependent on each other. Maybe the first value could be any number from zero to one, but the second value is always the first value squared. Thus the two values are entangled.
Is this bad? Not necessarily, but it may be very unhelpful depending on our application goal. Let’s say we want a generative model. A typical example in the field is human faces. If we train our VAE on a bunch of face images, we could manually pick secret codes to generate new fake faces. Unfortunately, with a vanilla VAE approach the faces we generate would be essentially random. This is not great. It would be way better if we could “control” each major aspect of the headshot—hair color, shape, expression, etc.
So how do we make sure that the secret code reflects these separate features instead of being all tangled up and seemingly random?
β-VAE
I’m glad you asked. One approach is β-VAE, which rests on a pretty basic principle of probability distributions. When you have more than one variable in a distribution (i.e., it is multivariate), you no longer just have a variance. Instead you have a covariance matrix which tells you the dependencies between each pair of variables in the distribution. The values running down the diagonal of the matrix give the independent variance of each variable (i.e., the variance that cannot be explained away by the values of the other variables), while the off-diagonal elements give the interdependency between different pairs of variables. With this in mind, you might say, “Hey! Is there a way we can push the covariance of the secret code toward a matrix with ones on the diagonal and zeros everywhere else, to get rid of any interdependency?” Yes! We can add a penalty term during training that compares the secret code distribution to the disentangled distribution, encouraging the secret code distribution parameters to be independent (factorizable). β-VAE scales this penalty by a weight β, which is where the name comes from.
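Concretely, when the secret code distribution is a diagonal Gaussian and the disentangled target is a standard normal, that penalty has a simple closed form. Here is a minimal sketch of it; the function name and the β value are just illustrative choices.

```python
import torch

def beta_vae_kl_penalty(mu, logvar, beta=4.0):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dimensions.

    This is the extra term added to the reconstruction loss, scaled by the
    weight beta (beta=4.0 is only an example value).
    """
    kl_per_dim = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1.0)
    kl = kl_per_dim.sum(dim=-1)  # total KL for each sample in the batch
    return beta * kl.mean()      # average over the batch, scaled by beta
```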
Returning to our idea of controlling fake facial features, we make the assumption that if we find a bunch of independent secret code values, each value will control an independent feature of the headshot, and BOOM, suddenly the code values have become semantically meaningful, which is a fancy way of saying that each value has a meaning we can pinpoint and understand like hair color, etc.
JointVAE
It turns out that β-VAE isn’t bad for continuous features. For example, we might be able to use it to find features that control the angle of a headshot or the zoom level. However, it doesn’t really work for features that are discrete. In the context of the frequently used MNIST handwritten digits data set, β-VAE might be good at finding features that control the thickness or angle of the digits, but not great at finding a feature that controls the digit value itself. Why? Well, there’s no smooth transition between each digit. They are essentially 10 disconnected classes.
JointVAE is a way to handle these discrete classes. It keeps essentially the same idea as β-VAE, but proposes a new way to calculate the entanglement of the discrete parameters so we can train a network to disentangle them. This hinges on the use of the Gumbel-Softmax distribution, which gives us a differentiable way to sample from a categorical (discrete) distribution.
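As a rough illustration of the mechanism (not the paper’s exact implementation), PyTorch ships a `gumbel_softmax` function that draws a “soft” sample from a categorical distribution while keeping everything differentiable:

```python
import torch
import torch.nn.functional as F

# Unnormalized log-probabilities for a 10-class discrete code (e.g., digit identity).
logits = torch.randn(1, 10)

# tau controls how "soft" the sample is: high tau -> nearly uniform,
# low tau -> nearly one-hot. Gradients flow through the sample either way.
soft_sample = F.gumbel_softmax(logits, tau=1.0, hard=False)

# hard=True returns a one-hot vector in the forward pass while keeping the
# soft gradients, handy when downstream code expects a discrete choice.
hard_sample = F.gumbel_softmax(logits, tau=1.0, hard=True)

print(soft_sample)  # sums to 1, mostly concentrated on one class
print(hard_sample)  # exactly one-hot
```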
And Now, … the Math
If all that kind of made sense, you can stop there and feel good about what you’ve learned, and wow your friends with your AI knowledge. The next section is rather math heavy, so be warned. Feel free to pull the ejection handle and blast out of here while you can.
If you are still here, it’s time to get into the weeds!
We’re going to start in a completely different area, but I promise we’ll work our way back to JointVAE.
Information
Intuitively, you might realize that if some event occurs pretty often, we get less information out of it than if the event occurs less frequently. For example, “NASA landed on the moon again” is a lot more informative than “Monday Night Football is happening on Monday night this week.” I could’ve guessed that last statement, so it doesn’t really carry much information. It would be handy to capture this numerically. Ideally, an event that has a probability of zero carries a ton of information, while an event that has a probability of one has zero information. With that in mind, we define
$$I(X = x) = -\log p(X = x),$$

which says that the information encoded when some random state $X$ takes the value $x$ is the negative log of the probability of $x$ occurring. If $p(X = x) = 1$, this expression is zero. If $p(X = x) = 0$, the expression is $\infty$. Practically this is okay since we will never calculate the information of an event that will never happen.
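To put numbers on it, here is a quick check in Python, using base-2 logs so the answer comes out in bits. The probabilities are made up for illustration.

```python
import numpy as np

def information_bits(p):
    """Information content of an event with probability p, in bits."""
    return -np.log2(p)

print(information_bits(0.5))    # a fair coin flip: 1 bit
print(information_bits(0.999))  # "Monday Night Football is on Monday": ~0.0014 bits
print(information_bits(0.001))  # a rare event: ~10 bits
```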
This definition is essentially arbitrary! There’s not necessarily any theory you’re missing that would make this an obvious choice to you. The real benefit of this definition is that it turns out to have some practical meaning if we talk about information encoded in bits, and it makes some of our calculations easier, since the log turns multiplications and divisions into additions and subtractions, respectively.
Entropy
So if $I(X = x)$, which I’ll shorten to $I(x)$, is the information in a single event, what about the information encoded in the whole distribution $p(X)$? One way to quantify this is via entropy, which is the expected information that $X$ produces if we randomly pick a sample from $p(X)$ (I’m shortening $p(X = x)$ to $p(x)$):

$$H(p) = \mathbb{E}_{x \sim p}\big[I(x)\big].$$
So how exactly do we calculate the expected value? The idea is pretty simple: the expected value is the sum of $I(x)$ for each possible outcome $x$ in $p(X)$, times the probability that the outcome occurs. So the entropy is the weighted sum of all the individual event information, where the weights are the chance each event actually occurs. Think about that for a minute, and hopefully it makes sense. Even if an event has a ton of information, if it doesn’t happen very often, it really won’t contribute much information in the long run. For a discrete distribution, we can write the entropy as

$$H(p) = -\sum_x p(x) \log p(x),$$

where $\sum_x$ means the sum across every possible event $x$.
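Here is that sum in a few lines of Python; the distributions are arbitrary examples.

```python
import numpy as np

def entropy_bits(p):
    """Entropy of a discrete distribution p (an array of probabilities), in bits."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # skip zero-probability events (0 * log 0 is treated as 0)
    return float(-np.sum(p * np.log2(p)))

print(entropy_bits([0.5, 0.5]))                # fair coin: 1.0 bit
print(entropy_bits([0.99, 0.01]))              # biased coin: ~0.08 bits
print(entropy_bits([0.25, 0.25, 0.25, 0.25]))  # uniform over 4 outcomes: 2.0 bits
```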
Kullback-Leibler Divergence
A natural question at this point is, “Can we use entropy and information to compare two distributions?” For example, maybe we have some event $x$ and we want to find the difference in information between two distributions of $x$, $q(x)$ and $p(x)$. In other words, if $x$ occurs from distribution $q$, does this event have more or less information than the information $x$ has according to distribution $p$? We can write this as

$$\mathbb{E}_{x \sim q}\big[I_p(x) - I_q(x)\big].$$
We can substitute our previous definition of information, $I(x) = -\log p(x)$, and we define a new expression,

$$H(q, p) = -\sum_x q(x) \log p(x),$$

as the cross-entropy between $q$ and $p$. It is the information from distribution $p$, but weighted and summed based on the probability of the event happening according to the other distribution $q$. Take a deep breath, and read that a couple times.
Returning to the full expression, which we term the Kullback-Leibler (KL) divergence and write as

$$D_{KL}\big(q \,\|\, p\big) = \mathbb{E}_{x \sim q}\big[\log q(x) - \log p(x)\big] = \sum_x q(x) \log \frac{q(x)}{p(x)} = H(q, p) - H(q),$$
we can see that if the event $x$ has the same information in both distributions, then $\log q(x) - \log p(x) = 0$ (remember the definition of information), and we can infer that $D_{KL}(q \,\|\, p) = 0$. So if we randomly pick an $x$ from $q$ and the expected value of the information difference is zero, we can take a guess that the two distributions are pretty much identical from the perspective of $q$. This last part is really important. It says that all the events that might reasonably occur in $q$ occur at a similar frequency in $p$ if the KL divergence is close to zero. It does not say that events that reasonably occur in $p$ will occur at a similar frequency in $q$. Think about what happens if $q$ is a standard bell curve, but $p$ is a bell curve with a really long tail on one side. If we pick events from $q$, we will mainly get events near the center of the bell curve. At those points, both $q$ and $p$ are pretty similar, so the KL divergence will be near zero. In a sense, because $q$ has shorter tails, it will not explore the regions in which it is more different from $p$. Thus, from the perspective of $q$, it is pretty similar to $p$.

$p$, however, does not think so. When we flip the divergence around to find $D_{KL}(p \,\|\, q)$, we are taking samples from $p$. Because it has a longer tail, it is more likely to draw samples from the region in which it is different from $q$. Thus, the expected value of the information difference is now higher, the KL divergence is higher, and the two distributions are therefore more different from the perspective of $p$.
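A quick numerical illustration of that asymmetry; the two discrete distributions here are arbitrary examples, not anything from the paper.

```python
import numpy as np

def kl_divergence(q, p):
    """D_KL(q || p) for discrete distributions, in nats."""
    q, p = np.asarray(q, dtype=float), np.asarray(p, dtype=float)
    return float(np.sum(q * np.log(q / p)))

# q keeps its mass near the "center"; p spreads some mass into a long tail on the right.
q = np.array([0.05, 0.45, 0.45, 0.05])
p = np.array([0.05, 0.35, 0.35, 0.25])

print(kl_divergence(q, p))  # ~0.15: q rarely visits the region where p differs
print(kl_divergence(p, q))  # ~0.23: p's tail lands where q assigns little probability
```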
Returning to Disentanglement
Now we have the information (no pun intended) we need to revisit the idea of disentangling a distribution. Reframing the problem, we have some distribution of secret code values that might be interdependent. The basic idea is to penalize the secret code distribution if it drifts too far from the ideal, disentangled distribution. Wait, do I hear you suggesting we might be able to use KL divergence to measure this?! How insightful!
The quantity we would like to be small is

$$D_{KL}\big(q(z) \,\|\, p(z)\big),$$

where $q(z)$ is the probability distribution of the secret code $z$, and $p(z)$ is a distribution with our desired disentanglement. Now, in practice we can’t do this directly. Why not? Well, in reality, we feed an input $x$ into our VAE and this input generates our secret code distribution. Thus our pick of $z$ is dependent on the choice of input, and we end up with

$$D_{KL}\big(q(z \mid x) \,\|\, p(z)\big).$$
Now the question becomes, if we minimize this across random samples drawn from the input data distribution, do we actually end up minimizing the thing we want from the previous equation? We can figure this out by looking at the expected value of this most recent equation over values we pull from the input. Buckle up for this one.

$$\begin{aligned}
\mathbb{E}_{x \sim p(x)}\Big[D_{KL}\big(q(z \mid x) \,\|\, p(z)\big)\Big]
&= \mathbb{E}_{x \sim p(x)}\,\mathbb{E}_{z \sim q(z \mid x)}\big[\log q(z \mid x) - \log p(z)\big] \\
&= \mathbb{E}_{(x, z) \sim q(x, z)}\big[\log q(z \mid x) - \log p(z)\big] \\
&= \mathbb{E}_{(x, z) \sim q(x, z)}\big[\log q(x, z) - \log p(x) - \log p(z)\big] \\
&= \mathbb{E}_{(x, z) \sim q(x, z)}\left[\log \frac{q(x, z)}{p(x)\, q(z)} + \log q(z) - \log p(z)\right] \\
&= D_{KL}\big(q(x, z) \,\|\, p(x)\, q(z)\big) + D_{KL}\big(q(z) \,\|\, p(z)\big) \\
&= I(x; z) + D_{KL}\big(q(z) \,\|\, p(z)\big)
\end{aligned}$$
Let’s step through this carefully. First we expand the KL divergence definition and do some algebra to rearrange things. The first big thing happens when we convert

$$\mathbb{E}_{x \sim p(x)}\,\mathbb{E}_{z \sim q(z \mid x)}\big[\log q(z \mid x) - \log p(z)\big] = \mathbb{E}_{(x, z) \sim q(x, z)}\big[\log q(z \mid x) - \log p(z)\big],$$

which essentially says that taking the expected value of drawing a value from the data, then the expected value of drawing a value from the secret code distribution based on this input, is the same as drawing both samples at once from the joint distribution of the input and the secret code together.
Next we use Bayes’ Rule to convert

$$\log q(z \mid x) = \log \frac{q(x, z)}{p(x)} = \log q(x, z) - \log p(x).$$
(I’m planning to blog about Bayes’ Rule soon.)
Next we can use the definition of KL divergence to rewrite

$$\mathbb{E}_{(x, z) \sim q(x, z)}\left[\log \frac{q(x, z)}{p(x)\, q(z)}\right] = D_{KL}\big(q(x, z) \,\|\, p(x)\, q(z)\big) = I(x; z),$$

which is the mutual information between $x$ and $z$: a measure of how far the joint distribution of $x$ and $z$ is from the distribution in which $x$ and $z$ are independent. If the joint is very close to the distribution with separate independent factors, then $x$ and $z$ are pretty independent and the mutual information between them is very low. However, if the joint distribution is far from the independent, factorized distribution, then $x$ and $z$ are entangled in some way and share information.
This is important to catch. We want $x$ and $z$ to be entangled and share information, since that means the secret code is capturing the same information as the original input. This is good. However, we do not want the individual parts of $z$ to be entangled with each other.
We can apply the KL divergence definition one more time, to the leftover $\log q(z) - \log p(z)$ piece, to find that the final expression is the sum of the mutual information between the input and secret code (which we want to maximize) and the divergence between the secret code distribution and the desired disentangled goal (which we want to minimize).
All this to say, if we minimize $\mathbb{E}_{x \sim p(x)}\big[D_{KL}(q(z \mid x) \,\|\, p(z))\big]$, we end up minimizing what we want, which is the last term $D_{KL}(q(z) \,\|\, p(z))$, but we also inadvertently minimize $I(x; z)$, which will end up hurting our ability to reconstruct the input from the secret code. Remember that reconstruction is a key objective of autoencoders.
This brings us to a key problem of β-VAE. There is a tradeoff between minimizing reconstruction error and minimizing the entanglement of the secret codes.
Adjusting Capacity
One straightforward way to counteract this would be to see if we can subtract $I(x; z)$ from the expression before we minimize it, so we can keep the mutual information. To try this, we just need to find a way to estimate $I(x; z)$ as we go through training. Intuitively, we believe that the mutual information should start close to zero at the beginning and slowly increase as the model learns to reconstruct the inputs better and better. With that in mind, we can change our objective to minimizing

$$\Big|\, D_{KL}\big(q(z \mid x) \,\|\, p(z)\big) - C \,\Big|,$$

where $C$ is a capacity parameter that gradually increases during training to counteract the increase in mutual information. Ideally, $C$ maxes out at the maximum amount of mutual information. If it goes above this, it will counteract the mutual information but also “eat into” the real entanglement metric we want to minimize. Thus, our network will think it is doing a good job minimizing entanglement when in reality we just let $C$ get too high, and the secret code values could still be entangled.
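Here is a sketch of how that capacity term might look in code, reusing the closed-form Gaussian KL from earlier. The linear annealing schedule and the specific numbers (gamma, c_max, anneal_steps) are illustrative choices, not the settings from the paper.

```python
import torch

def capacity_kl_term(mu, logvar, step, gamma=30.0, c_max=25.0, anneal_steps=100_000):
    """gamma * |KL(q(z|x) || N(0, I)) - C|, with C annealed linearly from 0 to c_max."""
    kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1.0).sum(dim=-1).mean()
    c = min(c_max, c_max * step / anneal_steps)  # current capacity
    return gamma * torch.abs(kl - c)
```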
Bringing It Back Around To JointVAE Loss
Now we’re ready to look at the JointVAE loss function that we want to minimize:

$$\mathcal{L} = -\,\mathbb{E}_{q(z, c \mid x)}\big[\log p(x \mid z, c)\big] + \gamma\, \Big|\, D_{KL}\big(q(z \mid x) \,\|\, p(z)\big) - C_z \,\Big| + \gamma\, \Big|\, D_{KL}\big(q(c \mid x) \,\|\, p(c)\big) - C_c \,\Big|$$
We have two secret codes here: $z$ for the continuous values and $c$ for the discrete values; decoding a sample of them produces $\hat{x}$, the reconstructed input value, and the first term measures how well that reconstruction matches the original input. The hyperparameter $\gamma$ controls the balance between focusing on reconstruction or disentanglement. If we pick the right, gradually increasing values of $C_z$ and $C_c$ during training, we can preserve the mutual information of the continuous and discrete variables while minimizing the entanglement. According to the JointVAE paper, the max $C_c$ value can be the maximum capacity of the discrete variables based on the number of classes, while the max $C_z$ value is hard to calculate and should be set as high as possible without “fooling” the minimization algorithm into settling for lower disentanglement, as mentioned in the last section.
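Putting the pieces together, here is a rough sketch of that loss in PyTorch. It assumes a Bernoulli reconstruction term (e.g., binarized MNIST pixels) and a uniform prior over the discrete classes, and the function name, argument names, and gamma value are my own illustrative choices rather than the paper’s reference implementation.

```python
import torch
import torch.nn.functional as F

def joint_vae_loss(x, recon_logits, mu, logvar, c_logits, c_z, c_c, gamma=30.0):
    """Reconstruction error plus capacity-controlled KL terms for z (continuous) and c (discrete)."""
    # Reconstruction term: Bernoulli negative log-likelihood, averaged over the batch.
    recon = F.binary_cross_entropy_with_logits(recon_logits, x, reduction="sum") / x.shape[0]

    # Continuous code: KL( N(mu, sigma^2) || N(0, I) ) in closed form.
    kl_z = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1.0).sum(dim=-1).mean()

    # Discrete code: KL( q(c|x) || uniform over K classes ) = log(K) - H(q(c|x)).
    probs = F.softmax(c_logits, dim=-1)
    log_k = torch.log(torch.tensor(float(c_logits.shape[-1])))
    kl_c = (log_k + (probs * torch.log(probs + 1e-12)).sum(dim=-1)).mean()

    return recon + gamma * torch.abs(kl_z - c_z) + gamma * torch.abs(kl_c - c_c)
```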
Conclusion
Thanks for reading! Even if you don’t work much with JointVAE, I hope this post gave you an overview of VAEs, the entanglement problem, how we get KL divergence, and the potential tradeoff problems with vanilla β-VAE.
Further Reading
The JointVAE paper by Emilien Dupont is Learning Disentangled Joint Continuous and Discrete Representations.
Using the capacity parameters may not be the best approach. One alternative for continuous secret codes (latent vectors) is proposed in Disentangling by Factorising.
I would be remiss not to mention that I drew much help from Goodfellow, Bengio, and Courville’s classic reference, Deep Learning.