For this paper, only a superficial understanding of how the Wasserstein distance works is necessary. Optimal transport is an optimisation technique to lift a distance between points in a given metric space to a distance between probability distributions over this metric space. The historical example is moving piles of dirt around: you know the distance between any two points, and you have piles of dirt lying around. Now, if you want to move these piles to another configuration (fewer piles, say, or a different distribution of dirt a few metres away), you need to find the most efficient way to move them. The total cost you obtain defines a distance between the two configurations of dirt, and is usually called the earth mover’s distance, which is just an instance of the general Wasserstein metric.
More formally, if we have two sets of points \(x = (x_1, x_2, \ldots, x_n)\) and \(y = (y_1, y_2, \ldots, y_m)\), along with probability distributions \(p \in \Delta^n\), \(q \in \Delta^m\) over \(x\) and \(y\) (\(\Delta^n\) is the probability simplex of dimension \(n\), i.e. the set of vectors of size \(n\) summing to 1), we can define the Wasserstein distance as \[ W_1(p, q) = \min_{P \in \mathbb{R}_+^{n\times m}} \sum_{i,j} C_{i,j} P_{i,j} \] \[ \text{\small subject to } \sum_j P_{i,j} = p_i \text{ \small and } \sum_i P_{i,j} = q_j, \] where \(C_{i,j} = d(x_i, y_j)\) are the costs computed from the original distance between points, and \(P_{i,j}\) represents the amount of mass we are moving from pile \(i\) to pile \(j\).
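To make this concrete, here is a minimal sketch of this linear programme in Python, using scipy’s LP solver. The function name and the toy data are mine, not from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def wasserstein_lp(p, q, C):
    """W_1(p, q): minimise <C, P> subject to row sums p, column sums q, P >= 0."""
    n, m = C.shape
    # Flatten P row-major; build the equality constraints on its marginals.
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1.0  # sum_j P_ij = p_i
    for j in range(m):
        A_eq[n + j, j::m] = 1.0           # sum_i P_ij = q_j
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=np.concatenate([p, q]),
                  bounds=(0, None))
    return res.fun

# Toy example: 3 piles of dirt moved to 2 piles, with a given cost matrix.
p = np.array([0.5, 0.3, 0.2])
q = np.array([0.6, 0.4])
C = np.array([[0.0, 1.0],
              [1.0, 0.0],
              [0.5, 0.5]])
print(wasserstein_lp(p, q, C))  # 0.1
```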
Now, how can this be applied to a natural language setting? Once we have word embeddings, we can consider that the vocabulary forms a metric space (we can compute a distance, for instance the Euclidean or cosine distance, between any two word embeddings). The key is to define documents as distributions over words.
Given a vocabulary \(V \subset \mathbb{R}^n\) and a corpus \(D = (d^1, d^2, \ldots, d^{\lvert D \rvert})\), we represent a document as \(d^i \in \Delta^{l_i}\) where \(l_i\) is the number of unique words in \(d^i\), and \(d^i_j\) is the proportion of word \(v_j\) in the document \(d^i\). The word mover’s distance (WMD) is then defined simply as \[ \operatorname{WMD}(d^1, d^2) = W_1(d^1, d^2). \]
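As a sketch of this definition (reusing `wasserstein_lp` from the previous snippet; the `embeddings` dictionary stands in for whatever pretrained word vectors you use):

```python
import numpy as np
from collections import Counter

def wmd(doc1, doc2, embeddings):
    """Word mover's distance between two tokenised documents (sketch).

    embeddings: dict mapping each word to a numpy vector."""
    words1, words2 = sorted(set(doc1)), sorted(set(doc2))
    # Each document as a distribution over its unique words (normalised counts).
    c1, c2 = Counter(doc1), Counter(doc2)
    d1 = np.array([c1[w] / len(doc1) for w in words1])
    d2 = np.array([c2[w] / len(doc2) for w in words2])
    # Costs: Euclidean distance between the words' embeddings.
    C = np.array([[np.linalg.norm(embeddings[u] - embeddings[v])
                   for v in words2] for u in words1])
    return wasserstein_lp(d1, d2, C)  # solver from the previous snippet
```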
If you didn’t follow all of this, don’t worry! The gist is: if you have a distance between points, you can solve an optimisation problem to obtain a distance between distributions over these points! This is especially useful when you consider that each word embedding is a point, and a document is just a set of words, along with the number of times they appear.
With optimal transport, the word mover’s distance thus gives us a metric between documents. However, it suffers from two drawbacks:
- it is computationally expensive: every pair of documents requires solving an optimisation problem over the documents’ full vocabularies;
- documents represented as distributions over words are not easily interpretable for humans.
To escape these issues, we will add an intermediary step using topic modelling. Once we have topics \(T = (t_1, t_2, \ldots, t_{\lvert T \rvert}) \subset \Delta^{\lvert V \rvert}\), we get two kinds of representations:
- topics, as distributions over the words of the vocabulary: each \(t_k \in \Delta^{\lvert V \rvert}\);
- documents, as distributions over the topics: \(\bar{d^i} \in \Delta^{\lvert T \rvert}\).
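In practice, both representations can come from any topic model. A minimal sketch with scikit-learn’s LDA on a made-up corpus (all names here are illustrative):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

corpus = ["the cat sat on the mat",
          "dogs chase cats in the garden",
          "stock markets fell sharply today",
          "investors sold their shares"]

# Bag-of-words counts over the vocabulary V.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(corpus)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)  # documents as distributions over topics
# Normalise the topic-word pseudo-counts into distributions t_k over V.
topics = lda.components_ / lda.components_.sum(axis=1, keepdims=True)
```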
Since topics are themselves distributions over words, the word mover’s distance defines a metric over them: the set of topics, equipped with the WMD, becomes a metric space.
We can now define the hierarchical optimal topic transport (HOTT) as the optimal transport distance between documents, represented as distributions over topics. For two documents \(d^1\), \(d^2\), \[ \operatorname{HOTT}(d^1, d^2) = W_1\left( \sum_{k=1}^{\lvert T \rvert} \bar{d^1_k} \delta_{t_k}, \sum_{k=1}^{\lvert T \rvert} \bar{d^2_k} \delta_{t_k} \right), \] where \(\delta_{t_k}\) is a distribution supported on topic \(t_k\).
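Putting the two levels together, a rough sketch of HOTT under the assumptions of the previous snippets (`wasserstein_lp` from above; `topics` as rows of a matrix of word distributions): the topic-to-topic cost matrix is computed once, and each document pair then only needs a small problem over the topics.

```python
import numpy as np

def topic_distances(topics, vocab, embeddings):
    """Precompute the WMD between every pair of topics (done only once).

    In practice each topic would be truncated to its top words to keep
    these inner problems small."""
    E = np.array([embeddings[w] for w in vocab])
    # Word-to-word cost matrix over the whole vocabulary.
    C_words = np.linalg.norm(E[:, None, :] - E[None, :, :], axis=-1)
    K = len(topics)
    C_topics = np.zeros((K, K))
    for k in range(K):
        for l in range(k + 1, K):
            d = wasserstein_lp(topics[k], topics[l], C_words)
            C_topics[k, l] = C_topics[l, k] = d
    return C_topics

def hott(dbar1, dbar2, C_topics):
    """HOTT: transport between documents seen as distributions over topics."""
    return wasserstein_lp(dbar1, dbar2, C_topics)
```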
Note that in this case, we used optimal transport twice:
- first, between words, to compute the distances between topics (the WMD over the vocabulary);
- second, between topics, to compute the distance between documents (HOTT).
The first one can be precomputed once and reused for all subsequent distances, so its cost does not grow with the number of documents we have to process. The second one only operates on \(\lvert T \rvert\) topics instead of the full vocabulary: the resulting optimisation problem is much smaller! This is great for performance, as it now becomes feasible to compute all pairwise distances in a large set of documents.
Another interesting insight is that topics are represented as collections of words (we can keep the top 20 as a visual representation), and documents as weighted collections of topics. Both of these representations are highly interpretable for a human being who wants to understand what’s going on. I think this is one of the strongest aspects of these approaches: both the various representations and the algorithms are fully interpretable. Compared to a deep learning approach, we can make sense of every intermediate step, from the representations of topics to the weights in the optimisation algorithm used to compute higher-level distances.
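For instance, reading off a topic’s top 20 words is immediate with the `vectorizer` and `topics` names from the LDA sketch above:

```python
import numpy as np

def top_words(topic, vocab, k=20):
    """The k most probable words of a topic (a distribution over the vocabulary)."""
    return [vocab[i] for i in np.argsort(topic)[::-1][:k]]

vocab = vectorizer.get_feature_names_out()  # from the LDA sketch above
print(top_words(topics[0], vocab))
```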