From 102d4bb689dcce647490c48196168252a3ab3121 Mon Sep 17 00:00:00 2001
From: Dimitri Lozeve
Date: Sun, 5 Apr 2020 16:07:24 +0200
Subject: [PATCH] Small updates

---
 _site/atom.xml                                 | 18 ++++++------
 ...transport-for-document-classification.html  | 18 ++++++------
 _site/rss.xml                                  | 18 ++++++------
 bib/bibliography.bib                           | 29 ++++++++-----------
 ...-transport-for-document-classification.org  | 26 ++++++++---------
 5 files changed, 52 insertions(+), 57 deletions(-)

diff --git a/_site/atom.xml b/_site/atom.xml
index b393616..aa6b018 100644
--- a/_site/atom.xml
+++ b/_site/atom.xml
@@ -21,7 +21,7 @@
-

Two weeks ago, I did a presentation for my colleagues of the paper from Yurochkin et al. (2019), from NeurIPS 2019. It contains an interesting approach to document classification leading to strong performance, and, most importantly, excellent interpretability.

+

Two weeks ago, I did a presentation for my colleagues of the paper from Yurochkin et al. (2019), from NeurIPS 2019. It contains an interesting approach to document classification leading to strong performance, and, most importantly, excellent interpretability.

This paper seems interesting to me because it uses two methods with strong theoretical guarantees: optimal transport and topic modelling. Optimal transport looks very promising to me in NLP, and has seen a lot of interest in recent years due to advances in approximation algorithms, such as entropy regularisation. It is also quite refreshing to see approaches using solid results in optimisation, compared to purely experimental deep learning methods.

Introduction and motivation

The problem the paper addresses is to measure similarity (i.e. a distance) between pairs of documents, incorporating semantic similarities (and not only syntactic artefacts), without encountering scalability issues.

@@ -31,10 +31,10 @@
  • topic modelling methods (e.g. Latent Dirichlet Allocation), to represent semantically-meaningful groups of words.
Background: optimal transport

    -

    The essential backbone of the method is the Wasserstein distance, derived from optimal transport theory. Optimal transport is a fascinating and deep subject, so I won’t enter into the details here. For an introduction to the theory and its applications, check out the excellent book from Peyré and Cuturi (2019), (available on ArXiv as well). There are also very nice posts by Gabriel Peyré on the CNRS maths blog (in French). Many more resources (including slides for presentations) are available at https://optimaltransport.github.io. For a more complete theoretical treatment of the subject, check out Santambrogio (2015), or, if you’re feeling particularly adventurous, Villani (2009).

    +

    The essential backbone of the method is the Wasserstein distance, derived from optimal transport theory. Optimal transport is a fascinating and deep subject, so I won’t enter into the details here. For an introduction to the theory and its applications, check out the excellent book from Peyré and Cuturi (2019), (available on ArXiv as well). There are also very nice posts (in French) by Gabriel Peyré on the CNRS maths blog. Many more resources (including slides for presentations) are available at https://optimaltransport.github.io. For a more complete theoretical treatment of the subject, check out Santambrogio (2015), or, if you’re feeling particularly adventurous, Villani (2009).

For this paper, only a superficial understanding of how the Wasserstein distance works is necessary. Optimal transport is an optimisation technique to lift a distance between points in a given metric space, to a distance between probability distributions over this metric space. The historical example is to move piles of dirt around: you know the distance between any two points, and you have piles of dirt lying around1. Now, if you want to move these piles to another configuration (fewer piles, say, or a different distribution of dirt a few metres away), you need to find the most efficient way to move them. The total cost you obtain will define a distance between the two configurations of dirt, and is usually called the earth mover’s distance, which is just an instance of the general Wasserstein metric.

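To make the piles-of-dirt picture concrete, here is a minimal sketch of the one-dimensional case using SciPy's wasserstein_distance; the pile positions and masses below are invented toy values, not anything from the paper.

#+begin_src python
# 1D earth mover's distance between two "configurations of dirt".
# Positions (metres) and masses are made-up toy values.
from scipy.stats import wasserstein_distance

positions_before = [0.0, 1.0, 3.0]   # where the piles are
masses_before    = [0.5, 0.3, 0.2]   # how much dirt each pile holds

positions_after = [1.0, 4.0]         # target configuration
masses_after    = [0.6, 0.4]

# Minimal total (mass x distance moved) needed to turn one
# configuration into the other.
cost = wasserstein_distance(positions_before, positions_after,
                            masses_before, masses_after)
print(f"Earth mover's distance: {cost:.3f}")
#+end_src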
-

More formally, if we have to sets of points \(x = (x_1, x_2, \ldots, x_n)\), and \(y = (y_1, y_2, \ldots, y_n)\), along with probability distributions \(p \in \Delta^n\), \(q \in \Delta^m\) over \(x\) and \(y\) (\(\Delta^n\) is the probability simplex of dimension \(n\), i.e. the set of vectors of size \(n\) summing to 1), we can define the Wasserstein distance as

+

More formally, we start with two sets of points \(x = (x_1, x_2, \ldots, x_n)\), and \(y = (y_1, y_2, \ldots, y_m)\), along with probability distributions \(p \in \Delta^n\), \(q \in \Delta^m\) over \(x\) and \(y\) (\(\Delta^n\) is the probability simplex of dimension \(n\), i.e. the set of vectors of size \(n\) summing to 1). We can then define the Wasserstein distance as

\[ W_1(p, q) = \min_{P \in \mathbb{R}_+^{n\times m}} \sum_{i,j} C_{i,j} P_{i,j} \]
\[ \text{\small subject to } \sum_j P_{i,j} = p_i \text{ \small and } \sum_i P_{i,j} = q_j, \]

@@ -69,20 +69,20 @@ W_1(p, q) = \min_{P \in \mathbb{R}_+^{n\times m}} \sum_{i,j} C_{i,j} P_{i,j}

    Representation of two documents in topic space, along with how the distance was computed between them. Everything is interpretable: from the documents as collections of topics, to the matchings between topics determining the overall distance between the books (Yurochkin et al. 2019).

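To connect the linear programme defining \(W_1\) above with something executable, here is a small sketch that solves it directly with SciPy's general-purpose LP solver on a toy example; the point clouds, weights and sizes are arbitrary, and in practice a dedicated optimal transport library would be preferable.

#+begin_src python
# Solve the W_1 linear programme for two small discrete distributions.
# Everything here (points, weights, sizes) is a toy example.
import numpy as np
from scipy.optimize import linprog
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
n, m = 4, 3
x = rng.normal(size=(n, 2))      # support points of p
y = rng.normal(size=(m, 2))      # support points of q
p = np.full(n, 1.0 / n)          # uniform weights on x
q = np.full(m, 1.0 / m)          # uniform weights on y

C = cdist(x, y)                  # cost matrix C[i, j] = d(x_i, y_j)

# Flatten the transport plan P into a vector of length n*m and encode
# the two marginal constraints (row sums = p, column sums = q).
A_eq = np.zeros((n + m, n * m))
for i in range(n):
    A_eq[i, i * m:(i + 1) * m] = 1.0   # sum_j P[i, j] = p_i
for j in range(m):
    A_eq[n + j, j::m] = 1.0            # sum_i P[i, j] = q_j
b_eq = np.concatenate([p, q])

res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq,
              bounds=(0, None), method="highs")
P = res.x.reshape(n, m)                # optimal transport plan
print("W_1(p, q) =", res.fun)
#+end_src

With the entropy regularisation mentioned earlier, the same problem can be solved approximately but much faster using Sinkhorn iterations, which is part of what makes optimal transport practical at larger scales.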
    Experiments

    -

    The paper is very complete regarding experiments, providing a full evaluation of the method on one particular application: document clustering. They use Latent Dirichlet Allocation to compute topics and GloVe for pretrained word embeddings (Moschitti, Pang, and Daelemans 2014), and Gurobi to solve the optimisation problems. Their code is available on Github.

    +

    The paper is very complete regarding experiments, providing a full evaluation of the method on one particular application: document clustering. They use Latent Dirichlet Allocation to compute topics and GloVe for pretrained word embeddings (Pennington, Socher, and Manning 2014), and Gurobi to solve the optimisation problems. Their code is available on GitHub.

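To give an idea of the topic-modelling ingredient, here is a generic sketch of the document-to-topic-distribution step using scikit-learn; the tiny corpus and the parameters are made up, and this is not the authors' actual pipeline.

#+begin_src python
# Represent each document as a probability distribution over topics.
# The corpus below is a made-up three-document example.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

corpus = [
    "optimal transport moves probability mass at minimal cost",
    "topic models describe documents as mixtures of topics",
    "word embeddings place similar words close together",
]
counts = CountVectorizer().fit_transform(corpus)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)  # one topic distribution per document (rows sum to 1)
print(doc_topics)
#+end_src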
If you want the details, I encourage you to read the full paper; they tested the methods on a wide variety of datasets, from very short documents (like Twitter) to long documents with a large vocabulary (books). With a simple \(k\)-NN classification, they establish that HOTT performs best on average, especially on large vocabularies (books, the “gutenberg” dataset). It also has much better computational performance than alternative methods based on regularisation of the optimal transport problem directly on words. So the hierarchical nature of the approach brings considerable gains in performance, along with improvements in interpretability.
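The \(k\)-NN evaluation itself is easy to reproduce once a matrix of pairwise document distances is available; the sketch below assumes such a precomputed HOTT distance matrix and fills it with random placeholder values.

#+begin_src python
# k-NN document classification from a precomputed distance matrix.
# D_train and D_test stand in for pairwise HOTT distances
# (train x train and test x train); here they are random placeholders.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n_train, n_test = 100, 20
D_train = rng.random((n_train, n_train))
D_train = (D_train + D_train.T) / 2          # symmetric distances
np.fill_diagonal(D_train, 0.0)
D_test = rng.random((n_test, n_train))
y_train = rng.integers(0, 3, size=n_train)   # dummy class labels

knn = KNeighborsClassifier(n_neighbors=7, metric="precomputed")
knn.fit(D_train, y_train)
print(knn.predict(D_test)[:10])
#+end_src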

What’s really interesting in the paper is the sensitivity analysis: they ran experiments with different word embedding methods (word2vec; Mikolov et al. 2013), and with different parameters for the topic modelling (topic truncation, number of topics, etc.). All of these reveal that changes in hyperparameters do not significantly impact the performance of HOTT. This is extremely important in a field like NLP, where small variations in approach often lead to drastically different results.

    Conclusion

All in all, this paper presents a very interesting approach to computing distances between natural-language documents. It is no secret that I like methods with a strong theoretical background (in this case optimisation and optimal transport), guaranteeing stability and benefiting from decades of research in a well-established domain.

Most importantly, this paper allows for future exploration in document representation with interpretability in mind. This is often added as an afterthought in academic research, but it is one of the most important topics in industry, as a system must be understood by end users, often not trained in ML, before being deployed. The notion of topics, and of distances as weights, can easily be understood by anyone without a significant background in ML or maths.

    -

    Finally, I feel like they did not stop at a simple theoretical argument, but carefully checked on real-world datasets, measuring sensitivity to all the arbitrary choices they had to take. Again, from an industry perspective, this allows to implement the new approach quickly and easily, confident that it won’t break unexpectedly without extensive testing.

    +

    Finally, I feel like they did not stop at a simple theoretical argument, but carefully checked on real-world datasets, measuring sensitivity to all the arbitrary choices they had to take. Again, from an industry perspective, this allows to implement the new approach quickly and easily, being confident that it won’t break unexpectedly without extensive testing.

    References

    -

    Mikolov, Tomas, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. “Distributed Representations of Words and Phrases and Their Compositionality.” In Advances in Neural Information Processing Systems 26, edited by C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, 3111–9. Curran Associates, Inc. http://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf.

    +

    Mikolov, Tomas, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. “Distributed Representations of Words and Phrases and Their Compositionality.” In Advances in Neural Information Processing Systems 26, 3111–9. http://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf.

    -
    -

    Moschitti, Alessandro, Bo Pang, and Walter Daelemans, eds. 2014. Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A Meeting of Sigdat, a Special Interest Group of the ACL. ACL. https://www.aclweb.org/anthology/volumes/D14-1/.

    +
    +

    Pennington, Jeffrey, Richard Socher, and Christopher Manning. 2014. “Glove: Global Vectors for Word Representation.” In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), 1532–43. Doha, Qatar: Association for Computational Linguistics. https://doi.org/10.3115/v1/D14-1162.

Peyré, Gabriel, and Marco Cuturi. 2019. “Computational Optimal Transport.” Foundations and Trends in Machine Learning 11 (5-6): 355–607. https://doi.org/10.1561/2200000073.

diff --git a/bib/bibliography.bib b/bib/bibliography.bib
index b4b4dce..68fa7e2 100644
--- a/bib/bibliography.bib
+++ b/bib/bibliography.bib
@@ -160,33 +160,28 @@
   https://dblp.org}
 }
 
-@proceedings{moschitti2014_proceed_confer_empir_method_natur,
-  bibsource = {dblp computer science bibliography,
-               https://dblp.org},
-  biburl = {https://dblp.org/rec/conf/emnlp/2014.bib},
-  editor = {Alessandro Moschitti and Bo Pang and Walter
-            Daelemans},
-  isbn = {978-1-937284-96-1},
-  publisher = {{ACL}},
-  timestamp = {Fri, 13 Sep 2019 13:08:45 +0200},
-  title = {Proceedings of the 2014 Conference on Empirical
-           Methods in Natural Language Processing, {EMNLP}
-           2014, October 25-29, 2014, Doha, Qatar, {A} meeting
-           of SIGDAT, a Special Interest Group of the {ACL}},
-  url = {https://www.aclweb.org/anthology/volumes/D14-1/},
+@inproceedings{pennington2014_glove,
+  author = "Pennington, Jeffrey and Socher, Richard and Manning,
+            Christopher",
+  title = "{G}love: Global Vectors for Word Representation",
+  booktitle = "Proceedings of the 2014 Conference on Empirical
+               Methods in Natural Language Processing ({EMNLP})",
   year = 2014,
+  pages = "1532--1543",
+  doi = "10.3115/v1/D14-1162",
+  url = {https://doi.org/10.3115/v1/D14-1162},
+  address = "Doha, Qatar",
+  month = oct,
+  publisher = "Association for Computational Linguistics",
 }
 
 @incollection{mikolovDistributedRepresentationsWords2013,
   title = {Distributed {{Representations}} of {{Words}} and {{Phrases}} and Their {{Compositionality}}},
   url = {http://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf},
   booktitle = {Advances in {{Neural Information Processing Systems}} 26},
-  publisher = {{Curran Associates, Inc.}},
   urldate = {2019-08-13},
   date = {2013},
   pages = {3111--3119},
   author = {Mikolov, Tomas and Sutskever, Ilya and Chen, Kai and Corrado, Greg S and Dean, Jeff},
-  editor = {Burges, C. J. C. and Bottou, L. and Welling, M. and Ghahramani, Z. and Weinberger, K. Q.},
-  file = {/home/dimitri/Nextcloud/Zotero/storage/Q4GDL59G/5021-distributed-representations-of-words-andphrases.html}
 }
diff --git a/posts/hierarchical-optimal-transport-for-document-classification.org b/posts/hierarchical-optimal-transport-for-document-classification.org
index a20b55f..59c4c37 100644
--- a/posts/hierarchical-optimal-transport-for-document-classification.org
+++ b/posts/hierarchical-optimal-transport-for-document-classification.org
@@ -4,10 +4,10 @@
 date: 2020-04-05
 ---
 
 Two weeks ago, I did a presentation for my colleagues of the paper
-from cite:yurochkin2019_hierar_optim_trans_docum_repres, from
-NeurIPS 2019. It contains an interesting approach to document
-classification leading to strong performance, and, most importantly,
-excellent interpretability.
+from cite:yurochkin2019_hierar_optim_trans_docum_repres, from [[https://papers.nips.cc/book/advances-in-neural-information-processing-systems-32-2019][NeurIPS
+2019]]. It contains an interesting approach to document classification
+leading to strong performance, and, most importantly, excellent
+interpretability.
 
 This paper seems interesting to me because it uses two methods
 with strong theoretical guarantees: optimal transport and topic
@@ -41,8 +41,8 @@
 fascinating and deep subject, so I won't enter into the details here.
 For an introduction to the theory and its applications, check out the
 excellent book from cite:peyreComputationalOptimalTransport2019, ([[https://arxiv.org/abs/1803.00567][available on ArXiv]] as
-well). There are also [[https://images.math.cnrs.fr/Le-transport-optimal-numerique-et-ses-applications-Partie-1.html?lang=fr][very nice posts]] by Gabriel Peyré on the CNRS
-maths blog (in French). Many more resources (including slides for
+well). There are also [[https://images.math.cnrs.fr/Le-transport-optimal-numerique-et-ses-applications-Partie-1.html?lang=fr][very nice posts]] (in French) by Gabriel Peyré on
+the [[https://images.math.cnrs.fr/][CNRS maths blog]]. Many more resources (including slides for
 presentations) are available at
 [[https://optimaltransport.github.io]].
 For a more complete theoretical treatment of the subject, check out
@@ -70,8 +70,8 @@
 examples move cannon balls, or other military equipment, along a
 front line.
 
-More formally, if we have to sets of points $x = (x_1, x_2, \ldots,
- x_n)$, and $y = (y_1, y_2, \ldots, y_n)$, along with probability distributions $p \in \Delta^n$, $q \in \Delta^m$ over $x$ and $y$ ($\Delta^n$ is the probability simplex of dimension $n$, i.e. the set of vectors of size $n$ summing to 1), we can define the Wasserstein distance as
+More formally, we start with two sets of points $x = (x_1, x_2, \ldots,
+ x_n)$, and $y = (y_1, y_2, \ldots, y_m)$, along with probability distributions $p \in \Delta^n$, $q \in \Delta^m$ over $x$ and $y$ ($\Delta^n$ is the probability simplex of dimension $n$, i.e. the set of vectors of size $n$ summing to 1). We can then define the Wasserstein distance as
 \[
 W_1(p, q) = \min_{P \in \mathbb{R}_+^{n\times m}} \sum_{i,j} C_{i,j} P_{i,j}
 \]
@@ -162,9 +162,9 @@
 optimisation algorithm to compute higher-level distances.
 
 The paper is very complete regarding experiments, providing a full
 evaluation of the method on one particular application: document
 clustering. They use [[https://scikit-learn.org/stable/modules/decomposition.html#latentdirichletallocation][Latent Dirichlet Allocation]] to compute topics and
-GloVe for pretrained word embeddings
-citep:moschitti2014_proceed_confer_empir_method_natur, and [[https://www.gurobi.com/][Gurobi]] to
-solve the optimisation problems. Their code is available [[https://github.com/IBM/HOTT][on Github]].
+GloVe for pretrained word embeddings citep:pennington2014_glove, and
+[[https://www.gurobi.com/][Gurobi]] to solve the optimisation problems. Their code is available [[https://github.com/IBM/HOTT][on
+GitHub]].
 
 If you want the details, I encourage you to read the full paper; they
 tested the methods on a wide variety of datasets, with datasets
@@ -206,7 +206,7 @@
 Finally, I feel like they did not stop at a simple theoretical
 argument, but carefully checked on real-world datasets, measuring
 sensitivity to all the arbitrary choices they had to take. Again, from
 an industry perspective, this allows to implement the new approach
-quickly and easily, confident that it won't break unexpectedly without
-extensive testing.
+quickly and easily, being confident that it won't break unexpectedly
+without extensive testing.
 
 * References