Add tables of contents to posts
parent 6e31bd8eab
commit 92d759a9bf
17 changed files with 272 additions and 27 deletions
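The TOC markup added in the hunks below follows a regular pattern (a flat `<ul>` with nested sub-lists for h3-level entries), so it could plausibly come from a small generator walking each post's headings. A hypothetical Python sketch of such a generator — not this blog's actual build code, which the diff does not show:

```python
# Hypothetical sketch: build the nested <ul> table of contents added in
# this commit from a flat list of (level, id, title) headings.
# Assumes the first heading is at the top level (h2).

def toc_html(headings):
    """headings: list of (level, id, title), e.g. (2, "introduction", "Introduction")."""
    html, stack = ["<h2>Table of Contents</h2><ul>"], [2]
    for level, hid, title in headings:
        while level < stack[-1]:  # close nested lists when coming back up
            html.append("</ul></li>")
            stack.pop()
        if level > stack[-1]:     # reopen the previous item and nest a <ul> in it
            html[-1] = html[-1][:-len("</li>")] + "<ul>"
            stack.append(level)
        html.append(f'<li><a href="#{hid}">{title}</a></li>')
    while len(stack) > 1:         # close any lists still open at the end
        html.append("</ul></li>")
        stack.pop()
    html.append("</ul>")
    return "".join(html)
```

For example, `toc_html([(2, "where-to-start", "Where to start"), (3, "online-courses", "Online courses")])` nests the second entry inside the first, matching the markup in the hunks.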
@@ -20,6 +20,19 @@
</section>
<section>
<h2>Table of Contents</h2><ul>
<li><a href="#introduction">Introduction</a></li>
<li><a href="#why-is-it-hard-to-approach">Why is it hard to approach?</a></li>
<li><a href="#where-to-start">Where to start</a><ul>
<li><a href="#introduction-and-modelling">Introduction and modelling</a></li>
<li><a href="#theory-and-algorithms">Theory and algorithms</a></li>
<li><a href="#online-courses">Online courses</a></li>
</ul></li>
<li><a href="#solvers-and-computational-resources">Solvers and computational resources <span id="solvers"></span></a></li>
<li><a href="#conclusion">Conclusion</a></li>
<li><a href="#references">References</a></li>
</ul>
<h2 id="introduction">Introduction</h2>
<p><a href="https://en.wikipedia.org/wiki/Operations_research">Operations research</a> (OR) is a vast area comprising a lot of theory, different branches of mathematics, and too many applications to count. In this post, I will try to explain why it can be a little disconcerting to explore at first, and how to start investigating the topic with a few references to get started.</p>
<p>Keep in mind that although I studied it during my graduate studies, this is not my primary area of expertise (I’m a data scientist by trade), and I definitely don’t pretend to know everything in OR. This is a field too vast for any single person to understand in its entirety, and I talk mostly from an “amateur mathematician and computer scientist” standpoint.</p>
<h2 id="why-is-it-hard-to-approach">Why is it hard to approach?</h2>
@@ -135,6 +148,20 @@
</section>
<section>
<h2>Table of Contents</h2><ul>
<li><a href="#the-format-of-the-virtual-conference">The Format of the Virtual Conference</a></li>
<li><a href="#speakers">Speakers</a><ul>
<li><a href="#prof.-leslie-kaelbling-doing-for-our-robots-what-nature-did-for-us">Prof. Leslie Kaelbling, <span>Doing for Our Robots What Nature Did For Us</span></a></li>
<li><a href="#dr.-laurent-dinh-invertible-models-and-normalizing-flows">Dr. Laurent Dinh, <span>Invertible Models and Normalizing Flows</span></a></li>
<li><a href="#profs.-yann-lecun-and-yoshua-bengio-reflections-from-the-turing-award-winners">Profs. Yann LeCun and Yoshua Bengio, <span>Reflections from the Turing Award Winners</span></a></li>
</ul></li>
<li><a href="#workshops">Workshops</a><ul>
<li><a href="#beyond-tabula-rasa-in-reinforcement-learning-agents-that-remember-adapt-and-generalize"><span>Beyond ‘tabula rasa’ in reinforcement learning: agents that remember, adapt, and generalize</span></a></li>
<li><a href="#causal-learning-for-decision-making"><span>Causal Learning For Decision Making</span></a></li>
<li><a href="#bridging-ai-and-cognitive-science"><span>Bridging AI and Cognitive Science</span></a></li>
<li><a href="#integration-of-deep-neural-models-and-differential-equations"><span>Integration of Deep Neural Models and Differential Equations</span></a></li>
</ul></li>
</ul>
<p>ICLR is one of the most important conferences in machine learning, and as such, I was very excited to have the opportunity to volunteer and attend the first fully-virtual edition of the event. The whole content of the conference was made <a href="https://iclr.cc/virtual_2020/index.html">publicly available</a> only a few days after the end of the event!</p>
<p>I would like to thank the <a href="https://iclr.cc/Conferences/2020/Committees">organizing committee</a> for this incredible event, and for the opportunity to volunteer and help other participants<span><label for="sn-1" class="margin-toggle sidenote-number"></label><input type="checkbox" id="sn-1" class="margin-toggle" /><span class="sidenote">To better organize the event, and help people navigate the various online tools, they brought in 500(!) volunteers, waived our registration fees, and asked us to do simple load-testing and tech support. This was a very generous offer, and it felt very rewarding for us, as we could attend the conference and give back to the organization a little bit.<br />
<br />
@@ -191,6 +218,14 @@
</section>
<section>
<h2>Table of Contents</h2><ul>
<li><a href="#introduction-and-motivation">Introduction and motivation</a></li>
<li><a href="#background-optimal-transport">Background: optimal transport</a></li>
<li><a href="#hierarchical-optimal-transport">Hierarchical optimal transport</a></li>
<li><a href="#experiments">Experiments</a></li>
<li><a href="#conclusion">Conclusion</a></li>
<li><a href="#references">References</a></li>
</ul>
<p>Two weeks ago, I gave a presentation to my colleagues on the paper by <span class="citation" data-cites="yurochkin2019_hierar_optim_trans_docum_repres">Yurochkin et al. (<a href="#ref-yurochkin2019_hierar_optim_trans_docum_repres">2019</a>)</span>, from <a href="https://papers.nips.cc/book/advances-in-neural-information-processing-systems-32-2019">NeurIPS 2019</a>. It contains an interesting approach to document classification leading to strong performance and, most importantly, excellent interpretability.</p>
<p>This paper seems interesting to me because it uses two methods with strong theoretical guarantees: optimal transport and topic modelling. Optimal transport looks very promising to me in NLP, and has seen a lot of interest in recent years due to advances in approximation algorithms, such as entropy regularisation. It is also quite refreshing to see approaches using solid results in optimisation, compared to purely experimental deep learning methods.</p>
<h2 id="introduction-and-motivation">Introduction and motivation</h2>
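The entropy regularisation mentioned above leads to the Sinkhorn algorithm, which approximates the optimal transport plan for the cost-minimisation problem visible in this file's hunk header. A hedged NumPy sketch, illustrative only and not the paper's implementation:

```python
import numpy as np

# Sketch of entropy-regularised optimal transport (Sinkhorn iterations).
# p, q are histograms, C a cost matrix; eps is the regularisation strength.
# Illustrative only; names and defaults are this sketch's assumptions.

def sinkhorn(p, q, C, eps=0.1, n_iter=200):
    K = np.exp(-C / eps)             # Gibbs kernel of the cost matrix
    u = np.ones_like(p)
    for _ in range(n_iter):
        v = q / (K.T @ u)            # scale to match column marginals
        u = p / (K @ v)              # scale to match row marginals
    P = u[:, None] * K * v[None, :]  # approximate transport plan
    return P, np.sum(P * C)          # plan and its transport cost

p = np.array([0.5, 0.5])
q = np.array([0.5, 0.5])
C = np.array([[0.0, 1.0], [1.0, 0.0]])
P, cost = sinkhorn(p, q, C)
```

After convergence, the plan's row and column sums match `p` and `q`, which is what makes this a valid (regularised) transport plan.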
@@ -345,6 +380,17 @@ W_1(p, q) = \min_{P \in \mathbb{R}_+^{n\times m}} \sum_{i,j} C_{i,j} P_{i,j}
</section>
<section>
<h2>Table of Contents</h2><ul>
<li><a href="#introduction">Introduction</a></li>
<li><a href="#the-axioms">The Axioms</a></li>
<li><a href="#addition">Addition</a><ul>
<li><a href="#commutativity">Commutativity</a></li>
<li><a href="#associativity">Associativity</a></li>
<li><a href="#identity-element">Identity element</a></li>
</ul></li>
<li><a href="#going-further">Going further</a></li>
<li><a href="#references">References</a></li>
</ul>
<h2 id="introduction">Introduction</h2>
<p>I have recently bought the book <em>Category Theory</em> by Steve Awodey <span class="citation" data-cites="awodeyCategoryTheory2010">(Awodey <a href="#ref-awodeyCategoryTheory2010">2010</a>)</span> (which is awesome, but probably the topic for another post), and a particular passage excited my curiosity:</p>
<blockquote>
@@ -524,6 +570,14 @@ then <span class="math inline">\(\varphi(n)\)</span> is true for every natural n
</section>
<section>
<h2>Table of Contents</h2><ul>
<li><a href="#the-apl-family-of-languages">The APL family of languages</a><ul>
<li><a href="#why-apl">Why APL?</a></li>
<li><a href="#implementations">Implementations</a></li>
</ul></li>
<li><a href="#the-ising-model-in-apl">The Ising model in APL</a></li>
<li><a href="#conclusion">Conclusion</a></li>
</ul>
<h2 id="the-apl-family-of-languages">The APL family of languages</h2>
<h3 id="why-apl">Why APL?</h3>
<p>I recently got interested in <a href="https://en.wikipedia.org/wiki/APL_(programming_language)">APL</a>, an <em>array-based</em> programming language. In APL (and its derivatives), we try to reason about programs as series of transformations of multi-dimensional arrays. This is exactly the kind of style I like in Haskell and other functional languages, where I also try to use higher-order functions (map, fold, etc.) on lists or arrays. A developer only needs to understand these abstractions once, instead of deconstructing each loop or each recursive function encountered in a program.</p>
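The whole-array style the paragraph above describes can be mimicked in NumPy (an analogy chosen for this sketch; the post itself uses APL): each computation is a transformation of the whole array, with no explicit loops.

```python
import numpy as np

# Array-style transformations in the spirit of APL, with the rough APL
# equivalent noted in comments (the APL spellings are approximate).

x = np.arange(1, 11)                     # 1 2 ... 10, like APL's iota
mean = x.sum() / x.size                  # APL: (+/x) ÷ ≢x  (a fold)
running = np.cumsum(x)                   # APL: +\x  (a scan)
table = np.multiply.outer(x[:5], x[:5])  # APL: x ∘.× x  (outer product)
```

Once you know what a fold, a scan, and an outer product do, each line reads at a glance, which is the point the paragraph makes.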
@@ -730,6 +784,12 @@ then <span class="math inline">\(\varphi(n)\)</span> is true for every natural n
</section>
<section>
<h2>Table of Contents</h2><ul>
<li><a href="#mathematical-definition">Mathematical definition</a></li>
<li><a href="#simulation">Simulation</a></li>
<li><a href="#implementation">Implementation</a></li>
<li><a href="#conclusion">Conclusion</a></li>
</ul>
<p>The <a href="https://en.wikipedia.org/wiki/Ising_model">Ising model</a> represents magnetic dipole moments in statistical physics. The physical details are on the Wikipedia page, but what is interesting is that it follows a complex probability distribution on a lattice, where each site can take the value +1 or -1.</p>
<p><img src="../images/ising.gif" /></p>
<h2 id="mathematical-definition">Mathematical definition</h2>
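A lattice of ±1 spins with the neighbour-interaction energy visible in the next hunk's header is typically sampled with the Metropolis algorithm. A minimal Python sketch (a hypothetical illustration, not the post's APL implementation):

```python
import numpy as np

# One Metropolis update for a 2D Ising lattice of +1/-1 spins:
# pick a random site, compute the energy change dE of flipping it,
# and accept the flip with probability min(1, exp(-beta * dE)).

def metropolis_step(lattice, beta=0.4, J=1.0, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    n, m = lattice.shape
    i, j = rng.integers(n), rng.integers(m)
    s = lattice[i, j]
    # Sum of the four neighbours, with periodic boundary conditions
    neighbours = (lattice[(i + 1) % n, j] + lattice[(i - 1) % n, j]
                  + lattice[i, (j + 1) % m] + lattice[i, (j - 1) % m])
    dE = 2.0 * J * s * neighbours  # flipping s negates its -J*s*neighbours term
    if dE <= 0 or rng.random() < np.exp(-beta * dE):
        lattice[i, j] = -s

rng = np.random.default_rng(0)
lattice = np.ones((8, 8), dtype=int)
for _ in range(500):
    metropolis_step(lattice, beta=0.4, rng=rng)
```

Repeating this step drives the lattice toward the Boltzmann distribution, which is what animations like the one above visualise.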
@@ -850,6 +910,22 @@ J\sigma_i \sum_{j\sim i} \sigma_j. \]</span></p>
</section>
<section>
<h2>Table of Contents</h2><ul>
<li><a href="#what-is-an-l-system">What is an L-system?</a><ul>
<li><a href="#a-few-examples-to-get-started">A few examples to get started</a></li>
<li><a href="#definition">Definition</a></li>
<li><a href="#drawing-instructions-and-representation">Drawing instructions and representation</a></li>
</ul></li>
<li><a href="#implementation-details">Implementation details</a><ul>
<li><a href="#the-lsystem-data-type">The <code>LSystem</code> data type</a></li>
<li><a href="#iterating-and-representing">Iterating and representing</a></li>
<li><a href="#drawing">Drawing</a></li>
</ul></li>
<li><a href="#common-file-format-for-l-systems">Common file format for L-systems</a></li>
<li><a href="#variations-on-l-systems">Variations on L-systems</a></li>
<li><a href="#usage-notes">Usage notes</a></li>
<li><a href="#references">References</a></li>
</ul>
<p>L-systems are a formal way to make interesting visualisations. You can use them to model a wide variety of objects: space-filling curves, fractals, biological systems, tilings, etc.</p>
<p>See the GitHub repo: <a href="https://github.com/dlozeve/lsystems" class="uri">https://github.com/dlozeve/lsystems</a></p>
<h2 id="what-is-an-l-system">What is an L-system?</h2>
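The core of an L-system is parallel rewriting: every symbol of the current string is replaced at once according to a set of rules. A minimal Python analogue (the linked repo has its own implementation; this sketch is only illustrative):

```python
# Iterated parallel rewriting, the core L-system operation:
# every symbol is rewritten simultaneously at each step.

def lsystem(axiom, rules, n):
    """Apply the rewriting rules n times to the axiom string."""
    s = axiom
    for _ in range(n):
        s = "".join(rules.get(c, c) for c in s)  # symbols without a rule stay
    return s

# Lindenmayer's original algae system: A -> AB, B -> A
algae = lsystem("A", {"A": "AB", "B": "A"}, 4)  # "ABAABABA"
```

The string lengths of the algae system follow the Fibonacci sequence, a classic first check that the rewriting is correct.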
@@ -52,6 +52,14 @@
</section>
<section>
<h2>Table of Contents</h2><ul>
<li><a href="#introduction-and-motivation">Introduction and motivation</a></li>
<li><a href="#background-optimal-transport">Background: optimal transport</a></li>
<li><a href="#hierarchical-optimal-transport">Hierarchical optimal transport</a></li>
<li><a href="#experiments">Experiments</a></li>
<li><a href="#conclusion">Conclusion</a></li>
<li><a href="#references">References</a></li>
</ul>
<p>Two weeks ago, I gave a presentation to my colleagues on the paper by <span class="citation" data-cites="yurochkin2019_hierar_optim_trans_docum_repres">Yurochkin et al. (<a href="#ref-yurochkin2019_hierar_optim_trans_docum_repres">2019</a>)</span>, from <a href="https://papers.nips.cc/book/advances-in-neural-information-processing-systems-32-2019">NeurIPS 2019</a>. It contains an interesting approach to document classification leading to strong performance and, most importantly, excellent interpretability.</p>
<p>This paper seems interesting to me because it uses two methods with strong theoretical guarantees: optimal transport and topic modelling. Optimal transport looks very promising to me in NLP, and has seen a lot of interest in recent years due to advances in approximation algorithms, such as entropy regularisation. It is also quite refreshing to see approaches using solid results in optimisation, compared to purely experimental deep learning methods.</p>
<h2 id="introduction-and-motivation">Introduction and motivation</h2>
@@ -52,6 +52,20 @@
</section>
<section>
<h2>Table of Contents</h2><ul>
<li><a href="#the-format-of-the-virtual-conference">The Format of the Virtual Conference</a></li>
<li><a href="#speakers">Speakers</a><ul>
<li><a href="#prof.-leslie-kaelbling-doing-for-our-robots-what-nature-did-for-us">Prof. Leslie Kaelbling, <span>Doing for Our Robots What Nature Did For Us</span></a></li>
<li><a href="#dr.-laurent-dinh-invertible-models-and-normalizing-flows">Dr. Laurent Dinh, <span>Invertible Models and Normalizing Flows</span></a></li>
<li><a href="#profs.-yann-lecun-and-yoshua-bengio-reflections-from-the-turing-award-winners">Profs. Yann LeCun and Yoshua Bengio, <span>Reflections from the Turing Award Winners</span></a></li>
</ul></li>
<li><a href="#workshops">Workshops</a><ul>
<li><a href="#beyond-tabula-rasa-in-reinforcement-learning-agents-that-remember-adapt-and-generalize"><span>Beyond ‘tabula rasa’ in reinforcement learning: agents that remember, adapt, and generalize</span></a></li>
<li><a href="#causal-learning-for-decision-making"><span>Causal Learning For Decision Making</span></a></li>
<li><a href="#bridging-ai-and-cognitive-science"><span>Bridging AI and Cognitive Science</span></a></li>
<li><a href="#integration-of-deep-neural-models-and-differential-equations"><span>Integration of Deep Neural Models and Differential Equations</span></a></li>
</ul></li>
</ul>
<p>ICLR is one of the most important conferences in machine learning, and as such, I was very excited to have the opportunity to volunteer and attend the first fully-virtual edition of the event. The whole content of the conference was made <a href="https://iclr.cc/virtual_2020/index.html">publicly available</a> only a few days after the end of the event!</p>
<p>I would like to thank the <a href="https://iclr.cc/Conferences/2020/Committees">organizing committee</a> for this incredible event, and for the opportunity to volunteer and help other participants<span><label for="sn-1" class="margin-toggle sidenote-number"></label><input type="checkbox" id="sn-1" class="margin-toggle" /><span class="sidenote">To better organize the event, and help people navigate the various online tools, they brought in 500(!) volunteers, waived our registration fees, and asked us to do simple load-testing and tech support. This was a very generous offer, and it felt very rewarding for us, as we could attend the conference and give back to the organization a little bit.<br />
<br />
@@ -52,6 +52,14 @@
</section>
<section>
<h2>Table of Contents</h2><ul>
<li><a href="#the-apl-family-of-languages">The APL family of languages</a><ul>
<li><a href="#why-apl">Why APL?</a></li>
<li><a href="#implementations">Implementations</a></li>
</ul></li>
<li><a href="#the-ising-model-in-apl">The Ising model in APL</a></li>
<li><a href="#conclusion">Conclusion</a></li>
</ul>
<h2 id="the-apl-family-of-languages">The APL family of languages</h2>
<h3 id="why-apl">Why APL?</h3>
<p>I recently got interested in <a href="https://en.wikipedia.org/wiki/APL_(programming_language)">APL</a>, an <em>array-based</em> programming language. In APL (and its derivatives), we try to reason about programs as series of transformations of multi-dimensional arrays. This is exactly the kind of style I like in Haskell and other functional languages, where I also try to use higher-order functions (map, fold, etc.) on lists or arrays. A developer only needs to understand these abstractions once, instead of deconstructing each loop or each recursive function encountered in a program.</p>
@@ -54,6 +54,12 @@
</section>
<section>
<h2>Table of Contents</h2><ul>
<li><a href="#mathematical-definition">Mathematical definition</a></li>
<li><a href="#simulation">Simulation</a></li>
<li><a href="#implementation">Implementation</a></li>
<li><a href="#conclusion">Conclusion</a></li>
</ul>
<p>The <a href="https://en.wikipedia.org/wiki/Ising_model">Ising model</a> represents magnetic dipole moments in statistical physics. The physical details are on the Wikipedia page, but what is interesting is that it follows a complex probability distribution on a lattice, where each site can take the value +1 or -1.</p>
<p><img src="../images/ising.gif" /></p>
<h2 id="mathematical-definition">Mathematical definition</h2>
@@ -54,6 +54,22 @@
</section>
<section>
<h2>Table of Contents</h2><ul>
<li><a href="#what-is-an-l-system">What is an L-system?</a><ul>
<li><a href="#a-few-examples-to-get-started">A few examples to get started</a></li>
<li><a href="#definition">Definition</a></li>
<li><a href="#drawing-instructions-and-representation">Drawing instructions and representation</a></li>
</ul></li>
<li><a href="#implementation-details">Implementation details</a><ul>
<li><a href="#the-lsystem-data-type">The <code>LSystem</code> data type</a></li>
<li><a href="#iterating-and-representing">Iterating and representing</a></li>
<li><a href="#drawing">Drawing</a></li>
</ul></li>
<li><a href="#common-file-format-for-l-systems">Common file format for L-systems</a></li>
<li><a href="#variations-on-l-systems">Variations on L-systems</a></li>
<li><a href="#usage-notes">Usage notes</a></li>
<li><a href="#references">References</a></li>
</ul>
<p>L-systems are a formal way to make interesting visualisations. You can use them to model a wide variety of objects: space-filling curves, fractals, biological systems, tilings, etc.</p>
<p>See the GitHub repo: <a href="https://github.com/dlozeve/lsystems" class="uri">https://github.com/dlozeve/lsystems</a></p>
<h2 id="what-is-an-l-system">What is an L-system?</h2>
@@ -52,6 +52,19 @@
</section>
<section>
<h2>Table of Contents</h2><ul>
<li><a href="#introduction">Introduction</a></li>
<li><a href="#why-is-it-hard-to-approach">Why is it hard to approach?</a></li>
<li><a href="#where-to-start">Where to start</a><ul>
<li><a href="#introduction-and-modelling">Introduction and modelling</a></li>
<li><a href="#theory-and-algorithms">Theory and algorithms</a></li>
<li><a href="#online-courses">Online courses</a></li>
</ul></li>
<li><a href="#solvers-and-computational-resources">Solvers and computational resources <span id="solvers"></span></a></li>
<li><a href="#conclusion">Conclusion</a></li>
<li><a href="#references">References</a></li>
</ul>
<h2 id="introduction">Introduction</h2>
<p><a href="https://en.wikipedia.org/wiki/Operations_research">Operations research</a> (OR) is a vast area comprising a lot of theory, different branches of mathematics, and too many applications to count. In this post, I will try to explain why it can be a little disconcerting to explore at first, and how to start investigating the topic with a few references to get started.</p>
<p>Keep in mind that although I studied it during my graduate studies, this is not my primary area of expertise (I’m a data scientist by trade), and I definitely don’t pretend to know everything in OR. This is a field too vast for any single person to understand in its entirety, and I talk mostly from an “amateur mathematician and computer scientist” standpoint.</p>
<h2 id="why-is-it-hard-to-approach">Why is it hard to approach?</h2>
@@ -52,6 +52,17 @@
</section>
<section>
<h2>Table of Contents</h2><ul>
<li><a href="#introduction">Introduction</a></li>
<li><a href="#the-axioms">The Axioms</a></li>
<li><a href="#addition">Addition</a><ul>
<li><a href="#commutativity">Commutativity</a></li>
<li><a href="#associativity">Associativity</a></li>
<li><a href="#identity-element">Identity element</a></li>
</ul></li>
<li><a href="#going-further">Going further</a></li>
<li><a href="#references">References</a></li>
</ul>
<h2 id="introduction">Introduction</h2>
<p>I have recently bought the book <em>Category Theory</em> by Steve Awodey <span class="citation" data-cites="awodeyCategoryTheory2010">(Awodey <a href="#ref-awodeyCategoryTheory2010">2010</a>)</span> (which is awesome, but probably the topic for another post), and a particular passage excited my curiosity:</p>
<blockquote>
@ -16,6 +16,19 @@
</section>
<section>
<h2>Table of Contents</h2><ul>
<li><a href="#introduction">Introduction</a></li>
<li><a href="#why-is-it-hard-to-approach">Why is it hard to approach?</a></li>
<li><a href="#where-to-start">Where to start</a><ul>
<li><a href="#introduction-and-modelling">Introduction and modelling</a></li>
<li><a href="#theory-and-algorithms">Theory and algorithms</a></li>
<li><a href="#online-courses">Online courses</a></li>
</ul></li>
<li><a href="#solvers-and-computational-resources">Solvers and computational resources <span id="solvers"></span></a></li>
<li><a href="#conclusion">Conclusion</a></li>
<li><a href="#references">References</a></li>
</ul>
<h2 id="introduction">Introduction</h2>
<p><a href="https://en.wikipedia.org/wiki/Operations_research">Operations research</a> (OR) is a vast area comprising a lot of theory, different branches of mathematics, and too many applications to count. In this post, I will try to explain why it can be a little disconcerting to explore at first, and how to start investigating the topic with a few references to get started.</p>
<p>Keep in mind that although I studied it during my graduate studies, this is not my primary area of expertise (I’m a data scientist by trade), and I definitely don’t pretend to know everything in OR. This is a field too vast for any single person to understand in its entirety, and I talk mostly from an “amateur mathematician and computer scientist” standpoint.</p>
<h2 id="why-is-it-hard-to-approach">Why is it hard to approach?</h2>
@ -131,6 +144,20 @@
</section>
<section>
<h2>Table of Contents</h2><ul>
<li><a href="#the-format-of-the-virtual-conference">The Format of the Virtual Conference</a></li>
<li><a href="#speakers">Speakers</a><ul>
<li><a href="#prof.-leslie-kaelbling-doing-for-our-robots-what-nature-did-for-us">Prof. Leslie Kaelbling, <span>Doing for Our Robots What Nature Did For Us</span></a></li>
<li><a href="#dr.-laurent-dinh-invertible-models-and-normalizing-flows">Dr. Laurent Dinh, <span>Invertible Models and Normalizing Flows</span></a></li>
<li><a href="#profs.-yann-lecun-and-yoshua-bengio-reflections-from-the-turing-award-winners">Profs. Yann LeCun and Yoshua Bengio, <span>Reflections from the Turing Award Winners</span></a></li>
</ul></li>
<li><a href="#workshops">Workshops</a><ul>
<li><a href="#beyond-tabula-rasa-in-reinforcement-learning-agents-that-remember-adapt-and-generalize"><span>Beyond ‘tabula rasa’ in reinforcement learning: agents that remember, adapt, and generalize</span></a></li>
<li><a href="#causal-learning-for-decision-making"><span>Causal Learning For Decision Making</span></a></li>
<li><a href="#bridging-ai-and-cognitive-science"><span>Bridging AI and Cognitive Science</span></a></li>
<li><a href="#integration-of-deep-neural-models-and-differential-equations"><span>Integration of Deep Neural Models and Differential Equations</span></a></li>
</ul></li>
</ul>
<p>ICLR is one of the most important conferences in machine learning, and as such, I was very excited to have the opportunity to volunteer and attend the first fully-virtual edition of the event. The whole content of the conference has been made <a href="https://iclr.cc/virtual_2020/index.html">publicly available</a>, only a few days after the end of the event!</p>
<p>I would like to thank the <a href="https://iclr.cc/Conferences/2020/Committees">organizing committee</a> for this incredible event, and the possibility to volunteer to help other participants<span><label for="sn-1" class="margin-toggle sidenote-number"></label><input type="checkbox" id="sn-1" class="margin-toggle" /><span class="sidenote">To better organize the event, and help people navigate the various online tools, they brought in 500(!) volunteers, waived our registration fees, and asked us to do simple load-testing and tech support. This was a very generous offer, and felt very rewarding for us, as we could attend the conference, and give back to the organization a little bit.<br />
<br />
@ -187,6 +214,14 @@
</section>
<section>
<h2>Table of Contents</h2><ul>
<li><a href="#introduction-and-motivation">Introduction and motivation</a></li>
<li><a href="#background-optimal-transport">Background: optimal transport</a></li>
<li><a href="#hierarchical-optimal-transport">Hierarchical optimal transport</a></li>
<li><a href="#experiments">Experiments</a></li>
<li><a href="#conclusion">Conclusion</a></li>
<li><a href="#references">References</a></li>
</ul>
<p>Two weeks ago, I did a presentation for my colleagues of the paper from <span class="citation" data-cites="yurochkin2019_hierar_optim_trans_docum_repres">Yurochkin et al. (<a href="#ref-yurochkin2019_hierar_optim_trans_docum_repres">2019</a>)</span>, from <a href="https://papers.nips.cc/book/advances-in-neural-information-processing-systems-32-2019">NeurIPS 2019</a>. It contains an interesting approach to document classification leading to strong performance, and, most importantly, excellent interpretability.</p>
<p>This paper seems interesting to me because it uses two methods with strong theoretical guarantees: optimal transport and topic modelling. Optimal transport looks very promising to me in NLP, and has seen a lot of interest in recent years due to advances in approximation algorithms, such as entropy regularisation. It is also quite refreshing to see approaches using solid results in optimisation, compared to purely experimental deep learning methods.</p>
<h2 id="introduction-and-motivation">Introduction and motivation</h2>
@ -341,6 +376,17 @@ W_1(p, q) = \min_{P \in \mathbb{R}_+^{n\times m}} \sum_{i,j} C_{i,j} P_{i,j}
</section>
<section>
<h2>Table of Contents</h2><ul>
<li><a href="#introduction">Introduction</a></li>
<li><a href="#the-axioms">The Axioms</a></li>
<li><a href="#addition">Addition</a><ul>
<li><a href="#commutativity">Commutativity</a></li>
<li><a href="#associativity">Associativity</a></li>
<li><a href="#identity-element">Identity element</a></li>
</ul></li>
<li><a href="#going-further">Going further</a></li>
<li><a href="#references">References</a></li>
</ul>
<h2 id="introduction">Introduction</h2>
<p>I have recently bought the book <em>Category Theory</em> by Steve Awodey <span class="citation" data-cites="awodeyCategoryTheory2010">(Awodey <a href="#ref-awodeyCategoryTheory2010">2010</a>)</span> (it is awesome, but probably the topic for another post), and a particular passage excited my curiosity:</p>
<blockquote>
@ -520,6 +566,14 @@ then <span class="math inline">\(\varphi(n)\)</span> is true for every natural n
</section>
<section>
<h2>Table of Contents</h2><ul>
<li><a href="#the-apl-family-of-languages">The APL family of languages</a><ul>
<li><a href="#why-apl">Why APL?</a></li>
<li><a href="#implementations">Implementations</a></li>
</ul></li>
<li><a href="#the-ising-model-in-apl">The Ising model in APL</a></li>
<li><a href="#conclusion">Conclusion</a></li>
</ul>
<h2 id="the-apl-family-of-languages">The APL family of languages</h2>
<h3 id="why-apl">Why APL?</h3>
<p>I recently got interested in <a href="https://en.wikipedia.org/wiki/APL_(programming_language)">APL</a>, an <em>array-based</em> programming language. In APL (and derivatives), we try to reason about programs as series of transformations of multi-dimensional arrays. This is exactly the kind of style I like in Haskell and other functional languages, where I also try to use higher-order functions (map, fold, etc.) on lists or arrays. A developer only needs to understand these abstractions once, instead of deconstructing each loop or each recursive function encountered in a program.</p>
@ -726,6 +780,12 @@ then <span class="math inline">\(\varphi(n)\)</span> is true for every natural n
</section>
<section>
<h2>Table of Contents</h2><ul>
<li><a href="#mathematical-definition">Mathematical definition</a></li>
<li><a href="#simulation">Simulation</a></li>
<li><a href="#implementation">Implementation</a></li>
<li><a href="#conclusion">Conclusion</a></li>
</ul>
<p>The <a href="https://en.wikipedia.org/wiki/Ising_model">Ising model</a> is a model used to represent magnetic dipole moments in statistical physics. Physical details are on the Wikipedia page, but what is interesting is that it follows a complex probability distribution on a lattice, where each site can take the value +1 or -1.</p>
<p><img src="../images/ising.gif" /></p>
<h2 id="mathematical-definition">Mathematical definition</h2>
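The distribution in question is governed by an energy that sums interactions between neighbouring sites. As a rough illustration only (a Haskell sketch, not the post's implementation, and assuming the standard nearest-neighbour form of the energy), the energy of a periodic one-dimensional configuration with coupling constant J can be computed as:

```haskell
-- Illustrative sketch: total energy of a 1D Ising configuration with
-- periodic boundary conditions, E = -J * sum over neighbouring pairs of
-- sigma_i * sigma_j, where each sigma is +1 or -1.
isingEnergy :: Double -> [Int] -> Double
isingEnergy j sigmas =
  let pairs = zip sigmas (tail sigmas ++ [head sigmas])  -- wrap around
  in - j * fromIntegral (sum [s * s' | (s, s') <- pairs])
```

Aligned spins give low energy, alternating spins give high energy, which is why low temperatures favour large aligned domains.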
@ -846,6 +906,22 @@ J\sigma_i \sum_{j\sim i} \sigma_j. \]</span></p>
</section>
<section>
<h2>Table of Contents</h2><ul>
<li><a href="#what-is-an-l-system">What is an L-system?</a><ul>
<li><a href="#a-few-examples-to-get-started">A few examples to get started</a></li>
<li><a href="#definition">Definition</a></li>
<li><a href="#drawing-instructions-and-representation">Drawing instructions and representation</a></li>
</ul></li>
<li><a href="#implementation-details">Implementation details</a><ul>
<li><a href="#the-lsystem-data-type">The <code>LSystem</code> data type</a></li>
<li><a href="#iterating-and-representing">Iterating and representing</a></li>
<li><a href="#drawing">Drawing</a></li>
</ul></li>
<li><a href="#common-file-format-for-l-systems">Common file format for L-systems</a></li>
<li><a href="#variations-on-l-systems">Variations on L-systems</a></li>
<li><a href="#usage-notes">Usage notes</a></li>
<li><a href="#references">References</a></li>
</ul>
<p>L-systems are a formal way to make interesting visualisations. You can use them to model a wide variety of objects: space-filling curves, fractals, biological systems, tilings, etc.</p>
<p>See the Github repo: <a href="https://github.com/dlozeve/lsystems" class="uri">https://github.com/dlozeve/lsystems</a></p>
<h2 id="what-is-an-l-system">What is an L-system?</h2>
@ -1,6 +1,7 @@
---
title: "Reading notes: Hierarchical Optimal Transport for Document Representation"
date: 2020-04-05
toc: true
---

Two weeks ago, I did a presentation for my colleagues of the paper
@ -1,6 +1,7 @@
---
title: "ICLR 2020 Notes: Speakers and Workshops"
date: 2020-05-05
toc: true
---

ICLR is one of the most important conferences in machine learning, and
@ -2,6 +2,7 @@
title: Ising model simulation in APL
date: 2018-03-05
tags: ising simulation montecarlo apl
toc: true
---

* The APL family of languages
@ -3,6 +3,7 @@ title: Ising model simulation
author: Dimitri Lozeve
date: 2018-02-05
tags: ising visualization simulation montecarlo
toc: true
---

The [[https://en.wikipedia.org/wiki/Ising_model][Ising model]] is a
@ -3,6 +3,7 @@ title: Generating and representing L-systems
author: Dimitri Lozeve
date: 2018-01-18
tags: lsystems visualization algorithms haskell
toc: true
---

L-systems are a formal way to make interesting visualisations. You can
@ -1,8 +1,11 @@
---
title: "Operations Research and Optimization: where to start?"
date: 2020-05-27
toc: true
---

* Introduction

[[https://en.wikipedia.org/wiki/Operations_research][Operations research]] (OR) is a vast area comprising a lot of theory,
different branches of mathematics, and too many applications to
count. In this post, I will try to explain why it can be a little
@ -1,6 +1,7 @@
---
title: "Peano Axioms"
date: 2019-03-18
toc: true
---

* Introduction
site.hs
@ -40,7 +40,10 @@ main = hakyll $ do
match "posts/*" $ do
  route $ setExtension "html"
- compile $ customPandocCompiler
+ compile $ do
+   underlying <- getUnderlying
+   toc <- getMetadataField underlying "toc"
+   customPandocCompiler (toc == Just "yes" || toc == Just "true")
    >>= return . fmap demoteHeaders
    >>= loadAndApplyTemplate "templates/post.html" postCtx
    >>= saveSnapshot "content"
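The new compile step reads a `toc` metadata field and passes a flag down to the compiler. In isolation, the toggle logic amounts to this (a standalone sketch with a hypothetical helper name, not part of the commit):

```haskell
-- Hypothetical standalone version of the toggle above: a post opts in to a
-- table of contents by setting "toc: yes" or "toc: true" in its metadata;
-- any other value, or no field at all, leaves the TOC off.
tocEnabled :: Maybe String -> Bool
tocEnabled toc = toc == Just "yes" || toc == Just "true"
```

This is why each `.org` source below only needs to gain a single `toc: true` line.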
@ -49,14 +52,14 @@ main = hakyll $ do
match "projects/*" $ do
  route $ setExtension "html"
- compile $ customPandocCompiler
+ compile $ customPandocCompiler False
    >>= loadAndApplyTemplate "templates/project.html" postCtx
    >>= loadAndApplyTemplate "templates/default.html" postCtx
    >>= relativizeUrls

match (fromList ["contact.org", "cv.org", "skills.org", "projects.org"]) $ do
  route $ setExtension "html"
- compile $ customPandocCompiler
+ compile $ customPandocCompiler False
    >>= return . fmap demoteHeaders
    >>= loadAndApplyTemplate "templates/default.html" defaultContext
    >>= relativizeUrls
@ -158,8 +161,8 @@ myReadPandocBiblio ropt csl biblio item = do
  return $ fmap (const pandoc') item

-- Pandoc compiler with KaTeX and bibliography support --------------------
-customPandocCompiler :: Compiler (Item String)
-customPandocCompiler =
+customPandocCompiler :: Bool -> Compiler (Item String)
+customPandocCompiler withTOC =
  let customExtensions = extensionsFromList [Ext_latex_macros]
      defaultExtensions = writerExtensions defaultHakyllWriterOptions
      newExtensions = defaultExtensions `mappend` customExtensions
@ -167,11 +170,16 @@ customPandocCompiler =
        { writerExtensions = newExtensions
        , writerHTMLMathMethod = KaTeX ""
        }
+     writerOptionsWithTOC = writerOptions
+       { writerTableOfContents = True
+       , writerTOCDepth = 2
+       , writerTemplate = Just "<h1>Table of Contents</h1>$toc$\n$body$"
+       }
      readerOptions = defaultHakyllReaderOptions
  in do
    csl <- load $ fromFilePath "csl/chicago-author-date.csl"
    bib <- load $ fromFilePath "bib/bibliography.bib"
-   writePandocWith writerOptions <$>
+   writePandocWith (if withTOC then writerOptionsWithTOC else writerOptions) <$>
      (getResourceBody >>= myReadPandocBiblio readerOptions csl bib >>= traverse (return . usingSideNotes))
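For intuition about what Pandoc's `writerTableOfContents` with `writerTOCDepth = 2` emits, here is a rough standalone sketch (a hypothetical `tocHtml` helper, not Pandoc's actual implementation) that renders the kind of nested `<ul>` seen in the post sections above from (level, id, title) entries:

```haskell
-- Hypothetical helper: render a nested HTML table of contents from
-- (level, id, title) entries, keeping only levels 1 and 2 to mirror the
-- writerTOCDepth = 2 setting. Each level-1 entry collects the level-2
-- entries that immediately follow it into a nested <ul>.
tocHtml :: [(Int, String, String)] -> String
tocHtml entries = "<ul>" ++ go (filter (\(l, _, _) -> l <= 2) entries) ++ "</ul>"
  where
    item i t = "<li><a href=\"#" ++ i ++ "\">" ++ t ++ "</a>"
    go [] = ""
    go ((1, i, t) : rest) =
      let (subs, rest') = span (\(l, _, _) -> l == 2) rest
          subList
            | null subs = ""
            | otherwise = "<ul>" ++ concatMap (\(_, i', t') -> item i' t' ++ "</li>") subs ++ "</ul>"
      in item i t ++ subList ++ "</li>" ++ go rest'
    go ((_, i, t) : rest) = item i t ++ "</li>" ++ go rest  -- stray deeper entry
```

Feeding it the headers of the operations research post would reproduce the nested list structure inserted at the top of that page.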

type FeedRenderer = FeedConfiguration