Add notes on speakers

Dimitri Lozeve 2020-05-05 10:26:19 +02:00
<h1 id="speakers">Speakers</h1>
<p>Overall, there were 8 speakers (two for each day of the main conference). Each gave a 40-minute presentation, followed by a Q&amp;A session both via the chat and via Zoom. I only saw 4 of them, but I expect to watch the others in the near future.</p>
<h2 id="prof.-leslie-kaelbling-doing-for-our-robots-what-nature-did-for-us">Prof. Leslie Kaelbling, <a href="https://iclr.cc/virtual_2020/speaker_2.html">Doing for Our Robots What Nature Did For Us</a></h2>
<p>This talk was fascinating. It is about robotics, and especially how to design the “software” of our robots. We want to program a robot so that it performs as well as possible across all the domains it may encounter. I loved the discussion on how to describe the space of distributions over domains, from the point of view of the robot factory:</p>
<ul>
<li>The domain could be very narrow (e.g. playing a specific Atari game) or very broad (performing complex tasks in an open world).</li>
<li>The factory could know in advance in which domain the robot will evolve, or have a lot of uncertainty around it.</li>
</ul>
<p>There are many ways to describe a policy (i.e. the software running in the robot’s head), and many ways to obtain one. If you are familiar with recent advances in reinforcement learning, this talk is a great occasion to take a step back and review the relevant background ideas from engineering and control theory.</p>
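<p>To make “ways to describe a policy” concrete, here is a toy sketch of my own (nothing from the talk): the same reactive behaviour written first as an explicit lookup table, then as a parametric function whose parameters could, in principle, be learned.</p>
<pre><code># A toy policy for a discrete problem, written in two ways.
# All names here are hypothetical, for illustration only.

# 1. An explicit lookup table: one action per observation.
table_policy = {"obstacle-left": "turn-right",
                "obstacle-right": "turn-left",
                "clear": "forward"}

# 2. A parametric function: the action is computed from
#    features, and the parameter could be learned from data.
def parametric_policy(distance_left, distance_right, margin=0.2):
    """Steer away from the closer obstacle, else go forward."""
    if distance_left + margin &lt; distance_right:
        return "turn-right"
    if distance_right + margin &lt; distance_left:
        return "turn-left"
    return "forward"
</code></pre>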
<p>Finally, the most important take-away from this talk is the importance of <em>abstractions</em>. Whatever methods we use to program our robots, we still need a lot of human insight to give them good structural biases. There are many more insights, on the cost of experience, (hierarchical) planning, learning constraints, etc., so I strongly encourage you to watch the talk!</p>
<h2 id="dr.-laurent-dinh-invertible-models-and-normalizing-flows">Dr. Laurent Dinh, <a href="https://iclr.cc/virtual_2020/speaker_4.html">Invertible Models and Normalizing Flows</a></h2>
<p>This is a talk about an area of ML research I do not know very well, but it was very clearly presented. I really like the approach of teaching a set of methods from a “historical”, personal point of view. Laurent Dinh shows us how he arrived at this topic and what he finds interesting, in a very personal and relatable manner. This has the double advantage of introducing us to a topic he is passionate about, while also giving us a glimpse of a researcher’s process, without hiding the momentary disillusions and disappointments, but emphasising the great achievements. Normalizing flows are also very interesting because the field is grounded in strong theoretical results that bring together many different methods.</p>
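<p>The theoretical result at the core of normalizing flows is the change-of-variables formula: if an invertible map <em>f</em> sends data <em>x</em> to a base variable <em>z</em>, then log p(x) = log p(z) + log |det J(x)|, where J is the Jacobian of <em>f</em>. Here is a minimal sketch of that formula (my own illustration, not code from the talk), assuming a standard normal base distribution and the simplest invertible map, an element-wise affine transformation:</p>
<pre><code>import numpy as np

# Change-of-variables formula behind normalizing flows:
#   log p_x(x) = log p_z(f(x)) + log |det J_f(x)|
# Sketch with the simplest invertible map, the element-wise
# affine transformation f(x) = (x - mu) / sigma.

def affine_flow_log_density(x, mu, sigma):
    """Log-density of x when f maps it to a standard normal z."""
    z = (x - mu) / sigma                  # forward map f(x)
    log_det = -np.sum(np.log(sigma))      # log |det J_f| of the diagonal map
    log_base = -0.5 * np.sum(z ** 2) - 0.5 * x.size * np.log(2 * np.pi)
    return log_base + log_det

x = np.array([1.0, -0.5])
print(affine_flow_log_density(x, mu=np.zeros(2), sigma=np.full(2, 2.0)))
</code></pre>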
<h2 id="profs.-yann-lecun-and-yoshua-bengio-reflections-from-the-turing-award-winners">Profs. Yann LeCun and Yoshua Bengio, <a href="https://iclr.cc/virtual_2020/speaker_7.html">Reflections from the Turing Award Winners</a></h2>
<p>This talk was very interesting, and yet felt very familiar, as if I had already seen a very similar one elsewhere. This was especially true for Yann LeCun, who clearly reuses the same slides for many presentations at various events. They both came back to their favourite subjects: self-supervised learning for Yann LeCun, and system 1/system 2 for Yoshua Bengio. All in all, they are very good speakers, and their presentations are always insightful. Yann LeCun gives a lot of references on recent technical advances, which is great if you want to go deeper into the approaches he recommends. Yoshua Bengio is also very good at broadening the debate around deep learning and introducing very important concepts from cognitive science.</p>
<h2 id="prof.-michael-i.-jordan-the-decision-making-side-of-machine-learning-dynamical-statistical-and-economic-perspectives">Prof. Michael I. Jordan, <a href="https://iclr.cc/virtual_2020/speaker_8.html">The Decision-Making Side of Machine Learning: Dynamical, Statistical and Economic Perspectives</a></h2>
<p>TODO</p>
<h1 id="workshops">Workshops</h1>
<h1 id="some-interesting-papers">Some Interesting Papers</h1>
<h2 id="natural-language-processing">Natural Language Processing</h2>