I will be watching the others in the near future.

This talk was fascinating. It is about robotics, and especially how
to design the "software" of our robots. We want to program a robot so
that it works as well as possible across all the domains it may
encounter. I loved the discussion on how to describe the space of
distributions over domains, from the point of view of the robot
factory:
- The domain could be very narrow (e.g. playing a specific Atari
  game) or very broad (performing a complex task in an open world).
- The factory could know in advance in which domain the robot will
  operate, or have a lot of uncertainty about it.

There are many ways to describe a policy (i.e. the software running
in the robot's head), and many ways to obtain one. If you are
familiar with recent advances in reinforcement learning, this talk is
a great opportunity to take a step back and review the relevant
background ideas from engineering and control theory.

Finally, the most important take-away from this talk is the
importance of /abstractions/. Whatever methods we use to program our
robots, we still need a lot of human insight to give them good
structural biases. There are many more insights in the talk, on the
cost of experience, (hierarchical) planning, learning constraints,
etc., so I strongly encourage you to watch it!

** Dr. Laurent Dinh, [[https://iclr.cc/virtual_2020/speaker_4.html][Invertible Models and Normalizing Flows]]

This is a talk about an area of ML research I do not know very well,
but it was very clearly presented. I really like the approach of
teaching a set of methods from a "historical", personal point of
view. Laurent Dinh shows us how he arrived at this topic and what he
finds interesting about it, in a very personal and relatable
manner. This has the double advantage of introducing us to a topic he
is passionate about, while also giving us a glimpse of a researcher's
process, one that does not hide the momentary disappointments but
still emphasises the great achievements. Normalizing flows are also
very interesting because they are grounded in strong theoretical
results that bring together many different methods.
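
For reference, the central theoretical result behind normalizing
flows is the change-of-variables formula (a standard identity; the
notation here is mine, not taken from the talk): if \(f\) is an
invertible map and \(z = f(x)\), then the density of \(x\) satisfies

\[
p_X(x) = p_Z(f(x)) \left| \det \frac{\partial f(x)}{\partial x} \right|,
\]

which gives exact likelihoods whenever the Jacobian determinant of
\(f\) is tractable.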

** Profs. Yann LeCun and Yoshua Bengio, [[https://iclr.cc/virtual_2020/speaker_7.html][Reflections from the Turing Award Winners]]

This talk was very interesting, and yet felt very familiar, as if I
had already seen a very similar one elsewhere. This is especially
true for Yann LeCun, who clearly reuses the same slides for many
presentations at various events. They both came back to their
favourite subjects: self-supervised learning for Yann LeCun, and
system 1/system 2 for Yoshua Bengio. All in all, they are very good
speakers, and their presentations are always insightful. Yann LeCun
gives a lot of references on recent technical advances, which is
great if you want to go deeper into the approaches he
recommends. Yoshua Bengio is also very good at broadening the debate
around deep learning and introducing very important concepts from
cognitive science.

** Prof. Michael I. Jordan, [[https://iclr.cc/virtual_2020/speaker_8.html][The Decision-Making Side of Machine Learning: Dynamical, Statistical and Economic Perspectives]]

TODO

* Workshops