ICLR 2020 Notes

ICLR is one of the most important conferences in machine learning, and as such, I was very excited to have the opportunity to volunteer and attend the first fully-virtual edition of the event. The whole content of the conference was made publicly available only a few days after the end of the event! I would like to thank the organizing committee for this incredible event, and for the chance to volunteer and help other participants. (To better organize the event, and to help people navigate the various online tools, they brought in 500(!) volunteers, waived our registration fees, and asked us to do simple load-testing and tech support. This was a very generous offer, and it felt very rewarding for us, as we could attend the conference and give back to the organization a little bit.)

The many volunteers, the online-only nature of the event, and the low registration fees also allowed for what felt like a very diverse, inclusive event. Many graduate students and researchers from industry (like me), who do not generally have the time or the resources to travel to conferences like this, were able to attend, making the exchanges richer.

In this post, I will try to give my impressions of the event, the speakers, and the workshops I could attend, and share the most interesting papers I saw.

The Format of the Virtual Conference

As a result of global travel restrictions, the conference was made fully virtual. It was originally supposed to take place in Addis Ababa, Ethiopia, which is great for people who are often the target of restrictive visa policies in North American countries.

The thing I appreciated most about the conference format was its emphasis on asynchronous communication. Given how little time the organizers had to plan the conference, they could have made all poster presentations via video-conference and called it a day. Instead, the authors of each poster had to record a 5-minute video summarising their research. (The videos are streamed using SlidesLive, which is a great solution for synchronising videos and slides. It is very comfortable to navigate through the slides, synchronising the video to the slides and vice versa. As a result, SlidesLive also has a very nice library of talks, including major conferences. This is much better than browsing YouTube randomly.) Alongside each presentation, there was a dedicated Rocket.Chat channel (Rocket.Chat seems to be an open-source alternative to Slack. Overall, the experience was great, and I appreciate the efforts of the organizers to use open-source software instead of proprietary applications. I hope other conferences will do the same, and perhaps even avoid Zoom, because of recent privacy concerns; maybe try Jitsi?) where anyone could ask a question to the authors, or just show their appreciation for the work. This was a fantastic idea, as it allowed any participant to interact with papers and authors at any time they pleased, which is especially important in a setting where people were spread all over the globe.

There were also Zoom sessions where authors were available for direct, face-to-face discussions, allowing for more traditional conversations. But asking questions on the channel also had the advantage of keeping a record of all the questions asked by other people. As such, I quickly acquired the habit of watching the video, looking at the chat to see the previous discussions (even if they happened in the middle of the night in my timezone!), and then skimming the paper or asking questions myself.

All of these excellent ideas were implemented by an amazing website, collecting all papers in a searchable, easy-to-use interface, and even including a nice visualisation of papers as a point cloud!

Speakers

Overall, there were 8 speakers (two for each day of the main conference). Each speaker gave a 40-minute presentation, followed by a Q&A both via the chat and via Zoom. I only saw a few of them, but I expect I will be watching the others in the near future.

Prof. Leslie Kaelbling, Doing for Our Robots What Nature Did For Us

This talk was fascinating. It is about robotics, and especially how to design the "software" of our robots. We want to program a robot in a way that lets it work as well as possible over all the domains it may encounter. I loved the discussion on how to describe the space of distributions over domains, from the point of view of the robot factory:
- The domain could be very narrow (e.g. playing a specific Atari game) or very broad and complex (performing a complex task in an open world).
- The factory could know in advance in which domain the robot will evolve, or have a lot of uncertainty around it.

There are many ways to describe a policy (i.e. the software running in the robot's head), and many ways to obtain one. If you are familiar with recent advances in reinforcement learning, this talk is a great occasion to take a step back and review the relevant background ideas from engineering and control theory.

Finally, the most important take-away from this talk is the importance of abstractions. Whatever methods we use to program our robots, we still need a lot of human insight to give them good structural biases. There are many more insights, on the cost of experience, (hierarchical) planning, learning constraints, etc., so I strongly encourage you to watch the talk!

Dr. Laurent Dinh, Invertible Models and Normalizing Flows

This is a very clear presentation of an area of ML research I do not know very well. I really like the approach of teaching a set of methods from a "historical", personal point of view. Laurent Dinh shows us how he arrived at this topic and what he finds interesting about it, in a very personal and relatable manner. This has the double advantage of introducing us to a topic that he is passionate about, while also giving us a glimpse of a researcher's process, without hiding the momentary disillusions and disappointments, but emphasising the great achievements. Normalizing flows are also very interesting because they are grounded in strong theoretical results that bring together a lot of different methods.
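To give a rough idea of the theory he builds on (this is my own summary, not material from the talk): a normalizing flow models data x as the image of a simple base variable z ~ p_Z (e.g. a standard Gaussian) under an invertible map f, and the change-of-variables formula then yields an exact log-likelihood to maximise:

\[
  \log p_X(x) = \log p_Z\!\left(f^{-1}(x)\right)
  + \log \left| \det \frac{\partial f^{-1}(x)}{\partial x} \right|
\]

Stacking several invertible maps, each with a tractable Jacobian determinant, is what gives these models their expressiveness, and their name.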
Profs. Yann LeCun and Yoshua Bengio, Reflections from the Turing Award Winners

This talk was very interesting, yet felt very familiar, as if I had already seen a very similar one elsewhere. This is especially true of Yann LeCun, who clearly reuses the same slides for many presentations at various events. They both came back to their favourite subjects: self-supervised learning for Yann LeCun, and system 1/system 2 for Yoshua Bengio. All in all, they are very good speakers, and their presentations are always insightful. Yann LeCun gives a lot of references on recent technical advances, which is great if you want to go deeper into the approaches he recommends. Yoshua Bengio is also very good at broadening the debate around deep learning and introducing very important concepts from cognitive science.

Prof. Michael I. Jordan, The Decision-Making Side of Machine Learning: Dynamical, Statistical and Economic Perspectives

TODO

Workshops

On Sunday, there were 15 different workshops. All of them were recorded and are available on the website. As always, unfortunately, there are too many interesting things to watch everything, but I saw bits and pieces of several workshops.

Beyond 'tabula rasa' in reinforcement learning: agents that remember, adapt, and generalize

A lot of pretty advanced talks about RL. The general theme was meta-learning, aka "learning to learn". This is a very active area of research, which goes way beyond classical RL theory and offers many interesting avenues into adjacent fields (both inside ML and outside, especially cognitive science). The first talk, by Martha White, about inductive biases, was a very interesting and approachable introduction to the problems and challenges of the field. There was also a panel with Jürgen Schmidhuber. We hear a lot about him from the various controversies, but it's nice to see him talking about research and future developments in RL.

Causal Learning For Decision Making

Ever since I read Judea Pearl's The Book of Why on causality, I have been interested in how we can incorporate causal reasoning into machine learning. This is a complex topic, and I'm not sure yet that it is the complete revolution Judea Pearl likes to portray it as, but it nevertheless introduces a lot of fascinating new ideas. Yoshua Bengio gave an interesting talk (you can find it at 4:45:20 in the livestream of the workshop), even though it was very similar to his keynote talk, on causal priors for deep learning.

Bridging AI and Cognitive Science

Cognitive science is fascinating, and I believe that collaboration between ML practitioners and cognitive scientists will greatly help advance both fields. I only watched Leslie Kaelbling's presentation, which echoes a lot of things from her talk at the main conference. It complements it nicely, with more focus on intelligence, especially embodied intelligence. I think she has the right approach to the relationship between AI and natural science, explicitly listing the things from her work that would be helpful to natural scientists, and the things she wishes she knew about natural intelligences. It raises many fascinating questions about ourselves, what we build, and what we understand. I felt it was very motivational!

Integration of Deep Neural Models and Differential Equations

I didn't attend this workshop, but I think I will watch the presentations if I can find the time. I have found the intersection of differential equations and ML very interesting, ever since the famous NeurIPS best paper on Neural ODEs. I think that such improvements to ML theory from other fields of mathematics would be extremely beneficial to a better understanding of the systems we build.
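For readers who have not seen that paper, the core idea (my paraphrase, a sketch rather than the authors' exact formulation) is to view a residual network as the Euler discretisation of a continuous-time system, and to parameterise the dynamics of the hidden state directly:

\[
  \frac{\mathrm{d}h(t)}{\mathrm{d}t} = f_\theta\big(h(t), t\big),
  \qquad
  h(T) = h(0) + \int_0^T f_\theta\big(h(t), t\big)\, \mathrm{d}t
\]

The forward pass is then computed by a black-box ODE solver, and gradients are obtained with the adjoint sensitivity method, so the memory cost does not grow with depth.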
Some Interesting Papers

Natural Language Processing

Harwath et al., Learning Hierarchical Discrete Linguistic Units from Visually-Grounded Speech

Humans can easily deconstruct all the information available in speech (meaning, language, emotion, speaker, etc.). However, this is very hard for machines. This paper explores the capacity of algorithms to reason about all these aspects of the signal, including visual cues. The authors' goal is to use spoken captions of images to train a predictive model.
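To make the training setup more concrete, here is a minimal, hypothetical sketch of the general visually-grounded idea: paired images and spoken captions are pulled together by a contrastive objective, so the speech encoder is supervised by vision rather than by text transcriptions. The encoders and the loss below are illustrative placeholders of my own, not the authors' actual model.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ImageEncoder(nn.Module):
    # Tiny stand-in for an image CNN; outputs a unit-norm embedding.
    def __init__(self, dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

class AudioEncoder(nn.Module):
    # Tiny stand-in for a speech encoder over log-mel spectrograms.
    def __init__(self, n_mels: int = 40, dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_mels, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(32, dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

def contrastive_loss(img_emb, aud_emb, temperature: float = 0.07):
    # Matched image/caption pairs (the diagonal of the score matrix)
    # should score higher than every mismatched pair, in both directions.
    logits = img_emb @ aud_emb.t() / temperature
    targets = torch.arange(logits.size(0))
    return (F.cross_entropy(logits, targets)
            + F.cross_entropy(logits.t(), targets)) / 2

# Toy batch: 8 images with their spoken captions as 40-bin spectrograms.
images = torch.randn(8, 3, 64, 64)
captions = torch.randn(8, 40, 200)
loss = contrastive_loss(ImageEncoder()(images), AudioEncoder()(captions))
loss.backward()  # gradients flow into both encoders

If I understand correctly, the actual paper goes further, learning hierarchical discrete units inside the speech encoder via vector quantization, but the image/caption pairing objective is the part that lets speech be trained without any text.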
+
+
+.The Format of the Virtual Conference
+
+
+ where anyone could ask a question to the authors, or just show their appreciation for the work. This was a fantastic idea as it allowed any participant to interact with papers and authors at any time they please, which is especially important in a setting where people were spread all over the globe.Speakers
+Prof. Leslie Kaelbling, Doing for Our Robots What Nature Did For Us
+Workshops
+Some Interesting Papers
+Natural Language Processing
+Harwath et al., Learning Hierarchical Discrete Linguistic Units from Visually-Grounded Speech
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Reinforcement Learning
+ML and Neural Network Theory
+
+
ICLR 2020 Notes
+
+
+
+
+
+
+.The Format of the Virtual Conference
+
+
+ where anyone could ask a question to the authors, or just show their appreciation for the work. This was a fantastic idea as it allowed any participant to interact with papers and authors at any time they please, which is especially important in a setting where people were spread all over the globe.Speakers
+Prof. Leslie Kaelbling, Doing for Our Robots What Nature Did For Us
+Workshops
+Some Interesting Papers
+Natural Language Processing
+Harwath et al., Learning Hierarchical Discrete Linguistic Units from Visually-Grounded Speech
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Reinforcement Learning
+ML and Neural Network Theory
+
+
+.The Format of the Virtual Conference
+
+
+ where anyone could ask a question to the authors, or just show their appreciation for the work. This was a fantastic idea as it allowed any participant to interact with papers and authors at any time they please, which is especially important in a setting where people were spread all over the globe.Speakers
+Prof. Leslie Kaelbling, Doing for Our Robots What Nature Did For Us
+Workshops
+Some Interesting Papers
+Natural Language Processing
+Harwath et al., Learning Hierarchical Discrete Linguistic Units from Visually-Grounded Speech
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Reinforcement Learning
+ML and Neural Network Theory
+ Speakers
Prof. Leslie Kaelbling, Doing for Our Robots What Nature Did For Us
-
+
+Dr. Laurent Dinh, Invertible Models and Normalizing Flows
+Profs. Yann LeCun and Yoshua Bengio, Reflections from the Turing Award Winners
+Prof. Michael I. Jordan, The Decision-Making Side of Machine Learning: Dynamical, Statistical and Economic Perspectives
+Workshops
Some Interesting Papers
Natural Language Processing
diff --git a/_site/posts/iclr-2020-notes.html b/_site/posts/iclr-2020-notes.html
index 3281e1e..6472c9a 100644
--- a/_site/posts/iclr-2020-notes.html
+++ b/_site/posts/iclr-2020-notes.html
@@ -65,7 +65,19 @@
Speakers
Prof. Leslie Kaelbling, Doing for Our Robots What Nature Did For Us
-
+
+Dr. Laurent Dinh, Invertible Models and Normalizing Flows
+Profs. Yann LeCun and Yoshua Bengio, Reflections from the Turing Award Winners
+Prof. Michael I. Jordan, The Decision-Making Side of Machine Learning: Dynamical, Statistical and Economic Perspectives
+Workshops
Some Interesting Papers
Natural Language Processing
diff --git a/_site/rss.xml b/_site/rss.xml
index 9b0802f..65df8e7 100644
--- a/_site/rss.xml
+++ b/_site/rss.xml
@@ -32,7 +32,19 @@
Speakers
Prof. Leslie Kaelbling, Doing for Our Robots What Nature Did For Us
-
+
+Dr. Laurent Dinh, Invertible Models and Normalizing Flows
+Profs. Yann LeCun and Yoshua Bengio, Reflections from the Turing Award Winners
+Prof. Michael I. Jordan, The Decision-Making Side of Machine Learning: Dynamical, Statistical and Economic Perspectives
+Workshops
Some Interesting Papers
Natural Language Processing
diff --git a/posts/iclr-2020-notes.org b/posts/iclr-2020-notes.org
index cd53eea..93c732e 100644
--- a/posts/iclr-2020-notes.org
+++ b/posts/iclr-2020-notes.org
@@ -81,9 +81,59 @@ I will be watching the others in the near future.
This talk was fascinating. It is about robotics, and especially how to
design the "software" of our robots. We want to program a robot in a
way that it could work the best it can over all possible domains it
-can encounter.
+can encounter. I loved the discussion on how to describe the space of
+distributions over domains, from the point of view of the robot
+factory:
+- The domain could be very narrow (e.g. playing a specific Atari game)
+ or very broad and complex (performing a complex task in an open
+ world).
+- The factory could know in advance in which domain the robot will
+ evolve, or have a lot of uncertainty around it.
+There are many ways to describe a policy (i.e. the software running in
+the robot's head), and many ways to obtain them. If you are familiar
+with recent advances in reinforcement learning, this talk is a great
+occasion to take a step back, and review the relevant background ideas
+from engineering and control theory.
+Finally, the most important take-away from this talk is the importance
+of /abstractions/. Whatever the methods we use to program our robots,
+we still need a lot of human insights to give them good structural
+biases. There are many more insights, on the cost of experience,
+(hierarchical) planning, learning constraints, etc, so I strongly
+encourage you to watch the talk!
+
+** Dr. Laurent Dinh, [[https://iclr.cc/virtual_2020/speaker_4.html][Invertible Models and Normalizing Flows]]
+
+This is a talk about an area of ML research I do not know very well,
+but very clearly presented. I really like the approach of teaching a
+set of methods from a "historical", personal point of view. Laurent
+Dinh shows us how he arrived at this topic, what he finds interesting,
+in a very personal and relatable manner. This has the double advantage
+of introducing us to a topic that he is passionate about, while also
+giving us a glimpse of a researcher's process, without hiding the
+momentary disillusions and disappointments, but emphasising the great
+achievements. Normalizing flows are also very interesting because it
+is grounded in strong theoretical results, that brings together a lot
+of different methods.
+
+** Profs. Yann LeCun and Yoshua Bengio, [[https://iclr.cc/virtual_2020/speaker_7.html][Reflections from the Turing Award Winners]]
+
+This talk was very interesting, and yet felt very familiar, as if I
+already saw a very similar one elsewhere. Especially for Yann LeCun,
+who clearly reuses the same slides for many presentations at various
+events. They both came back to their favourite subjects:
+self-supervised learning for Yann LeCun, and system 1/system 2 for
+Yoshua Bengio. All in all, they are very good speakers, and their
+presentations are always insightful. Yann LeCun gives a lot of
+references on recent technical advances, which is great if you want to
+go deeper in the approaches he recommends. Yoshua Bengio is also very
+good at broadening the debate around deep learning, and introducing
+very important concepts from cognitive science.
+
+** Prof. Michael I. Jordan, [[https://iclr.cc/virtual_2020/speaker_8.html][The Decision-Making Side of Machine Learning: Dynamical, Statistical and Economic Perspectives]]
+
+TODO
* Workshops
From 41b844abe01d895efbc4fe901fa85bc53145830d Mon Sep 17 00:00:00 2001
From: Dimitri Lozeve Prof. Michael I. Jordan, The Decision-Making Side of Machine Learning: Dynamical, Statistical and Economic Perspectives
Workshops
Some Interesting Papers
Natural Language Processing
-Harwath et al., Learning Hierarchical Discrete Linguistic Units from Visually-Grounded Speech
-
-
-
-
-
-
-
-
-
-
-
-
-
-
Reinforcement Learning
ML and Neural Network Theory
+Workshops
]]>
diff --git a/_site/posts/iclr-2020-notes.html b/_site/posts/iclr-2020-notes.html
diff --git a/_site/rss.xml b/_site/rss.xml
diff --git a/posts/iclr-2020-notes.org b/posts/iclr-2020-notes.org
index 5417b6f..c28271e 100644
--- a/posts/iclr-2020-notes.org
+++ b/posts/iclr-2020-notes.org
@@ -20,8 +20,9 @@ inclusive event. Many graduate students and researchers from industry
 travel to conferences like this, were able to attend, and make the
 exchanges richer.
 
-In this post, I will try to give my impressions on the event, and
-share the most interesting events and papers I saw.
+In this post, I will try to give my impressions on the event, the
+speakers, and the workshops that I could attend. I will do a quick
+recap of the most interesting papers I saw in a future post.
 
 [fn:volunteer] To better organize the event, and help people navigate
 the various online tools, they brought in 500(!) volunteers, waved our
@@ -60,7 +61,7 @@ skimming the paper or asking questions myself.
 
 All of these excellent ideas were implemented by an [[https://iclr.cc/virtual_2020/papers.html?filter=keywords][amazing website]],
 collecting all papers in a searchable, easy-to-use interface, and even
-a nice [[https://iclr.cc/virtual_2020/paper_vis.html][visualisation]] of papers as a point cloud!
+including a nice [[https://iclr.cc/virtual_2020/paper_vis.html][visualisation]] of papers as a point cloud!
 
 [fn:slideslive] The videos are streamed using [[https://library.slideslive.com/][SlidesLive]], which is a
 great solution for synchronising videos and slides. It is very
@@ -80,8 +81,8 @@ even avoid Zoom, because of recent privacy concerns (maybe try
 
 Overall, there were 8 speakers (two for each day of the main
 conference). They made a 40-minute presentation, and then there was a
-Q&A both via the chat and via Zoom. I only saw 4 of them, but I expect
-I will be watching the others in the near future.
+Q&A both via the chat and via Zoom. I only saw a few of them, but I
+expect I will be watching the others in the near future.
 
 ** Prof. Leslie Kaelbling, [[https://iclr.cc/virtual_2020/speaker_2.html][Doing for Our Robots What Nature Did For Us]]
 
@@ -112,12 +113,12 @@ encourage you to watch the talk!
 
 ** Dr. Laurent Dinh, [[https://iclr.cc/virtual_2020/speaker_4.html][Invertible Models and Normalizing Flows]]
 
-This is a talk about an area of ML research I do not know very well,
-but very clearly presented. I really like the approach of teaching a
-set of methods from a "historical", personal point of view. Laurent
-Dinh shows us how he arrived at this topic, what he finds interesting,
-in a very personal and relatable manner. This has the double advantage
-of introducing us to a topic that he is passionate about, while also
+This is a very clear presentation of an area of ML research I do not
+know very well. I really like the approach of teaching a set of
+methods from a "historical", personal point of view. Laurent Dinh
+shows us how he arrived at this topic, what he finds interesting, in a
+very personal and relatable manner. This has the double advantage of
+introducing us to a topic that he is passionate about, while also
 giving us a glimpse of a researcher's process, without hiding the
 momentary disillusions and disappointments, but emphasising the great
 achievements. Normalizing flows are also very interesting because
@@ -138,9 +139,9 @@ go deeper in the approaches he recommends. Yoshua Bengio is also very
 good at broadening the debate around deep learning, and introducing
 very important concepts from cognitive science.
 
-** Prof. Michael I. Jordan, [[https://iclr.cc/virtual_2020/speaker_8.html][The Decision-Making Side of Machine Learning: Dynamical, Statistical and Economic Perspectives]]
+# ** Prof. Michael I. Jordan, [[https://iclr.cc/virtual_2020/speaker_8.html][The Decision-Making Side of Machine Learning: Dynamical, Statistical and Economic Perspectives]]
 
-TODO
+# TODO
 
 * Workshops
 
@@ -182,7 +183,7 @@ between ML practitioners and cognitive scientists will greatly help
 advance both fields. I only watched [[https://baicsworkshop.github.io/program/baics_45.html][Leslie Kaelbling's presentation]],
 which echoes a lot of things from her talk at the main conference. It
 complements it nicely, with more focus on intelligence, especially
-/embodied/ intelligence. I think she has the rights approach to
+/embodied/ intelligence. I think she has the right approach to
 relationships between AI and natural science, explicitly listing the
 things from her work that would be helpful to natural scientists, and
 things she wishes she knew about natural intelligences. It raises many
@@ -192,9 +193,8 @@ understand. I felt it was very motivational!
 ** [[https://iclr.cc/virtual_2020/workshops_5.html][Integration of Deep Neural Models and Differential Equations]]
 
 I didn't attend this workshop, but I think I will watch the
-presentations if I can find some time. I have found the intersection
-of differential equations and ML very interesting, ever since the
-famous [[https://papers.nips.cc/paper/7892-neural-ordinary-differential-equations][NeurIPS best paper]] on Neural ODEs. I think that such
-improvements to ML theory from other fields in mathematics would be
-extremely beneficial to a better understanding of the systems we
-build.
+presentations if I can find the time. I have found the intersection of
+differential equations and ML very interesting, ever since the famous
+[[https://papers.nips.cc/paper/7892-neural-ordinary-differential-equations][NeurIPS best paper]] on Neural ODEs. I think that such improvements to
+ML theory from other fields in mathematics would be extremely
+beneficial to a better understanding of the systems we build.