diff --git a/_site/atom.xml b/_site/atom.xml
index 49b53cd..1e6eb4e 100644
--- a/_site/atom.xml
+++ b/_site/atom.xml
@@ -43,6 +43,20 @@
For more books on linear programming, the two books Dantzig (1997) and Dantzig (2003) are very complete, if somewhat more mathematically advanced. Bertsimas and Tsitsiklis (1997) is also a great reference, if you can find it.
For all the other subfields, this great StackExchange answer contains a lot of useful references, including most of the above. Of particular note are Peyré and Cuturi (2019) for optimal transport, Boyd (2004) for convex optimization (freely available online), and Nocedal (2006) for numerical optimization. Kochenderfer (2019) is not in the list (because it is very recent) but is also excellent, with examples in Julia covering nearly every kind of optimization algorithm.
If you would like to watch video lectures, there are a few good opportunities freely available online, in particular on MIT OpenCourseWare. The list of courses at MIT is available on their webpage. I haven't actually looked in detail at the courses' content, so I cannot vouch for them directly, but MIT courses are generally of excellent quality. (I am more comfortable reading books than watching lecture videos online. Although I liked attending classes during my studies, I do not have the same feeling in front of a video. When I read, I can re-read the same sentence three times, pause to look something up, or skim a few paragraphs. I find that not being able to do that with a video greatly diminishes my ability to concentrate.) Most courses are also taught by Bertsimas and Bertsekas, who are very famous and wrote many excellent books.
Of particular note are:
+Another interesting course I found online is Deep Learning in Discrete Optimization, at Johns Hopkins. It is taught by William Cook, who is the author of In Pursuit of the Traveling Salesman, a nice introduction to the TSP problem in a readable form. The course contains an interesting overview of deep learning and integer programming, with a focus on connections, and applications to recent research areas in ML (reinforcement learning, attention, etc.).