

K**P
I've read the textbook's treatment of DP up to chapter 6. Since the book takes a mathematical approach, I think it is very well made, apart from a few typos. One small dissatisfaction for me is the style of the book. I prefer a style that introduces the general result first, then proves why it holds and where it comes from, and then gives many examples, which are very helpful for readers like me. This book, however, introduces examples first and then constructs the general form from them. I am sure everybody has a different preference; this one is just not my type. Nevertheless, the content this textbook covers is wonderful.
D**
So much knowledge!
This set of two books is just an absolute archive of knowledge. Everything you need to know on Optimal Control and Dynamic Programming, from beginner level to advanced intermediate, is here. Plus, the worked examples are great; they aren't boring examples, either. This set pairs well with Simulation-Based Optimization by Abhijit Gosavi. I guess the point is, this book should be the central framework of any graduate course in optimal control and operations research.
H**N
Five Stars
A classic!
N**O
Kindle version is terrible do not buy it
Seems like someone scanned it in while very drunk or something. The publisher really should lift its game; this is just embarrassing for them.
A**I
Excellent book
This is easily the best book on dynamic programming. It certainly is the most up-to-date book on this topic. The first volume covers numerous topics such as deterministic control, the HJB equation for the deterministic case, the Pontryagin principle, finite-horizon MDPs, partially observable MDPs, and rollout heuristics. The second volume treats the infinite-horizon case for the regular MDP: average reward, discounted reward, semi-Markov control, and even some reinforcement learning.

I love the notation. The proofs in this book are much easier than those you will find elsewhere. (This opinion is based on my study of proofs in other texts.) The treatment is very sophisticated and yet very accessible! Furthermore, a real bonus here (something you won't find in the other books) is a discussion of the stochastic shortest path (SSP). The SSP makes it easy to analyze the average reward problem and the finite-horizon problem with a stationary transition probability structure.

I strongly recommend this book to all readers interested in understanding the basics of DP and the convergence proofs underlying the DP machinery. It is a must on your bookshelf if you are working on research in DP or topics related to DP such as reinforcement learning or adaptive (approximate) DP.
S**T
Best book I've used so far.
This book does a very good job presenting both deterministic and stochastic optimal control. The author does a particularly good job in presenting the derivation of the Bellman equation and its relation to variational formulations for deterministic optimal control. There are also many very good problems with which the reader can test her understanding, and the author has made many solutions available on his web page. Problems range from testing theoretical understanding to determining optimal policies for various control problems. There are even some exercises which ask the reader to develop parallel codes to solve some problems, so I think there is something in this book for everybody.

Richard Bellman once said that there is considerably more to optimal control than just locating the eigenvalues of some matrix in the complex plane. I believe that Bertsekas has remained faithful to Bellman's view with the broad range of problems which he attacks through dynamic programming. I am currently doing a PhD thesis in mathematics studying Bellman equations, and I cannot think of a better source for intuition about control problems than Bertsekas' book. He even does a nice job of pointing out where he has omitted technicalities in the mathematical treatment, for those who wish a very rigorous approach to control. If there is a better book out there, I am not aware of it.
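[Editor's note: the backward Bellman recursion this reviewer praises can be sketched in a few lines. The toy shortest-path instance below is an illustration invented for this note, not an example from the book.]

```python
import math

# Toy deterministic shortest-path problem (all numbers are illustrative).
# arc_cost[i][j] is the cost of moving from node i to node j; math.inf means no arc.
INF = math.inf
arc_cost = [
    [INF, 1.0, 4.0, INF],   # from node 0
    [INF, INF, 2.0, 6.0],   # from node 1
    [INF, INF, INF, 1.5],   # from node 2
    [INF, INF, INF, 0.0],   # node 3 is terminal (zero-cost self-loop)
]
N = len(arc_cost)

# Backward Bellman recursion: J[i] = min_j (arc_cost[i][j] + J[j]),
# iterated N-1 times starting from the terminal cost-to-go.
J = [INF, INF, INF, 0.0]
for _ in range(N - 1):
    J = [min(arc_cost[i][j] + J[j] for j in range(N)) for i in range(N)]

print(J[0])  # optimal cost from node 0 to node 3: 4.5
```

The recursion converges in at most N-1 steps for a deterministic shortest-path problem with nonnegative arc costs, since an optimal path visits each node at most once.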
R**L
rp
Really nice book on dynamic programming... easy to understand and contains all requisite details.
S**R
It is a good start to study reinforcement learning and beyond
Prof. Bertsekas is definitely a top-level researcher in this domain. It is a good start for studying reinforcement learning and beyond.