Non-Convex Optimization: RMSProp Based Optimization for Long Short-Term Memory Network
Author
Yan, Jianzhi

Keyword
Nonconvex programming
Long Short-Term Memory (LSTM)
Back propagation (Artificial intelligence)
RMSProp optimization

Date Published
2020-05

Abstract
This project gives a comprehensive picture of non-convex optimization for deep learning, explaining Long Short-Term Memory (LSTM) and RMSProp in detail. We start by illustrating the internal mechanisms of LSTM, such as the network structure and backpropagation through time (BPTT). We then introduce RMSProp optimization, together with the relevant mathematical theorems and proofs, which give a clear picture of how the RMSProp algorithm helps escape saddle points. Finally, we train an LSTM with RMSProp in our experiments; the results demonstrate the method's efficiency and accuracy, and in particular how it outperforms traditional strategies for non-convex optimization.
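As a concrete illustration of the update rule the abstract refers to, the following is a minimal NumPy sketch of a generic RMSProp step; it is not the project's own code, and the function name rmsprop_update, the hyperparameter values, and the toy quadratic objective are illustrative assumptions.

```python
import numpy as np

def rmsprop_update(theta, grad, cache, lr=1e-2, rho=0.9, eps=1e-8):
    # Keep an exponential moving average of the squared gradient,
    # then scale the step by its root mean square (the "RMS" in RMSProp).
    cache = rho * cache + (1.0 - rho) * grad ** 2
    theta = theta - lr * grad / (np.sqrt(cache) + eps)
    return theta, cache

# Toy usage on f(theta) = theta^2, whose gradient is 2 * theta.
theta, cache = np.array([5.0]), np.zeros(1)
for _ in range(500):
    grad = 2.0 * theta
    theta, cache = rmsprop_update(theta, grad, cache, lr=0.05)
print(theta)  # driven close to the minimizer at 0
```

In practice, training an LSTM would rely on a framework-provided optimizer such as torch.optim.RMSprop rather than a hand-rolled loop like this one; the sketch is only meant to show the per-parameter gradient normalization the abstract credits with helping escape saddle points.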