Title: Online Learning: the Stochastic Gradient Descent Family of Algorithms
Abstract: The focus of this chapter is to introduce the stochastic gradient descent family of online/adaptive algorithms. The gradient descent approach to optimization is presented, and the stochastic approximation method is discussed. The emphasis in this chapter is on the squared-error loss function. The least-mean-squares (LMS) algorithm and its variants, such as the affine projection algorithm (APA) and the normalized LMS (NLMS), are introduced. Finally, distributed learning is discussed, with a focus on distributed versions of the LMS.
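To make the abstract's central object concrete, the following is a minimal sketch of the LMS algorithm under the squared-error loss: at each step the filter weights are nudged along the instantaneous negative gradient, `w ← w + μ e(n) u(n)`. The function name `lms`, the step size `mu`, and the system-identification demo are illustrative choices, not code from the chapter.

```python
import numpy as np

def lms(x, d, num_taps=4, mu=0.05):
    """Least-mean-squares adaptive filter (illustrative sketch).

    x : input signal, d : desired signal, mu : step size.
    Returns the final weight vector and the a priori error sequence.
    """
    w = np.zeros(num_taps)
    errors = []
    for n in range(num_taps - 1, len(x)):
        u = x[n - num_taps + 1:n + 1][::-1]  # most recent samples, newest first
        y = w @ u                            # filter output
        e = d[n] - y                         # instantaneous error
        w = w + mu * e * u                   # stochastic-gradient (LMS) update
        errors.append(e)
    return w, np.array(errors)

# Usage: identify a hypothetical "unknown" 4-tap FIR system from input/output data.
rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
h_true = np.array([0.5, -0.3, 0.2, 0.1])   # assumed system, for illustration only
d = np.convolve(x, h_true)[:len(x)]        # desired signal = system output
w, errors = lms(x, d, num_taps=4, mu=0.05)
```

In this noiseless setting the weights converge to the true system response; the NLMS variant mentioned in the abstract would divide the update by `u @ u` to make the step size scale-invariant.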
Publication Year: 2020
Publication Date: 2020-01-01
Language: en
Type: book-chapter
Indexed In: ['crossref']
Access and Citation
Cited By Count: 4