This paper reviews recent advances in online/adaptive sparsity-promoting algorithms. The emphasis is on a recent family of schemes that build upon convex analytic tools. The benefit of this algorithmic family is that it can easily accommodate a set of convex constraints and also bypass the need for differentiability of the cost functions; it can thus readily handle notions related to robustness and their associated costs. Extensions to constraints that are realized via mappings whose fixed point sets are non-convex are also discussed, as is the case of learning in a distributed fashion.