Inference and Optimization over Networks: Communication Efficiency and Optimality
2018-12-13T19:52:08Z (GMT)
We study distributed inference, learning, and optimization in scenarios involving networked entities that communicate over time-varying, random, ad-hoc networks. In this thesis, we propose distributed recursive algorithms in which the networked entities simultaneously incorporate locally sensed information and information obtained from their neighborhoods. The class of distributed algorithms proposed in this thesis encompasses distributed estimation, distributed composite hypothesis testing, and distributed optimization. A central theme of the scenarios considered is systems constrained by limited on-board batteries, and hence by limited sensing, limited computation, and extremely limited communication resources. A typical example of such a resource-constrained scenario is a distributed data-parallel machine learning system in which the individual entities are commodity devices such as cellphones.

The inherent ad-hoc nature of these setups, in conjunction with the random environments in which they operate, leaves them without a central coordinator. Keeping their resource constraints in mind, we propose distributed inference and optimization algorithms that characterize the interplay between communication, computation, and optimality, while allowing for heterogeneity among clients in terms of objectives, data collection, and statistical dependencies.

With massive data, models for learning and optimization have grown so complex as to be almost analytically intractable. In such models, obtaining gradients of the associated loss function is very expensive, and potentially infeasible, owing to the lack of a closed form for the loss. A major thrust of this thesis is therefore gradient-free, zeroth-order optimization, which accommodates distributed setups that exhibit data parallelism as well as potentially analytically intractable loss functions.
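To make the zeroth-order idea concrete, here is a minimal sketch of the standard two-point gradient estimator that such methods build on: the gradient is approximated purely from function evaluations along random directions, so no closed-form gradient is ever needed. This is our own illustration, not a construction taken from the thesis; the function names and the smoothing parameter `mu` are illustrative choices.

```python
import numpy as np

def zeroth_order_gradient(f, x, mu=1e-4, num_directions=20, rng=None):
    """Two-point zeroth-order estimate of the gradient of f at x.

    Averages directional finite differences along random Gaussian
    directions; only evaluations of f are required, never its gradient.
    """
    rng = np.random.default_rng() if rng is None else rng
    d = x.size
    g = np.zeros(d)
    for _ in range(num_directions):
        u = rng.standard_normal(d)
        g += (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u
    return g / num_directions

# Usage: plain gradient descent on a quadratic, using only function queries.
f = lambda x: np.sum((x - 1.0) ** 2)
x = np.zeros(5)
for _ in range(200):
    x -= 0.05 * zeroth_order_gradient(f, x, rng=np.random.default_rng(0))
```

The estimator trades one gradient evaluation for `2 * num_directions` function evaluations, which is exactly the regime the abstract describes: gradients are unavailable or expensive, while (noisy) function values remain cheap to query.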
Beyond gradient-free optimization, we also study projection-free zeroth-order methods for constrained optimization.

The techniques developed in this thesis are generic and of independent interest in classical fields such as stochastic approximation, statistical decision theory, and optimization.
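A projection-free method replaces the (often expensive) projection onto the constraint set with a linear minimization oracle, in the style of Frank-Wolfe. The sketch below, again our own hedged illustration rather than the thesis's algorithm, drives a Frank-Wolfe loop with the same two-point zeroth-order gradient estimate; the l1-ball constraint, step-size rule, and parameter names are assumptions made for the example.

```python
import numpy as np

def l1_lmo(g, radius=1.0):
    """Linear minimization oracle for the l1-ball: argmin_{||s||_1 <= r} <g, s>."""
    i = np.argmax(np.abs(g))
    s = np.zeros_like(g)
    s[i] = -radius * np.sign(g[i])
    return s

def zo_frank_wolfe(f, x0, steps=200, mu=1e-4, num_directions=30, radius=1.0, seed=0):
    """Projection-free (Frank-Wolfe) loop driven by a two-point
    zeroth-order gradient estimate; no projection step is ever taken."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    d = x.size
    for k in range(steps):
        # Two-point zeroth-order gradient estimate (function queries only).
        g = np.zeros(d)
        for _ in range(num_directions):
            u = rng.standard_normal(d)
            g += (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u
        g /= num_directions
        s = l1_lmo(g, radius)              # feasible extreme point of the ball
        gamma = 2.0 / (k + 2.0)            # classical Frank-Wolfe step size
        x = (1.0 - gamma) * x + gamma * s  # convex combination stays feasible
    return x

# Usage: minimize a smooth quadratic over the l1-ball of radius 1.
c = np.array([0.8, 0.0, 0.0])              # the unconstrained optimum lies inside the ball
f = lambda x: np.sum((x - c) ** 2)
x = zo_frank_wolfe(f, np.zeros(3))
```

Because every iterate is a convex combination of feasible points, feasibility is maintained throughout without a single projection, which is the appeal of such methods when projections are costly.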