
Distinguished Seminar: D. Richard Brown, Worcester Polytechnic Institute, "Machine Learning for End-to-End Optimization of Communication Systems"



Distinguished Seminar Series

Rick Brown, Professor and Department Head in the Department of Electrical and Computer Engineering at Worcester Polytechnic Institute, will give a distinguished lecture on October 20, 2020.

Title:

Machine Learning for End-to-End Optimization of Communication Systems

Abstract:

In this talk, I give an overview of a recent paradigm shift in the physical-layer design of communication systems. Rather than following the traditional approach of separately designing source codes, channel codes, and modulation schemes, researchers have recently considered combining these individual functions into a single function at the transmitter and receiver. The resulting structure can be thought of as an autoencoder, and machine learning can be used to jointly optimize the transmitter and receiver components in a single process to maximize the reliability of correctly communicating a given message. I will review some of the recent literature exploring this idea and present our recent results on joint coding and modulation. First, I will present a more efficient formulation of the problem that retains the generality of the early work with significantly fewer parameters and faster training. I will then present some surprising results for simple classical communication settings and, rather than treating the learned network as a black box, share some interpretations of the codes learned by the system. These results suggest new insights into the underlying structure of good code designs. Finally, I will show examples of how an autoencoder can find joint coding and modulation schemes that outperform prior approaches in settings where no good codes are known.
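To make the autoencoder framing concrete, the following is a minimal sketch in PyTorch (my own illustration, not code from the talk): a transmitter network maps each of M possible messages to n complex channel uses under a power constraint, an AWGN channel adds noise, and a receiver network recovers the message. Training minimizes cross-entropy, i.e., maximizes the reliability of correct decoding. The sizes, SNR, and architecture are all hypothetical.

# Minimal sketch of the autoencoder view of a communication system (my own
# illustration, not code from the talk). A transmitter network maps each of
# M messages to n channel uses, an AWGN channel adds noise, and a receiver
# network decodes; training jointly optimizes coding and modulation.
import torch
import torch.nn as nn
import torch.nn.functional as F

M, n = 16, 7                      # hypothetical message set size and block length
snr_db = 7.0                      # hypothetical training SNR
noise_std = 10 ** (-snr_db / 20)  # assumes unit signal power per real dimension

transmitter = nn.Sequential(nn.Linear(M, 64), nn.ReLU(), nn.Linear(64, 2 * n))
receiver = nn.Sequential(nn.Linear(2 * n, 64), nn.ReLU(), nn.Linear(64, M))
opt = torch.optim.Adam(
    list(transmitter.parameters()) + list(receiver.parameters()), lr=1e-3)

for step in range(2000):
    msgs = torch.randint(0, M, (256,))                    # random message batch
    x = transmitter(F.one_hot(msgs, M).float())           # learned code + modulation
    x = x * (2 * n) ** 0.5 / x.norm(dim=1, keepdim=True)  # power constraint
    y = x + noise_std * torch.randn_like(x)               # AWGN channel
    loss = F.cross_entropy(receiver(y), msgs)             # reliability objective
    opt.zero_grad(); loss.backward(); opt.step()

After training, the learned code can be read off by passing each one-hot message through the transmitter, which is the kind of inspection that makes the interpretations mentioned above possible.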

Bio:

D. Richard Brown III is currently a Professor and the Department Head in the Department of Electrical and Computer Engineering at Worcester Polytechnic Institute, where he has been a faculty member since 2000. He received a PhD in Electrical Engineering from Cornell University in 2000 and MS and BS degrees in Electrical Engineering from the University of Connecticut in 1996 and 1992, respectively. From 1992 to 1997, he was a design engineer at General Electric Electrical Distribution and Control in Plainville, Connecticut. From August 2007 to June 2008, he held an appointment as a Visiting Associate Professor at Princeton University. From 2016 to 2018, he served as a Program Director at the National Science Foundation in the Computing and Communications Foundations (CCF) division of the Directorate for Computer & Information Science & Engineering (CISE). He is also currently serving as an Associate Editor for IEEE Transactions on Wireless Communications.


Special Presidential Fellow Discussion

Title: Meta-Learning: Theoretical Convergence and Comparison

Time: 10/20/2020 from 1 p.m. to 1:30 p.m.

Speaker: Kaiyi Ji

Zoom link: https://osu.zoom.us/j/92729751601?pwd=c3FMaU9SV0RKY1VXQjY5MStvVU1pUT09 

Abstract: Meta-learning, or learning to learn, has emerged as a powerful tool in machine learning practice for quickly learning new tasks using prior experience from related tasks. Two types of meta-learning approaches are popular in current practice. The first type, such as the model-agnostic meta-learning (MAML) algorithm, attempts to learn a good initialization for all model parameters and can hence be computationally costly. The second type, such as almost no inner loop (ANIL), treats most model parameters as embedded common features and trains an initialization only for the remaining small portion of parameters. Recent empirical studies have demonstrated that algorithms of the second type can be significantly more sample- and computationally efficient, yet this common practice has had no theoretical guarantee. In this talk, I present our recent results that bridge this gap. I will first briefly introduce the idea of meta-learning and the two types of algorithms. I will then present our theoretical analysis of these algorithms, compare their computational complexity, and characterize the impact of loss geometries and hyperparameters on convergence. I will finally comment on the insights and guidelines that we obtain from the theory and on future issues that should be addressed. The talk is based on our recent work accepted to the upcoming NeurIPS conference.
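As a rough illustration of the distinction between the two types of algorithms (a sketch of mine, not code from the paper), the snippet below contrasts the inner loops of MAML and ANIL in PyTorch: MAML takes a per-task gradient step on all parameters, while ANIL adapts only the small head on top of shared features, which is where the computational savings come from. The model, sizes, and data are hypothetical.

# Rough sketch (mine, not the paper's code) contrasting the inner loops of
# the two algorithm types: MAML adapts every parameter per task, while ANIL
# adapts only the final head on top of shared features.
import torch
import torch.nn as nn
import torch.nn.functional as F

body = nn.Sequential(nn.Linear(10, 32), nn.ReLU())  # shared feature layers
head = nn.Linear(32, 1)                             # small task-specific head
inner_lr = 0.01

def inner_step_maml(loss):
    # MAML: gradient step on all parameters (body and head)
    params = list(body.parameters()) + list(head.parameters())
    grads = torch.autograd.grad(loss, params, create_graph=True)
    return [p - inner_lr * g for p, g in zip(params, grads)]

def inner_step_anil(loss):
    # ANIL: gradient step on the head only; the body stays frozen, so the
    # per-task adaptation differentiates through far fewer parameters
    params = list(head.parameters())
    grads = torch.autograd.grad(loss, params, create_graph=True)
    return [p - inner_lr * g for p, g in zip(params, grads)]

# One task's adaptation step on hypothetical data
x, y = torch.randn(8, 10), torch.randn(8, 1)
task_loss = F.mse_loss(head(body(x)), y)
adapted_all = inner_step_maml(task_loss)   # MAML: copies of every parameter
adapted_head = inner_step_anil(task_loss)  # ANIL: copies of the head only

The outer loop (omitted here) would evaluate the adapted parameters on held-out task data and update the initialization; the complexity comparison in the talk stems from how many parameters each inner step differentiates through.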

Bio: Kaiyi Ji is a fifth-year PhD student in the Department of Electrical and Computer Engineering at The Ohio State University, supervised by Prof. Yingbin Liang. He was a visiting student research collaborator in the Department of Electrical Engineering at Princeton University from March 2020 to April 2020, working with Prof. Vincent Poor and Prof. Jason Lee. He received his B.E. degree from the University of Science and Technology of China in 2016. His research interests lie in the theory of machine learning and large-scale optimization, including characterizing convergence rates and generalization error bounds for machine learning problems (such as meta-learning, generative adversarial networks (GANs), and multi-armed bandits) as well as for optimization problems (such as bilevel programming, online learning, and gradient-free optimization). His earlier PhD work also explored the performance of cache networks, including theory and applications to database systems. He has published more than 10 journal and conference papers in top venues including ICML, NeurIPS, AAAI, IJCAI, INFOCOM, and SIGMETRICS. He received the 2016/2017 University Fellowship and the 2020/21 Presidential Fellowship at The Ohio State University.