Deniz Gündüz received the M.S. and Ph.D. degrees in electrical engineering from NYU Tandon School of Engineering (formerly Polytechnic University) in 2004 and 2007, respectively. After his Ph.D., he served as a postdoctoral research associate at Princeton University, as a consulting assistant professor at Stanford University, and as a research associate at CTTC in Barcelona, Spain. In Sep. 2012, he joined the Electrical and Electronic Engineering Department of Imperial College London, UK, where he is currently a Professor of Information Processing, and serves as the deputy head of the Intelligent Systems and Networks Group. He is also a part-time faculty member at the University of Modena and Reggio Emilia, Italy, and has held visiting positions at the University of Padova (2018-2020) and Princeton University (2009-2012).
His research interests lie in the areas of communications and information theory, machine learning, and privacy. Dr. Gündüz is a Fellow of the IEEE, and a Distinguished Lecturer of the IEEE Information Theory Society (2020-22). He is an Area Editor for the IEEE Transactions on Information Theory, IEEE Transactions on Communications, and the IEEE Journal on Selected Areas in Communications (JSAC) – Special Series on Machine Learning in Communications and Networks. He also serves as an Editor of the IEEE Transactions on Wireless Communications. He is the recipient of the IEEE Communications Society – Communication Theory Technical Committee (CTTC) Early Achievement Award in 2017, a Starting Grant of the European Research Council (ERC) in 2016, and several best paper awards.
Update: “Semantic and Goal Oriented Communications”
Abstract: Traditional digital communication systems are designed to convert a noisy channel into a reliable bit-pipe; they are oblivious to the origin of the bits or how they are eventually used at the receiver. However, with the advances in machine learning (ML) technologies and their widespread adoption, in the near future most communications will take place among machines, where massive amounts of data are available at the transmitter, and the goal is often not to transmit this data to the receiver, but instead to enable the receiver to make the right inference or take the right action. On the other hand, the ML algorithms designed to achieve these goals either assume centralised implementation at powerful cloud servers, or assume finite-rate but delay- and error-free communication links. In this talk, I will show that the current approach of separating communication system design from ML algorithm design is highly suboptimal for emerging edge intelligence applications, and that an end-to-end design taking into account the “semantics” of the underlying data and the final “goal” at the receiver is essential.