Unit: School of Computer Science
Talk Title: Unification of Deep Learning and Reasoning
Speaker: Prof. Dapeng Oliver Wu (Dept. of Electrical & Computer Engineering, University of Florida, USA)
Time: Tuesday, March 20, 2018, 10:30–11:30 a.m.
Venue: Academic Lecture Hall 216, Engineering Building No. 1
Biography:
Dapeng Oliver Wu received his Ph.D. in Electrical and Computer Engineering from Carnegie Mellon University, Pittsburgh, PA, in 2003. Since 2003, he has been on the faculty of the Department of Electrical and Computer Engineering at the University of Florida, Gainesville, FL, where he is currently a Professor. His research interests are in the areas of networking, communications, video coding, image processing, computer vision, signal processing, and machine learning.
He received the University of Florida Term Professorship Award in 2017, the University of Florida Research Foundation Professorship Award in 2009, the AFOSR Young Investigator Program (YIP) Award in 2009, the ONR Young Investigator Program (YIP) Award in 2008, the NSF CAREER Award in 2007, the IEEE Transactions on Circuits and Systems for Video Technology (CSVT) Best Paper Award for 2001, the Best Paper Award at GLOBECOM 2011, and the Best Paper Award at QShine 2006. He currently serves as Editor-in-Chief of IEEE Transactions on Network Science and Engineering, and as Associate Editor of IEEE Transactions on Communications, IEEE Transactions on Signal and Information Processing over Networks, and IEEE Signal Processing Magazine. He was the founding Editor-in-Chief of the Journal of Advances in Multimedia from 2006 to 2008, and an Associate Editor of IEEE Transactions on Circuits and Systems for Video Technology, IEEE Transactions on Wireless Communications, and IEEE Transactions on Vehicular Technology. He served as Technical Program Committee (TPC) Chair for IEEE INFOCOM 2012. He was elected a Distinguished Lecturer by the IEEE Vehicular Technology Society in 2016. He is an IEEE Fellow.
Abstract:
While deep learning has achieved great success on a wide range of learning problems, current models are still far from replicating many functions that a normal human brain can perform. Memorization-based deep architectures have recently been proposed with the objective of learning and predicting better.
In this talk, I will present a model that couples a primary learner with an adjacent structured memory bank, which can not only predict the output for a given input but also relate that input to all of its past memorized instances, aiding creative understanding. The talk presents a spatially forked deep learning architecture that can predict and reason about the nature of an input belonging to a category never seen in the training data, by relating it to the memorized past representations at the higher layers. Characterizing images of unseen geometrical figures is used as an example to showcase the operational success of the proposed framework.
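To make the general idea concrete, the sketch below is my own minimal illustration, not the speaker's implementation: a hypothetical primary learner (an embedding function) is paired with a structured memory bank of class prototypes, a query is classified by similarity to the memorized representations, and an input whose similarity to every stored prototype is low is treated as an unseen category and described in terms of its closest memories. The class names, embedding function, and similarity threshold are assumptions for illustration only.

# Minimal sketch (assumption, not the speaker's actual model): a primary learner
# plus an adjacent memory bank of class prototypes.
import numpy as np

class MemoryAugmentedLearner:
    def __init__(self, embed_fn, threshold=0.8):
        self.embed_fn = embed_fn      # hypothetical "primary learner": raw input -> feature vector
        self.memory = {}              # memory bank: label -> prototype vector
        self.threshold = threshold    # minimum cosine similarity to count as a known category

    def memorize(self, x, label):
        # Store (or running-average) the prototype for a seen category.
        z = self.embed_fn(x)
        self.memory[label] = z if label not in self.memory else 0.5 * (self.memory[label] + z)

    def predict(self, x):
        # Predict a known label, or relate an unseen input to its closest memories.
        z = self.embed_fn(x)
        sims = {lbl: float(np.dot(z, p) / (np.linalg.norm(z) * np.linalg.norm(p) + 1e-8))
                for lbl, p in self.memory.items()}
        best = max(sims, key=sims.get)
        if sims[best] >= self.threshold:
            return {"label": best, "similarity": sims[best]}
        ranked = sorted(sims.items(), key=lambda kv: -kv[1])
        return {"label": "unseen", "closest_memories": ranked[:3]}

# Toy usage with a hypothetical identity embedding on 2-D "shape descriptors".
if __name__ == "__main__":
    learner = MemoryAugmentedLearner(embed_fn=lambda x: np.asarray(x, dtype=float))
    learner.memorize([1.0, 0.0], "triangle")
    learner.memorize([0.0, 1.0], "square")
    print(learner.predict([0.9, 0.1]))   # recognized as close to the "triangle" memory
    print(learner.predict([0.7, 0.7]))   # unseen figure, related to both memorized categories

The second query falls below the similarity threshold for every stored prototype, so instead of forcing a label the sketch reports which memorized categories it most resembles, which is the flavor of "reasoning about an unseen category" described in the abstract.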