Professor Bing Liu: Continuous Machine Learning
Classic machine learning works by learning a model from a set of training examples. Although this paradigm has been very successful, it requires a large amount of manually labeled data, and it is only suitable for well-defined, static, and narrow domains. Going forward, this isolated learning paradigm is no longer sufficient. For example, it is almost impossible to pre-train intelligent personal assistants, chatbots, self-driving cars, and other robotic systems so that they can interact intelligently with their dynamic environments, because it is very difficult for humans to provide labeled examples or other supervised information covering all possible scenarios that the systems may encounter. Thus, such systems must learn on the job by themselves continuously, retain the learned knowledge, and use it to help future learning. When faced with an unfamiliar situation, they must adapt their past knowledge to deal with it and learn from it. This general learning capability is one of the hallmarks of human intelligence. Without it, it is probably impossible to build a truly intelligent system. In this talk, I will introduce this learning paradigm and discuss some recent research in this direction.
Bing Liu is a distinguished professor of Computer Science at the University of Illinois at Chicago. He received his Ph.D. in Artificial Intelligence from the University of Edinburgh. His research interests include sentiment analysis, lifelong learning, data mining, machine learning, and natural language processing (NLP). He has published extensively in top conferences and journals. Two of his papers have received 10-year Test-of-Time awards from KDD. He has also authored four books: two on sentiment analysis, one on lifelong learning, and one on Web mining. Some of his work has been widely reported in the press, including a front-page article in the New York Times. In professional service, he served as Chair of ACM SIGKDD (the ACM Special Interest Group on Knowledge Discovery and Data Mining) from 2013-2017. He has also served as program chair of many leading data mining conferences, including KDD, ICDM, CIKM, WSDM, SDM, and PAKDD, as associate editor of leading journals such as TKDE, TWEB, DMKD, and TKDD, and as area chair or senior PC member of numerous NLP, AI, Web, and data mining conferences. He is a Fellow of ACM, AAAI, and IEEE.
Rajeev Rastogi: Machine Learning @ Amazon
In this talk, I will first provide an overview of key problem areas where we apply Machine Learning (ML) techniques within Amazon, such as product demand forecasting, product search, and information extraction from reviews, along with the associated technical challenges. I will then talk about three specific applications where we use a variety of methods to learn semantically rich representations of data: question answering, where we use deep learning techniques; product size recommendations, where we use probabilistic models; and fake review detection, where we use tensor factorization algorithms.
Rajeev Rastogi is a Director of Machine Learning at Amazon, where he is developing ML platforms and applications for the e-commerce domain. Previously, he was Vice President of Yahoo! Labs Bangalore and the founding Director of the Bell Labs Research Center in Bangalore, India. Rajeev is an ACM Fellow and a Bell Labs Fellow. He is active in the fields of databases, data mining, and networking, and has served on the program committees of several conferences in these areas. He currently serves on the editorial board of CACM, and has previously been an associate editor for IEEE Transactions on Knowledge and Data Engineering. He has published over 125 papers and holds over 50 patents. Rajeev received his B.Tech degree from IIT Bombay and a PhD in Computer Science from the University of Texas at Austin.
Kate Smith-Miles: Instance Spaces for Objective Assessment of Algorithms and Benchmark Test Suites
Objective assessment of algorithm performance is notoriously difficult, with conclusions often inadvertently biased towards the chosen test instances. Rather than reporting the average performance of algorithms across a set of chosen instances, we discuss a new methodology that enables the strengths and weaknesses of different algorithms to be compared across a broader generalised instance space. Initially developed for combinatorial optimisation, the methodology has recently been extended to machine learning classification, and used to ask whether the UCI repository and OpenML are sufficient as benchmark test suites. Results will be presented to demonstrate: (i) how pockets of the instance space can be found where algorithm performance differs significantly from an algorithm's average performance; (ii) how the properties of the instances can be used to predict algorithm performance on previously unseen instances with high accuracy; (iii) how the relative strengths and weaknesses of each algorithm can be visualised and measured objectively; and (iv) how new test instances can be generated to fill the instance space and offer greater insights into algorithmic power.
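The performance-prediction idea in point (ii) can be illustrated with a minimal sketch: describe each test instance by a feature vector, record each algorithm's observed score on the known instances, then predict which algorithm will perform best on an unseen instance from its nearest neighbours in feature space. Everything below — the instance names, the two feature dimensions, the algorithm names, and the scores — is invented for illustration and is not from the talk or the actual instance-space methodology implementation.

```python
import math

# Hypothetical instances: (feature vector, observed score per algorithm).
# Features might be, e.g., normalised problem size and feature correlation.
# All values are invented for illustration.
instances = {
    "inst_a": ([0.2, 0.9], {"algo1": 0.85, "algo2": 0.60}),
    "inst_b": ([0.8, 0.1], {"algo1": 0.55, "algo2": 0.90}),
    "inst_c": ([0.3, 0.8], {"algo1": 0.80, "algo2": 0.65}),
    "inst_d": ([0.9, 0.2], {"algo1": 0.50, "algo2": 0.88}),
}

def predict_best(features, k=1):
    """Predict the best algorithm for an unseen instance from the
    k nearest known instances in feature space (a simple k-NN model)."""
    neighbours = sorted(
        instances.values(),
        key=lambda iv: math.dist(features, iv[0]),
    )[:k]
    # Sum each algorithm's score over the neighbours and pick the best.
    totals = {}
    for _, scores in neighbours:
        for algo, score in scores.items():
            totals[algo] = totals.get(algo, 0.0) + score
    return max(totals, key=totals.get)

# An unseen instance close to inst_a and inst_c should favour algo1.
print(predict_best([0.25, 0.85]))  # -> algo1
```

In the actual methodology the feature space is also projected to 2D so that each algorithm's "footprint" (region of strong performance) can be visualised; the sketch keeps only the prediction step.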
Kate Smith-Miles holds an Australian Laureate Fellowship (2014-2019) from the Australian Research Council, and is a Professor of Applied Mathematics at The University of Melbourne. She was previously Head of the School of Mathematical Sciences at Monash University (2009-2014), and Head of the School of Engineering and IT at Deakin University (2006-2009). Having held chairs in three disciplines (mathematics, engineering and IT) has given her a broad interdisciplinary focus, and she was the inaugural Director of MAXIMA (Monash Academy for Cross and Interdisciplinary Mathematical Applications) from 2014-2017.
Kate has published around 250 refereed journal and international conference papers in the areas of neural networks, optimisation, machine learning, and various applied mathematics topics. She has supervised to completion 24 PhD students, and has been awarded over AUD$12 million in competitive grants. In 2010 she was awarded the Australian Mathematical Society Medal for distinguished research, and in 2017 she was awarded the E. O. Tuck Medal for outstanding research and distinguished service in applied mathematics by the Australian and New Zealand Industrial and Applied Mathematics Society (ANZIAM). Kate is a Fellow of the Institute of Engineers Australia and a Fellow of the Australian Mathematical Society (AustMS). She is the current President of the AustMS, and a member of the Australian Research Council’s College of Experts from 2017-2019. She also regularly acts as a consultant to industry in the areas of optimisation, data mining, intelligent systems, and mathematical modelling.