Laura Haas
Dean, College of Information and Computer Sciences
University of Massachusetts Amherst
Computing for the Common Good: Meeting Humanity’s Needs in the Age of Machine Learning
Abstract: We are poised on the brink of a new era of computing, one in which computer systems will be able to handle many of the tasks that only humans can do today. These new systems will feed on data and leverage a diverse set of analytic tools and strategies, learning constantly. They will be able to partner with people to improve our lives and enhance our ability to solve complex problems. But risks arise with this new technology: systems can exhibit bias, leading to unfair results, or they may behave unexpectedly in ways that are hard to explain. To counteract these issues, researchers at the University of Massachusetts Amherst are embracing a vision of “Computing for the Common Good”: computing that not only may be used for good, but that is also, intrinsically, good – fair, accessible, explainable, trustworthy, effective, and efficient. In this talk, I will discuss this vision and some of the key technical building blocks that may enable us to achieve it. I will illustrate each concept with examples of ongoing research and its application, and close with a few thoughts on what more is needed.
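As a concrete illustration of the bias concern the abstract raises (not drawn from the talk itself), a minimal sketch of one common fairness check, demographic parity, computed over hypothetical model decisions; the data and function names are assumptions for illustration only:

```python
# Illustrative sketch (hypothetical data): demographic parity compares the
# rate of positive decisions a model gives to each group. A large gap is
# one simple, if coarse, signal of the kind of bias the abstract describes.

def positive_rate(decisions):
    """Fraction of decisions that are positive (1)."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    """Absolute difference between the highest and lowest positive rates."""
    rates = [positive_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical loan-approval decisions for two groups.
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 6/8 approved
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],  # 3/8 approved
}
print(f"demographic parity gap: {demographic_parity_gap(decisions):.3f}")
```

Demographic parity is only one of several competing fairness criteria; which one is appropriate depends on the application, which is part of why fairness is a research question rather than a checkbox.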
Bio: Dr. Laura Haas joined the University of Massachusetts Amherst in August 2017 as Dean of the College of Information and Computer Sciences, after a long career at IBM, where she was accorded the title IBM Fellow in recognition of her impact. At the time of her retirement from IBM, she was Director of IBM Research’s Accelerated Discovery Lab (2011-2017), after serving as Director of Computer Science at IBM’s Almaden Research Center from 2005 to 2011. She had worldwide responsibility for IBM Research’s exploratory science program from 2009 through 2013. From 2001 to 2005, she led the Information Integration Solutions architecture and development teams in IBM's Software Group. Previously, Dr. Haas was a research staff member and manager at Almaden. She is best known for her work on the Starburst query processor, from which DB2 LUW was developed; on Garlic, a system that allowed integration of heterogeneous data sources; and on Clio, the first semi-automatic tool for heterogeneous schema mapping. She has received several IBM awards for Outstanding Innovation and Technical Achievement, an IBM Corporate Award for information integration technology, the Anita Borg Institute Technical Leadership Award, and the ACM SIGMOD Edgar F. Codd Innovation Award. Dr. Haas was Vice President of the VLDB Endowment Board of Trustees from 2004 to 2009 and served on the board of the Computing Research Association from 2007 to 2016 (vice chair 2009-2015); she currently serves on the National Academies Computer Science and Telecommunications Board (2013-2019). She is an ACM Fellow, a member of the National Academy of Engineering, and a Fellow of the American Academy of Arts and Sciences.
Universidade de São Paulo
Knowledge-based machine learning: How? Why?
Abstract: Knowledge representation and machine learning are two core topics in the development of artificial intelligence. There are now solid tools for knowledge representation, ranging from description logics to probabilistic graphical models. There is now also a deluge of data and a flood of successful machine learning techniques. How can all these tools work together? We illustrate some of the issues in knowledge-based machine learning by investigating modeling languages that mix statistical and logical reasoning, thus bringing first-order power to probabilistic modeling. But why should we care at all about knowledge representation, when it seems that big datasets are enough to guide us in every decision? One reason is to produce accountable and explainable decisions: we discuss issues that arise when explaining knowledge completion from large datasets.
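To make the idea of mixing statistical and logical reasoning concrete (this example is my own sketch, not material from the talk), here is a toy model in the spirit of Markov logic: weighted logical rules define a distribution where a world's probability is proportional to exp of the summed weights of the rules it satisfies. The atoms, rules, and weights below are invented for illustration:

```python
import math
from itertools import product

# Toy Markov-logic-style model: with a single-person domain, first-order
# rules reduce to propositional formulas over two atoms. Each world's
# probability is proportional to exp(sum of weights of satisfied rules).

ATOMS = ["smokes", "cancer"]

# (weight, formula over a world dict) -- each satisfied rule adds its weight.
RULES = [
    (1.5, lambda w: (not w["smokes"]) or w["cancer"]),  # Smokes(x) => Cancer(x)
    (0.5, lambda w: w["smokes"]),                       # weak prior on smoking
]

def world_weight(world):
    return math.exp(sum(wt for wt, f in RULES if f(world)))

def prob(query, evidence):
    """P(query | evidence) by enumerating all truth assignments."""
    num = den = 0.0
    for values in product([False, True], repeat=len(ATOMS)):
        world = dict(zip(ATOMS, values))
        if all(world[a] == v for a, v in evidence.items()):
            w = world_weight(world)
            den += w
            if world[query]:
                num += w
    return num / den

print(prob("cancer", {"smokes": True}))   # higher: the implication rule fires
print(prob("cancer", {"smokes": False}))  # 0.5: the rule is vacuously satisfied
```

Note that the implication is soft: smoking raises the probability of cancer without forcing it, which is exactly the kind of behavior that pure logic cannot express and pure statistics cannot state as a rule. Brute-force enumeration works only for tiny domains; real systems use dedicated lifted or approximate inference.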
Bio: He is a Full Professor at Universidade de São Paulo, Brazil, where he works on probabilistic reasoning and machine learning, with a special interest in formalisms that extend probability theory. He received an Engineering degree from USP, Brazil, and a PhD in Robotics from Carnegie Mellon University, USA, and has served, among other activities, as Program and General Chair of the Conf. on Uncertainty in Artificial Intelligence, Area Chair of the Int. Joint Conf. on Artificial Intelligence, Associate Editor of the Artificial Intelligence Journal, Associate Editor of the Journal of Artificial Intelligence Research, and Associate Editor of the International Journal of Approximate Reasoning; he was chair of the chapter on Artificial Intelligence of the Sociedade Brasileira de Computação.
Eduardo de Paula Costa
R&D Data Science & Informatics
Agriculture Division of DowDuPont™
Machine learning applied to breeding programs of plants and animals
Abstract: One important challenge in Bioinformatics is how to make sense of the large amount of molecular data that has been generated with the advance of genotyping technologies. In this context, machine learning methods have been successfully developed and applied to different tasks. In breeding programs for animals and plants, the task is to select individuals for reproduction so as to obtain desired characteristics in their offspring. To that end, predictive methods can be used to model functions that link genotypes to phenotypes. Such predictive models, which have been built using different machine learning methods, such as linear regression, random forests, and deep learning, allow breeders to use data analysis to support their decisions. In this talk, I will give an introduction to predictive methods for breeding applications and explain how machine learning plays a critical role in this real-world application. I will illustrate the application of these methods, both in industry and in academic research. I will also present the main challenges in the field and opportunities for further research.
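A minimal sketch of the genotype-to-phenotype prediction the abstract describes (entirely synthetic data, not from the talk): genotypes coded 0/1/2 for copies of the minor allele at each marker, a phenotype that is a noisy linear function of a few markers, and ridge regression, the idea behind rrBLUP-style genomic selection, used to estimate marker effects and rank candidates:

```python
import numpy as np

# Synthetic genomic prediction sketch. X holds 0/1/2 genotype codes;
# y is a phenotype driven by a handful of causal markers plus noise.

rng = np.random.default_rng(0)
n_individuals, n_markers = 200, 50

X = rng.integers(0, 3, size=(n_individuals, n_markers)).astype(float)
true_effects = np.zeros(n_markers)
true_effects[:5] = [0.8, -0.5, 0.6, 0.3, -0.4]      # only a few causal markers
y = X @ true_effects + rng.normal(0, 0.5, n_individuals)

# Ridge regression, closed form: beta = (X'X + lam*I)^-1 X'y (centered data).
Xc, yc = X - X.mean(axis=0), y - y.mean()
lam = 1.0
beta = np.linalg.solve(Xc.T @ Xc + lam * np.eye(n_markers), Xc.T @ yc)

# Predict breeding values for new candidates and pick the most promising one.
candidates = rng.integers(0, 3, size=(10, n_markers)).astype(float)
scores = (candidates - X.mean(axis=0)) @ beta + y.mean()
print("best candidate:", int(np.argmax(scores)))
```

Random forests or deep networks can replace the ridge step to capture non-additive (epistatic) effects; the overall workflow of estimating a genotype-to-phenotype map and ranking candidates by predicted breeding value stays the same.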
Bio: Eduardo Costa received his BS in Computer Science in 2005 (São Paulo State University – Brazil) and his MS in Computer Science and Computational Mathematics in 2008 (University of São Paulo – Brazil), where he started his research on Machine Learning with a special focus on Bioinformatics. He then continued his research in the Machine Learning group of the Department of Computer Science at KU Leuven – Belgium, where he received his PhD in 2013. Eduardo then returned to Brazil, where he worked as a post-doctoral associate at the University of São Paulo and a temporary lecturer at São Paulo State University. Among the topics he studied in his MS, PhD, and post-doctoral research are: hierarchical classification of proteins, phylogenetic tree reconstruction, protein subfamily classification, and genome-wide annotation of transposable elements. In 2014, Eduardo joined Dow AgroSciences as part of the Computational Biology group. During his first three years at the company he mainly worked in the company's Seeds Platform, providing data analysis capabilities to the breeding organization. His main projects were genome-wide selection and statistical analysis for field trials. Currently, he is part of the Computational Biology and Systems-biology (CoBS) group, where he is mainly involved in the development of toxicogenomics capabilities for the Crop Protection Platform. In 2017, Eduardo was one of the five winners of the NEON Great Start Award, a DAS legacy global award that recognized early-career employees across all functions in the company.