Machine Learning Frontiers: Michael Jordan in Dialogue with Young Scholars Rong Ge, Chi Jin, Tengyu Ma, and Others

Explore the next decade of artificial intelligence with 6 Turing Award laureates and more than 100 experts


On June 21-24, 2020, the second BAAI Conference (Beijing Academy of Artificial Intelligence; official site: https://2020.baai.ac.cn) will bring together over a hundred AI leaders, including 6 Turing Award laureates, to review the past, look ahead to the future, and systematically explore "the next decade of artificial intelligence." The conference will host 19 thematic forums covering the mathematical foundations of AI, natural language processing, intelligent system architectures and chips, AI ethics, governance, and sustainable development, machine learning, intelligent information retrieval and mining, cognitive and neural foundations, machine perception, decision intelligence, AI in healthcare, AI entrepreneurship, AI in transportation, AI + big data + epidemic response, AI frameworks, graph neural networks, knowledge-based intelligence, reinforcement learning, frontiers of machine learning for young scientists, and women in AI, spanning fundamental AI research and innovative applications, analyzing the latest developments in light of current events, and exploring future directions.

We will walk you through each of the forums to introduce the speakers and survey the trends at the frontier. Today's installment covers the Frontiers of Machine Learning forum for young scientists, to be held on the morning of June 24.

Forum Chair

Michael I. Jordan

Distinguished Professor in the Department of Electrical Engineering and Computer Sciences and the Department of Statistics at the University of California, Berkeley, and a member of the BAAI Academic Advisory Committee. He is a member of the National Academy of Sciences, the National Academy of Engineering, and the American Academy of Arts and Sciences, the only machine learning researcher elected to all three. He is an elected Fellow of AAAS, AAAI, ACM, ASA, CSS, IEEE, IMS, ISBA, and SIAM. His honors include the IJCAI Research Excellence Award (2016), the David E. Rumelhart Prize (2015), and the ACM/AAAI Allen Newell Award (2009). In 2016, Semantic Scholar ranked him the most influential computer science researcher. His research interests span machine learning, statistics, and the cognitive and biological sciences, with recent emphasis on nonparametric Bayesian analysis, graphical models, spectral methods, distributed computing, natural language processing, signal processing, and statistical genetics.

Forum Host

Jun Zhu (朱军)

Professor in the Department of Computer Science and Technology at Tsinghua University, BAAI scholar, deputy director of the State Key Laboratory of Intelligent Technology and Systems, and adjunct professor at Carnegie Mellon University. In 2013 he was named one of "AI's 10 to Watch" by IEEE Intelligent Systems. His research focuses on machine learning, with over 80 papers in leading international journals and conferences. He serves on the editorial boards of IEEE TPAMI and Artificial Intelligence, was a local co-chair of ICML 2014, and has served as an area chair for ICML, NIPS, and other international conferences.

Talks and Speakers

1. Towards a theoretical understanding of learning to learn methods

Abstract: Optimization algorithms play a central role in deep learning. Recently, a line of work has tried to design better optimization algorithms using a meta-learning approach, in which one optimizes the performance of the optimizer itself. However, this approach faces challenges in both theory and practice. In this talk we investigate the learning-to-learn approach on simple objectives. We show that (a) for simple quadratic objectives, one can design a loss function whose gradient is well-behaved and on which gradient descent converges; however, auto-differentiation tools based on backpropagation run into numerical issues and fail to compute the gradient correctly, and we give a way to fix this; and (b) training the optimizer using the validation loss is provably better than training it using the training loss: the former achieves good generalization performance, while the latter can overfit even for simple quadratic functions. We verify these results with simple experiments on synthetic data as well as MNIST.
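To ground the setup, here is a minimal learning-to-learn sketch on a 1-D quadratic. The construction is a toy of my own, not the speakers': the inner learner runs T steps of gradient descent with step size eta, and the meta-learner tunes eta by gradient descent on the final loss, which for this objective has a closed form (so the sketch sidesteps, rather than reproduces, the backpropagation issue the abstract analyzes).

```python
# Learning to learn on f(x) = 0.5 * a * x**2 (toy setup). Unrolling T
# steps of gradient descent gives x_T = (1 - eta * a)**T * x0, so the
# meta-loss L(eta) = 0.5 * a * x_T**2 and its gradient are closed-form.
A_COEF, X0, T = 2.0, 5.0, 20

def meta_loss(eta):
    x_final = (1.0 - eta * A_COEF) ** T * X0
    return 0.5 * A_COEF * x_final ** 2

def meta_grad(eta):
    # d/d(eta) of 0.5 * a * ((1 - eta * a)**T * x0)**2
    r = 1.0 - eta * A_COEF
    return -(A_COEF ** 2) * T * X0 ** 2 * r ** (2 * T - 1)

eta = 0.05                        # initial inner step size
for _ in range(500):              # meta-optimization loop
    eta -= 1e-3 * meta_grad(eta)  # meta-gradient descent on eta
print(f"learned eta = {eta:.4f}, inner loss = {meta_loss(eta):.3e}")
```

Even in this tiny example, the factor r ** (2 * T - 1) shows why naive unrolling can underflow or blow up numerically as T grows, which is the flavor of issue the talk addresses.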

Speaker: Rong Ge (鬲融)

I am an assistant professor in the Computer Science Department at Duke University. I received my Ph.D. from the Computer Science Department at Princeton University, advised by Sanjeev Arora, and was a post-doc at Microsoft Research New England. I am broadly interested in theoretical computer science and machine learning. Modern machine learning algorithms such as deep learning try to automatically learn useful hidden representations of the data. How can we formalize hidden structures in the data, and how do we design efficient algorithms to find them? My research aims to answer these questions by studying problems that arise in analyzing text, images, and other forms of data, using techniques such as non-convex optimization and tensor decompositions.

2. Near-Optimal Reinforcement Learning with Self-Play

Abstract: Self-play, where the algorithm learns by playing against itself without requiring any direct supervision, has become the new weapon in modern reinforcement learning (RL) for achieving superhuman performance in practice. However, the majority of existing theory in reinforcement learning only applies to the setting where a single agent plays against a fixed environment. It remains largely open how to design efficient self-play algorithms in two-player sequential games, especially when it is necessary to manage the exploration/exploitation tradeoff. In this talk, we present the first line of provably efficient self-play algorithms in a basic setting of tabular episodic Markov games. Our algorithms further feature near-optimal sample complexity: the number of samples they require matches the information-theoretic lower bound up to a polynomial factor of the length of each episode.
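As a toy illustration of self-play converging to an equilibrium, the sketch below runs fictitious play on the matching pennies matrix game. This is a classical textbook procedure standing in for the tabular Markov-game setting, not the speaker's algorithm, and the payoff matrix is my own example:

```python
import numpy as np

# Fictitious play via self-play on matching pennies (zero-sum):
# the row player maximizes A[i, j], the column player minimizes it,
# and each repeatedly best-responds to the other's empirical strategy.
A = np.array([[ 1.0, -1.0],
              [-1.0,  1.0]])

row_counts = np.ones(2)   # empirical action counts for each player
col_counts = np.ones(2)

for t in range(10000):
    col_emp = col_counts / col_counts.sum()
    row_emp = row_counts / row_counts.sum()
    row_action = np.argmax(A @ col_emp)   # best response (maximize)
    col_action = np.argmin(row_emp @ A)   # best response (minimize)
    row_counts[row_action] += 1
    col_counts[col_action] += 1

print("row strategy:", row_counts / row_counts.sum())  # -> ~[0.5, 0.5]
print("col strategy:", col_counts / col_counts.sum())  # -> ~[0.5, 0.5]
```

Fictitious play is known to converge in two-player zero-sum games, so both empirical strategies approach the uniform Nash equilibrium; the talk's contribution is to obtain such guarantees with near-optimal sample complexity in the harder episodic Markov-game setting.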

Speaker: Chi Jin (金驰)

Chi Jin is an assistant professor of Electrical Engineering at Princeton University. He obtained his Ph.D. in Computer Science at UC Berkeley, advised by Michael I. Jordan, and received his B.S. in Physics from Peking University. His research interest lies in theoretical machine learning, with special emphasis on nonconvex optimization and reinforcement learning. His representative work includes proving that noisy gradient descent and accelerated gradient descent escape saddle points efficiently, establishing sample-complexity bounds for Q-learning and LSVI with UCB, and designing near-optimal algorithms for minimax optimization.

3. How Private Are Private Algorithms?

Abstract: Privacy-preserving data analysis has been put on a firm mathematical foundation since the introduction of differential privacy (DP) in 2006. This privacy definition, however, has some well-known weaknesses: notably, it does not tightly handle composition. In this talk, we propose a new relaxation of DP that we term "f-DP", which has a number of appealing properties and avoids some of the difficulties associated with prior relaxations. First, f-DP preserves the hypothesis-testing interpretation of differential privacy, which makes its guarantees easily interpretable. It allows for lossless reasoning about composition and post-processing and, notably, gives a direct way to analyze privacy amplification by subsampling. We define a canonical single-parameter family of definitions within our class, termed "Gaussian differential privacy", based on hypothesis testing of two shifted normal distributions. We prove that this family is focal to f-DP by introducing a central limit theorem, which shows that the privacy guarantees of any hypothesis-testing-based definition of privacy (including differential privacy) converge to Gaussian differential privacy in the limit under composition. This central limit theorem also gives a tractable analysis tool. We demonstrate the use of the tools we develop by giving an improved analysis of the privacy guarantees of noisy stochastic gradient descent. This is joint work with Jinshuo Dong and Aaron Roth.
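The Gaussian differential privacy family mentioned above admits compact calculations. The sketch below evaluates the trade-off curve of the Gaussian mechanism and its behavior under composition, using standard facts from the f-DP literature; the specific sensitivity and noise scale are illustrative:

```python
import numpy as np
from scipy.stats import norm

# Gaussian differential privacy (GDP) in the hypothesis-testing view:
# releasing a sensitivity-s query with N(0, sigma**2) noise is mu-GDP
# with mu = s / sigma, and its trade-off curve (minimal type II error
# at type I level alpha) is G_mu(alpha) = Phi(Phi^{-1}(1 - alpha) - mu).
def gdp_tradeoff(alpha, mu):
    return norm.cdf(norm.ppf(1 - alpha) - mu)

s, sigma = 1.0, 2.0
mu = s / sigma
for alpha in (0.01, 0.05, 0.10):
    print(f"type I error {alpha:.2f} -> min type II error {gdp_tradeoff(alpha, mu):.3f}")

# Composition is lossless in this calculus: k runs of a mu-GDP
# mechanism compose to a sqrt(k) * mu GDP guarantee.
k = 10
print(f"after {k} compositions: mu = {np.sqrt(k) * mu:.3f}")
```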

Speaker: Weijie Su (苏炜杰)

Assistant Professor of Statistics at the Wharton School, University of Pennsylvania. Prior to joining Penn, he received his Ph.D. in Statistics from Stanford University in 2016 and his B.S. in Mathematics from Peking University in 2011. Su's research interests include high-dimensional inference, multiple testing, statistical aspects of optimization, and private data analysis. He received an NSF CAREER Award in 2019.

4. Conformal Inference of Counterfactuals and Individual Treatment Effects

Abstract: Evaluating treatment effect heterogeneity broadly informs treatment decision making. At the moment, much emphasis is placed on estimating the conditional average treatment effect via flexible machine learning algorithms. While these methods enjoy some theoretical appeal in terms of consistency and convergence rates, they generally perform poorly in terms of uncertainty quantification. This is troubling, since assessing risk is crucial for reliable decision making in sensitive and uncertain environments. In this work, we propose a conformal-inference-based approach that can produce reliable interval estimates for counterfactuals and individual treatment effects under the potential outcome framework. For completely randomized or stratified randomized experiments with perfect compliance, the intervals have guaranteed average coverage in finite samples, regardless of the unknown data-generating mechanism. For randomized experiments with ignorable compliance and general observational studies obeying the strong ignorability assumption, the intervals satisfy a doubly robust property: the average coverage is approximately controlled if either the propensity score or the conditional quantiles of the potential outcomes can be estimated accurately. Numerical studies on both synthetic and real datasets demonstrate that existing methods suffer a significant coverage deficit even in simple models, whereas our methods achieve the desired coverage with reasonably short intervals.
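The engine behind these intervals is conformal prediction. Below is a plain split-conformal sketch for regression on synthetic data, a toy of my own; the talk's contribution is to extend such intervals, with propensity-score weighting, to counterfactuals and individual treatment effects, which this sketch does not attempt:

```python
import numpy as np

# Split conformal prediction: fit any regressor on one half of the
# data, then calibrate an interval on the held-out half so that
# marginal coverage is guaranteed regardless of the model.
rng = np.random.default_rng(0)
n = 2000
X = rng.uniform(-2, 2, n)
Y = X ** 2 + rng.normal(0, 0.5, n)      # synthetic outcomes

fit_idx, cal_idx = np.arange(n // 2), np.arange(n // 2, n)
coefs = np.polyfit(X[fit_idx], Y[fit_idx], deg=2)   # any predictor works

def predict(x):
    return np.polyval(coefs, x)

scores = np.abs(Y[cal_idx] - predict(X[cal_idx]))   # conformity scores
alpha = 0.1
n_cal = len(cal_idx)
q = np.quantile(scores, np.ceil((1 - alpha) * (n_cal + 1)) / n_cal)

# [prediction - q, prediction + q] covers Y with probability >= 1 - alpha
x_new = 1.3
print(f"90% interval at x = {x_new}: [{predict(x_new) - q:.2f}, {predict(x_new) + q:.2f}]")
```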

Speaker: Lihua Lei

I'm a postdoctoral researcher in the Statistics Department at Stanford University, advised by Professor Emmanuel Candes. Previously I received my Ph.D. at UC Berkeley, advised by Professors Peter Bickel and Michael Jordan, and was fortunate to be supervised by Professors Noureddine El Karoui, William Fithian, and Peng Ding on particular projects. Before that, I majored in mathematics and statistics in the School of Mathematical Sciences at Peking University, with a minor in economics at the China Center for Economic Research, where I was a research assistant with Professor Lan Wu and was supervised by Professor Song Xi Chen on my undergraduate thesis. My research interests include multiple hypothesis testing, causal inference, network analysis, high-dimensional statistical inference, optimization, resampling methods, time series analysis, and econometrics.

5. Shape Matters: Understanding the Implicit Bias of the Noise Covariance

Abstract: The noise in stochastic gradient descent (SGD) provides a crucial implicit regularization effect for training overparameterized models. Prior theoretical work largely focuses on spherical Gaussian noise, whereas empirical studies demonstrate that parameter-dependent noise, induced by mini-batches or label perturbation, is far more effective than Gaussian noise. In the talk, I will present recent work that theoretically characterizes this phenomenon on a quadratically parameterized model introduced by Vaskevicius et al. and Woodworth et al. We show that in an over-parameterized setting, SGD with label noise recovers the sparse ground truth from an arbitrary initialization, whereas SGD with Gaussian noise or gradient descent overfits to dense solutions with large norms. Our analysis reveals that parameter-dependent noise introduces a bias towards local minima with smaller noise variance, whereas spherical Gaussian noise does not.
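A quick way to see what "shape" means here: for the quadratically parameterized model f(x) = <theta**2, x> from the abstract (elementwise square), perturbing the label injects gradient noise that scales coordinate-wise with theta, unlike spherical Gaussian noise. The snippet below checks this with toy numbers of my own:

```python
import numpy as np

# Per-example loss (x . theta**2 - y)**2 has gradient
# 4 * (x . theta**2 - y) * x * theta, so replacing y with
# y + sigma * xi (xi ~ N(0,1)) shifts the gradient by
# -4 * sigma * xi * x * theta: noise that shrinks with |theta_i|.
rng = np.random.default_rng(0)
d, sigma = 5, 1.0
x = rng.normal(size=d)

def label_noise_term(theta):
    xi = rng.normal()
    return -4.0 * sigma * xi * x * theta

for scale in (1.0, 0.01):
    theta = scale * np.ones(d)
    noise = np.stack([label_noise_term(theta) for _ in range(50000)])
    print(f"|theta_i| = {scale}:", noise.std(axis=0).round(4))

# The injected noise vanishes on coordinates where theta_i is small,
# so SGD with label noise can settle at sparse solutions; spherical
# Gaussian noise is identical everywhere and carries no such bias.
```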

Speaker: Tengyu Ma (马腾宇)

Assistant Professor of Computer Science and Statistics at Stanford University. His research covers machine learning and algorithms, including non-convex optimization, deep learning and its theory, reinforcement learning, representation learning, and high-dimensional statistics. He has published more than 40 papers at top international conferences and journals, and received an Honorable Mention for the 2018 ACM Doctoral Dissertation Award, the NeurIPS 2016 Best Student Paper Award, and the COLT 2018 Best Paper Award. He was an undergraduate in the 2008 cohort of the "Yao Class" at Tsinghua University's Institute for Interdisciplinary Information Sciences, and then pursued his Ph.D. at Princeton University under Sanjeev Arora.

