
AI TIME welcomes every AI enthusiast!

September 16, 15:00-21:00

AI TIME has invited several PhD speakers to present ICML Session 4!

Bilibili livestream channel

Scan the QR code to follow the official AI TIME Bilibili account and watch the livestream

Link: https://live.bilibili.com/21813994

15:00-17:00

★ Speaker Introductions ★

Xinqi Zhu (朱鑫祺)

Third-year PhD student at the University of Sydney, advised by Prof. Dacheng Tao and Dr. Chang Xu, working on disentangled representation learning and computer vision.

Talk title:

Disentanglement Learning with a Commutative Lie Group VAE

Abstract:

We view disentanglement learning as discovering an underlying structure that equivariantly reflects the factorized variations shown in data. Traditionally, such a structure is fixed to be a vector space with data variations represented by translations along individual latent dimensions. We argue this simple structure is suboptimal since it requires the model to learn to discard the properties (e.g. different scales of changes, different levels of abstractness) of data variations, which is extra work beyond equivariance learning. Instead, we propose to encode the data variations with groups, a structure that can not only equivariantly represent variations but can also be adaptively optimized to preserve the properties of data variations. Since it is hard to train directly on group structures, we focus on Lie groups and adopt a parameterization using the Lie algebra. Based on this parameterization, some disentanglement learning constraints are naturally derived. A simple model named Commutative Lie Group VAE is introduced to realize group-based disentanglement learning. Experiments show that our model can effectively learn disentangled representations without supervision, and can achieve state-of-the-art performance without extra constraints.
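
To make the Lie-algebra parameterization concrete, here is a minimal sketch, not taken from the authors' code, of how latent coefficients can be mapped to a group element via learnable generators and the matrix exponential; the dimensions, generator initialization, and variable names are my own assumptions.

```python
# Sketch: represent a data variation as the group element exp(sum_i t_i * A_i),
# where the A_i are learnable Lie-algebra generators and the t_i are the
# per-factor "amounts" of variation. Illustrative only.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

latent_dim = 4      # number of disentangled factors (assumed)
group_dim = 6       # size of the matrix representation (assumed)

# Learnable Lie-algebra basis, one generator per latent factor; in the
# commutative setting these generators would be constrained to commute.
generators = rng.normal(scale=0.1, size=(latent_dim, group_dim, group_dim))

def group_element(t):
    """Map latent coefficients t (one per factor) to a group element."""
    algebra_element = np.tensordot(t, generators, axes=1)  # sum_i t_i * A_i
    return expm(algebra_element)                           # matrix exponential

# A data variation is encoded by t; inverting the variation corresponds to
# the inverse group element, i.e. exp(A) @ exp(-A) is the identity.
t = rng.normal(size=latent_dim)
g = group_element(t)
print(g.shape, np.allclose(g @ group_element(-t), np.eye(group_dim)))
```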

Xiaohui Chen (陈晓晖)

First-year PhD student at Tufts University, advised by Prof. Liping Liu and Prof. Michael Hughes, working on generative modeling and graph learning.

Talk title:

Modeling Node Generation Order in Autoregressive Graph Generative Models

Abstract:

A graph generative model defines a distribution over graphs. One type of generative model is constructed by autoregressive neural networks, which sequentially add nodes and edges to generate a graph. However, the likelihood of a graph under the autoregressive model is intractable, as there are numerous sequences leading to the given graph; this makes maximum likelihood estimation challenging. Instead, in this work we derive the exact joint probability over the graph and the node ordering of the sequential process. From the joint, we approximately marginalize out the node orderings and compute a lower bound on the log-likelihood using variational inference. We train graph generative models by maximizing this bound, without using the ad-hoc node orderings of previous methods. Our experiments show that the log-likelihood bound is significantly tighter than the bound of previous schemes. Moreover, the models fitted with the proposed algorithm can generate high-quality graphs that match the structures of target graphs not seen during training. We have made our code publicly available at https://github.com/tufts-ml/graph-generation-vi.
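
As an illustration of the bound being maximized, the following toy sketch (my own, not from the tufts-ml/graph-generation-vi repository) contrasts the exact marginal log-likelihood over node orderings with a Monte-Carlo ELBO estimate under a hypothetical variational distribution q(π | G); all probabilities are made-up numbers for a tiny 3-node example.

```python
# log p(G) = log sum_pi p(G, pi)  >=  E_q[ log p(G, pi) - log q(pi | G) ]
import math
import random

random.seed(0)

# Toy joint probabilities p(G, pi) for the 6 orderings of a 3-node graph
# (purely illustrative numbers, not from any trained model).
orderings = [(0, 1, 2), (0, 2, 1), (1, 0, 2), (1, 2, 0), (2, 0, 1), (2, 1, 0)]
log_p_joint = {pi: math.log(p) for pi, p in zip(orderings,
               [0.08, 0.02, 0.05, 0.01, 0.03, 0.01])}

# Exact marginal log-likelihood (tractable only for tiny graphs).
exact = math.log(sum(math.exp(v) for v in log_p_joint.values()))

# A hypothetical variational posterior over orderings, q(pi | G).
log_q = {pi: math.log(1.0 / len(orderings)) for pi in orderings}  # uniform q

# Monte-Carlo estimate of the ELBO: sample orderings from q.
samples = random.choices(orderings, k=1000)
elbo = sum(log_p_joint[pi] - log_q[pi] for pi in samples) / len(samples)

print(f"exact log p(G) = {exact:.4f}, ELBO estimate = {elbo:.4f}")
# The ELBO lower-bounds the exact value; training maximizes it w.r.t. both
# the generative model and q.
```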

Zhijie Zhang (张智杰)

Fifth-year PhD student at the Institute of Computing Technology, Chinese Academy of Sciences, advised by Prof. Jialin Zhang (张家琳). His research interests include combinatorial optimization, approximation algorithms, and machine learning; recent topics include submodular optimization and influence maximization.

Talk title:

Network Inference and Data-Driven Influence Maximization

Abstract:

Influence maximization is the task of selecting a small number of seed nodes in a social network to maximize the spread of influence from these seeds, and it has been widely investigated in the past two decades. In the canonical setting, the whole social network as well as its diffusion parameters is given as input. In this paper, we consider the more realistic sampling setting where the network is unknown and we only have a set of passively observed cascades that record the set of activated nodes at each diffusion step. We study the task of influence maximization from these cascade samples (IMS), and present constant approximation algorithms for this task under mild conditions on the seed set distribution. To achieve the optimization goal, we also provide a novel solution to the network inference problem, that is, learning diffusion parameters and the network structure from the cascade data. Compared with prior solutions, our network inference algorithm requires weaker assumptions and does not rely on maximum-likelihood estimation or convex programming. Our IMS algorithms enhance the learning-and-then-optimization approach by allowing a constant approximation ratio even when the diffusion parameters are hard to learn, and we do not need any assumption related to the network structure or diffusion parameters.
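
For context, the snippet below sketches the classical learn-then-optimize baseline that the abstract contrasts with: plug estimated diffusion probabilities into greedy seed selection under the independent cascade model. It is illustrative only and is not the paper's IMS algorithm; the edge probabilities and graph are hypothetical.

```python
# Greedy influence maximization under the independent cascade (IC) model,
# using Monte-Carlo simulation on (hypothetical) estimated edge probabilities.
import random

random.seed(0)

# Hypothetical estimated diffusion probabilities p(u -> v).
edges = {(0, 1): 0.4, (1, 2): 0.3, (0, 3): 0.2, (3, 4): 0.5, (2, 4): 0.1}
nodes = {u for e in edges for u in e}

def simulate_ic(seeds, trials=2000):
    """Monte-Carlo estimate of the expected spread under the IC model."""
    total = 0
    for _ in range(trials):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            u = frontier.pop()
            for (a, b), p in edges.items():
                if a == u and b not in active and random.random() < p:
                    active.add(b)
                    frontier.append(b)
        total += len(active)
    return total / trials

def greedy_seeds(k):
    """Greedily pick k seeds maximizing the estimated expected spread."""
    seeds = set()
    for _ in range(k):
        best = max(nodes - seeds, key=lambda v: simulate_ic(seeds | {v}))
        seeds.add(best)
    return seeds

print(greedy_seeds(2))
```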

19:30-21:00

Zhiyong Yang (杨智勇)

He received his PhD from the Institute of Information Engineering, Chinese Academy of Sciences, and is now a postdoctoral researcher at the University of Chinese Academy of Sciences. His main research directions are AUC optimization, multi-task learning, and machine learning theory. He has published 7 first-author papers in CCF-A journals and conferences such as ICML, NeurIPS, and T-PAMI. He serves as a PC member for ICML, NeurIPS, ICLR, AAAI, and IJCAI, as a senior PC member for IJCAI 2021, and as a reviewer for international journals such as T-PAMI and T-IP. He was selected for the Postdoctoral Innovative Talent Support Program (博新计划) and Baidu's list of the top 100 Chinese rising stars in AI, and his honors include a nomination for the global top 20 of the Baidu Scholarship, the CAS President's Special Award, and recognition as a NeurIPS top 10% reviewer.

Talk title:

An End-to-End Optimization Method for the TPAUC Metric

Abstract:

The Area Under the ROC Curve (AUC) is a crucial metric for machine learning, which evaluates the average performance over all possible True Positive Rates (TPRs) and False Positive Rates (FPRs). Since a skillful classifier should simultaneously achieve a high TPR and a low FPR, we turn to study a more general variant called Two-way Partial AUC (TPAUC), where only the region with TPR ≥ α and FPR ≤ β is included in the area. Moreover, recent work shows that the TPAUC is essentially inconsistent with the existing partial AUC metrics, where only the FPR range is restricted, opening a new problem: seeking solutions that achieve a high TPAUC. Motivated by this, we present in this paper the first attempt to optimize this new metric. The critical challenge along this course lies in the difficulty of performing gradient-based optimization with end-to-end stochastic training, even with a proper choice of surrogate loss. To address this issue, we propose a generic framework to construct surrogate optimization problems, which supports efficient end-to-end training with deep learning. Moreover, our theoretical analyses show that: 1) the objective function of the surrogate problems achieves an upper bound of the original problem under mild conditions, and 2) optimizing the surrogate problems leads to good generalization performance in terms of TPAUC with high probability. Finally, empirical studies over several benchmark datasets speak to the efficacy of our framework.
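
As a rough illustration of why TPAUC concentrates training on the hardest examples, here is a sketch, under my own assumptions rather than the paper's framework, of a pairwise logistic surrogate computed over the lowest-scoring positives and highest-scoring negatives; the selection rule, loss choice, and toy scores are hypothetical.

```python
# A smooth stand-in for the 0-1 ranking loss over the "hard" score pairs
# implied by TPR >= alpha (hardest positives) and FPR <= beta (hardest
# negatives). Illustrative only.
import torch

torch.manual_seed(0)

def tpauc_surrogate_loss(scores_pos, scores_neg, alpha=0.7, beta=0.3):
    """Logistic surrogate over the hardest positive/negative score pairs."""
    n_pos, n_neg = scores_pos.numel(), scores_neg.numel()
    k = max(1, int((1 - alpha) * n_pos))   # bottom-(1 - alpha) positives
    m = max(1, int(beta * n_neg))          # top-beta negatives
    hard_pos = torch.topk(scores_pos, k, largest=False).values
    hard_neg = torch.topk(scores_neg, m, largest=True).values
    # Pairwise margins s_pos - s_neg; softplus(-margin) penalizes reversals.
    margins = hard_pos.unsqueeze(1) - hard_neg.unsqueeze(0)
    return torch.nn.functional.softplus(-margins).mean()

# Toy scores standing in for the outputs of a scoring model.
scores_pos = (torch.randn(100) + 1.0).requires_grad_()
scores_neg = torch.randn(200).requires_grad_()

loss = tpauc_surrogate_loss(scores_pos, scores_neg)
loss.backward()                 # gradients reach the selected scores
print(float(loss), bool(scores_pos.grad.abs().sum() > 0))
```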

Guangyu Shen (沈广宇)

Second-year PhD student in the Department of Computer Science at Purdue University, working in Prof. Xiangyu Zhang's group on neural network security, including adversarial attacks, backdoor attacks, and defenses.

Talk title:

Backdoor Scanning for Neural Networks via Multi-Armed Bandit Optimization

Abstract:

Backdoor attacks pose a severe threat to deep learning systems. A backdoor attack injects hidden malicious behaviors into a model such that any input stamped with a special pattern can trigger such behaviors. Detecting backdoors is hence of pressing need. Many existing defense techniques use optimization to generate the smallest input pattern that forces the model to misclassify a set of benign inputs injected with the pattern to a target label. However, the complexity is quadratic in the number of class labels, so they can hardly handle models with many classes. Inspired by Multi-Arm Bandit in Reinforcement Learning, we propose a K-Arm optimization method for backdoor detection. By iteratively and stochastically selecting the most promising labels for optimization with the guidance of an objective function, we substantially reduce the complexity, allowing us to handle models with many classes. Moreover, by iteratively refining the selection of labels to optimize, it substantially mitigates the uncertainty in choosing the right labels, improving detection accuracy. At the time of submission, the evaluation of our method on over 4000 models in the IARPA TrojAI competition from round 1 to the latest round 4 achieves top performance on the leaderboard. Our technique also surpasses five state-of-the-art techniques in terms of accuracy and the scanning time needed. The code of our work is available at https://github.com/PurduePAML/K-ARM_Backdoor_Optimization
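
To convey the bandit-style scheduling idea in isolation, the following sketch treats each candidate target label as an arm of a UCB bandit whose reward is a simulated trigger-size objective; it is my own illustration, not the released K-Arm code, and the optimize_step stand-in and all constants are hypothetical.

```python
# Bandit-style allocation of trigger-optimization rounds across candidate
# target labels: spend the budget on the labels that look most promising.
import math
import random

random.seed(0)

num_labels = 10          # candidate target labels (arms)
budget = 60              # total optimization rounds available

# Hypothetical stand-in for "run a few steps of trigger optimization on
# label y and return the current trigger-size objective (lower is better)".
true_trigger_size = [50 + 10 * i for i in range(num_labels)]
true_trigger_size[3] = 5  # label 3 behaves like a real backdoor target

def optimize_step(label):
    return true_trigger_size[label] + random.gauss(0, 2)

counts = [0] * num_labels
means = [0.0] * num_labels

for t in range(1, budget + 1):
    if t <= num_labels:                      # try every arm once
        arm = t - 1
    else:                                    # UCB on the negated objective
        arm = max(range(num_labels),
                  key=lambda a: -means[a] + math.sqrt(2 * math.log(t) / counts[a]))
    obs = optimize_step(arm)
    counts[arm] += 1
    means[arm] += (obs - means[arm]) / counts[arm]

suspect = min(range(num_labels), key=lambda a: means[a])
print(f"most suspicious target label: {suspect}, est. trigger size {means[suspect]:.1f}")
```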

Xue Yan (闫雪)

Xue Yan is a first-year PhD student at the Institute of Automation, Chinese Academy of Sciences. Her research interests include machine learning and multi-agent evaluation.

Talk title:

Efficient Multi-Agent Policy Evaluation via Low-Rank Matrix Completion

Abstract:

Multi-agent evaluation aims at assessing an agent's strategy on the basis of its interactions with others. Typically, existing methods such as α-rank and its approximations still require exhaustively comparing all pairs of joint strategies for an accurate ranking, which in practice is computationally expensive. In this paper, we aim to reduce the number of pairwise comparisons needed to recover a satisfactory ranking of the players. We exploit the fact that agents with similar skills tend to achieve similar payoffs against others, as evidenced by our experiments. Two situations are considered: one where the true payoffs can be obtained (noise-free evaluation), and one where only noisy payoff observations are available (noisy evaluation). Based on these formulations, we leverage low-rank matrix completion and design two novel algorithms for noise-free and noisy evaluations, respectively. For both settings, we derive the number of comparisons, depending on the number of agents and the rank of the payoff matrix, required to achieve sufficiently good evaluation performance. Empirical results on evaluating the players in three synthetic games and twelve real-world games from OpenSpiel demonstrate that payoff evaluation of only a small number of strategy pairs can lead to performance comparable to algorithms that know the complete payoff matrix.
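
To show how low-rank structure lets a few observed payoffs determine the rest, here is a minimal sketch, my own illustration rather than the paper's algorithm, that recovers a synthetic low-rank payoff matrix from a subset of entries by iterative rank-r SVD projection; the matrix size, rank, and observation rate are arbitrary.

```python
# Recover a low-rank payoff matrix from partially observed pairwise
# evaluations by alternating rank-r projection with data consistency.
import numpy as np

rng = np.random.default_rng(0)

n_agents, rank = 30, 3
U = rng.normal(size=(n_agents, rank))
V = rng.normal(size=(n_agents, rank))
payoff = U @ V.T                                # ground-truth low-rank payoffs

mask = rng.random((n_agents, n_agents)) < 0.4   # observe ~40% of the pairs
X = np.where(mask, payoff, 0.0)                 # start from the observed entries

for _ in range(200):
    # Project the current estimate onto the set of rank-r matrices.
    u, s, vt = np.linalg.svd(X, full_matrices=False)
    X = (u[:, :rank] * s[:rank]) @ vt[:rank]
    # Keep the observed entries fixed; only the missing ones get filled in.
    X[mask] = payoff[mask]

err = np.linalg.norm((X - payoff)[~mask]) / np.linalg.norm(payoff[~mask])
print(f"relative error on unobserved pairs: {err:.3f}")
```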

After the livestream, the speakers will join a WeChat group to answer questions. Please add the "AI TIME assistant" (WeChat ID: AITIME_HY) and reply "icml" to be added to the "AI TIME ICML discussion group"!

AI TIME WeChat Assistant

Organizer: AI TIME

Media partners: 学术头条, AI 数据派

Partners: 智谱·AI, 中国工程院知领直播, 学堂在线, 学术头条, biendata, Ever链动

AI TIME welcomes submissions from researchers in the AI field. We look forward to analyses of the historical development and frontier technologies of the discipline, and we will invite experts to discuss hot topics together. We are also recruiting high-quality contributing writers on an ongoing basis; a top platform needs a top you. Please send your resume and related information to yun.he@aminer.cn!

WeChat contact: AITIME_HY

AI TIME is a community founded by a group of young scholars from the Department of Computer Science at Tsinghua University who care about the development of artificial intelligence. It aims to promote a spirit of scientific debate, inviting people from all walks of life to explore the fundamental questions of AI theory, algorithms, scenarios, and applications, encouraging the exchange of ideas and building a hub for knowledge sharing.
