ICLR 2021 Workshop Accepted Papers

  • Measuring Uncertainty through Bayesian Learning of Deep Neural Network Structure

Zhijie Deng, Yucen Luo and Jun Zhu PDF

  • AutoHAS: Efficient Hyperparameter and Architecture Search

Xuanyi Dong, Mingxing Tan, Adams Yu, Daiyi Peng, Bogdan Gabrys and Quoc Le PDF

  • Tensorizing Neural Architecture Search in the Supernet

Hansi Yang, Quanming Yao and James T. Kwok PDF

  • Simulation-based Scoring for Model-based Asynchronous Hyperparameter and Neural Architecture Search

Matthias Seeger, Aaron Klein, Thibaut Lienart and Louis Tiao PDF

  • Making Differentiable Architecture Search less local

Erik Bodin, Federico Tomasi and Zhenwen Dai PDF

  • Width transfer: on the (in)variance of width optimization

Ting-Wu Chin, Diana Marculescu and Ari Morcos PDF

  • How Powerful are Performance Predictors in Neural Architecture Search?

Colin White, Arber Zela, Binxin Ru, Yang Liu and Frank Hutter PDF

  • On Adversarial Robustness: A Neural Architecture Search perspective

Chaitanya Devaguptapu, Gaurav Mittal, Devansh Agarwal and Vineeth N Balasubramanian PDF

  • A multi-objective perspective on jointly tuning hardware and hyperparameters

David Salinas, Valerio Perrone, Cedric Archambeau and Olivier Cruchant PDF

  • HardCoRe-NAS: Hard Constrained diffeRentiable Neural Architecture Search

Niv Nayman, Yonathan Aflalo, Asaf Noy and Lihi Zelnik PDF

  • Cost-aware Adversarial Best Arm Identification

Nikita Ivkin, Zohar Karnin, Valerio Perrone and Giovanni Zappella PDF

  • MONCAE: Multi-Objective Neuroevolution of Convolutional Autoencoders

Daniel Dimanov, Emili Balaguer-Ballester, Shahin Rostami and Colin Singleton PDF

  • Overfitting in Bayesian Optimization: an empirical study and early-stopping solution

Anastasia Makarova, Huibin Shen, Valerio Perrone, Aaron Klein, Jean Baptiste Faddoul, Andreas Krause, Matthias Seeger and Cedric Archambeau PDF

  • How does Weight Sharing Help in Neural Architecture Search?

Yuge Zhang, Quanlu Zhang and Yaming Yang PDF

  • AlphaNet: Improved Training of Supernet with Alpha-Divergence

Dilin Wang, Chengyue Gong, Meng Li, Qiang Liu and Vikas Chandra PDF

  • One-Shot Neural Architecture Search Via Compressive Sensing

Minsu Cho, Mohammadreza Soltani and Chinmay Hegde PDF

  • Rethinking NAS Operations for Diverse Tasks

Nicholas Roberts, Mikhail Khodak, Tri Dao, Liam Li, Christopher Re and Ameet Talwalkar PDF

  • Recovering Quantitative Models of Human Information Processing with Differentiable Architecture Search

Sebastian Musslick PDF

  • Flexible Multi-task Networks by Learning Parameter Allocation

Krzysztof Maziarz, Efi Kokiopoulou, Andrea Gesmundo, Luciano Sbaiz, Gabor Bartok and Jesse Berent PDF

ICLR 2021 Accepted Papers

1. How to Train Your Super-Net: An Analysis of Training Heuristics in Weight-Sharing NAS

2. DARTS-: Robustly Stepping out of Performance Collapse Without Indicators

3. Noisy Differentiable Architecture Search

4. FTSO: Effective NAS via First Topology Second Operator

Our method, named FTSO, reduces NAS's search time from days to 0.68 seconds while achieving 76.42% test accuracy on ImageNet and 97.77% test accuracy on CIFAR-10 by searching for the network topology and operators separately.

5. DOTS: Decoupling Operation and Topology in Differentiable Architecture Search

We improve DARTS by decoupling the topology representation from the operation weights and making the topology search explicit.

6. Geometry-Aware Gradient Algorithms for Neural Architecture Search

Studying the right single-level optimization geometry yields state-of-the-art methods for NAS.

7. GOLD-NAS: Gradual, One-Level, Differentiable

A new differentiable NAS framework incorporating one-level optimization and gradual pruning, working on large search spaces.

8. Weak NAS Predictor Is All You Need

We present a novel method to progressively estimate weak predictors in predictor-based neural architecture search. Through coarse-to-fine iteration, the ranking of the sampling space is gradually refined, which eventually helps find the optimal architectures.

9. Differentiable Graph Optimization for Neural Architecture Search

We learn a differentiable graph neural network as a surrogate model to rank candidate architectures.

10. DrNAS: Dirichlet Neural Architecture Search

We propose a simple yet effective progressive learning scheme that enables searching directly on large-scale tasks, eliminating the gap between the search and evaluation phases. Extensive experiments demonstrate the effectiveness of our method.

11. Neural Architecture Search of SPD Manifold Networks

We first introduce a geometrically rich and diverse SPD neural architecture search space for efficient SPD cell design. We then model the new NAS problem with a supernet strategy that casts architecture search as a one-shot training process of a single supernet.

12. Neighborhood-Aware Neural Architecture Search

We propose a neighborhood-aware formulation for neural architecture search to find flat minima in the search space that can generalize better to new settings.

13. A Surgery of the Neural Architecture Evaluators

This paper assesses current fast neural architecture evaluators with multiple direct criteria, under controlled settings.

14. Exploring single-path Architecture Search ranking correlations

An empirical study of how several method variations affect the quality of the architecture ranking prediction.

15. Neural Architecture Search without Training

16. Zero-Cost Proxies for Lightweight NAS

A single minibatch of data is used to score neural networks for NAS instead of performing full training; a minimal scoring sketch in this spirit appears after this list.

17. Improving Zero-Shot Neural Architecture Search with Parameters Scoring

A score can be designed that takes the Jacobian in parameter space into account and is highly predictive of final performance on a task.

18. Multi-scale Network Architecture Search for Object Detection

19. Triple-Search: Differentiable Joint-Search of Networks, Precision, and Accelerators

We propose the Triple-Search framework to jointly search network structure, precision and hardware architecture in a differentiable manner.

20. TransNAS-Bench-101: Improving Transferability and Generalizability of Cross-Task Neural Architecture Search

21. Searching for Convolutions and a More Ambitious NAS

A general-purpose search space for neural architecture search that enables discovering operations that beat convolutions on image data.

22. EnTranNAS: Towards Closing the Gap between the Architectures in Search and Evaluation

23. Efficient Graph Neural Architecture Search

We design a novel and expressive search space and propose an efficient one-shot NAS method based on stochastic relaxation and natural gradient.

24. Network Architecture Search for Domain Adaptation

25. HW-NAS-Bench: Hardware-Aware Neural Architecture Search Benchmark

26. Neural Architecture Search on ImageNet in Four GPU Hours: A Theoretically Inspired Perspective

Our TE-NAS framework analyzes the spectrum of the neural tangent kernel (NTK) and the number of linear regions in the input space, achieving high-quality architecture search while dramatically reducing the search cost to four hours on ImageNet; a rough NTK sketch appears after this list.

27. Stabilizing DARTS with Amended Gradient Estimation on Architectural Parameters

Fixing errors in gradient estimation of architectural parameters for stabilizing the DARTS algorithm.

28. NAS-Bench-301 and the Case for Surrogate Benchmarks for Neural Architecture Search

29. NASOA: Towards Faster Task-oriented Online Fine-tuning

We propose a Neural Architecture Search and Online Adaptation framework, NASOA, for faster task-oriented fine-tuning upon user request.

30. Model-based Asynchronous Hyperparameter and Neural Architecture Search

We present a new asynchronous multi-fidelity Bayesian optimization method to efficiently search for hyperparameters and architectures of neural networks.

31. A Gradient-based Kernel Approach for Efficient Network Architecture Search

We first formulate these two terms into a unified gradient-based kernel and then select the architectures with the largest kernels at initialization as the final networks, replacing the expensive "train-then-test" evaluation paradigm.

32. Fast MNAS: Uncertainty-aware Neural Architecture Search with Lifelong Learning

We propose FNAS, which accelerates the standard RL-based NAS process by 10x and guarantees better performance on various vision tasks.

33. Explicit Learning Topology for Differentiable Neural Architecture Search

34. NASLib: A Modular and Flexible Neural Architecture Search Library

35. Rethinking Architecture Selection in Differentiable NAS

36. Rapid Neural Architecture Search by Learning to Generate Graphs from Datasets

We propose an efficient NAS framework that is trained once on a database consisting of datasets and pretrained networks and can rapidly generate a neural architecture for a novel dataset.

37. Interpretable Neural Architecture Search via Bayesian Optimisation with Weisfeiler-Lehman Kernels

We propose a NAS method that is sample-efficient, highly performant and interpretable.

38. AutoHAS: Efficient Hyperparameter and Architecture Search

39. Width transfer: on the (in)variance of width optimization

We control the training configurations, i.e., network architectures and training data, for three existing width optimization algorithms and find that the optimized widths are largely transferable across settings.

40. NAHAS: Neural Architecture and Hardware Accelerator Search

We propose NAHAS, a latency-driven software/hardware co-optimizer that jointly optimizes the design of neural architectures and a mobile edge processor.

41. Neural Network Surgery: Combining Training with Topology Optimization

We demonstrate a hybrid approach that combines neural network training with genetic-algorithm-based architecture optimization.

42. Efficient Architecture Search for Continual Learning

Our proposed CLEAS works closely with neural architecture search (NAS), leveraging reinforcement learning techniques to search for the best neural architecture that fits each new task.

43. Auto Seg-Loss: Searching Metric Surrogates for Semantic Segmentation

Auto Seg-Loss is the first general framework for searching surrogate losses for mainstream semantic segmentation metrics.

44. Improving Random-Sampling Neural Architecture Search by Evolving the Proxy Search Space

45. SEDONA: Search for Decoupled Neural Networks toward Greedy Block-wise Learning

Our approach is the first attempt to automate decoupling neural networks for greedy block-wise learning and outperforms both end-to-end backprop and state-of-the-art greedy-learning methods on CIFAR-10, Tiny-ImageNet and ImageNet classification.

46. Intra-layer Neural Architecture Search

Neural architecture search at the level of individual weight parameters.
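
Several entries above (Zero-Cost Proxies for Lightweight NAS, Improving Zero-Shot Neural Architecture Search with Parameters Scoring) rank candidate networks from a single minibatch rather than by training them. The snippet below is only a minimal sketch of that general idea in PyTorch, assuming a SNIP-style saliency (sum of |gradient × weight|); the helper name `one_minibatch_score` and the toy candidates are illustrative, not the exact proxies from those papers.

```python
# Hypothetical sketch: score candidate networks with one minibatch, no training.
# The saliency (sum of |dL/dw * w|) is a SNIP-style assumption for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

def one_minibatch_score(model: nn.Module, x: torch.Tensor, y: torch.Tensor) -> float:
    """One forward/backward pass, then sum |gradient * weight| over all parameters."""
    model.zero_grad()
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return sum((p.grad * p).abs().sum().item()
               for p in model.parameters() if p.grad is not None)

if __name__ == "__main__":
    # Rank two toy candidates on a single CIFAR-10-sized minibatch.
    x = torch.randn(64, 3, 32, 32)
    y = torch.randint(0, 10, (64,))
    candidates = {
        "linear": nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)),
        "conv": nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                              nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10)),
    }
    scores = {name: one_minibatch_score(m, x, y) for name, m in candidates.items()}
    print(sorted(scores.items(), key=lambda kv: kv[1], reverse=True))
```

The appeal of such proxies is that a full search reduces to one forward/backward pass per candidate, so thousands of architectures can be scored in minutes.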

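Entry 26 above (TE-NAS) scores architectures at initialization using the spectrum of the neural tangent kernel and the number of linear regions. The sketch below covers only the first ingredient: a rough empirical-NTK condition number built from per-example parameter gradients. It assumes a scalar-output toy network and PyTorch; the helper `ntk_condition_number` is hypothetical and is meant to illustrate the quantity, not to reproduce TE-NAS.

```python
# Hypothetical sketch: condition number of an empirical NTK Gram matrix,
# Theta[i, j] = <grad_w f(x_i), grad_w f(x_j)>. A large condition number is
# commonly read as a sign of poor trainability.
import torch
import torch.nn as nn

def ntk_condition_number(model: nn.Module, x: torch.Tensor) -> float:
    grads = []
    for i in range(x.shape[0]):
        model.zero_grad()
        model(x[i:i + 1]).sum().backward()   # scalar output for example i
        grads.append(torch.cat([p.grad.flatten() for p in model.parameters()]))
    jac = torch.stack(grads)                  # (batch, n_params)
    ntk = jac @ jac.t()                       # (batch, batch) Gram matrix
    eigs = torch.linalg.eigvalsh(ntk).clamp_min(1e-12)  # ascending eigenvalues
    return (eigs[-1] / eigs[0]).item()

if __name__ == "__main__":
    net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64),
                        nn.ReLU(), nn.Linear(64, 1))
    x = torch.randn(16, 3, 32, 32)
    print(ntk_condition_number(net, x))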