Research Notes (11): ICLR 2020 Papers by Topic — Adversarial Examples

Table of Contents

  • 1. Adversarial Examples
    • 1.1 Enhancing Transformation-Based Defenses Against Adversarial Attacks with a Distribution Classifier
    • 1.2 Implicit Bias of Gradient Descent based Adversarial Training on Separable Data
    • 1.3 Mixup Inference: Better Exploiting Mixup to Defend Adversarial Attacks
    • 1.4 Rethinking Softmax Cross-Entropy Loss for Adversarial Robustness
    • 1.5 Robust Local Features for Improving the Generalization of Adversarial Training
    • 1.6 Fooling Detection Alone is Not Enough: Adversarial Attack against Multiple Object Tracking
    • 1.7 Improving Adversarial Robustness Requires Revisiting Misclassified Examples
    • 1.8 Adversarial Policies: Attacking Deep Reinforcement Learning
    • 1.9 Detecting and Diagnosing Adversarial Images with Class-Conditional Capsule Reconstructions
    • 1.10 GAT: Generative Adversarial Training for Adversarial Example Detection and Robust Classification
    • 1.11 Black-Box Adversarial Attack with Transferable Model-based Embedding
    • 1.12 Certified Robustness for Top-k Predictions against Adversarial Perturbations via Randomized Smoothing
    • 1.13 Adversarially Robust Representations with Smooth Encoders
    • 1.14 Unpaired Point Cloud Completion on Real Scans using Adversarial Training
    • 1.15 Adversarially robust transfer learning
    • 1.16 Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness
    • 1.17 Sign-OPT: A Query-Efficient Hard-label Adversarial Attack
    • 1.18 Fast is better than free: Revisiting adversarial training
    • 1.19 Intriguing Properties of Adversarial Training at Scale
    • 1.20 Biologically inspired sleep algorithm for increased generalization and adversarial robustness in deep neural networks
    • 1.21 Jacobian Adversarially Regularized Networks for Robustness
    • 1.22 Certified Defenses for Adversarial Patches
    • 1.23 Nesterov Accelerated Gradient and Scale Invariance for Adversarial Attacks
    • 1.24 Provable robustness against all adversarial lp-perturbations for p ≥ 1
    • 1.25 EMPIR: Ensembles of Mixed Precision Deep Networks for Increased Robustness Against Adversarial Attacks
    • 1.26 MMA Training: Direct Input Space Margin Maximization through Adversarial Training
    • 1.27 BayesOpt Adversarial Attack
    • 1.28 Unrestricted Adversarial Examples via Semantic Manipulation
    • 1.29 Breaking Certified Defenses: Semantic Adversarial Examples with Spoofed Robustness Certificates
    • 1.30 (Spotlight) Skip Connections Matter: On the Transferability of Adversarial Examples Generated with ResNets
    • 1.31 (Spotlight) Enhancing Adversarial Defense by k-Winners-Take-All
    • 1.32 (Spotlight) FreeLB: Enhanced Adversarial Training for Natural Language Understanding
    • 1.33 (Spotlight) On Robustness of Neural Ordinary Differential Equations
    • 1.34 (Oral) Adversarial Training and Provable Defenses: Bridging the Gap
    • 1.35 MACER: Attack-Free and Scalable Robust Training via Maximizing Certified Radius
    • 1.36 Improved Sample Complexities for Deep Networks and Robust Classification via an All-Layer Margin
    • 1.37 Towards Stable and Efficient Training of Verifiably Robust Neural Networks
    • 1.38 Triple Wins: Boosting Accuracy, Robustness and Efficiency Together by Enabling Input-Adaptive Inference
    • 1.39 A Framework for Robustness Certification of Smoothed Classifiers Using f-Divergences
    • 1.40 Robustness Verification for Transformers

1. Adversarial Examples

1.1 Enhancing Transformation-Based Defenses Against Adversarial Attacks with a Distribution Classifier

PAPER LINK

1.2 Implicit Bias of Gradient Descent based Adversarial Training on Separable Data

PAPER LINK

1.3 Mixup Inference: Better Exploiting Mixup to Defend Adversarial Attacks

PAPER LINK
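The inference-time defense in 1.3 builds on mixup training, which blends pairs of examples and their labels with a Beta-distributed coefficient. A minimal NumPy sketch of vanilla mixup (not the paper's inference procedure; the function name is illustrative):

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=1.0, rng=None):
    """Blend two examples and their (one-hot) labels with a Beta(alpha, alpha) weight."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)          # mixing coefficient in [0, 1]
    x_mix = lam * x1 + (1.0 - lam) * x2   # convex combination of inputs
    y_mix = lam * y1 + (1.0 - lam) * y2   # matching combination of labels
    return x_mix, y_mix
```

Mixup Inference applies the same interpolation at test time, mixing the input with other samples so that an adversarial perturbation is shrunk before classification.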

1.4 Rethinking Softmax Cross-Entropy Loss for Adversarial Robustness

PAPER LINK

1.5 Robust Local Features for Improving the Generalization of Adversarial Training

PAPER LINK

1.6 Fooling Detection Alone is Not Enough: Adversarial Attack against Multiple Object Tracking

PAPER LINK

1.7 Improving Adversarial Robustness Requires Revisiting Misclassified Examples

PAPER LINK

1.8 Adversarial Policies: Attacking Deep Reinforcement Learning

PAPER LINK

1.9 Detecting and Diagnosing Adversarial Images with Class-Conditional Capsule Reconstructions

PAPER LINK

1.10 GAT: Generative Adversarial Training for Adversarial Example Detection and Robust Classification

PAPER LINK

1.11 Black-Box Adversarial Attack with Transferable Model-based Embedding

PAPER LINK

1.12 Certified Robustness for Top-k Predictions against Adversarial Perturbations via Randomized Smoothing

PAPER LINK
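Entries 1.12, 1.35, and 1.39 all build on randomized smoothing: a base classifier votes over Gaussian-perturbed copies of the input, and the winning class is certified within an l2 radius R = σ·Φ⁻¹(p_A) when its vote probability is provably at least p_A > 1/2 (the Cohen et al.-style bound; the papers above extend it in different directions). A toy sketch under those assumptions, with `base_clf` standing in for any classifier:

```python
import numpy as np
from statistics import NormalDist

def smoothed_predict(base_clf, x, sigma, n, rng):
    """Majority vote of base_clf over n Gaussian-noised copies of x."""
    votes = {}
    for _ in range(n):
        label = base_clf(x + rng.normal(0.0, sigma, size=x.shape))
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

def certified_radius(sigma, p_a_lower):
    """l2 radius certified when the top class has probability at least p_a_lower > 1/2."""
    return sigma * NormalDist().inv_cdf(p_a_lower)
```

In practice the lower bound p_a_lower comes from a confidence interval over the Monte Carlo votes, not from the raw vote fraction.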

1.13 Adversarially Robust Representations with Smooth Encoders

PAPER LINK

1.14 Unpaired Point Cloud Completion on Real Scans using Adversarial Training

PAPER LINK

1.15 Adversarially robust transfer learning

PAPER LINK

1.16 Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness

PAPER LINK

1.17 Sign-OPT: A Query-Efficient Hard-label Adversarial Attack

PAPER LINK

1.18 Fast is better than free: Revisiting adversarial training

PAPER LINK
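"Fast is better than free" (1.18) revisits single-step FGSM adversarial training and reports that a random initialization inside the ε-ball, with a step size on the order of ε, largely closes the gap to multi-step PGD training. A minimal sketch of that inner step for a toy logistic-regression model with an analytic input gradient (the paper works with deep networks; the function name is illustrative):

```python
import numpy as np

def fgsm_random_init(x, y, w, b, eps, alpha, rng):
    """One FGSM step from a random start in the linf eps-ball, for a linear classifier."""
    delta = rng.uniform(-eps, eps, size=x.shape)      # random init: the key ingredient
    p = 1.0 / (1.0 + np.exp(-(np.dot(x + delta, w) + b)))
    grad_x = (p - y) * w                              # d(logistic loss)/d(input)
    delta = np.clip(delta + alpha * np.sign(grad_x), -eps, eps)
    return np.clip(x + delta, 0.0, 1.0)               # keep the input in valid range
```

Training then minimizes the loss on these single-step adversarial examples instead of running a full PGD inner loop, which is where the speedup comes from.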

1.19 Intriguing Properties of Adversarial Training at Scale

PAPER LINK

1.20 Biologically inspired sleep algorithm for increased generalization and adversarial robustness in deep neural networks

PAPER LINK

1.21 Jacobian Adversarially Regularized Networks for Robustness

PAPER LINK

1.22 Certified Defenses for Adversarial Patches

PAPER LINK

1.23 Nesterov Accelerated Gradient and Scale Invariance for Adversarial Attacks

PAPER LINK
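The attack in 1.23 adds a Nesterov look-ahead to momentum-based iterative FGSM to improve black-box transferability. A simplified sketch of the update loop, taking the loss gradient as a callable `grad_fn` (the paper's scale-invariance trick of averaging gradients over scaled copies is omitted):

```python
import numpy as np

def ni_fgsm(x, grad_fn, eps, steps, mu=1.0):
    """Nesterov + momentum iterative FGSM under an linf budget eps."""
    alpha = eps / steps
    g = np.zeros_like(x, dtype=float)      # accumulated momentum
    x_adv = x.astype(float).copy()
    for _ in range(steps):
        x_nes = x_adv + alpha * mu * g     # Nesterov look-ahead point
        grad = grad_fn(x_nes)
        g = mu * g + grad / (np.sum(np.abs(grad)) + 1e-12)  # l1-normalized accumulation
        x_adv = np.clip(x_adv + alpha * np.sign(g), x - eps, x + eps)
    return x_adv
```

Evaluating the gradient at the look-ahead point rather than at the current iterate is the only change relative to plain momentum iterative FGSM.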

1.24 Provable robustness against all adversarial lp-perturbations for p ≥ 1

PAPER LINK

1.25 EMPIR: Ensembles of Mixed Precision Deep Networks for Increased Robustness Against Adversarial Attacks

PAPER LINK

1.26 MMA Training: Direct Input Space Margin Maximization through Adversarial Training

PAPER LINK

1.27 BayesOpt Adversarial Attack

PAPER LINK

1.28 Unrestricted Adversarial Examples via Semantic Manipulation

PAPER LINK

1.29 Breaking Certified Defenses: Semantic Adversarial Examples with Spoofed Robustness Certificates

PAPER LINK

1.30 (Spotlight) Skip Connections Matter: On the Transferability of Adversarial Examples Generated with ResNets

PAPER LINK

1.31 (Spotlight) Enhancing Adversarial Defense by k-Winners-Take-All

PAPER LINK

1.32 (Spotlight) FreeLB: Enhanced Adversarial Training for Natural Language Understanding

PAPER LINK

1.33 (Spotlight) On Robustness of Neural Ordinary Differential Equations

PAPER LINK

1.34 (Oral) Adversarial Training and Provable Defenses: Bridging the Gap

PAPER LINK

1.35 MACER: Attack-Free and Scalable Robust Training via Maximizing Certified Radius

PAPER LINK

1.36 Improved Sample Complexities for Deep Networks and Robust Classification via an All-Layer Margin

PAPER LINK

1.37 Towards Stable and Efficient Training of Verifiably Robust Neural Networks

PAPER LINK

1.38 Triple Wins: Boosting Accuracy, Robustness and Efficiency Together by Enabling Input-Adaptive Inference

PAPER LINK

1.39 A Framework for Robustness Certification of Smoothed Classifiers Using f-Divergences

PAPER LINK

1.40 Robustness Verification for Transformers

PAPER LINK
