Survey scope

2018 NIPS, 2019 NIPS, 2018 ECCV, 2019 ICCV, 2019 CVPR, 2020 CVPR, 2019 ICML, 2019 ICLR, 2020 ICLR

2018NIPS

Contamination Attacks and Mitigation in Multi-Party Machine Learning (Defense)

Authors: Jamie Hayes (University College London), Olga Ohrimenko (Microsoft Research)

Abstract: Machine learning is data hungry; the more data a model has access to in training, the more likely it is to perform well at inference time. Distinct parties may want to combine their local data to gain the benefits of a model trained on a large corpus of data. We consider such a case: parties get access to the model trained on their joint data but do not see each other's individual datasets. We show that one needs to be careful when using this multi-party model since a potentially malicious party can taint the model by providing contaminated data. We then show how adversarial training can defend against such attacks by preventing the model from learning trends specific to individual parties' data, thereby also guaranteeing party-level membership privacy.

Multiple parties contribute local data that remains mutually invisible and jointly train a model on the combined data; adversarial training is used to prevent any single party from poisoning the model with contaminated data.

A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks (Defense)

Authors: Kimin Lee, Kibok Lee, Honglak Lee, Jinwoo Shin (Korea Advanced Institute of Science and Technology (KAIST), University of Michigan, Google Brain)

Abstract: Detecting test samples drawn sufficiently far away from the training distribution statistically or adversarially is a fundamental requirement for deploying a good classifier in many real-world machine learning applications. However, deep neural networks with the softmax classifier are known to produce highly overconfident posterior distributions even for such abnormal samples. In this paper, we propose a simple yet effective method for detecting any abnormal samples, which is applicable to any pre-trained softmax neural classifier. We obtain the class conditional Gaussian distributions with respect to (low- and upper-level) features of the deep models under Gaussian discriminant analysis, which result in a confidence score based on the Mahalanobis distance. While most prior methods have been evaluated for detecting either out-of-distribution or adversarial samples, but not both, the proposed method achieves the state-of-the-art performances for both cases in our experiments. Moreover, we found that our proposed method is more robust in harsh cases, e.g., when the training dataset has noisy labels or small number of samples. Finally, we show that the proposed method enjoys broader usage by applying it to class-incremental learning: whenever out-of-distribution samples are detected, our classification rule can incorporate new classes well without further training deep models.

Detects abnormal inputs at test time with a Mahalanobis-distance-based confidence score; the same detector flags both out-of-distribution samples and adversarial examples.
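To make the detection recipe concrete, below is a minimal NumPy sketch (not the authors' released code) of the Mahalanobis confidence score computed from penultimate-layer features; `features`, `labels`, `penultimate_features`, and the threshold `tau` are assumed placeholders, and the paper's input pre-processing and layer-wise ensemble with logistic regression are omitted.

```python
import numpy as np

def fit_class_gaussians(features, labels, num_classes):
    """Estimate per-class means and a single shared (tied) covariance
    from penultimate-layer features of the training set."""
    dim = features.shape[1]
    means = np.zeros((num_classes, dim))
    cov = np.zeros((dim, dim))
    for c in range(num_classes):
        fc = features[labels == c]
        means[c] = fc.mean(axis=0)
        centered = fc - means[c]
        cov += centered.T @ centered
    precision = np.linalg.pinv(cov / len(features))
    return means, precision

def mahalanobis_confidence(f, means, precision):
    """Confidence score: negative Mahalanobis distance to the closest
    class-conditional Gaussian.  Low scores flag OOD/adversarial inputs."""
    diffs = means - f                                   # (num_classes, dim)
    d2 = np.einsum('cd,de,ce->c', diffs, precision, diffs)
    return -d2.min()

# Hypothetical usage: threshold the score, with tau chosen on validation data.
# score = mahalanobis_confidence(penultimate_features(x), means, precision)
# is_abnormal = score < tau
```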

Attacks Meet Interpretability: Attribute-steered Detection of Adversarial Samples (Defense, face recognition)

Authors: Guanhong Tao, Shiqing Ma, Yingqi Liu, Xiangyu Zhang (Department of Computer Science, Purdue University)

Abstract: Adversarial sample attacks perturb benign inputs to induce DNN misbehaviors. Recent research has demonstrated the widespread presence and the devastating consequences of such attacks. Existing defense techniques either assume prior knowledge of specific attacks or may not work well on complex models due to their underlying assumptions. We argue that adversarial sample attacks are deeply entangled with interpretability of DNN models: while classification results on benign inputs can be reasoned based on the human perceptible features/attributes, results on adversarial samples can hardly be explained. Therefore, we propose a novel adversarial sample detection technique for face recognition models, based on interpretability. It features a novel bi-directional correspondence inference between attributes and internal neurons to identify neurons critical for individual attributes. The activation values of critical neurons are enhanced to amplify the reasoning part of the computation and the values of other neurons are weakened to suppress the uninterpretable part. The classification results after such transformation are compared with those of the original model to detect adversaries. Results show that our technique can achieve 94% detection accuracy for 7 different kinds of attacks with 9.91% false positives on benign inputs. In contrast, a state-of-the-art feature squeezing technique can only achieve 55% accuracy with 23.3% false positives.

Detects adversarial examples for face recognition models based on model interpretability (a bi-directional correspondence between attributes and internal neurons).

Robust Detection of Adversarial Attacks by Modeling the Intrinsic Properties of Deep Neural Networks (Defense)

Authors: Zhihao Zheng, Pengyu Hong (Department of Computer Science, Brandeis University)

Abstract: It has been shown that deep neural network (DNN) based classifiers are vulnerable to human-imperceptive adversarial perturbations which can cause DNN classifiers to output wrong predictions with high confidence. We propose an unsupervised learning approach to detect adversarial inputs without any knowledge of attackers. Our approach tries to capture the intrinsic properties of a DNN classifier and uses them to detect adversarial inputs. The intrinsic properties used in this study are the output distributions of the hidden neurons in a DNN classifier presented with natural images. Our approach can be easily applied to any DNN classifiers or combined with other defense strategies to improve robustness. Experimental results show that our approach demonstrates state-of-the-art robustness in defending black-box and gray-box attacks.

Uses the output distributions of a DNN's hidden neurons on natural images to detect adversarial inputs; achieves state-of-the-art robustness against black-box and gray-box attacks.

2019NIPS

Defense Against Adversarial Attacks Using Feature Scattering-based Adversarial Training (Defense)

Authors: Haichao Zhang (Horizon Robotics), Jianyu Wang (Baidu Research)

Abstract: We introduce a feature scattering-based adversarial training approach for improving model robustness against adversarial attacks. Conventional adversarial training approaches leverage a supervised scheme (either targeted or non-targeted) in generating attacks for training, which typically suffer from issues such as label leaking as noted in recent works. Differently, the proposed approach generates adversarial images for training through feature scattering in the latent space, which is unsupervised in nature and avoids label leaking. More importantly, this new approach generates perturbed images in a collaborative fashion, taking the inter-sample relationships into consideration. We conduct analysis on model robustness and demonstrate the effectiveness of the proposed approach through extensive experiments on different datasets compared with state-of-the-art approaches.

A feature-scattering-based adversarial training method: adversarial images for training are generated by feature scattering in the latent space (unsupervised, avoiding label leaking), which improves model robustness against adversarial attacks.

Subspace Attack: Exploiting Promising Subspaces for Query-Efficient Black-box Attacks (Query-based black-box attack)

Authors: Ziang Yan, Yiwen Guo, Changshui Zhang (Institute for Artificial Intelligence, Tsinghua University (THUAI))

Abstract: Unlike the white-box counterparts that are widely studied and readily accessible, adversarial examples in black-box settings are generally more Herculean on account of the difficulty of estimating gradients. Many methods achieve the task by issuing numerous queries to target classification systems, which makes the whole procedure costly and suspicious to the systems. In this paper, we aim at reducing the query complexity of black-box attacks in this category. We propose to exploit gradients of a few reference models which arguably span some promising search subspaces. Experimental results show that, in comparison with the state-of-the-arts, our method can gain up to 2x and 4x reductions in the requisite mean and medium numbers of queries with much lower failure rates even if the reference models are trained on a small and inadequate dataset disjoint to the one for training the victim model. Code and models for reproducing our results will be made publicly available.

Exploits gradients of a few reference models, which span promising search subspaces, to reduce the number of queries required for black-box attacks.

Functional Adversarial Attacks (Attack)

Authors: Cassidy Laidlaw, Soheil Feizi (University of Maryland)

Abstract: We propose functional adversarial attacks, a novel class of threat models for crafting adversarial examples to fool machine learning models. Unlike a standard lp-ball threat model, a functional adversarial threat model allows only a single function to be used to perturb input features to produce an adversarial example. For example, a functional adversarial attack applied on colors of an image can change all red pixels simultaneously to light red. Such global uniform changes in images can be less perceptible than perturbing pixels of the image individually. For simplicity, we refer to functional adversarial attacks on image colors as ReColorAdv, which is the main focus of our experiments. We show that functional threat models can be combined with existing additive (lp) threat models to generate stronger threat models that allow both small, individual perturbations and large, uniform changes to an input. Moreover, we prove that such combinations encompass perturbations that would not be allowed in either constituent threat model. In practice, ReColorAdv can significantly reduce the accuracy of a ResNet-32 trained on CIFAR-10. Furthermore, to the best of our knowledge, combining ReColorAdv with other attacks leads to the strongest existing attack even after adversarial training.

Instead of perturbing each pixel of an image independently, the attack is applied through a single function on the input (e.g., a uniform color transformation); the functional threat model can also be combined with standard additive attacks.

Improving Black-box Adversarial Attacks with a Transfer-based Prior (Black-box attack)

Authors: Shuyu Cheng, Yinpeng Dong, Tianyu Pang, Hang Su, Jun Zhu (Dept. of Comp. Sci. and Tech., BNRist Center, State Key Lab for Intell. Tech. & Sys., Institute for AI, THBI Lab, Tsinghua University)

Abstract: We consider the black-box adversarial setting, where the adversary has to generate adversarial perturbations without access to the target models to compute gradients. Previous methods tried to approximate the gradient either by using a transfer gradient of a surrogate white-box model, or based on the query feedback. However, these methods often suffer from low attack success rates or poor query efficiency since it is non-trivial to estimate the gradient in a high-dimensional space with limited information. To address these problems, we propose a prior-guided random gradient-free (P-RGF) method to improve black-box adversarial attacks, which takes the advantage of a transfer-based prior and the query information simultaneously. The transfer-based prior given by the gradient of a surrogate model is appropriately integrated into our algorithm by an optimal coefficient derived by a theoretical analysis. Extensive experiments demonstrate that our method requires much fewer queries to attack black-box models with higher success rates compared with the alternative state-of-the-art methods.

Uses the gradient of a surrogate model as a transfer-based prior for gradient estimation, so that black-box models can be attacked with far fewer queries and higher success rates.
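A rough sketch of the prior-guided random gradient-free idea, under assumed names: `loss_fn` queries the black-box model and returns a scalar attack loss, and `prior` is the surrogate-model transfer gradient. The mixing coefficient `lam` is fixed here, whereas the paper derives an optimal coefficient from its theoretical analysis.

```python
import torch

def prgf_gradient_estimate(x, loss_fn, prior, num_queries=50, sigma=1e-4, lam=0.5):
    """Simplified prior-guided random gradient-free (RGF) gradient estimator.
    Sampled directions are biased toward the transfer-based prior."""
    prior = prior / (prior.norm() + 1e-12)
    base = loss_fn(x)
    grad_est = torch.zeros_like(x)
    for _ in range(num_queries):
        xi = torch.randn_like(x)
        # Keep only the component of xi orthogonal to the prior, then mix.
        xi = xi - (xi * prior).sum() * prior
        xi = xi / (xi.norm() + 1e-12)
        u = lam ** 0.5 * prior + (1 - lam) ** 0.5 * xi
        # Finite-difference estimate of the directional derivative along u.
        grad_est += (loss_fn(x + sigma * u) - base) / sigma * u
    return grad_est / num_queries
```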

2018ECCV

Ask, Acquire, and Attack: Data-free UAP Generation using Class Impressions (Universal attack)

Authors: Konda Reddy Mopuri, Phani Krishna Uppala, and R. Venkatesh Babu (Video Analytics Lab, Indian Institute of Science, Bangalore, India)

Abstract: Deep learning models are susceptible to input specific noise, called adversarial perturbations. Moreover, there exist input-agnostic noise, called Universal Adversarial Perturbations (UAP) that can affect inference of the models over most input samples. Given a model, there exist broadly two approaches to craft UAPs: (i) data-driven: that require data, and (ii) data-free: that do not require data samples. Data-driven approaches require actual samples from the underlying data distribution and craft UAPs with high success (fooling) rate. However, data-free approaches craft UAPs without utilizing any data samples and therefore result in lesser success rates. In this paper, for data-free scenarios, we propose a novel approach that emulates the effect of data samples with class impressions in order to craft UAPs using data-driven objectives. Class impression for a given pair of category and model is a generic representation (in the input space) of the samples belonging to that category. Further, we present a neural network based generative model that utilizes the acquired class impressions to learn crafting UAPs. Experimental evaluation demonstrates that the learned generative model, (i) readily crafts UAPs via simple feed-forwarding through neural network layers, and (ii) achieves state-of-the-art success rates for data-free scenario and closer to that for data-driven setting without actually utilizing any data samples.

Addresses data-free universal adversarial perturbation (UAP) generation: "class impressions" (generic input-space representations of the samples of a class for a given model) emulate the effect of real data samples so that data-driven objectives can still be used.

Practical Black-box Attacks on Deep Neural Networks using Efficient Query Mechanisms (Black-box attack)

Authors: Arjun Nitin Bhagoji (Princeton University), Warren He (University of California, Berkeley), Bo Li (University of Illinois at Urbana–Champaign), and Dawn Song

Abstract: Existing black-box attacks on deep neural networks (DNNs) have largely focused on transferability, where an adversarial instance generated for a locally trained model can "transfer" to attack other learning models. In this paper, we propose novel Gradient Estimation black-box attacks for adversaries with query access to the target model's class probabilities, which do not rely on transferability. We also propose strategies to decouple the number of queries required to generate each adversarial sample from the dimensionality of the input. An iterative variant of our attack achieves close to 100% attack success rates for both targeted and untargeted attacks on DNNs. We carry out a thorough comparative evaluation of black-box attacks and show that Gradient Estimation attacks achieve attack success rates similar to state-of-the-art white-box attacks on the MNIST and CIFAR-10 datasets. We also apply the Gradient Estimation attacks successfully against real-world classifiers hosted by Clarifai. Further, we evaluate black-box attacks against state-of-the-art defenses based on adversarial training and show that the Gradient Estimation attacks are very effective even against these defenses.

Proposes Gradient Estimation black-box attacks that query the target model's class probabilities; supports both targeted and untargeted attacks.
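As an illustration of gradient estimation from probability queries (a simplified sketch, not the authors' implementation), the snippet below estimates the cross-entropy gradient coordinate by coordinate with two-sided finite differences; `prob_fn` is an assumed query interface, and the paper's query-reduction strategies (random grouping, PCA) are left out.

```python
import numpy as np

def estimate_gradient(x, prob_fn, true_label, delta=1e-2):
    """Two-sided finite-difference estimate of the gradient of the
    cross-entropy loss -log p_y(x), using only probability queries.
    One query pair per coordinate; grouping/PCA would reduce this cost."""
    flat = x.reshape(-1).astype(np.float64)
    grad = np.zeros_like(flat)
    for i in range(flat.size):
        e = np.zeros_like(flat)
        e[i] = delta
        p_plus = prob_fn((flat + e).reshape(x.shape))[true_label]
        p_minus = prob_fn((flat - e).reshape(x.shape))[true_label]
        grad[i] = -(np.log(p_plus + 1e-12) - np.log(p_minus + 1e-12)) / (2 * delta)
    return grad.reshape(x.shape)

# An FGSM-style untargeted step would then be: x_adv = x + eps * np.sign(grad).
```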

Improving DNN Robustness to Adversarial Attacks using Jacobian Regularization (Defense)

Authors: Daniel Jakubovitz and Raja Giryes (School of Electrical Engineering, Tel Aviv University, Israel)

Abstract: Deep neural networks have lately shown tremendous performance in various applications including vision and speech processing tasks. However, alongside their ability to perform these tasks with such high accuracy, it has been shown that they are highly susceptible to adversarial attacks: a small change in the input would cause the network to err with high confidence. This phenomenon exposes an inherent fault in these networks and their ability to generalize well. For this reason, providing robustness to adversarial attacks is an important challenge in networks training, which has led to extensive research. In this work, we suggest a theoretically inspired novel approach to improve the networks' robustness. Our method applies regularization using the Frobenius norm of the Jacobian of the network, which is applied as post-processing, after regular training has finished. We demonstrate empirically that it leads to enhanced robustness results with a minimal change in the original network's accuracy.

Uses the Frobenius norm of the network's input-output Jacobian as a regularizer to improve robustness; the regularization is applied as a post-processing training phase after regular training has finished.
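A hedged PyTorch sketch of this post-processing phase: the squared Frobenius norm of the input-output Jacobian is approximated with random projections (a common estimator; the paper's exact estimator may differ), and `model`, `optimizer`, and the weight `lam` are assumptions.

```python
import torch
import torch.nn.functional as F

def jacobian_frobenius_sq(model, x, num_proj=1):
    """Monte-Carlo estimate of ||J||_F^2 averaged over the batch, where
    J = d model(x) / d x, using random projections v^T J (one backward
    pass per projection)."""
    x = x.clone().requires_grad_(True)
    out = model(x)                                       # (B, C) logits
    est = 0.0
    for _ in range(num_proj):
        v = torch.randn_like(out)
        v = v / v.norm(dim=1, keepdim=True)
        (grad_x,) = torch.autograd.grad((out * v).sum(), x, create_graph=True)
        est = est + out.shape[1] * grad_x.pow(2).sum() / num_proj
    return est / x.shape[0]

def postprocess_step(model, optimizer, x, y, lam=0.01):
    """One fine-tuning step after regular training: cross-entropy plus
    Jacobian regularization (lam is an illustrative weight)."""
    loss = F.cross_entropy(model(x), y) + lam * jacobian_frobenius_sq(model, x)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```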

2019ICCV

Adversarial Defense via Learning to Generate Diverse Attacks (Defense)

Authors: Yunseok Jang, Tianchen Zhao, Seunghoon Hong, Honglak Lee (University of Michigan)

Abstract: With the remarkable success of deep learning, Deep Neural Networks (DNNs) have been applied as dominant tools to various machine learning domains. Despite this success, however, it has been found that DNNs are surprisingly vulnerable to malicious attacks; adding a small, perceptually indistinguishable perturbations to the data can easily degrade classification performance. Adversarial training is an effective defense strategy to train a robust classifier. In this work, we propose to utilize the generator to learn how to create adversarial examples. Unlike the existing approaches that create a one-shot perturbation by a deterministic generator, we propose a recursive and stochastic generator that produces much stronger and diverse perturbations that comprehensively reveal the vulnerability of the target classifier. Our experiment results on MNIST and CIFAR-10 datasets show that the classifier adversarially trained with our method yields more robust performance over various white-box and black-box attacks.

Uses a recursive, stochastic generator to produce diverse and strong adversarial examples; adversarial training with these examples makes the classifier more robust to various white-box and black-box attacks.

Sparse and Imperceivable Adversarial Attacks (Attack)

Authors: Francesco Croce, Matthias Hein (University of Tubingen)

Abstract: Neural networks have been proven to be vulnerable to a variety of adversarial attacks. From a safety perspective, highly sparse adversarial attacks are particularly dangerous. On the other hand the pixelwise perturbations of sparse attacks are typically large and thus can be potentially detected. We propose a new black-box technique to craft adversarial examples aiming at minimizing l0-distance to the original image. Extensive experiments show that our attack is better or competitive to the state of the art. Moreover, we can integrate additional bounds on the componentwise perturbation. Allowing pixels to change only in regions of high variation and avoiding changes along axis-aligned edges makes our adversarial examples almost non-perceivable. Moreover, we adapt the Projected Gradient Descent attack to the l0-norm integrating componentwise constraints. This allows us to do adversarial training to enhance the robustness of classifiers against sparse and imperceivable adversarial manipulations.

An L0 (sparse) attack: pixels are only changed in regions of high variation while avoiding axis-aligned edges, so the perturbation is almost imperceptible to the eye.

Enhancing Adversarial Example Transferability with an Intermediate Level Attack (Attack)

Authors: Qian Huang, Isay Katsman, Horace He, Zeqi Gu, Serge Belongie, Ser-Nam Lim (Cornell University)

Abstract: Neural networks are vulnerable to adversarial examples, malicious inputs crafted to fool trained models. Adversarial examples often exhibit black-box transfer, meaning that adversarial examples for one model can fool another model. However, adversarial examples are typically overfit to exploit the particular architecture and feature representation of a source model, resulting in sub-optimal black-box transfer attacks to other target models. We introduce the Intermediate Level Attack (ILA), which attempts to fine-tune an existing adversarial example for greater black-box transferability by increasing its perturbation on a pre-specified layer of the source model, improving upon state-of-the-art methods. We show that we can select a layer of the source model to perturb without any knowledge of the target models while achieving high transferability. Additionally, we provide some explanatory insights regarding our method and the effect of optimizing for adversarial examples using intermediate feature maps.

To improve transferability in black-box attacks, proposes the Intermediate Level Attack (ILA): an existing adversarial example is fine-tuned by enlarging its perturbation on a pre-specified intermediate layer of the source model, which makes the modified input transfer much better to other models.

The LogBarrier adversarial attack: making effective use of decision boundary information (Attack)

Authors: Chris Finlay, Aram-Alexandre Pooladian, and Adam Oberman (McGill University)

Abstract: Adversarial attacks for image classification are small perturbations to images that are designed to cause misclassification by a model. Adversarial attacks formally correspond to an optimization problem: find a minimum norm image perturbation, constrained to cause misclassification. A number of effective attacks have been developed. However, to date, no gradient-based attacks have used best practices from the optimization literature to solve this constrained minimization problem. We design a new untargeted attack, based on these best practices, using the well-regarded logarithmic barrier method. On average, our attack distance is similar or better than all state-of-the-art attacks on benchmark datasets (MNIST, CIFAR10, ImageNet-1K). In addition, our method performs significantly better on the most challenging images, those which normally require larger perturbations for misclassification. We employ the LogBarrier attack on several adversarially defended models, and show that it adversarially perturbs all images more efficiently than other attacks: the distance needed to perturb all images is significantly smaller with the LogBarrier attack than with other state-of-the-art attacks.

An untargeted attack that solves the constrained minimum-norm misclassification problem with the classical logarithmic barrier method from the optimization literature.
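One way to write the barrier formulation the summary refers to; the symbols are assumptions rather than the paper's exact notation (f_j are the classifier logits, y the true label, and μ a barrier weight driven towards zero):

```latex
\min_{\delta}\; \lVert \delta \rVert_2^{2}
\;-\; \mu \,\log\!\Bigl( \max_{j \neq y} f_j(x+\delta) \;-\; f_y(x+\delta) \Bigr),
\qquad \mu \to 0^{+}
```

The log term is only finite while x + δ is misclassified, so the iterate is kept on the adversarial side of the decision boundary while the perturbation norm is driven down.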

A Geometry-Inspired Decision-Based Attack (Attack)

Authors: Yujia Liu (University of Science and Technology of China), Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard (Ecole Polytechnique Federale de Lausanne)

Abstract: Deep neural networks have recently achieved tremendous success in image classification. Recent studies have however shown that they are easily misled into incorrect classification decisions by adversarial examples. Adversaries can even craft attacks by querying the model in black-box settings, where no information about the model is released except its final decision. Such decision-based attacks usually require lots of queries, while real-world image recognition systems might actually restrict the number of queries. In this paper, we propose qFool, a novel decision-based attack algorithm that can generate adversarial examples using a small number of queries. The qFool method can drastically reduce the number of queries compared to previous decision-based attacks while reaching the same quality of adversarial examples. We also enhance our method by constraining adversarial perturbations in low-frequency subspace, which can make qFool even more computationally efficient. Altogether, we manage to fool commercial image recognition systems with a small number of queries, which demonstrates the actual effectiveness of our new algorithm in practice.

A decision-based attack (qFool) that drastically reduces the number of queries; constraining the adversarial perturbation to a low-frequency subspace makes it even more efficient.

Once a MAN: Towards Multi-Target Attack via Learning Multi-Target Adversarial Network Once (Attack)

Authors: Jiangfan Han, Xiaoyi Dong, Ruimao Zhang, Dongdong Chen, Weiming Zhang, Nenghai Yu, Ping Luo, Xiaogang Wang (CUHK-SenseTime Joint Laboratory, University of Science and Technology of China)

Abstract: Modern deep neural networks are often vulnerable to adversarial samples. Based on the first optimization-based attacking method, many following methods are proposed to improve the attacking performance and speed. Recently, generation-based methods have received much attention since they directly use feed-forward networks to generate the adversarial samples, which avoid the time-consuming iterative attacking procedure in optimization-based and gradient-based methods. However, current generation-based methods are only able to attack one specific target (category) within one model, thus making them not applicable to real classification systems that often have hundreds/thousands of categories. In this paper, we propose the first Multi-target Adversarial Network (MAN), which can generate multi-target adversarial samples with a single model. By incorporating the specified category information into the intermediate features, it can attack any category of the target classification model during runtime. Experiments show that the proposed MAN can produce stronger attack results and also have better transferability than previous state-of-the-art methods in both multi-target attack task and single-target attack task. We further use the adversarial samples generated by our MAN to improve the robustness of the classification model. It can also achieve better classification accuracy than other methods when attacked by various methods.

A single generative model (Multi-target Adversarial Network, MAN) generates adversarial examples for any target class by incorporating the specified class information into its intermediate features.

Bilateral Adversarial Training: Towards Fast Training of More Robust Models Against Adversarial Attacks (Defense)

Authors: Jianyu Wang, Haichao Zhang (Baidu Research USA)

Abstract: In this paper, we study fast training of adversarially robust models. From the analyses of the state-of-the-art defense method, i.e., the multi-step adversarial training, we hypothesize that the gradient magnitude links to the model robustness. Motivated by this, we propose to perturb both the image and the label during training, which we call Bilateral Adversarial Training (BAT). To generate the adversarial label, we derive a closed-form heuristic solution. To generate the adversarial image, we use one-step targeted attack with the target label being the most confusing class. In the experiment, we first show that random start and the most confusing target attack effectively prevent the label leaking and gradient masking problem. Then coupled with the adversarial label part, our model significantly improves the state-of-the-art results. For example, against PGD100 white-box attack with cross-entropy loss, on CIFAR10, we achieve 63.7% versus 47.2%; on SVHN, we achieve 59.1% versus 42.1%. At last, the experiment on the very (computationally) challenging ImageNet dataset further demonstrates the effectiveness of our fast method.

Perturbs both the image and the label during adversarial training (Bilateral Adversarial Training) to improve model robustness while keeping training fast.

2019CVPR

Parametric Noise Injection: Trainable Randomness to Improve Deep Neural Network Robustness against Adversarial Attack (Defense)

Authors: Zhezhi He, Adnan Siraj Rakin and Deliang Fan (Dept. of Electrical and Computer Engineering, University of Central Florida)

Abstract: Recent developments in the field of Deep Learning have exposed the underlying vulnerability of Deep Neural Network (DNN) against adversarial examples. In image classification, an adversarial example is a carefully modified image that is visually imperceptible to the original image but can cause DNN model to misclassify it. Training the network with Gaussian noise is an effective technique to perform model regularization, thus improving model robustness against input variation. Inspired by this classical method, we explore to utilize the regularization characteristic of noise injection to improve DNN's robustness against adversarial attack. In this work, we propose Parametric Noise Injection (PNI) which involves trainable Gaussian noise injection at each layer on either activation or weights through solving the Min-Max optimization problem, embedded with adversarial training. These parameters are trained explicitly to achieve improved robustness. The extensive results show that our proposed PNI technique effectively improves the robustness against a variety of powerful white-box and black-box attacks such as PGD, C & W, FGSM, transferable attack, and ZOO attack. Last but not the least, PNI method improves both clean- and perturbed-data accuracy in comparison to the state-of-the-art defense methods, which outperforms current unbroken PGD defense by 1.1 % and 6.8 % on clean and perturbed test data respectively, using ResNet-20 architecture.

Exploits the regularization effect of noise injection: trainable Gaussian noise is added to the weights or activations of each layer, and the noise scale is learned jointly with adversarial training.
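A minimal PNI-style layer as a sketch, assuming weight-wise injection with one trainable scalar per layer (the paper also considers activation-wise injection and learns the scale inside a min-max adversarial training loop; whether noise is kept at test time is a deployment choice, and it is kept here for simplicity):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PNIConv2d(nn.Conv2d):
    """Convolution with Parametric Noise Injection on the weights:
    w_noisy = w + alpha * eta * std(w), where alpha is a trainable scalar
    and eta ~ N(0, 1) is resampled at every forward pass."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.alpha = nn.Parameter(torch.tensor(0.25))    # illustrative init

    def forward(self, x):
        eta = torch.randn_like(self.weight) * self.weight.detach().std()
        weight = self.weight + self.alpha * eta
        return F.conv2d(x, weight, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)
```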

Decoupling Direction and Norm for Efficient Gradient-Based L2 Adversarial Attacks and Defenses (Attack)

Authors: Jérôme Rony, Luiz G. Hafemann, Luiz S. Oliveira, Ismail Ben Ayed, Robert Sabourin, Eric Granger (Laboratoire d'imagerie, de vision et d'intelligence artificielle (LIVIA), ÉTS Montreal)

Abstract: Research on adversarial examples in computer vision tasks has shown that small, often imperceptible changes to an image can induce misclassification, which has security implications for a wide range of image processing systems. Considering L2 norm distortions, the Carlini and Wagner attack is presently the most effective white-box attack in the literature. However, this method is slow since it performs a line-search for one of the optimization terms, and often requires thousands of iterations. In this paper, an efficient approach is proposed to generate gradient-based attacks that induce misclassifications with low L2 norm, by decoupling the direction and the norm of the adversarial perturbation that is added to the image. Experiments conducted on the MNIST, CIFAR-10 and ImageNet datasets indicate that our attack achieves comparable results to the state-of-the-art (in terms of L2 norm) with considerably fewer iterations (as few as 100 iterations), which opens the possibility of using these attacks for adversarial training. Models trained with our attack achieve state-of-the-art robustness against white-box gradient-based L2 attacks on the MNIST and CIFAR-10 datasets, outperforming the Madry defense when the attacks are limited to a maximum norm.

Decouples the direction and the norm of the adversarial perturbation, which reduces the number of iterations required for low-L2 attacks (as few as 100).
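A compressed sketch of the decoupling idea for L2 attacks on image batches of shape (B, C, H, W): the gradient only picks the direction, while a per-sample radius is grown or shrunk depending on whether the current iterate already fools the model. `model`, the step sizes, and the clamp to [0, 1] are assumptions; the paper's momentum and quantization details are omitted.

```python
import torch
import torch.nn.functional as F

def ddn_attack(model, x, y, steps=100, alpha=0.05, gamma=0.05):
    """Minimal decoupled direction-and-norm (DDN-style) L2 attack sketch."""
    delta = torch.zeros_like(x, requires_grad=True)
    eps = torch.ones(x.shape[0], device=x.device)        # per-sample L2 radius
    best = x.clone()                                      # fallback: original image
    for _ in range(steps):
        logits = model(x + delta)
        loss = F.cross_entropy(logits, y)
        (grad,) = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            adv = logits.argmax(1) != y
            # Remember the latest adversarial iterate.
            best = torch.where(adv.view(-1, 1, 1, 1), (x + delta).clamp(0, 1), best)
            # Direction: normalized gradient-ascent step on the loss.
            g = grad / (grad.flatten(1).norm(dim=1).view(-1, 1, 1, 1) + 1e-12)
            new_delta = delta + alpha * g
            # Norm: shrink the radius if already adversarial, grow it otherwise.
            eps = eps * ((1 - gamma) * adv.float() + (1 + gamma) * (~adv).float())
            new_delta = new_delta * (eps.view(-1, 1, 1, 1) /
                        (new_delta.flatten(1).norm(dim=1).view(-1, 1, 1, 1) + 1e-12))
        delta = new_delta.requires_grad_(True)
    return best
```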

Curls & Whey: Boosting Black-Box Adversarial Attacks (Black-box attack)

Authors: Yucheng Shi, Siyu Wang, Yahong Han (College of Intelligence and Computing, Tianjin University)

Abstract: Image classifiers based on deep neural networks suffer from harassment caused by adversarial examples. Two defects exist in black-box iterative attacks that generate adversarial examples by incrementally adjusting the noise-adding direction for each step. On the one hand, existing iterative attacks add noises monotonically along the direction of gradient ascent, resulting in a lack of diversity and adaptability of the generated iterative trajectories. On the other hand, it is trivial to perform adversarial attack by adding excessive noises, but currently there is no refinement mechanism to squeeze redundant noises. In this work, we propose Curls & Whey black-box attack to fix the above two defects. During Curls iteration, by combining gradient ascent and descent, we 'curl' up iterative trajectories to integrate more diversity and transferability into adversarial examples. Curls iteration also alleviates the diminishing marginal effect in existing iterative attacks. The Whey optimization further squeezes the 'whey' of noises by exploiting the robustness of adversarial perturbation. Extensive experiments on Imagenet and Tiny-Imagenet demonstrate that our approach achieves impressive decrease on noise magnitude in ℓ2 norm. Curls & Whey attack also shows promising transferability against ensemble models as well as adversarially trained models. In addition, we extend our attack to the targeted misclassification, effectively reducing the difficulty of targeted attacks under black-box condition.

Proposes the Curls & Whey black-box attack: the Curls iteration combines gradient ascent and descent to make the iterative trajectories more diverse and transferable, while the Whey step squeezes out redundant noise.

ShieldNets: Defending Against Adversarial Attacks Using Probabilistic Adversarial Robustness (Defense)

Authors: Rajkumar Theagarajan, Ming Chen, Bir Bhanu, Jing Zhang (Center for Research in Intelligent Systems)

Abstract: Defending adversarial attack is a critical step towards reliable deployment of deep learning empowered solutions for industrial applications. Probabilistic adversarial robustness (PAR), as a theoretical framework, is introduced to neutralize adversarial attacks by concentrating sample probability to adversarial-free zones. Distinct to most of the existing defense mechanisms that require modifying the architecture/training of the target classifier which is not feasible in the real-world scenario, e.g., when a model has already been deployed, PAR is designed in the first place to provide proactive protection to an existing fixed model. ShieldNet is implemented as a demonstration of PAR in this work by using PixelCNN. Experimental results show that this approach is generalizable, robust against adversarial transferability and resistant to a wide variety of attacks on the Fashion-MNIST and CIFAR10 datasets, respectively.

Introduces probabilistic adversarial robustness (PAR), which neutralizes attacks by concentrating sample probability into adversarial-free zones; implemented as ShieldNet with PixelCNN, protecting an already-deployed fixed model.

Efficient Decision-based Black-box Adversarial Attacks on Face Recognition (Attack)

Authors: Yinpeng Dong, Hang Su, Baoyuan Wu, Zhifeng Li, Wei Liu, Tong Zhang, Jun Zhu (Tsinghua University, Tencent AI Lab, Hong Kong University of Science and Technology)

Abstract: Face recognition has obtained remarkable progress in recent years due to the great improvement of deep convolutional neural networks (CNNs). However, deep CNNs are vulnerable to adversarial examples, which can cause fateful consequences in real-world face recognition applications with security-sensitive purposes. Adversarial attacks are widely studied as they can identify the vulnerability of the models before they are deployed. In this paper, we evaluate the robustness of state-of-the-art face recognition models in the decision-based black-box attack setting, where the attackers have no access to the model parameters and gradients, but can only acquire hard-label predictions by sending queries to the target model. This attack setting is more practical in real-world face recognition systems. To improve the efficiency of previous methods, we propose an evolutionary attack algorithm, which can model the local geometry of the search directions and reduce the dimension of the search space. Extensive experiments demonstrate the effectiveness of the proposed method that induces a minimum perturbation to an input face image with fewer queries. We also apply the proposed method to attack a real-world face recognition system successfully.

An evolutionary decision-based attack on face recognition models: it models the local geometry of the search directions and reduces the dimensionality of the search space, inducing minimal perturbations with fewer queries.

Trust Region Based Adversarial Attack on Neural Networks (Attack)

Authors: Zhewei Yao, Amir Gholami, Peng Xu, Kurt Keutzer, Michael W. Mahoney (University of California, Berkeley, Stanford University)

Abstract: Deep Neural Networks are quite vulnerable to adversarial perturbations. Current state-of-the-art adversarial attack methods typically require very time consuming hyper-parameter tuning, or require many iterations to solve an optimization based adversarial attack. To address this problem, we present a new family of trust region based adversarial attacks, with the goal of computing adversarial perturbations efficiently. We propose several attacks based on variants of the trust region optimization method. We test the proposed methods on Cifar-10 and ImageNet datasets using several different models including AlexNet, ResNet-50, VGG-16, and DenseNet-121 models. Our methods achieve comparable results with the Carlini-Wagner (CW) attack, but with significant speed up of up to 37×, for the VGG-16 model on a Titan Xp GPU. For the case of ResNet-50 on ImageNet, we can bring down its classification accuracy to less than 0.1% with at most 1.5% relative L∞ (or L2) perturbation requiring only 1.02 seconds as compared to 27.04 seconds for the CW attack. We have open sourced our method which can be accessed at [1].

A family of trust-region-based adversarial attacks; achieves results comparable to the C&W attack with speedups of up to 37x.

2020CVPR

DaST: Data-free Substitute Training for Adversarial Attacks (Black-box attack)

Authors: Mingyi Zhou, Jing Wu, Yipeng Liu, Shuaicheng Liu, Ce Zhu (University of Electronic Science and Technology of China)

Abstract: Machine learning models are vulnerable to adversarial examples. For the black-box setting, current substitute attacks need pre-trained models to generate adversarial examples. However, pre-trained models are hard to obtain in real-world tasks. In this paper, we propose a data-free substitute training method (DaST) to obtain substitute models for adversarial black-box attacks without the requirement of any real data. To achieve this, DaST utilizes specially designed generative adversarial networks (GANs) to train the substitute models. In particular, we design a multi-branch architecture and label-control loss for the generative model to deal with the uneven distribution of synthetic samples. The substitute model is then trained by the synthetic samples generated by the generative model, which are labeled by the attacked model subsequently. The experiments demonstrate the substitute models produced by DaST can achieve competitive performance compared with the baseline models which are trained by the same train set with attacked models. Additionally, to evaluate the practicability of the proposed method on the real-world task, we attack an online machine learning model on the Microsoft Azure platform. The remote model misclassifies 98.35% of the adversarial examples crafted by our method. To the best of our knowledge, we are the first to train a substitute model for adversarial attacks without any real data. Our codes are publicly available.

Trains a substitute model with a GAN using only synthetic samples labeled by the attacked model, so no real data is required.

Projection & Probability-Driven Black-Box Attack (Black-box attack)

Authors: Jie Li, Rongrong Ji, Hong Liu, Jianzhuang Liu, Bineng Zhong, Cheng Deng, Qi Tian (Department of Artificial Intelligence, School of Informatics, Xiamen University, Noah's Ark Lab, Huawei Technologies)

Abstract: Generating adversarial examples in a black-box setting retains a significant challenge with vast practical application prospects. In particular, existing black-box attacks suffer from the need for excessive queries, as it is non-trivial to find an appropriate direction to optimize in the high-dimensional space. In this paper, we propose Projection & Probability-driven Black-box Attack (PPBA) to tackle this problem by reducing the solution space and providing better optimization. For reducing the solution space, we first model the adversarial perturbation optimization problem as a process of recovering frequency-sparse perturbations with compressed sensing, under the setting that random noise in the low-frequency space is more likely to be adversarial. We then propose a simple method to construct a low-frequency constrained sensing matrix, which works as a plug-and-play projection matrix to reduce the dimensionality. Such a sensing matrix is shown to be flexible enough to be integrated into existing methods like NES and BanditsTD. For better optimization, we perform a random walk with a probability-driven strategy, which utilizes all queries over the whole progress to make full use of the sensing matrix for a less query budget. Extensive experiments show that our method requires at most 24% fewer queries with a higher attack success rate compared with state-of-the-art approaches. Finally, the attack method is evaluated on the real-world online service, i.e., Google Cloud Vision API, which further demonstrates our practical potentials.

A projection & probability-driven black-box attack (PPBA): a low-frequency sensing matrix shrinks the solution space, and a probability-driven random walk makes the optimization more query-efficient.

Ensemble Generative Cleaning with Feedback Loops for Defending Adversarial Attacks (Defense)

Authors: Jianhe Yuan and Zhihai He (University of Missouri, Columbia MO)

Abstract: Effective defense of deep neural networks against adversarial attacks remains a challenging problem, especially under powerful white-box attacks. In this paper, we develop a new method called ensemble generative cleaning with feedback loops (EGC-FL) for effective defense of deep neural networks. The proposed EGC-FL method is based on two central ideas. First, we introduce a transformed deadzone layer into the defense network, which consists of an orthonormal transform and a deadzone-based activation function, to destroy the sophisticated noise pattern of adversarial attacks. Second, by constructing a generative cleaning network with a feedback loop, we are able to generate an ensemble of diverse estimations of the original clean image. We then learn a network to fuse this set of diverse estimations together to restore the original image. Our extensive experimental results demonstrate that our approach improves the state of the art by large margins in both white-box and black-box attacks. It significantly improves the classification accuracy for white-box PGD attacks upon the second best method by more than 29% on the SVHN dataset and more than 39% on the challenging CIFAR-10 dataset.

A transformed deadzone layer (an orthonormal transform plus a deadzone activation) destroys the adversarial noise pattern, and a generative cleaning network with a feedback loop produces an ensemble of clean estimates that are fused to restore the image.

When NAS Meets Robustness: In Search of Robust Architectures against Adversarial Attacks (Defense)

Authors: Minghao Guo, Yuzhe Yang, Rui Xu, Ziwei Liu, Dahua Lin (The Chinese University of Hong Kong, MIT CSAIL)

Abstract: Recent advances in adversarial attacks uncover the intrinsic vulnerability of modern deep neural networks. Since then, extensive efforts have been devoted to enhancing the robustness of deep networks via specialized learning algorithms and loss functions. In this work, we take an architectural perspective and investigate the patterns of network architectures that are resilient to adversarial attacks. To obtain the large number of networks needed for this study, we adopt one-shot neural architecture search, training a large network once and then finetuning the sub-networks sampled therefrom. The sampled architectures together with the accuracies they achieve provide a rich basis for our study. Our "robust architecture Odyssey" reveals several valuable observations: 1) densely connected patterns result in improved robustness; 2) under computational budget, adding convolution operations to direct connection edge is effective; 3) flow of solution procedure (FSP) matrix is a good indicator of network robustness. Based on these observations, we discover a family of robust architectures (RobNets). On various datasets, including CIFAR, SVHN, Tiny-ImageNet, and ImageNet, RobNets exhibit superior robustness performance to other widely used architectures. Notably, RobNets substantially improve the robust accuracy (∼5% absolute gains) under both white-box and black-box attacks, even with fewer parameter numbers. Code is available at https://github.com/gmh14/RobNets.

Uses neural architecture search (NAS) to discover network architectures (RobNets) that are inherently robust to adversarial attacks.

Towards Transferable Targeted Attack (Targeted attack)

Authors: Maosen Li, Cheng Deng, Tengjiao Li, Junchi Yan, Xinbo Gao, Heng Huang (School of Electronic Engineering, Xidian University)

Abstract: An intriguing property of adversarial examples is their transferability, which suggests that black-box attacks are feasible in real-world applications. Previous works mostly study the transferability on non-targeted setting. However, recent studies show that targeted adversarial examples are more difficult to transfer than non-targeted ones. In this paper, we find there exist two defects that lead to the difficulty in generating transferable examples. First, the magnitude of gradient is decreasing during iterative attack, causing excessive consistency between two successive noises in accumulation of momentum, which is termed as noise curing. Second, it is not enough for targeted adversarial examples to just get close to target class without moving away from true class. To overcome the above problems, we propose a novel targeted attack approach to effectively generate more transferable adversarial examples. Specifically, we first introduce the Poincare distance as the similarity metric to make the magnitude of gradient self-adaptive during iterative attack to alleviate noise curing. Furthermore, we regularize the targeted attack process with metric learning to take adversarial examples away from true label and gain more transferable targeted adversarial examples. Experiments on ImageNet validate the superiority of our approach achieving 8% higher attack success rate over other state-of-the-art methods on average in black-box targeted attack.

For transferable targeted attacks: the Poincare distance is used as the similarity metric so that the gradient magnitude stays self-adaptive during iterations (mitigating noise curing), and metric learning pushes the adversarial example away from the true class.
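For reference, the Poincare-ball distance adopted as the similarity metric (u and v are points inside the unit ball, obtained in the paper from suitably normalized logit and label vectors):

```latex
d_{\mathcal{P}}(u, v) \;=\; \operatorname{arccosh}\!\Bigl( 1 \;+\;
\frac{2\,\lVert u - v \rVert^{2}}
{\bigl(1 - \lVert u \rVert^{2}\bigr)\bigl(1 - \lVert v \rVert^{2}\bigr)} \Bigr)
```

Because this distance blows up near the boundary of the ball, the gradient magnitude stays appreciable across iterations, which is what counteracts the noise-curing effect described in the abstract.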

Defending Against Universal Attacks Through Selective Feature Regeneration (Defense)

Authors: Tejas Borkar, Felix Heide, Lina Karam (Arizona State University, Princeton University)

Abstract: Deep neural network (DNN) predictions have been shown to be vulnerable to carefully crafted adversarial perturbations. Specifically, image-agnostic (universal adversarial) perturbations added to any image can fool a target network into making erroneous predictions. Departing from existing defense strategies that work mostly in the image domain, we present a novel defense which operates in the DNN feature domain and effectively defends against such universal perturbations. Our approach identifies pre-trained convolutional features that are most vulnerable to adversarial noise and deploys trainable feature regeneration units which transform these DNN filter activations into resilient features that are robust to universal perturbations. Regenerating only the top 50% adversarially susceptible activations in at most 6 DNN layers and leaving all remaining DNN activations unchanged, we outperform existing defense strategies across different network architectures by more than 10% in restored accuracy. We show that without any additional modification, our defense trained on ImageNet with one type of universal attack examples effectively defends against other types of unseen universal attacks.

Defends against universal perturbations in the DNN feature domain: the top 50% most adversarially susceptible convolutional activations are regenerated by trainable feature regeneration units while the rest are left unchanged.

Adversarial Camouflage: Hiding Physical-World Attacks with Natural Styles (Attack)

Authors: Ranjie Duan, Xingjun Ma, Yisen Wang, James Bailey, A. K. Qin, Yun Yang (Swinburne University of Technology)

Abstract: Deep neural networks (DNNs) are known to be vulnerable to adversarial examples. Existing works have mostly focused on either digital adversarial examples created via small and imperceptible perturbations, or physical-world adversarial examples created with large and less realistic distortions that are easily identified by human observers. In this paper, we propose a novel approach, called Adversarial Camouflage (AdvCam), to craft and camouflage physical-world adversarial examples into natural styles that appear legitimate to human observers. Specifically, AdvCam transfers large adversarial perturbations into customized styles, which are then "hidden" on-target object or off-target background. Experimental evaluation shows that, in both digital and physical-world scenarios, adversarial examples crafted by AdvCam are well camouflaged and highly stealthy, while remaining effective in fooling state-of-the-art DNN image classifiers. Hence, AdvCam is a flexible approach that can help craft stealthy attacks to evaluate the robustness of DNNs. AdvCam can also be used to protect private information from being detected by deep learning systems.

For physical-world attacks, camouflages large adversarial perturbations into natural, style-consistent patterns so the attacked object still looks legitimate to human observers.

QEBA: Query-Efficient Boundary-Based Blackbox Attack (Black-box attack)

Authors: Huichen Li, Xiaojun Xu, Xiaolu Zhang, Shuang Yang, Bo Li (University of Illinois at Urbana-Champaign, Ant Financial)

Abstract: Machine learning (ML), especially deep neural networks (DNNs) have been widely used in various applications, including several safety-critical ones (e.g. autonomous driving). As a result, recent research about adversarial examples has raised great concerns. Such adversarial attacks can be achieved by adding a small magnitude of perturbation to the input to mislead model prediction. While several whitebox attacks have demonstrated their effectiveness, which assume that the attackers have full access to the machine learning models; blackbox attacks are more realistic in practice. In this paper, we propose a Query-Efficient Boundary-based blackbox Attack (QEBA) based only on model's final prediction labels. We theoretically show why previous boundary-based attack with gradient estimation on the whole gradient space is not efficient in terms of query numbers, and provide optimality analysis for our dimension reduction-based gradient estimation. On the other hand, we conducted extensive experiments on ImageNet and CelebA datasets to evaluate QEBA. We show that compared with the state-of-the-art blackbox attacks, QEBA is able to use a smaller number of queries to achieve a lower magnitude of perturbation with 100% attack success rate. We also show case studies of attacks on real-world APIs including MEGVII Face++ and Microsoft Azure.

A query-efficient boundary-based black-box attack that uses dimension-reduction-based gradient estimation to cut the number of queries; also demonstrated against real-world face recognition APIs.

Robust Superpixel-Guided Attentional Adversarial Attack (Attack)

Authors: Xiaoyi Dong, Jiangfan Han, Dongdong Chen, Jiayang Liu, Huanyu Bian, Zehua Ma, Hongsheng Li, Xiaogang Wang, Weiming Zhang, Nenghai Yu (University of Science and Technology of China)

Abstract: Deep Neural Networks are vulnerable to adversarial samples, which can fool classifiers by adding small perturbations onto the original image. Since the pioneering optimization-based adversarial attack method, many following methods have been proposed in the past several years. However most of these methods add perturbations in a "pixel-wise" and "global" way. Firstly, because of the contradiction between the local smoothness of natural images and the noisy property of these adversarial perturbations, this "pixel-wise" way makes these methods not robust to image processing based defense methods and steganalysis based detection methods. Secondly, we find adding perturbations to the background is less useful than to the salient object, thus the "global" way is also not optimal. Based on these two considerations, we propose the first robust superpixel-guided attentional adversarial attack method. Specifically, the adversarial perturbations are only added to the salient regions and guaranteed to be the same within each superpixel. Through extensive experiments, we demonstrate our method can preserve the attack ability even in this highly constrained modification space. More importantly, compared to existing methods, it is significantly more robust to image processing based defense and steganalysis based detection.

Adds perturbations only in visually salient regions (perturbing the background is less effective) and keeps them uniform within each superpixel, which makes the attack more robust to image-processing defenses and steganalysis-based detection.

2019ICML

Simple Black-box Adversarial Attacks (Black-box attack)

Authors: Chuan Guo, Jacob Gardner, Yurong You, Andrew Gordon Wilson, Kilian Weinberger (Cornell University, Uber AI Labs)

Abstract: We propose an intriguingly simple method for the construction of adversarial images in the black-box setting. In contrast to the white-box scenario, constructing black-box adversarial images has the additional constraint on query budget, and efficient attacks remain an open problem to date. With only the mild assumption of requiring continuous-valued confidence scores, our highly query-efficient algorithm utilizes the following simple iterative principle: we randomly sample a vector from a predefined orthonormal basis and either add or subtract it to the target image. Despite its simplicity, the proposed method can be used for both untargeted and targeted attacks – resulting in previously unprecedented query efficiency in both settings. We demonstrate the efficacy and efficiency of our algorithm on several real world settings including the Google Cloud Vision API. We argue that our proposed algorithm should serve as a strong baseline for future black-box attacks, in particular because it is extremely fast and its implementation requires less than 20 lines of PyTorch code.

Randomly samples vectors from a predefined orthonormal basis and adds or subtracts them from the target image; extremely simple and query-efficient, and handles both targeted and untargeted black-box attacks.
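Because the method is so compact, a pixel-basis sketch fits in a few lines (the DCT-basis and targeted variants are omitted; `prob_fn` is an assumed interface returning class probabilities for an image with values in [0, 1]):

```python
import numpy as np

def simba_pixel(x, true_label, prob_fn, eps=0.2, max_iters=10000):
    """Simple black-box attack, pixel-basis variant: repeatedly pick an
    unused coordinate and keep +eps or -eps only if it lowers the
    probability of the true class."""
    x_adv = x.copy()
    p_best = prob_fn(x_adv)[true_label]
    coords = np.random.permutation(x.size)[:max_iters]
    for i in coords:
        basis = np.zeros(x.size)
        basis[i] = eps
        basis = basis.reshape(x.shape)
        for candidate in (x_adv + basis, x_adv - basis):
            candidate = np.clip(candidate, 0, 1)
            p = prob_fn(candidate)[true_label]
            if p < p_best:
                x_adv, p_best = candidate, p
                break
    return x_adv
```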

Are Generative Classifiers More Robust to Adversarial Attacks? (Defense)

Authors: Yingzhen Li, John Bradshaw, Yash Sharma (Microsoft Research Cambridge, University of Cambridge)

Abstract: There is a rising interest in studying the robustness of deep neural network classifiers against adversaries, with both advanced attack and defence techniques being actively developed. However, most recent work focuses on discriminative classifiers, which only model the conditional distribution of the labels given the inputs. In this paper, we propose and investigate the deep Bayes classifier, which improves classical naive Bayes with conditional deep generative models. We further develop detection methods for adversarial examples, which reject inputs with low likelihood under the generative model. Experimental results suggest that deep Bayes classifiers are more robust than deep discriminative classifiers, and that the proposed detection methods are effective against many recently proposed attacks.

Proposes a deep Bayes (generative) classifier plus likelihood-based detection of adversarial examples; generative classifiers turn out to be more robust than discriminative ones.

NATTACK: Learning the Distributions of Adversarial Examples for an Improved Black-Box Attack on Deep Neural Networks (Attack)

Authors: Yandong Li, Lijun Li, Liqiang Wang, Tong Zhang, Boqing Gong (University of Central Florida, Google)

Abstract: Powerful adversarial attack methods are vital for understanding how to construct robust deep neural networks (DNNs) and for thoroughly testing defense techniques. In this paper, we propose a black-box adversarial attack algorithm that can defeat both vanilla DNNs and those generated by various defense techniques developed recently. Instead of searching for an "optimal" adversarial example for a benign input to a targeted DNN, our algorithm finds a probability density distribution over a small region centered around the input, such that a sample drawn from this distribution is likely an adversarial example, without the need of accessing the DNN's internal layers or weights. Our approach is universal as it can successfully attack different neural networks by a single algorithm. It is also strong; according to the testing against 2 vanilla DNNs and 13 defended ones, it outperforms state-of-the-art black-box or white-box attack methods for most test cases. Additionally, our results reveal that adversarial training remains one of the best defense techniques, and the adversarial examples are not as transferable across defended DNNs as them across vanilla DNNs.

Rather than searching for a single optimal adversarial example, learns a probability distribution over a small region centered at the input such that samples drawn from it are likely adversarial.
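The core "learn a distribution, not a point" idea can be sketched as NES-style search over a Gaussian centered near the input; this is a simplification under assumed names (`loss_fn` returns a scalar that is large when its input is adversarial) and does not reproduce the paper's particular distribution parameterization.

```python
import numpy as np

def distribution_attack(x, loss_fn, steps=200, pop=50, sigma=0.1, lr=0.02):
    """Search for a distribution of adversarial perturbations: a Gaussian
    N(mu, sigma^2 I) over the input space whose mean is updated with
    NES-style estimates of the gradient of the expected attack loss."""
    mu = np.zeros_like(x)
    for _ in range(steps):
        eps = np.random.randn(pop, *x.shape)
        losses = np.array([loss_fn(np.clip(x + mu + sigma * e, 0, 1)) for e in eps])
        # Standardize the losses, then estimate the gradient w.r.t. mu.
        z = (losses - losses.mean()) / (losses.std() + 1e-8)
        grad = (z.reshape(pop, *([1] * x.ndim)) * eps).mean(axis=0) / sigma
        mu += lr * grad          # ascend the expected attack loss
    return np.clip(x + mu, 0, 1)
```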

Adversarial camera stickers: A physical camera-based attack on deep learning systems (Attack)

Authors: Juncheng Li, Frank Schmidt, Zico Kolter (Carnegie Mellon University)

Abstract: Recent work has documented the susceptibility of deep learning systems to adversarial examples, but most such attacks directly manipulate the digital input to a classifier. Although a smaller line of work considers physical adversarial attacks, in all cases these involve manipulating the object of interest, e.g., putting a physical sticker on an object to misclassify it, or manufacturing an object specifically intended to be misclassified. In this work, we consider an alternative question: is it possible to fool deep classifiers, over all perceived objects of a certain type, by physically manipulating the camera itself? We show that by placing a carefully crafted and mainly-translucent sticker over the lens of a camera, one can create universal perturbations of the observed images that are inconspicuous, yet misclassify target objects as a different (targeted) class. To accomplish this, we propose an iterative procedure for both updating the attack perturbation (to make it adversarial for a given classifier), and the threat model itself (to ensure it is physically realizable). For example, we show that we can achieve physically-realizable attacks that fool ImageNet classifiers in a targeted fashion 49.6% of the time. This presents a new class of physically-realizable threat models to consider in the context of adversarially robust machine learning. Our demo video can be viewed at: https://youtu.be/wUVmL33Fx54

Places a carefully crafted, mostly translucent sticker over the camera lens, producing inconspicuous universal perturbations that cause targeted misclassification of observed objects.

Parsimonious Black-Box Adversarial Attacks via Efficient Combinatorial Optimization (Attack)

Authors: Seungyong Moon, Gaon An, Hyun Oh Song (Seoul National University)

Abstract: Solving for adversarial examples with projected gradient descent has been demonstrated to be highly effective in fooling the neural network based classifiers. However, in the black-box setting, the attacker is limited only to the query access to the network and solving for a successful adversarial example becomes much more difficult. To this end, recent methods aim at estimating the true gradient signal based on the input queries but at the cost of excessive queries. We propose an efficient discrete surrogate to the optimization problem which does not require estimating the gradient and consequently becomes free of the first order update hyperparameters to tune. Our experiments on Cifar-10 and ImageNet show the state of the art black-box attack performance with significant reduction in the required queries compared to a number of recently proposed methods. The source code is available at https://github.com/snu-mllab/parsimonious-blackbox-attack.

Formulates the black-box attack as an efficient discrete (combinatorial) optimization problem that needs no gradient estimation, greatly reducing the number of queries.

Transferable Clean-Label Poisoning Attacks on Deep Neural Nets (Attack)

Authors: Chen Zhu, W. Ronny Huang, Hengduo Li, Gavin Taylor, Christoph Studer, Tom Goldstein (University of Maryland, Cornell University)

Abstract: In this paper, we explore clean-label poisoning attacks on deep convolutional networks with access to neither the network's output nor its architecture or parameters. Our goal is to ensure that after injecting the poisons into the training data, a model with unknown architecture and parameters trained on that data will misclassify the target image into a specific class. To achieve this goal, we generate multiple poison images from the base class by adding small perturbations which cause the poison images to trap the target image within their convex polytope in feature space. We also demonstrate that using Dropout during crafting of the poisons and enforcing this objective in multiple layers enhances transferability, enabling attacks against both the transfer learning and end-to-end training settings. We demonstrate transferable attack success rates of over 50% by poisoning only 1% of the training set.

Injects clean-label poison images into the training set so that a model trained on the poisoned data misclassifies a specific target image.

2019ICLR

Prior Convictions: Black-box Adversarial Attacks with Bandits and Priors (Black-box attack)

Authors: Andrew Ilyas, Logan Engstrom, Aleksander Madry (MIT CSAIL)

Abstract: We study the problem of generating adversarial examples in a black-box setting in which only loss-oracle access to a model is available. We introduce a framework that conceptually unifies much of the existing work on black-box attacks, and demonstrate that the current state-of-the-art methods are optimal in a natural sense. Despite this optimality, we show how to improve black-box attacks by bringing a new element into the problem: gradient priors. We give a bandit optimization-based algorithm that allows us to seamlessly integrate any such priors, and we explicitly identify and incorporate two examples. The resulting methods use two to four times fewer queries and fail two to five times less than the current state-of-the-art. The code for reproducing our work is available at https://git.io/fAjOJ.

Integrates gradient priors into black-box gradient estimation via a bandit optimization framework, reducing both the number of queries and the failure rate.
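A loose sketch of the bandit-with-prior loop, simplified well beyond the paper (plain additive updates on the prior instead of the exact bandit update, an L-infinity threat model, illustrative hyper-parameters); `loss_fn(x, y)` is an assumed two-argument query interface returning a scalar loss.

```python
import torch

def bandit_prior_attack(x, y, loss_fn, steps=1000, prior_lr=1.0,
                        img_lr=0.01, eps=0.05, fd_eta=0.1, nu=0.01):
    """Black-box attack that maintains a gradient prior across steps.
    Each step spends two queries on an antithetic finite difference to
    refine the prior, then takes a signed step on the image."""
    x_adv = x.clone()
    prior = torch.zeros_like(x)      # running estimate of the gradient direction
    for _ in range(steps):
        u = torch.randn_like(x) / x.numel() ** 0.5
        # How does the loss change along u in the neighborhood of the prior?
        l_plus = loss_fn(x_adv + fd_eta * (prior + nu * u), y)
        l_minus = loss_fn(x_adv + fd_eta * (prior - nu * u), y)
        prior = prior + prior_lr * (l_plus - l_minus) / nu * u
        # Signed step on the image, kept inside the eps-ball around x and in [0, 1].
        x_adv = (x_adv + img_lr * prior.sign()).clamp(0, 1)
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps)
    return x_adv
```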

Structured Adversarial Attack: Towards General Implementation and Better Interpretability (Attack)

作者:Kaidi Xu, Sijia Liu, Pu Zhao, Pin-Yu Chen, Huan Zhang, Quanfu Fan, Deniz Erdogmus, Yanzhi Wang, Xue Lin(Northeastern University, MIT-IBM Watson AI Lab, University of California)

摘要:When generating adversarial examples to attack deep neural networks (DNNs), Lp norm of the added perturbation is usually used to measure the similarity between original image and adversarial example. However, such adversarial attacks perturbing the raw input spaces may fail to capture structural information hidden in the input. This work develops a more general attack model, i.e., the structured attack (StrAttack), which explores group sparsity in adversarial perturbation by sliding a mask through images aiming for extracting key spatial structures. An ADMM (alternating direction method of multipliers)-based framework is proposed that can split the original problem into a sequence of analytically solvable subproblems and can be generalized to implement other attacking methods. Strong group sparsity is achieved in adversarial perturbations even with the same level of Lp-norm distortion (p∈ {1,2,∞}) as the state-of-the-art attacks. We demonstrate the effectiveness of StrAttack by extensive experimental results on MNIST, CIFAR-10 and ImageNet. We also show that StrAttack provides better interpretability (i.e., better correspondence with discriminative image regions) through adversarial saliency map (Paper-not et al., 2016b) and class activation map (Zhou et al., 2016).

Exploits group sparsity in the adversarial perturbation to capture spatial structure in the input, yielding a structured attack (StrAttack) solved with an ADMM-based framework.

Adversarial Attacks on Graph Neural Networks via Meta Learning (attack)

作者:Daniel Zügner, Stephan Günnemann(Technical University of Munich, Germany)

摘要:Deep learning models for graphs have advanced the state of the art on many tasks. Despite their recent success, little is known about their robustness. We investigate training time attacks on graph neural networks for node classification that perturb the discrete graph structure. Our core principle is to use meta-gradients to solve the bilevel problem underlying training-time attacks, essentially treating the graph as a hyperparameter to optimize. Our experiments show that small graph perturbations consistently lead to a strong decrease in performance for graph convolutional networks, and even transfer to unsupervised embeddings. Remarkably, the perturbations created by our algorithm can misguide the graph neural networks such that they perform worse than a simple baseline that ignores all relational information. Our attacks do not assume any knowledge about or access to the target classifiers.

Uses meta-gradients to mount training-time (poisoning) attacks on graph neural networks.

The Limitations of Adversarial Training and the Blind-Spot Attack

作者:Huan Zhang, Hongge Chen, Zhao Song, Duane Boning, Inderjit S. Dhillon, Cho-Jui Hsieh(UCLA, MIT, UT Austin)

摘要:The adversarial training procedure proposed by Madry et al. (2018) is one of the most effective methods to defend against adversarial examples in deep neural networks (DNNs). In our paper, we shed some lights on the practicality and the hardness of adversarial training by showing that the effectiveness (robustness on test set) of adversarial training has a strong correlation with the distance between a test point and the manifold of training data embedded by the network. Test examples that are relatively far away from this manifold are more likely to be vulnerable to adversarial attacks. Consequentially, an adversarial training based defense is susceptible to a new class of attacks, the “blind-spot attack”, where the input images reside in “blind-spots” (low density regions) of the empirical distribution of training data but is still on the ground-truth data manifold. For MNIST, we found that these blind-spots can be easily found by simply scaling and shifting image pixel values. Most importantly, for large datasets with high dimensional and complex data manifold (CIFAR, ImageNet, etc), the existence of blind-spots in adversarial training makes defending on any valid test examples difficult due to the curse of dimensionality and the scarcity of training data. Additionally, we find that blind-spots also exist on provable defenses including (Kolter & Wong, 2018) and (Sinha et al., 2018) because these trainable robustness certificates can only be practically optimized on a limited set of training data.

Test samples that lie in low-density regions of the training data distribution ("blind spots") are especially vulnerable to adversarial attack.
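
A minimal sketch of the blind-spot idea, assuming a PyTorch `model`: scale and shift the pixel values to move the input into a low-density region (the kind of transform the paper uses for MNIST), then run an ordinary ℓ∞ PGD attack from the shifted image. The PGD loop below is the generic one, not code from the paper, and the scale/shift values are illustrative.

```python
import torch
import torch.nn.functional as F

def blind_spot_attack(model, x, y, alpha=0.7, beta=0.15,
                      eps=0.3, step=0.01, iters=100):
    """Sketch: move the input to a low-density 'blind spot' by scaling and
    shifting pixel values, then run standard l_inf PGD from the shifted image."""
    x_shift = (alpha * x + beta).clamp(0, 1)          # x' = a*x + b, still a valid image
    delta = torch.zeros_like(x_shift, requires_grad=True)
    for _ in range(iters):
        loss = F.cross_entropy(model(x_shift + delta), y)
        loss.backward()
        # standard l_inf PGD step on the perturbation
        delta.data = (delta.data + step * delta.grad.sign()).clamp(-eps, eps)
        delta.data = (x_shift + delta.data).clamp(0, 1) - x_shift
        delta.grad.zero_()
    return (x_shift + delta).detach()
```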

PeerNets: Exploiting Peer Wisdom Against Adversarial Attacks (defense)

作者:Jan Svoboda, Jonathan Masci, Federico Monti, Michael Bronstein, Leonidas Guibas(USI, Imperial College London, Stanford University)

摘要:Deep learning systems have become ubiquitous in many aspects of our lives. Unfortunately, it has been shown that such systems are vulnerable to adversarial attacks, making them prone to potential unlawful uses.
Designing deep neural networks that are robust to adversarial attacks is a fundamental step in making such systems safer and deployable in a broader variety of applications (e.g. autonomous driving), but more importantly is a necessary step to design novel and more advanced architectures built on new computational paradigms rather than marginally building on the existing ones.
In this paper we introduce PeerNets, a novel family of convolutional networks alternating classical Euclidean convolutions with graph convolutions to harness information from a graph of peer samples. This results in a form of non-local forward propagation in the model, where latent features are conditioned on the global structure induced by the graph, that is up to 3 times more robust to a variety of white- and black-box adversarial attacks compared to conventional architectures with almost no drop in accuracy.

Proposes PeerNets, a convolutional architecture that alternates Euclidean convolutions with graph convolutions over peer samples so that latent features are conditioned on global structure, improving robustness to both white-box and black-box attacks.

Query-Efficient Hard-label Black-box Attack: An Optimization-based Approach (black-box attack)

作者:Minhao Cheng, Thong Le, Pin-Yu Chen, Huan Zhang, JinFeng Yi, Cho-Jui Hsieh(University of California, IBM Research AI, JD AI Research)

摘要:We study the problem of attacking machine learning models in the hard-label black-box setting, where no model information is revealed except that the attacker can make queries to probe the corresponding hard-label decisions. This is a very challenging problem since the direct extension of state-of-the-art white-box attacks (e.g., C&W or PGD) to the hard-label black-box setting will require minimizing a non-continuous step function, which is combinatorial and cannot be solved by a gradient-based optimizer. The only two current approaches are based on random walk on the boundary (Brendel et al., 2017) and random trials to evaluate the loss function (Ilyas et al., 2018), which require lots of queries and lacks convergence guarantees.
We propose a novel way to formulate the hard-label black-box attack as a real-valued optimization problem which is usually continuous and can be solved by any zeroth order optimization algorithm. For example, using the Randomized Gradient-Free method (Nesterov & Spokoiny, 2017), we are able to bound the number of iterations needed for our algorithm to achieve stationary points under mild assumptions. We demonstrate that our proposed method outperforms the previous stochastic approaches to attacking convolutional neural networks on MNIST, CIFAR, and ImageNet datasets. More interestingly, we show that the proposed algorithm can also be used to attack other discrete and non-continuous machine learning models, such as Gradient Boosting Decision Trees (GBDT).

Formulates the hard-label black-box attack as a real-valued, continuous optimization problem that can be solved with zeroth-order methods.
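
A rough sketch of this optimization-based formulation, following the description in the abstract rather than the authors' code: g(θ) is the distance from x0 to the decision boundary along direction θ, evaluated by binary search over hard-label queries, and a randomized gradient-free step then updates θ. Here `is_adv` is a hypothetical oracle that returns True when a queried point is misclassified.

```python
import numpy as np

def boundary_distance(is_adv, x0, theta, lo=0.0, hi=10.0, tol=1e-3):
    """g(theta): distance from x0 to the decision boundary along direction theta,
    found by binary search using only hard-label queries."""
    theta = theta / np.linalg.norm(theta)
    while not is_adv(x0 + hi * theta):       # expand until an adversarial point is found
        hi *= 2.0
        if hi > 1e3:
            return np.inf                    # no adversarial point along this direction
    while hi - lo > tol:                     # binary search for the boundary
        mid = (lo + hi) / 2.0
        if is_adv(x0 + mid * theta):
            hi = mid
        else:
            lo = mid
    return hi

def rgf_step(is_adv, x0, theta, sigma=1e-2, lr=0.2):
    """One randomized gradient-free step on g(theta) (simplified sketch)."""
    u = np.random.randn(*theta.shape)
    g0 = boundary_distance(is_adv, x0, theta)
    g1 = boundary_distance(is_adv, x0, theta + sigma * u)
    grad_est = (g1 - g0) / sigma * u         # finite-difference gradient estimate
    return theta - lr * grad_est
```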

2020ICLR

Enhancing Transformation-Based Defenses Against Adversarial Attacks with a Distribution Classifier (defense)

作者:Connie Kou, Hwee Kuan Lee, Ee-Chien Chang, Teck Khim Ng(School of Computing, National University of Singapore)

摘要:Adversarial attacks on convolutional neural networks (CNN) have gained significant attention and there have been active research efforts on defense mechanisms. Stochastic input transformation methods have been proposed, where the idea is to recover the image from adversarial attack by random transformation, and to take the majority vote as consensus among the random samples. However, the transformation improves the accuracy on adversarial images at the expense of the accuracy on clean images. While it is intuitive that the accuracy on clean images would deteriorate, the exact mechanism in which how this occurs is unclear. In this paper, we study the distribution of softmax induced by stochastic transformations. We observe that with random transformations on the clean images, although the mass of the softmax distribution could shift to the wrong class, the resulting distribution of softmax could be used to correct the prediction. Furthermore, on the adversarial counterparts, with the image transformation, the resulting shapes of the distribution of softmax are similar to the distributions from the clean images. With these observations, we propose a method to improve existing transformation-based defenses. We train a separate lightweight distribution classifier to recognize distinct features in the distributions of softmax outputs of transformed images. Our empirical studies show that our distribution classifier, by training on distributions obtained from clean images only, outperforms majority voting for both clean and adversarial images. Our method is generic and can be integrated with existing transformation-based defenses.

Studies how stochastic input transformations reshape the softmax distribution and trains a separate lightweight classifier on those distributions to correct predictions, instead of taking a majority vote.
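
A hedged sketch of the structure of this defense: collect the softmax outputs of a stochastically transformed input and feed summary features of that distribution to a small classifier trained only on clean images. `random_transform` stands in for whatever stochastic transformation defense is being enhanced, and the mean/std summary below is a simplification of the distribution features explored in the paper.

```python
import torch
import torch.nn.functional as F

def softmax_distribution(model, x, random_transform, n_samples=50):
    """Distribution of softmax outputs over random input transformations."""
    with torch.no_grad():
        probs = torch.stack([F.softmax(model(random_transform(x)), dim=-1)
                             for _ in range(n_samples)])        # (n_samples, B, C)
    return probs.permute(1, 0, 2)                               # (B, n_samples, C)

class DistributionClassifier(torch.nn.Module):
    """Lightweight classifier on top of the softmax distribution (a sketch;
    the paper explores several distribution-classifier architectures)."""
    def __init__(self, n_classes, hidden=64):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(2 * n_classes, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, n_classes))

    def forward(self, probs):                # probs: (B, n_samples, C)
        feats = torch.cat([probs.mean(dim=1), probs.std(dim=1)], dim=-1)
        return self.net(feats)               # replaces majority voting
```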

Mixup Inference: Better Exploiting Mixup to Defend Adversarial Attacks (defense)

作者:Tianyu Pang, Kun Xu, Jun Zhu(Dept. of Comp. Sci. & Tech., BNRist Center, Institute for AI, Tsinghua University)

摘要:It has been widely recognized that adversarial examples can be easily crafted to fool deep networks, which mainly root from the locally non-linear behavior nearby input examples. Applying mixup in training provides an effective mechanism to improve generalization performance and model robustness against adversarial perturbations, which introduces the globally linear behavior in-between training examples. However, in previous work, the mixup-trained models only passively defend adversarial attacks in inference by directly classifying the inputs, where the induced global linearity is not well exploited. Namely, since the locality of the adversarial perturbations, it would be more efficient to actively break the locality via the globality of the model predictions. Inspired by simple geometric intuition, we develop an inference principle, named mixup inference (MI), for mixup-trained models. MI mixups the input with other random clean samples, which can shrink and transfer the equivalent perturbation if the input is adversarial. Our experiments on CIFAR-10 and CIFAR-100 demonstrate that MI can further improve the adversarial robustness for the models trained by mixup and its variants.

Exploits the global linearity induced by mixup training to break the locality of adversarial perturbations at inference time.
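
A minimal sketch of Mixup Inference as described in the abstract: mix the input with randomly drawn clean samples and average the resulting predictions. `clean_pool` is a hypothetical tensor of clean images, and the mixing ratio and sample count are illustrative.

```python
import torch
import torch.nn.functional as F

def mixup_inference(model, x, clean_pool, lam=0.6, n_samples=15):
    """Mixup Inference (MI) sketch: mixing the input with random clean samples
    shrinks and transfers a locally crafted adversarial perturbation."""
    with torch.no_grad():
        preds = []
        for _ in range(n_samples):
            idx = torch.randint(len(clean_pool), (x.size(0),))
            xs = clean_pool[idx].to(x.device)            # random clean samples
            preds.append(F.softmax(model(lam * x + (1 - lam) * xs), dim=-1))
    return torch.stack(preds).mean(dim=0)                # averaged prediction
```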

MACER: Attack-free and Scalable Robust Training via Maximizing Certified Radius (defense)

作者:Runtian Zhai, Chen Dan, Di He, Huan Zhang, Boqing Gong, Pradeep Ravikumar, Cho-Jui Hsieh, Liwei Wang(Peking University, CMU, UCLA, Google)

摘要: Adversarial training is one of the most popular ways to learn robust models but is usually attack-dependent and time costly. In this paper, we propose the MACER algorithm, which learns robust models without using adversarial training but performs better than all existing provable l2-defenses. Recent work shows that randomized smoothing can be used to provide a certified l2 radius to smoothed classifiers, and our algorithm trains provably robust smoothed classifiers via MAximizing the CErtified Radius (MACER). The attack-free characteristic makes MACER faster to train and easier to optimize. In our experiments, we show that our method can be applied to modern deep neural networks on a wide range of datasets, including Cifar-10, ImageNet, MNIST, and SVHN. For all tasks, MACER spends less training time than state-of-the-art adversarial training algorithms, and the learned models achieve larger average certified radius.

Trains provably robust smoothed classifiers by directly maximizing the certified radius, with no adversarial training required.
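
For context, the quantity MACER maximizes (via a soft surrogate) is the standard ℓ2 certified radius of a randomized-smoothing classifier; a small helper for that radius, assuming top-class and runner-up probabilities `p_a` and `p_b` under Gaussian noise of scale `sigma`, is shown below.

```python
from scipy.stats import norm

def certified_radius(p_a, p_b, sigma):
    """l2 certified radius of a smoothed classifier:
    R = sigma / 2 * (Phi^-1(p_a) - Phi^-1(p_b))."""
    return 0.5 * sigma * (norm.ppf(p_a) - norm.ppf(p_b))

# e.g. certified_radius(0.9, 0.05, sigma=0.25) ≈ 0.366
```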

Black-Box Adversarial Attack with Transferable Model-based Embedding (black-box attack)

作者:Zhichao Huang, Tong Zhang(The Hong Kong University of Science and Technology)

摘要:We present a new method for black-box adversarial attack. Unlike previous methods that combined transfer-based and scored-based methods by using the gradient or initialization of a surrogate white-box model, this new method tries to learn a low-dimensional embedding using a pretrained model, and then performs efficient search within the embedding space to attack an unknown target network. The method produces adversarial perturbations with high level semantic patterns that are easily transferable. We show that this approach can greatly improve the query efficiency of black-box adversarial attack across different target network architectures. We evaluate our approach on MNIST, ImageNet and Google Cloud Vision API, resulting in a significant reduction on the number of queries. We also attack adversarially defended networks on CIFAR10 and ImageNet, where our method not only reduces the number of queries, but also improves the attack success rate.

A black-box adversarial attack that combines transfer-based and score-based approaches, improving both attack success rate and query efficiency.

A Target-Agnostic Attack on Deep Models: Exploiting Security Vulnerabilities of Transfer Learning (black-box attack, evaluated on face recognition)

作者:Shahbaz Rezaei, Xin Liu(Department of Computer Science, University of California)

摘要: Due to insufficient training data and the high computational cost to train a deep neural network from scratch, transfer learning has been extensively used in many deep-neural-network-based applications. A commonly used transfer learning approach involves taking a part of a pre-trained model, adding a few layers at the end, and re-training the new layers with a small dataset. This approach, while efficient and widely used, imposes a security vulnerability because the pre-trained model used in transfer learning is usually publicly available, including to potential attackers. In this paper, we show that without any additional knowledge other than the pre-trained model, an attacker can launch an effective and efficient brute force attack that can craft instances of input to trigger each target class with high confidence. We assume that the attacker has no access to any target-specific information, including samples from target classes, re-trained model, and probabilities assigned by Softmax to each class, and thus making the attack target-agnostic. These assumptions render all previous attack models inapplicable, to the best of our knowledge. To evaluate the proposed attack, we perform a set of experiments on face recognition and speech recognition tasks and show the effectiveness of the attack. Our work reveals a fundamental security weakness of the Softmax layer when used in transfer learning settings.

Argues that transfer learning is vulnerable because the pre-trained models it builds on are publicly available, and proposes a target-agnostic attack that needs no samples from the target classes, no access to the fine-tuned model, and no per-class softmax probabilities.

Sign Bits Are All You Need for Black-Box Attacks (black-box attack)

作者:Abdullah Al-Dujaili, Una-May O’Reilly(CSAIL, MIT)

摘要:We present a novel black-box adversarial attack algorithm with state-of-the-art model evasion rates for query efficiency under ℓ∞ and ℓ2 metrics. It exploits a sign-based, rather than magnitude-based, gradient estimation approach that shifts the gradient estimation from continuous to binary black-box optimization. It adaptively constructs queries to estimate the gradient, one query relying upon the previous, rather than re-estimating the gradient each step with random query construction. Its reliance on sign bits yields a smaller memory footprint and it requires neither hyperparameter tuning nor dimensionality reduction. Further, its theoretical performance is guaranteed and it can characterize adversarial subspaces better than white-box gradient-aligned subspaces. On two public black-box attack challenges and a model robustly trained against transfer attacks, the algorithm’s evasion rates surpass all submitted attacks. For a suite of published models, the algorithm is 3.8× less failure-prone while spending 2.5× fewer queries versus the best combination of state of art algorithms. For example, it evades a standard MNIST model using just 12 queries on average. Similar performance is observed on a standard IMAGENET model with an average of 579 queries.

A black-box attack that relies only on the sign of the gradient, estimated via binary black-box optimization, rather than on gradient magnitudes.
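
A simplified sketch of the sign-based idea (not the paper's exact adaptive algorithm): search directly over the sign vector of an ℓ∞ perturbation, flipping progressively smaller chunks of signs and keeping a flip only when the black-box loss improves. `query_loss` is a hypothetical oracle returning the loss of the true class.

```python
import numpy as np

def sign_search_sketch(x, y, query_loss, eps=0.05, n_queries=500):
    """Simplified sign-based black-box attack: optimize the sign pattern of an
    l_inf perturbation via chunk flips (hedged sketch, not the paper's code)."""
    n = x.size
    s = np.ones(n)                                   # start with all-plus signs
    best = query_loss(np.clip(x + eps * s.reshape(x.shape), 0, 1), y)
    queries, chunk, start = 1, n, 0
    while queries < n_queries:
        s[start:start + chunk] *= -1                 # flip one chunk of signs
        loss = query_loss(np.clip(x + eps * s.reshape(x.shape), 0, 1), y)
        queries += 1
        if loss > best:
            best = loss                              # keep the flip
        else:
            s[start:start + chunk] *= -1             # revert
        start += chunk
        if start >= n:                               # finished a pass: halve the chunk size
            start, chunk = 0, max(chunk // 2, 1)
    return np.clip(x + eps * s.reshape(x.shape), 0, 1)
```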

Sign-OPT: A Query-Efficient Hard-label Adversarial Attack (black-box attack)

作者:Minhao Cheng, Simranjit Singh, Patrick H. Chen, Pin-Yu Chen, Sijia Liu, Cho-Jui Hsieh(Department of Computer Science, UCLA, IBM Research)

摘要:We study the most practical problem setup for evaluating adversarial robustness of a machine learning system with limited access: the hard-label black-box attack setting for generating adversarial examples, where limited model queries are allowed and only the decision is provided to a queried data input. Several algorithms have been proposed for this problem but they typically require huge amount (>20,000) of queries for attacking one example. Among them, one of the state-of-the-art approaches (Cheng et al., 2019) showed that hard-label attack can be modeled as an optimization problem where the objective function can be evaluated by binary search with additional model queries, thereby a zeroth order optimization algorithm can be applied. In this paper, we adopt the same optimization formulation but propose to directly estimate the sign of gradient at any direction instead of the gradient itself, which enjoys the benefit of single query.
Using this single query oracle for retrieving sign of directional derivative, we develop a novel query-efficient Sign-OPT approach for hard-label black-box attack. We provide a convergence analysis of the new algorithm and conduct experiments on several models on MNIST, CIFAR-10 and ImageNet.
We find that Sign-OPT attack consistently requires 5X to 10X fewer queries when compared to the current state-of-the-art approaches, and usually converges to an adversarial example with smaller perturbation.

Improves the query efficiency of hard-label black-box attacks by estimating only the sign of directional derivatives, each with a single query.
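
A sketch of the single-query sign oracle described in the abstract, reusing the boundary-distance formulation g(θ) from the hard-label attack sketch above. `is_adv` is again a hypothetical hard-label oracle, `theta` is assumed normalized, and `g_theta` is the current boundary distance along `theta`.

```python
import numpy as np

def signopt_grad_estimate(is_adv, x0, theta, g_theta, n_dirs=20, eps=1e-3):
    """Sign-OPT-style sketch: estimate the gradient of g(theta) from only the
    SIGN of directional derivatives, one hard-label query per random direction:
    the point x0 + g(theta) * (theta + eps*u)/||theta + eps*u|| is already
    adversarial iff g decreased along u."""
    grad = np.zeros_like(theta)
    for _ in range(n_dirs):
        u = np.random.randn(*theta.shape)
        new_theta = theta + eps * u
        new_theta /= np.linalg.norm(new_theta)
        sign = -1.0 if is_adv(x0 + g_theta * new_theta) else 1.0  # one query per sign
        grad += sign * u
    return grad / n_dirs
```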

Nesterov Accelerated Gradient and Scale Invariance for Adversarial Attacks (attack)

作者:Jiadong Lin, Chuanbiao Song, Kun He(Huazhong University of Science and Technology) Liwei Wang(Peking University) John E. Hopcroft(Cornell University)

摘要:Deep learning models are vulnerable to adversarial examples crafted by applying human-imperceptible perturbations on benign inputs. However, under the black-box setting, most existing adversaries often have a poor transferability to attack other defense models. In this work, from the perspective of regarding the adversarial example generation as an optimization process, we propose two new methods to improve the transferability of adversarial examples, namely Nesterov Iterative Fast Gradient Sign Method (NI-FGSM) and Scale-Invariant attack Method (SIM). NI-FGSM aims to adapt Nesterov accelerated gradient into the iterative attacks so as to effectively look ahead and improve the transferability of adversarial examples. While SIM is based on our discovery on the scale-invariant property of deep learning models, for which we leverage to optimize the adversarial perturbations over the scale copies of the input images so as to avoid "overfitting” on the white-box model being attacked and generate more transferable adversarial examples. NI-FGSM and SIM can be naturally integrated to build a robust gradient-based attack to generate more transferable adversarial examples against the defense models. Empirical results on ImageNet dataset demonstrate that our attack methods exhibit higher transferability and achieve higher attack success rates than state-of-the-art gradient-based attacks.

Proposes NI-FGSM and SIM to improve the transferability of adversarial examples for image classification.
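
A hedged sketch combining the two ideas as the abstract describes them: the gradient is taken at a Nesterov look-ahead point, and gradients are averaged over scaled copies x/2^i before the momentum update. Hyperparameter values and the exact gradient normalization here are illustrative, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def ni_si_fgsm(model, x, y, eps=16 / 255, steps=10, mu=1.0, m=5):
    """Sketch of NI-FGSM combined with the Scale-Invariant Method (SIM)."""
    alpha = eps / steps
    g = torch.zeros_like(x)
    x_adv = x.clone()
    for _ in range(steps):
        x_nes = (x_adv + alpha * mu * g).detach().requires_grad_(True)   # look-ahead point
        # scale-invariant gradient: average over scaled copies x / 2^i
        loss = sum(F.cross_entropy(model(x_nes / (2 ** i)), y) for i in range(m))
        grad = torch.autograd.grad(loss, x_nes)[0] / m
        g = mu * g + grad / (grad.abs().mean() + 1e-12)   # momentum with normalized gradient
        x_adv = x_adv.detach() + alpha * g.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)           # project into the l_inf ball
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```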

EMPIR: Ensembles of Mixed Precision Deep Networks for Increased Robustness Against Adversarial Attacks (defense)

作者:Sanchari Sen, Balaraman Ravindran, Anand Raghunathan(Purdue University)

摘要:Ensuring robustness of Deep Neural Networks (DNNs) is crucial to their adoption in safety-critical applications such as self-driving cars, drones, and healthcare. Notably, DNNs are vulnerable to adversarial attacks in which small input perturbations can produce catastrophic misclassifications. In this work, we propose EMPIR, ensembles of quantized DNN models with different numerical precisions, as a new approach to increase robustness against adversarial attacks. EMPIR is based on the observation that quantized neural networks often demonstrate much higher robustness to adversarial attacks than full precision networks, but at the cost of a substantial loss in accuracy on the original (unperturbed) inputs. EMPIR overcomes this limitation to achieve the "best of both worlds", i.e., the higher unperturbed accuracies of the full precision models combined with the higher robustness of the low precision models, by composing them in an ensemble. Further, as low precision DNN models have significantly lower computational and storage requirements than full precision models, EMPIR models only incur modest compute and memory overheads compared to a single full-precision model (<25% in our evaluations). We evaluate EMPIR across a suite of DNNs for 3 different image recognition tasks (MNIST, CIFAR-10 and ImageNet) and under 4 different adversarial attacks. Our results indicate that EMPIR boosts the average adversarial accuracies by 42.6%, 15.2% and 10.5% for the DNN models trained on the MNIST, CIFAR-10 and ImageNet datasets respectively, when compared to single full-precision models, without sacrificing accuracy on the unperturbed inputs.

Improves robustness to adversarial examples by ensembling DNN models quantized to different numerical precisions.
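
A minimal sketch of the ensemble-style prediction, assuming `models` is a hypothetical list containing one full-precision network and one or more quantized copies of it; EMPIR's exact combination rule may differ from the plain softmax averaging used here.

```python
import torch
import torch.nn.functional as F

def mixed_precision_predict(models, x):
    """Average the softmax outputs of an ensemble of models at different
    numerical precisions (sketch of an EMPIR-style ensemble prediction)."""
    with torch.no_grad():
        probs = torch.stack([F.softmax(m(x), dim=-1) for m in models])
    return probs.mean(dim=0).argmax(dim=-1)
```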

BayesOpt Adversarial Attack (black-box attack)

作者:Binxin Ru, Adam Cobb, Arno Blaas, Yarin Gal(University of Oxford)

摘要:Black-box adversarial attacks require a large number of attempts before finding successful adversarial examples that are visually indistinguishable from the original input. Current approaches relying on substitute model training, gradient estimation or genetic algorithms often require an excessive number of queries. Therefore, they are not suitable for real-world systems where the maximum query number is limited due to cost. We propose a query-efficient black-box attack which uses Bayesian optimisation in combination with Bayesian model selection to optimise over the adversarial perturbation and the optimal degree of search space dimension reduction. We demonstrate empirically that our method can achieve comparable success rates with 2-5 times fewer queries compared to previous state-of-the-art black-box attacks.

Improves the query efficiency of black-box attacks by combining Bayesian optimisation with Bayesian model selection over the adversarial perturbation and the degree of search-space dimension reduction.

Query-efficient Meta Attack to Deep Neural Networks (black-box attack)

作者:Jiawei Du, Hu Zhang, Joey Tianyi Zhou, Yi Yang, Jiashi Feng(Dept. ECE, National University of Singapore, Singapore)

摘要:Black-box attack methods aim to infer suitable attack patterns to targeted DNN models by only using output feedback of the models and the corresponding input queries. However, due to lack of prior and inefficiency in leveraging the query and feedback information, existing methods are mostly query-intensive for obtaining effective attack patterns. In this work, we propose a meta attack approach that is capable of attacking a targeted model with much fewer queries. Its high query-efficiency stems from effective utilization of meta learning approaches in learning generalizable prior abstraction from the previously observed attack patterns and exploiting such prior to help infer attack patterns from only a few queries and outputs. Extensive experiments on MNIST, CIFAR10 and tiny-Imagenet demonstrate that our meta-attack method can remarkably reduce the number of model queries without sacrificing the attack performance. Besides, the obtained meta attacker is not restricted to a particular model but can be used easily with a fast adaptive ability to attack a variety of models. Our code will be released to the public.

Uses meta learning to extract a prior from previously observed attack patterns, improving the query efficiency of black-box attacks.

Optimal Strategies Against Generative Attacks (attack + defense, highly theoretical)

作者:Roy Mor, Erez Peterfreund, Matan Gavish, Amir Globerson(Tel Aviv University)

摘要:Generative neural models have improved dramatically recently. With this progress comes the risk that such models will be used to attack systems that rely on sensor data for authentication and anomaly detection. Many such learning systems are installed worldwide, protecting critical infrastructure or private data against malfunction and cyber attacks. We formulate the scenario of such an authentication system facing generative impersonation attacks, characterize it from a theoretical perspective and explore its practical implications. In particular, we ask fundamental theoretical questions in learning, statistics and information theory: How hard is it to detect a “fake reality”? How much data does the attacker need to collect before it can reliably generate nominally-looking artificial data? Are there optimal strategies for the attacker or the authenticator? We cast the problem as a maximin game, characterize the optimal strategy for both attacker and authenticator in the general case, and provide the optimal strategies in closed form for the case of Gaussian source distributions. Our analysis reveals the structure of the optimal attack and the relative importance of data collection for both authenticator and attacker. Based on these insights we design practical learning approaches and show that they result in models that are more robust to various attacks on real-world data.

Studies attackers that observe real samples (e.g., face images) and generate impersonating data, casting the attacker-authenticator interaction as a maximin game and characterizing optimal strategies for both sides.

Defending Against Physically Realizable Attacks on Image Classification (defense)

作者:Tong Wu, Liang Tong, Yevgeniy Vorobeychik(Washington University)

摘要:We study the problem of defending deep neural network approaches for image classification from physically realizable attacks. First, we demonstrate that the two most scalable and effective methods for learning robust models, adversarial training with PGD attacks and randomized smoothing, exhibit very limited effectiveness against three of the highest profile physical attacks. Next, we propose a new abstract adversarial model, rectangular occlusion attacks, in which an adversary places a small adversarially crafted rectangle in an image, and develop two approaches for efficiently computing the resulting adversarial examples. Finally, we demonstrate that adversarial training using our new attack yields image classification models that exhibit high robustness against the physically realizable attacks we study, offering the first effective generic defense against such attacks.

Defends image classification against physically realizable attacks: proposes the rectangular occlusion attack, in which a small adversarially crafted rectangle is placed in the image, and shows that adversarial training against it yields strong robustness to the physical attacks studied.
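
A rough sketch of a rectangular occlusion attack, simplified from the abstract (the paper also develops more efficient ways to choose the rectangle's location than the brute-force grid used here): slide a small rectangle over a coarse grid, optimize its contents with PGD, and keep the worst-case placement; adversarial training then minimizes the loss on these occluded images.

```python
import torch
import torch.nn.functional as F

def rectangular_occlusion(model, x, y, h=20, w=20, stride=10, pgd_steps=30, lr=0.1):
    """Sketch: adversarial rectangle placed on a coarse grid of locations,
    contents optimized with PGD; returns the worst-case occluded images."""
    _, _, H, W = x.shape
    best_loss, best_adv = -1.0, x
    for top in range(0, H - h + 1, stride):
        for left in range(0, W - w + 1, stride):
            patch = torch.rand(x.size(0), x.size(1), h, w,
                               device=x.device, requires_grad=True)
            for _ in range(pgd_steps):                 # optimize the rectangle contents
                x_adv = x.clone()
                x_adv[:, :, top:top + h, left:left + w] = patch
                loss = F.cross_entropy(model(x_adv), y)
                grad, = torch.autograd.grad(loss, patch)
                patch = (patch + lr * grad.sign()).clamp(0, 1).detach().requires_grad_(True)
            if loss.item() > best_loss:                # keep the worst placement
                best_loss, best_adv = loss.item(), x_adv.detach()
    return best_adv
```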
