- When Does Machine Learning FAIL? Generalized Transferability for Evasion and Poisoning Attacks (paper notes)
When Does Machine Learning FAIL? Generalized Transferability for Evasion and Poisoning Attacks (paper notes). This ...
- Poison Ink: Robust and Invisible Backdoor Attack (paper notes)
1. Paper information. Title: Poison Ink: Robust and Invisible Backdoor Attack. Author: Jie Zhang (University of Science and Technology of China). Venue/Publisher: IEEE Tra ...
- SybilFuse: Combining Local Attributes with Global Structure to Perform Robust Sybil Detection (paper notes)
SybilFuse: Combining Local Attributes with Global Structure to Perform Robust Sybil Detection. 1. Input data ...
- [Paper reading] (02) SP2019 - Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks
Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks ...
- DBA: Distributed Backdoor Attacks against Federated Learning (paper notes)
Authors: Chulin Xie, Keli Huang, Pin-Yu Chen, Bo Li. Venue: ICLR 2020. Published: May 26, 2020. Background: Federated learning can aggregate the information contributed by multiple parties, ...
- Backdoor triggers from a frequency-domain perspective: Rethinking the Backdoor Attacks' Triggers: A Frequency Perspective
Rethinking the Backdoor Attacks' Triggers: A Frequency Perspective. Not yet formally published; available on arXiv (paper link). This paper points out that existing backdoor attacks ...
- Anti-knowledge-distillation backdoor attack: Anti-Distillation Backdoor Attacks: Backdoors Can Really Survive in Knowledge Distillation
Ge, Yunjie, et al. "Anti-Distillation Backdoor Attacks: Backdoors Can Really Survive in Knowled ...
- (Translation) DBA: DISTRIBUTED BACKDOOR ATTACKS AGAINST FEDERATED LEARNING
Abstract: Backdoor attacks aim to manipulate a subset of the training data by injecting adversarial triggers, so that a machine learning model trained on the tampered dataset makes arbitrary (targeted) mispredictions on test inputs embedded with the same trigger. Although federated learning (FL) can aggregate information contributed by different parties ...
- DBA: DISTRIBUTED BACKDOOR ATTACKS AGAINST FEDERATED LEARNING (reading notes)
DBA: DISTRIBUTED BACKDOOR ATTACKS AGAINST FEDERATED LEARNING. This paper was published at ICLR 2020 and presents a backdoor attack against federated learning. The proposed scheme targets ...