Volume 50, Issue 2, Feb. 2024
LI M H, JIN S, DU Y. Adversarial attack method based on loss smoothing[J]. Journal of Beijing University of Aeronautics and Astronautics, 2024, 50(2): 663-670 (in Chinese). doi: 10.13700/j.bh.1001-5965.2022.0478

Adversarial attack method based on loss smoothing

doi: 10.13700/j.bh.1001-5965.2022.0478
Funds:  National Key R&D Program of China (2020YFB2103800,2020YFB2103802)
  • Corresponding author: E-mail:mhli1@bjtu.edu.cn
  • Received Date: 11 Jun 2022
  • Accepted Date: 06 Sep 2022
  • Available Online: 16 Sep 2022
  • Publish Date: 15 Sep 2022
Abstract: Deep neural networks (DNNs) are susceptible to attacks from adversarial examples. Most existing momentum-based adversarial attack methods achieve nearly 100% attack success rates in the white-box setting but only relatively low success rates in the black-box setting. An adversarial attack method based on loss smoothing is proposed to further improve adversarial transferability. By integrating a locally averaged gradient term into the iterative attack process, the method suppresses local oscillation of the loss surface, stabilizes the update direction, and escapes poor local maxima. Empirical results on the standard ImageNet dataset show that the proposed method improves adversarial transferability over existing methods by 38.07% and 27.77% in the single-model setting, and by 32.50% and 28.63% in the ensemble-model setting.
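The idea described above — a momentum iterative attack whose update uses a gradient averaged over random points near the current iterate — can be sketched in a minimal, self-contained form. This is an illustrative sketch only, not the paper's implementation: the toy loss, the sampling radius, and all parameter names are assumptions, and a finite-difference gradient stands in for backpropagation through a real classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(x):
    # Toy surrogate loss standing in for the classifier's cross-entropy.
    return float(np.sum(np.sin(3 * x) + 0.5 * x ** 2))

def grad(x, h=1e-4):
    # Central finite-difference gradient of the toy loss.
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (loss(x + e) - loss(x - e)) / (2 * h)
    return g

def smoothed_momentum_attack(x, eps=0.3, steps=10, mu=1.0,
                             n_samples=5, radius=0.1):
    """Momentum iterative attack with a locally averaged gradient term.

    Averaging gradients sampled in a small neighborhood damps local
    oscillation of the loss surface and stabilizes the update direction,
    in the spirit of the loss-smoothing idea the abstract describes.
    """
    alpha = eps / steps          # per-step budget
    x_adv = x.copy()
    momentum = np.zeros_like(x)
    for _ in range(steps):
        # Locally averaged gradient over random neighbors of x_adv.
        g_avg = np.mean(
            [grad(x_adv + rng.uniform(-radius, radius, x.shape))
             for _ in range(n_samples)], axis=0)
        # Standard momentum accumulation with L1-normalized gradient.
        momentum = mu * momentum + g_avg / (np.sum(np.abs(g_avg)) + 1e-12)
        x_adv = x_adv + alpha * np.sign(momentum)
        # Project back into the eps-ball around the clean input.
        x_adv = x + np.clip(x_adv - x, -eps, eps)
    return x_adv

x0 = np.zeros(4)
x_adv = smoothed_momentum_attack(x0)
print(np.max(np.abs(x_adv - x0)) <= 0.3 + 1e-9)  # perturbation stays bounded
print(loss(x_adv) > loss(x0))                    # loss has increased
```

Setting `n_samples=1` and `radius=0` recovers a plain momentum iterative attack, which makes the averaged-gradient term easy to ablate in this sketch.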

