
A conjugate gradient attack on video classification models

Adversarial attacks on videos based on the conjugate gradient method

  • Abstract: Video classification models based on deep neural networks are now widely deployed, yet recent research shows that deep neural networks are easily fooled by adversarial examples. Such adversarial examples carry noise that is imperceptible to humans, and their existence poses a serious threat to the security of deep neural networks. Although a considerable body of work has studied adversarial examples for images, adversarial attacks on videos remain complex. The usual adversarial attack employs the fast gradient sign method (FGSM), but the adversarial examples it generates have a low attack success rate and are easily noticed, so they lack concealment. To address these two problems, and inspired by the nonlinear conjugate gradient descent method (FR–CG), this paper proposes a nonlinear conjugate gradient attack on video models. By relaxing the constraints so that the search step size satisfies the strong Wolfe conditions, the method guarantees that the search direction at each iteration agrees with the direction in which the loss of the objective function increases. Experiments on UCF-101 show that, with the perturbation upper bound set to 3/255, the proposed attack achieves a 91% attack success rate. The method also achieves a higher attack success rate than FGSM under every perturbation bound, offers stronger concealment, and strikes a good balance between attack success rate and running time.
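For reference, the FGSM baseline that the abstract compares against can be written in a few lines. The following is a minimal sketch assuming a PyTorch video classifier; the function name, tensor layout, and value range are illustrative assumptions, not the paper's exact setup.

import torch
import torch.nn.functional as F

def fgsm_attack(model, video, label, eps=3/255):
    """One-step FGSM: perturb every frame along the sign of the loss gradient.

    `video` is assumed to be a float tensor in [0, 1], e.g. of shape
    (1, T, C, H, W); the layout only needs to match what `model` expects.
    """
    video = video.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(video), label)
    loss.backward()
    # Untargeted attack: step in the direction that increases the loss.
    adv = video + eps * video.grad.sign()
    return adv.clamp(0.0, 1.0).detach()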

     

    Abstract: Deep neural network-based video classification models are widely used because of their strong performance on visual tasks, but their broad deployment raises serious security concerns. Recent research shows that these models are highly susceptible to deception by adversarial examples: inputs laced with noise that is imperceptible to humans yet poses a substantial threat to the integrity and security of deep neural networks. Considerable research has addressed image-based adversarial examples, producing notable advances in understanding and defending against such attacks. Video-based adversarial attacks, however, present additional complexities: motion information, temporal coherence, and frame-to-frame correlation add further dimensions that call for purpose-built solutions. The most straightforward adversarial attack uses the fast gradient sign method (FGSM), but FGSM falls short in several respects: its attack success rate is unsatisfactory, and its perturbations are easily noticed, so it lacks stealth. To address these problems, this study proposes a nonlinear conjugate gradient attack on video classification models, inspired by the nonlinear conjugate gradient descent method (FR–CG). By relaxing the constraints so that the search step size satisfies the strong Wolfe conditions, the method guarantees that the search direction at every iteration agrees with the direction in which the loss of the objective function increases, which allows the attack to achieve both a high success rate and good concealment. Experiments on the UCF-101 dataset show that, with the perturbation upper bound set to 3/255, the proposed method achieves a 91% attack success rate. It also outperforms FGSM in attack success rate under every perturbation bound tested, offers stronger stealth, and strikes an effective balance between attack success rate and running time. By generating video adversarial examples from an optimization perspective, this work is a step toward robust, reliable, and efficient techniques for understanding adversarial attacks on deep neural network-based video classification models.
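To make the described approach concrete, the sketch below illustrates a Fletcher–Reeves-style conjugate gradient attack on a video classifier. It is a minimal sketch under stated assumptions, not the authors' implementation: the PyTorch model interface, the L∞ projection step, and the simple backtracking line search (used here as a stand-in for the paper's strong-Wolfe step-size rule) are all illustrative choices, and the hyper-parameters are placeholders.

import torch
import torch.nn.functional as F

def loss_and_grad(model, x, label):
    """Classification loss and its gradient with respect to the input."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    return loss.item(), x.grad.detach()

def fr_cg_attack(model, video, label, eps=3/255, iters=10, alpha0=1e-2):
    x0 = video.clone().detach()
    x = x0.clone()
    _, g = loss_and_grad(model, x, label)
    d = g.clone()  # first search direction: the gradient (ascent on the loss)
    for _ in range(iters):
        f_old, _ = loss_and_grad(model, x, label)
        # Backtracking line search: accept the largest step that still increases
        # the loss (a simple stand-in for the strong Wolfe conditions).
        alpha, x_new = alpha0, x
        while alpha > 1e-6:
            cand = torch.max(torch.min(x + alpha * d, x0 + eps), x0 - eps)
            cand = cand.clamp(0.0, 1.0)          # keep a valid video in [0, 1]
            f_new, _ = loss_and_grad(model, cand, label)
            if f_new > f_old:
                x_new = cand
                break
            alpha *= 0.5
        x = x_new
        # Fletcher-Reeves coefficient and conjugate-direction update.
        g_new = loss_and_grad(model, x, label)[1]
        beta = (g_new * g_new).sum() / (g * g).sum().clamp_min(1e-12)
        d = g_new + beta * d
        g = g_new
    return x.detach()

Compared with the one-step FGSM baseline above, each iteration reuses curvature information through the conjugate direction, which is the property the abstract credits for the higher success rate at the same perturbation bound.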

     

