
Network intrusion detection technology based on DeepInsight and transfer learning

  • Abstract: To address the problems of limited training samples and class imbalance in intrusion detection research, this paper proposes DI–TL–CNN (DeepInsight–transfer learning–convolutional neural network), an intrusion detection method based on DeepInsight and transfer learning. The process of using the DeepInsight method to convert intrusion data into an image dataset suitable as input to a CNN model is analyzed; a training method based on the VGG16 model is studied, and transfer learning is further applied to intrusion detection in the target domain. Six transfer schemes, obtained by freezing and fine-tuning the parameters of different modules of the CNN model, are compared, and an optimized scheme is derived from experiments on the dataset. An imbalanced dataset based on UNSW-NB15 is used to validate the method through network intrusion detection analysis, confirming the correctness of the proposed DI–TL–CNN method. Further comparative experiments against other methods show that DI–TL–CNN is better suited to intrusion detection with small and imbalanced samples; it outperforms the other detection methods on performance metrics such as accuracy and recall, and shows good application prospects.

     

    Abstract: In the dynamic field of the internet in modern life, networks are increasingly vulnerable to a diverse range of cyberattacks. Conventional intrusion detection systems based on machine learning techniques require a large number of samples for training. However, in some scenarios, only a limited number of malicious samples can be collected. To address the issue of insufficient training samples and unbalanced sample classes for intrusion detection systems in real network environments, this paper proposes an intrusion detection method named DeepInsight–transfer learning–convolutional neural network (DI–TL–CNN), which is based on DI and TL. First, the DI method is used to convert the intrusion dataset into an image form suitable for CNN model input. The DI method can transform non-image feature data while maintaining the semantic relationships between data points, thereby providing high-quality images. In this step, we map the 1D feature vector representation of the input data onto a 2D image representation using t-SNE and construct 2D grayscale images. In the second step, we train and optimize the VGG16 model through TL and fine-tuning, enhancing the model’s adaptability and performance. We propose six TL schemes by freezing and fine-tuning the parameters of different modules in the CNN model to enhance intrusion detection performance. In the TL process, the VGG16 model, pretrained on the ImageNet dataset, demonstrates promising results for generic image classification tasks. The bottom layers of CNN models often learn basic feature patterns that are applicable to various tasks, while the features acquired by the top layers of the model are specific to the target domain intrusion dataset. Fine-tuning allows the model to adjust the pretrained architecture’s higher-order features to better match the targeted dataset. During the training process, the bottom layers of the pretrained architecture are frozen, whereas the top layers are unfrozen for fine-tuning. 
The optimal intrusion detection model is determined through a comparison of the performance of the six TL schemes. Finally, the correctness and effectiveness of the proposed DI–TL–CNN method are validated on a dataset with insufficient training samples, using metrics such as accuracy, precision, recall, and F1-score. In the experiments, compared with existing state-of-the-art models for intrusion detection, the proposed method considerably enhances accuracy in the detection of network traffic data. The experimental results show that the DI–TL–CNN method is suitable for intrusion detection with small samples and unbalanced data, demonstrating the good application prospects of the method in complex networks.

     

