Abstract: With the development of industrialization, low-oxygen environments have become common in confined spaces in the construction and chemical industries, the military, urban underground spaces, and poorly ventilated crowded areas, and have caused a large number of hypoxic injuries. The traditional way to prevent hypoxic injuries is to monitor the oxygen concentration in the environment, without considering the difference in oxygen tolerance limits of the human body in different physiological states. Photoplethysmography (PPG) comprehensively reflects physiological information, including heart rate, blood pressure, blood oxygen saturation, cardiovascular blood flow parameters, and respiratory rate. When the human body enters a hypoxic environment, these physiological parameters change rapidly, and the PPG signal changes with them. By measuring the PPG signal, the physiological state can therefore be assessed to determine whether the human body has reached its oxygen tolerance limit. This study proposes a method for quickly identifying the hypoxic state of the human body based on hypoxia experiments. According to recent research in aviation medicine, mountain medicine, and naval submarine medicine, an oxygen volume fraction of 15.5% can guarantee the basic life safety of personnel. A deep neural network was constructed and trained on experimental data to recognize PPG signals recorded at normal oxygen volume fractions (16%-21%) and at the extreme low-oxygen volume fractions tolerable by the human body (15.5%-16%), yielding a pattern recognition network for the human physiological state. After testing, the recognition accuracy of the network reached 92.8%. Confusion matrix and receiver operating characteristic (ROC) curve analyses show that the recognition accuracies on the training set, validation set, test set, and full data set reached 97.9%, 94.8%, 92.8%, and 96.3%, respectively. The area under the curve (AUC) value is close to 1, indicating excellent classification performance, and the entire identification process can be completed within 4 s.
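The classification pipeline described above can be illustrated with a minimal sketch, assuming the PPG signal has already been segmented and reduced to feature vectors. The array names, feature count, and network size below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): a small fully connected network that
# labels PPG-derived feature vectors as normal oxygen (16%-21%) or extreme
# low oxygen (15.5%-16%). Features and labels here are random placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 12))        # hypothetical per-segment PPG features
y = rng.integers(0, 2, size=1000)      # 0 = normal oxygen, 1 = extreme low oxygen

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.15, stratify=y, random_state=0)

# Three hidden layers of nine neurons is one configuration consistent with the
# layer/neuron ranges explored in Table 1; it is an assumption here, not the
# reported best network.
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(9, 9, 9), max_iter=2000, random_state=0),
)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```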
Key words:
- confined space
- hypoxic injury
- photoplethysmography
- deep learning
- state recognition
Table 1. Recognition accuracy (%) for different numbers of network layers and neurons

| Number of neurons | 2 layers | 3 layers | 4 layers | 5 layers | 6 layers |
| --- | --- | --- | --- | --- | --- |
| 7  | 92.20 | 90.50 | 86.60 | 91.80 | 88.60 |
| 8  | 83.35 | 87.30 | 90.80 | 92.80 | 87.30 |
| 9  | 91.50 | 92.80 | 91.80 | 88.90 | 83.70 |
| 10 | 88.90 | 91.20 | 90.50 | 91.20 | 92.20 |
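Table 1 can be read as a small grid search over network depth and width. The sketch below shows how such an accuracy grid could be generated; the placeholder data, the training settings, and the interpretation of the neuron count as neurons per hidden layer are assumptions.

```python
# Sketch of the hyperparameter grid behind a table like Table 1: one network
# per (neurons, layers) pair, scored on a held-out set. All data here are
# random placeholders standing in for PPG feature vectors.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 12))
y = rng.integers(0, 2, size=1000)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.15, random_state=0)

accuracy_grid = {}
for n_neurons in range(7, 11):          # table rows: 7-10 neurons
    for n_layers in range(2, 7):        # table columns: 2-6 network layers
        net = MLPClassifier(hidden_layer_sizes=(n_neurons,) * n_layers,
                            max_iter=2000, random_state=0)
        net.fit(X_train, y_train)
        accuracy_grid[(n_neurons, n_layers)] = net.score(X_test, y_test)

best = max(accuracy_grid, key=accuracy_grid.get)
print("best (neurons, layers):", best, "accuracy:", accuracy_grid[best])
```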
Table 2. AUC and accuracy
| AUC value | Accuracy |
| --- | --- |
| 0.5-0.7 | Relatively low accuracy |
| 0.7-0.9 | Moderate accuracy |
| 0.9-1.0 | Relatively high accuracy |
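The confusion-matrix and ROC analyses reported in the abstract can be reproduced with standard tools once predicted scores for each data split are available. The sketch below uses synthetic labels and scores purely to show the calls; the 0.5 decision threshold is an assumption.

```python
# Sketch of the evaluation metrics named in the abstract: confusion matrix,
# overall accuracy, and ROC AUC. y_true / y_score are synthetic placeholders.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score, roc_curve

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=500)                        # placeholder labels
y_score = np.clip(0.6 * y_true + rng.normal(0.2, 0.25, 500), 0.0, 1.0)
y_pred = (y_score >= 0.5).astype(int)                        # assumed threshold

cm = confusion_matrix(y_true, y_pred)
accuracy = cm.trace() / cm.sum()          # overall correct-recognition rate
auc = roc_auc_score(y_true, y_score)      # per Table 2, 0.9-1.0 means high accuracy
fpr, tpr, _ = roc_curve(y_true, y_score)  # points of the ROC curve

print("confusion matrix:\n", cm)
print(f"accuracy = {accuracy:.3f}, AUC = {auc:.3f}")
```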