

Eye detection method using gray intensity information and support vector machines


     

    Abstract: This paper presents an efficient eye detection method based on gray intensity information and support vector machines (SVM). First, exploiting the observation that gray intensity varies more sharply in the eye region than in other parts of the face, an eye variance filter (EVF) is constructed from the second-order gray-level moment (variance) of the image. Within a fixed eye search region, the EVF is applied to locate candidate eye regions. Second, a trained SVM classifier selects the precise eye location among these candidates. Finally, the eye center, i.e., the iris center, is located using the proposed gray intensity information rate. The method was evaluated on the BioID, FERET, and IMM face databases. The correct eye detection rates on face images without glasses are 98.2%, 97.8%, and 98.9%, respectively, and 94.9% on 406 face images with glasses; the correct eye center localization rates are 90.5%, 88.3%, and 96.1%, respectively. Compared with state-of-the-art methods, the proposed method achieves good detection performance.
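The variance-filter step described above amounts to computing a local gray-level variance map: windows centered on the eye region, where intensity changes sharply, score much higher than windows over smooth skin. A minimal sketch, assuming a grayscale face image as a NumPy array; the window size and the use of integral images are illustrative choices, not the paper's exact parameters:

```python
import numpy as np

def variance_map(img, win=9):
    """Local gray-level variance over a sliding win x win window.

    Uses the identity Var[X] = E[X^2] - (E[X])^2, with both means
    computed via integral images (summed-area tables) so the cost is
    independent of the window size.
    """
    img = img.astype(np.float64)
    pad = win // 2
    padded = np.pad(img, pad, mode="edge")
    # Integral images of x and x^2, with a leading zero row/column.
    ii = np.pad(padded, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    ii2 = np.pad(padded ** 2, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    h, w = img.shape

    def box_sum(s):
        # Sum over each win x win window via four table lookups.
        return (s[win:win + h, win:win + w] - s[:h, win:win + w]
                - s[win:win + h, :w] + s[:h, :w])

    n = win * win
    mean = box_sum(ii) / n
    return box_sum(ii2) / n - mean ** 2
```

Thresholding this map inside the fixed eye search region would yield the candidate eye windows that the SVM classifier then verifies; the threshold and search-region geometry are left unspecified here, as the abstract does not give them.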

     

