Article Type: Research



Graphical Abstract

Effective extraction of the image saliency map using color-contrast enhancement and dominant texture

Research Highlights

- Inspired by the functioning of the human visual system, with no need for training

- Reduction of the number of colors, reduction of the color channels, and proper use of minimal texture information in the algorithm

- Reduction of data precision to the level required by the algorithm, preventing redundant data from entering it

- Development of the proposed algorithm on the basis of intensity and texture information


Article Title [English]

Effective Visual Saliency Detection Method Using Reduced Color and Texture Features

Authors [English]

  • Masoud Khazaee Fadafen 1
  • Naser Mehrshad 2
  • Seyyed Mohammad Razavi 2

1 Department of Electrical Engineering, Technical and Vocational University, Tehran, Iran

2 Department of Electrical and Computer Engineering, Birjand University, Birjand, Iran

Abstract [English]

In this study, an effective and efficient algorithm for saliency-map detection is presented, based on modeling the rapid response of the human visual system to changes in intensity, texture, and color. Several properties of the algorithm increase its efficiency: inspiration from the functioning of the human visual system, no need for training, reduction of the number of image colors, reduction of the color channels, and proper use of minimal texture information. In the first step of the proposed method, owing to the sensitivity of the human visual system to higher-contrast signals, only the higher-contrast channel is used to extract the color saliency map. Then, the intensity saliency map and the texture saliency map are extracted from the intensity component in the Lab color space using the simple-cell computational model of the visual cortex. Finally, the object saliency map is obtained by combining the color, intensity, and texture saliency maps. The proposed method and existing methods were tested on the MSRA10K and ECSSD databases. The implementation results show that the proposed hybrid algorithm, which detects the saliency map using dominant color and texture features, achieves a mean absolute error, F-measure score, and area under the ROC curve of 0.173, 0.789, and 0.891, respectively, on the ECSSD database, and of 0.178, 0.790, and 0.919, respectively, on the MSRA10K database, indicating better performance than the other methods.
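The pipeline sketched in the abstract (pick the higher-contrast chromatic channel for the color map, apply a simple-cell model to the Lab intensity channel for the intensity and texture maps, then fuse the three maps) can be illustrated roughly as follows. This is a minimal NumPy/SciPy sketch, not the authors' implementation: the Gabor kernel as a stand-in for the simple-cell model, all filter parameters, the channel-contrast criterion (standard deviation), and the simple additive fusion are assumptions made for illustration.

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(size=9, theta=0.0, sigma=2.0, lam=4.0):
    """Gabor kernel, a common approximation of a V1 simple-cell receptive field."""
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def normalize(m):
    """Rescale a map to [0, 1]."""
    m = m - m.min()
    return m / (m.max() + 1e-12)

def saliency(lab):
    """Toy saliency map from an HxWx3 Lab image (assumed pipeline, not the paper's code)."""
    L, a, b = lab[..., 0], lab[..., 1], lab[..., 2]
    # Color map: keep only the chromatic channel with higher contrast (std as proxy).
    chroma = a if a.std() > b.std() else b
    color_map = normalize(np.abs(chroma - chroma.mean()))
    # Intensity map: deviation of the Lab intensity channel from its mean.
    intensity_map = normalize(np.abs(L - L.mean()))
    # Texture map: maximum simple-cell (Gabor) response over four orientations.
    responses = [np.abs(convolve(L, gabor_kernel(theta=t)))
                 for t in np.arange(4) * np.pi / 4]
    texture_map = normalize(np.maximum.reduce(responses))
    # Fusion: unweighted sum of the three maps, renormalized.
    return normalize(color_map + intensity_map + texture_map)
```

The fusion step here is a plain sum; the paper's actual combination rule, color-reduction step, and filter bank are not specified in the abstract and would need the full text to reproduce.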

Keywords [English]

  • Computational model of simple cell
  • Feature extraction
  • Human visual system
  • Saliency map

Citation: M. Khazaee-Fadafen, N. Mehrshad, S.M. Razavi, "Effective visual saliency detection method using reduced color and texture features", Journal of Intelligent Procedures in Electrical Technology, vol. 14, no. 54, pp. 109-120, September 2023 (in Persian).
