Evaluation of Background Subtraction Methods Based on the Sigma-Delta Algorithm for Motion Detection

Article type: Research paper

Authors

1 Department of Electrical and Mechatronics Engineering, Semnan Branch, Islamic Azad University, Semnan, Iran

2 Department of Electrical Engineering, Semnan Branch, Islamic Azad University, Semnan, Iran.

Abstract

Processing a video image sequence to segment moving objects (foreground) from the static parts (background) of the sequence is a fundamental step in many machine vision applications, especially motion detection. One common method is the background subtraction approach, which extracts moving objects by comparing each frame with an estimated background frame. In this paper, we examine recursive background subtraction methods based on the sigma-delta filter (the sigma-delta algorithm). This background subtraction algorithm provides a very fast and simple approximation of the background and has the additional advantage of requiring very little memory. Owing to its non-linearity, an attractive property of this algorithm is its high robustness compared with linear recursive averages, together with a very low computational cost. On the other hand, the basic sigma-delta algorithm becomes contaminated in complex, crowded scenes containing slow-moving or temporarily stopped objects. Moreover, the ghost effect and the aperture effect are clearly visible with this algorithm. This paper evaluates the algorithm and reviews the complementary methods and the various approaches proposed for it. All of the algorithms are implemented and executed step by step. The purpose of these complements and approaches is to eliminate or reduce the drawbacks and problems of the basic algorithm. Finally, a quantitative analysis of these approaches is carried out; the improvements achieved, along with the advantages and disadvantages of each algorithm, are evaluated, and a comparison between the basic sigma-delta algorithm and the other related algorithms is presented.

Article title [English]

Background Subtraction Techniques Evaluation based on ∑-∆ Algorithm for Motion Detection

Authors [English]

  • Mohammadreza Mahvidi 1
  • Vahid Ghods 2
1 Semnan Branch, Islamic Azad University
2 Department of Electrical Engineering, Semnan Branch, Islamic Azad University, Semnan, Iran.
Abstract [English]

Processing a video stream to segment moving objects (foreground) from the static scene (background) is a critical first step in many computer vision applications. One common method is the background subtraction approach, which detects moving objects by comparing each frame with an estimated background frame. In this paper, we examine the background subtraction algorithm based on the sigma-delta filter. This algorithm provides a simple and very fast approximation of the median and has the advantage of low memory requirements. The interest of this method lies in the robustness provided by its non-linearity compared to the linear recursive average, and in its very low computational cost. However, in the basic sigma-delta algorithm, the background model quickly degrades in complex urban scenes because it is easily "contaminated" by slow-moving or temporarily stopped objects. In addition, the ghost effect and the aperture effect are clearly visible in this algorithm. This paper reviews this algorithm and the various approaches and improvements proposed for it: first the basic sigma-delta algorithm is described, and then its important variants. The purpose of these approaches and improvements is to eliminate or reduce the defects and disadvantages of the basic algorithm. In the end, a quantitative comparison between these algorithms is carried out, and the improvements, advantages, and disadvantages of each algorithm are evaluated.
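The per-pixel sigma-delta update described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the variance amplification factor `N`, and the clamping bounds `V_min`/`V_max` are our assumptions, and the exact constants vary between the referenced papers.

```python
import numpy as np

def sigma_delta_step(frame, M, V, N=4, V_min=2, V_max=255):
    """One update of the basic sigma-delta background model.

    frame : current grayscale frame (uint8 array)
    M     : background estimate from the previous step (same shape)
    V     : variance estimate from the previous step (same shape)
    Returns the updated (M, V) and a binary motion mask.
    """
    frame = frame.astype(np.int16)
    M = M.astype(np.int16)
    V = V.astype(np.int16)

    # Background estimate M follows the temporal median of each pixel:
    # increment by 1 where the frame is brighter than M, decrement where darker.
    M += np.sign(frame - M)

    # Absolute difference between the current frame and the background.
    O = np.abs(frame - M)

    # Variance estimate V tracks N times the difference with the same
    # elementary increment/decrement, clamped to a working range.
    V += np.sign(N * O - V)
    V = np.clip(V, V_min, V_max)

    # A pixel is flagged as moving when its difference exceeds the variance.
    mask = O > V
    return M.astype(np.uint8), V.astype(np.uint8), mask
```

The ±1 increments are the non-linearity the abstract refers to: unlike a linear recursive average, a single outlier frame can shift the background estimate by at most one gray level, which is why the estimate converges toward the temporal median and resists brief disturbances.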

Keywords [English]

  • Motion detection
  • Background subtraction
  • Sigma-Delta algorithm

[1] A. Manzanera, J.C. Richefeu, “A new motion detection algorithm based on Σ–Δ background estimation”, Pattern Recognition Letters, Vol. 28, No. 3, pp. 320-328, 2007.

[2] T. Bouwmans, “Traditional and recent approaches in background modeling for foreground detection: An overview”, Computer Science Review, Vol. 11, pp. 31-66, 2014.

[3] A. Gandhamal, S. Talbar, “Evaluation of background subtraction algorithms for object extraction”, Proceeding of the IEEE International Conference on Pervasive Computing (ICPC), pp. 1-6, 2015.

[4] C.R. Wren, A. Azarbayejani, T.J. Darrell, A.P. Pentland, “Pfinder: Real-time tracking of the human body”, IEEE Trans. Pattern Anal. Mach. Intell., Vol. 19, No. 7, pp. 780–785, Jul. 1997.

[5] D. Koller, J. Weber, T. Huang, J. Malik, G. Ogasawara, B. Rao, S. Russell, “Towards robust automatic traffic scene analysis in real time”, Proceeding of the ICPR, pp. 126–131, Nov. 1994.

[6] R. Cucchiara, C. Grana, M. Piccardi, A. Prati, “Detecting moving objects, ghosts, and shadows in video streams”, IEEE Trans. Pattern Anal. Mach. Intell., Vol. 25, No. 10, pp. 1337–1342, Oct. 2003.

[7] S.C. Cheung, C. Kamath, “Robust techniques for background subtraction in urban traffic video”, Proceeding of the SPIE Electron. Imaging Video Commun. Image Process., pp. 1–12, Jan. 2004.

[8] B.P.L. Lo, S.A. Velastin, “Automatic congestion detection system for underground platforms”, Proceeding of the ISIMP, pp. 158–161, May 2001.

[9] Q. Zhou, J. Aggarwal, “Tracking and classifying moving objects from videos”, Proceeding of the IEEE Workshop Perform. Eval. Tracking Surveillance, pp. 52–59, 2001.

[10] C. Stauffer, W. Grimson, “Adaptive background mixture models for real-time tracking”, Proceeding of the IEEE Conf. Comput. Vis. Pattern Recog., Vol. 2, pp. 246–252, 1999.

[11] M. Harville, “A framework for high-level feedback to adaptive, per-pixel, mixture-of-Gaussian background models”, Proceeding of the Eur. Conf. Comput. Vis., Vol. 3, pp. 543–560, May 2002.

[12] W. Power, J.A. Schoonees, “Understanding background mixture models for foreground segmentation,” Proceeding of the IVCNZ, pp. 267–271, Nov. 2002.

[13] A. Elgammal, D. Harwood, L. Davis, “Non-parametric model for background subtraction”, Proceeding of the IEEE ICCV Frame-Rate Workshop, pp. 1–15, Sep. 1999.

[14] F. Porikli, O. Tuzel, “Bayesian background modeling for foreground detection”, Proceeding of the ACM Vis. Surveillance Sens. Netw., pp. 55–58, 2005.

[15] K. Kim, T.H. Chalidabhongse, D. Harwood, L. Davis, “Real-time foreground–background segmentation using codebook model”, Real-Time Imaging, Vol. 11, No. 3, pp. 172–185, Jun. 2005.

[16] K.P. Karmann, A. Brandt, “Moving object recognition using an adaptive background memory”, Time-Varying Image Processing and Moving Object Recognition, in V. Cappellini, Ed. Amsterdam, The Netherlands: Elsevier, pp. 289–307, 1990.

[17] D. Koller, J. Weber, J. Malik, “Robust multiple car tracking with occlusion reasoning”, Proceeding of the ECCV, pp. 189–196, Sweden, May 1994.

[18] K. Toyama, J. Krumm, B. Brumitt, B. Meyers, “Wallflower: Principles and practice of background maintenance”, Proceeding of the ICCV, pp. 255–261, Greece, Sep. 1999.

[19] A. Monnet, A. Mittal, A. Paragios, V. Ramesh, “Background modeling and subtraction of dynamic scenes”, Proceeding of the ICCV, pp. 1305–1312, France, Oct. 2003.

[20] J. Zhong, S. Sclaroff, “Segmenting foreground objects from a dynamic, textured background via a robust Kalman filter”, Proceeding of the ICCV, pp. 44–50, France, Oct. 2003.

[21] N.M. Oliver, B. Rosario, A.P. Pentland, “A Bayesian computer vision system for modeling human interactions”, IEEE Trans. Pattern Anal. Mach. Intell., Vol. 22, No. 8, pp. 831–843, Aug. 2000.

[22] H. Bhaskara, K. Dwivedic, D.P. Dograd, M. Al-Muallaa, L. Mihaylovae, “Autonomous detection and tracking under illumination changes, occlusions and moving camera”, Signal Processing, Vol. 117, pp. 343–354, 2015.

[23] D. Volpi, M.H. Sarhan, R. Ghotbi, N. Navab, D. Mateus, S. Demirci, “Online tracking of interventional devices for endovascular aortic repair”, International Journal of Computer Assisted Radiology and Surgery, Vol. 10, No. 6, pp. 773–781, 2015.

[24] N.B. Erichson, C. Donovan, “Randomized low-rank Dynamic Mode Decomposition for motion detection”, Computer Vision and Image Understanding, Vol. 146, pp. 40–50, 2016.

[25] Z. Zhao, X. Zhang, Y. Fang, “Stacked Multilayer Self-Organizing Map for Background Modeling”, IEEE Transactions on Image Processing, Vol. 24, No. 9, pp. 2841-2850, 2015.

[26] K. Wang, Y. Liu, C. Gou, F.Y. Wang, “A multi-view learning approach to foreground detection for traffic surveillance applications”, IEEE Trans. on Vehicular Technology, Vol. 65, No. 6, pp. 4144-4158, June 2016.

[27] D.D. Bloisi, A. Pennisia, L. Iocchia, “Parallel multi-modal background modeling”, Pattern Recognition Letters, Vol. 96, pp. 45-54, 2017.

[28] G. Han, J. Wang, X. Cai, “Background subtraction based on modified online robust principal component analysis”, International Journal of Machine Learning and Cybernetics, Vol. 8, No. 6, pp. 1839-1852, 2017.

[29] X. Ye, J. Yang, X. Sun, K. Li, C. Hou, Y. Wang, “Foreground–background separation from video clips via motion-assisted matrix restoration”, IEEE Trans. on Circuits and Systems for Video Technology, Vol. 25, No. 11, pp. 1721–1734, Jan. 2015.

[30] D. Jeyabharathi, D. Dejey, “Vehicle Tracking and Speed Measurement system (VTSM) based on novel feature descriptor: Diagonal Hexadecimal Pattern (DHP)”, Journal of Visual Communication and Image Representation, Vol. 40, pp. 816–830, 2016.

[31] B.H. Chen, S.C. Huang, J.Y. Yen, “Counter-propagation artificial neural network-based motion detection algorithm for static-camera surveillance scenarios”, Neurocomputing, Vol. 273, pp. 481-493, 2018.

[32] J. Guo, P. Zheng, J. Huang, “An efficient motion detection and tracking scheme for encrypted surveillance videos”, ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM), Vol. 13, No. 4, pp. 1-23, 2017.

[33] K. Yun, J. Lim, J.Y. Choi, “Scene conditional background update for moving object detection in a moving camera”, Pattern Recognition Letters, Vol. 88, pp. 57-63, 2017.

[34] N. McFarlane, C. Schofield, “Segmentation and tracking of piglets in images”, Mach. Vis. Appl., Vol. 8, No. 3, pp. 187–193, May 1995.

[35] i-LIDS Dataset for AVSS 2007. [Online]. Available: ftp://motinas.elec.qmul.ac.uk/pub/iLids

[36] L. Lacassagne, A. Manzanera, A. Dupret, “Motion detection: Fast and robust algorithms for embedded systems”, Proceeding of the ICIP, pp. 3265-3268, 2009.

[37] J. Denoulet, G. Mostafaoui, L. Lacassagne, A. Merigot, “Implementing motion markov detection on general purpose processor and associative mesh”, Proceeding of the CAMP, 2005.

[38] A. Caplier, C. Dumontier, F. Luthon, P. Coulon, “MRF based motion detection algorithm image processing board implementation”, Traitement du Signal, Vol. 13, No. 2, pp. 177–190 (in French), 1996.

[39] L. Lacassagne, M. Milgram, P. Garda, “Motion detection, labeling, data association and tracking in real-time on RISC computer”, Proceeding of the IEEE ICIAP, pp. 520–525, 1999.

[40] H.J.A.M. Heijmans, “Connected morphological operators for binary images,” Comput. Vis. Image Understand., Vol. 73, No. 1, pp. 99–120, Jan. 1999.

[41] L. Vincent, “Morphological grayscale reconstruction in image analysis: Applications and efficient algorithms”, IEEE Trans. on Image Processing, Vol. 2, No. 2, pp. 176–201, April 1993.

[42] P. Salembier, J. Ruiz, “Connected operators based on reconstruction process for size and motion simplification”, Proceeding of the IEEE Int. Conf. Acoust., Speech, Signal Process., Vol. 4, pp. 3289–3292, 2002.

[43] A. Manzanera, J.C. Richefeu, “A robust and computationally efficient motion detection algorithm based on sigma-delta background estimation”, Proceeding of the ICVGIP, pp. 46–51, Dec. 2004.

[44] M. Vargas, J.M. Milla, S.L. Toral, F. Barrero, “An enhanced background estimation algorithm for vehicle detection in urban traffic scenes”, IEEE Transactions on Vehicular Technology, Vol. 59, No. 8, pp. 3694-3709, 2010.

[45] G.K. Zipf, “Human behavior and the principle of least-effort”, Addison-Wesley, 1949.

[46] Y. Caron, P. Makris, N. Vincent, “A method for detecting artificial objects in natural environments”, Proceeding of the Int. Conf. in Pattern Recognition, pp. 600–603, 2002.

[47] A. Manzanera, “Σ–Δ Background Subtraction and the Zipf Law”, Berlin, Germany: Springer-Verlag, pp. 42–51, 2008.

[48] P. Sneath, R. Sokal, “Numerical Taxonomy: The Principle and Practice of Numerical Classification”, San Francisco, CA: Freeman, 1973.

[49] A. Ilyas, M. Scuturici, S. Miguet, “Real time foreground-background segmentation using a modified Codebook model”,  Proceeding of the 6th IEEE Int. Conf. AVSS, pp. 454–459, 2009.