Article | Open Access

Exploring Biomedical Video Source Identification: Transitioning from Fuzzy-Based Systems to Machine Learning Models

Surjeet Singh1 and Vivek Kumar Sehgal1
1 Department of Computer Science and Engineering and Information Technology, Jaypee University of Information Technology, Solan 173234, India

Abstract

In recent years, the field of biomedical video source identification has evolved significantly, driven by advances in both fuzzy-based systems and machine learning models. This paper presents a comprehensive survey of the current state of the art in this domain, highlighting the transition from traditional fuzzy-based approaches to the emerging dominance of machine learning techniques. Biomedical videos have become integral to many aspects of healthcare, from medical imaging and diagnostics to surgical procedures and patient monitoring, and accurately identifying the sources of these videos is essential for quality control, accountability, and the integrity of medical data. Source identification thus plays a critical role in establishing the authenticity and origin of biomedical videos. This survey traces the evolution of source identification methods, covering the foundational principles of fuzzy-based systems and their applications in the biomedical context. It examines how linguistic variables and expert knowledge were employed to model video sources, and discusses the strengths and limitations of these early approaches. By surveying existing methodologies and databases, the paper contributes to a broader understanding of the field's progress and challenges.
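The survey itself presents no code; as a rough illustration of how a fuzzy-based source-identification stage can encode expert knowledge through linguistic variables, the minimal Python sketch below scores whether a video's noise residual matches a reference camera fingerprint. The input cues (a PRNU-style correlation and a frame-noise level), the membership breakpoints, and the rules are all hypothetical, chosen only to show the mechanics of fuzzification, rule firing, and defuzzification rather than any method from the surveyed literature.

```python
# Hypothetical Mamdani-style fuzzy rule base mapping a PRNU-correlation score
# and a frame-noise level onto a "same source" belief in [0, 1].
# All linguistic terms, breakpoints, and rules are illustrative only.

def tri(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzify_correlation(rho):
    # rho: normalized correlation between the video's noise residual and a
    # reference camera fingerprint, assumed to lie in [0, 1].
    return {
        "low":    tri(rho, -0.01, 0.0, 0.05),
        "medium": tri(rho, 0.02, 0.08, 0.15),
        "high":   tri(rho, 0.10, 0.30, 1.01),
    }

def fuzzify_noise(sigma):
    # sigma: average residual noise level of the frames (arbitrary units).
    return {"clean": tri(sigma, -0.1, 0.0, 3.0),
            "noisy": tri(sigma, 2.0, 6.0, 10.1)}

def same_source_score(rho, sigma):
    """Evaluate a tiny rule base and defuzzify to a score in [0, 1]."""
    c, n = fuzzify_correlation(rho), fuzzify_noise(sigma)
    # Rule firing strengths: min for AND, max for OR, grouped by output term.
    match    = min(c["high"], n["clean"])                     # strong evidence of a match
    unsure   = max(c["medium"], min(c["high"], n["noisy"]))   # ambiguous evidence
    mismatch = c["low"]                                       # evidence against a match
    # Weighted-average defuzzification with crisp output centroids.
    centroids = {"match": 0.9, "unsure": 0.5, "mismatch": 0.1}
    weights   = {"match": match, "unsure": unsure, "mismatch": mismatch}
    total = sum(weights.values())
    if total == 0.0:
        return 0.5  # no rule fired: stay non-committal
    return sum(weights[k] * centroids[k] for k in weights) / total

if __name__ == "__main__":
    print(same_source_score(rho=0.25, sigma=1.0))  # high correlation, clean frames -> near 0.9
    print(same_source_score(rho=0.01, sigma=7.0))  # low correlation, noisy frames -> near 0.1
```

In the machine-learning approaches that the survey identifies as the emerging direction, such hand-tuned memberships and rules are typically replaced by models learned directly from labeled data.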


Fuzzy Information and Engineering
Pages 33-48
Cite this article:
Singh S, Sehgal VK. Exploring Biomedical Video Source Identification: Transitioning from Fuzzy-Based Systems to Machine Learning Models. Fuzzy Information and Engineering, 2024, 16(1): 33-48. https://doi.org/10.26599/FIE.2023.9270030

Received: 20 October 2023
Revised: 21 November 2023
Accepted: 10 December 2023
Published: 30 March 2024
© The Author(s) 2024. Published by Tsinghua University Press.

This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
