Open Access

Deep Feature Learning for Intrinsic Signature Based Camera Discrimination

Department of Industrial & Systems Engineering, University of Central Florida, Orlando, FL 32816, USA
Department of Computer Science, University of Alabama in Huntsville, Huntsville, AL 35806, USA
Air Force Research Labs, United States Air Force, Eglin Air Force Base, Shalimar, FL 32579, USA

Abstract

In this paper, we consider the problem of "end-to-end" digital camera identification from sequences of images obtained from the cameras. Identifying a digital camera is harder than identifying its analog counterpart, since analog-to-digital conversion smooths out the intrinsic noise in the analog signal. It is nevertheless known that a digital camera can be identified by analyzing the intrinsic sensor artifacts introduced into images and videos during capture; however, such methods are computationally intensive and require expensive pre-processing steps. We therefore propose an end-to-end deep feature learning framework for identifying cameras from the images they produce. We conduct experiments on three custom datasets: the first contains two cameras in an indoor environment, where each camera may observe different scenes with no overlapping features; the second contains images from four cameras in an outdoor setting, where each camera observes scenes with overlapping features; and the third contains images from two cameras observing the same checkerboard pattern in an indoor setting. Our results show that the intrinsic hardware signature of the cameras can be captured with deep feature representations in an end-to-end framework, and that these deep feature maps can in turn be used to disambiguate the cameras from one another. The system requires no complicated pre-processing, and the trained model is computationally efficient at test time, paving the way for near-instantaneous decisions in production environments. Finally, we present comparisons against the current state of the art in digital camera identification, which clearly establish the superiority of the end-to-end solution.
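The abstract does not specify the network architecture, so the following is only a minimal sketch, in PyTorch, of what such an end-to-end pipeline looks like; the framework choice, layer sizes, and patch size are all assumptions for illustration, not the authors' model. The point it demonstrates is the one the abstract makes: raw image patches feed directly into a CNN that learns the sensor signature as a deep feature, with no PRNU estimation or other hand-crafted pre-processing step.

```python
# A minimal sketch of an end-to-end camera-discrimination classifier.
# NOT the authors' architecture (not given in the abstract); it only
# illustrates the idea: feed raw image patches to a CNN and let it
# learn the camera's hardware signature as a deep feature.
import torch
import torch.nn as nn

class CameraNet(nn.Module):
    def __init__(self, num_cameras: int):
        super().__init__()
        # Convolutional feature extractor: learns noise-like sensor
        # artifacts directly from pixels, replacing the expensive
        # denoising/PRNU pre-processing of classical methods.
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Linear head maps the pooled deep feature to camera logits.
        self.classifier = nn.Linear(128, num_cameras)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = self.features(x).flatten(1)  # deep feature per image
        return self.classifier(f)        # one logit per camera

# Usage: classify 64x64 RGB patches between two candidate cameras.
model = CameraNet(num_cameras=2)
patches = torch.randn(8, 3, 64, 64)      # a batch of image patches
pred = model(patches).argmax(dim=1)      # predicted source camera
```

Once trained, a single forward pass per image is all that is needed at test time, which is what makes near-instantaneous decisions plausible in a production setting.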

Big Data Mining and Analytics
Pages 206-227
Cite this article:
Banerjee C, Doppalapudi TK, Pasiliao E, et al. Deep Feature Learning for Intrinsic Signature Based Camera Discrimination. Big Data Mining and Analytics, 2022, 5(3): 206-227. https://doi.org/10.26599/BDMA.2022.9020006

Received: 05 November 2021
Revised: 02 March 2022
Accepted: 04 March 2022
Published: 09 June 2022
© The author(s) 2022.

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
