Volume 13, Issue 4 (12-2021), ITRC 2021, 13(4): 36-42

Shakibian H, Moghadam Charkari N. A Multilayered Complex Network Model for Image Retrieval. ITRC. 2021; 13(4): 36-42.
URL: http://ijict.itrc.ac.ir/article-1-497-en.html
1- Department of Computer Engineering, Faculty of Engineering, Alzahra University, Tehran, Iran
2- Faculty of Electrical and Computer Engineering, Tarbiat Modares University, Tehran, Iran, moghadam@modares.ac.ir
Abstract:
In this study, an image retrieval system based on a complex network model is proposed. Assuming a prior categorization of the images, a multilayered complex network is first constructed over the images of each category, with layers corresponding to color, texture, and shape features. Second, by defining a meta-path as a way of connecting two images in the network, a set of informative meta-paths is composed to find similar images by exploring the network. The constructed network provides an efficient way to exploit correlations between images and thereby enhance similarity search. Moreover, employing diverse meta-paths with different semantics allows image similarities to be measured using the features that are most effective for each category. Preliminary results indicate the efficiency and validity of the proposed approach.
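The idea sketched in the abstract can be illustrated with a toy example. Below is a minimal sketch (not the authors' code): a multilayered network in which each layer links images of one category by a different feature (color, texture, shape), and a PathSim-style similarity computed along a chosen meta-path. The image identifiers, edge sets, and helper names are all hypothetical.

```python
# Hypothetical toy layers: undirected edges among images A..E,
# one edge set per feature layer.
layers = {
    "color":   {("A", "B"), ("B", "C"), ("A", "D")},
    "texture": {("B", "C"), ("C", "E")},
    "shape":   {("A", "B"), ("D", "E")},
}

def neighbors(layer, node):
    """Undirected neighbors of `node` within one feature layer."""
    return ({v for u, v in layers[layer] if u == node}
            | {u for u, v in layers[layer] if v == node})

def path_count(src, dst, meta_path):
    """Count walks from src to dst that follow the given layer sequence."""
    frontier = {src: 1}                      # node -> number of walks so far
    for layer in meta_path:
        nxt = {}
        for node, cnt in frontier.items():
            for nb in neighbors(layer, node):
                nxt[nb] = nxt.get(nb, 0) + cnt
        frontier = nxt
    return frontier.get(dst, 0)

def pathsim(x, y, meta_path):
    """PathSim-style score: 2*paths(x,y) / (paths(x,x) + paths(y,y))."""
    denom = path_count(x, x, meta_path) + path_count(y, y, meta_path)
    return 2 * path_count(x, y, meta_path) / denom if denom else 0.0

# A symmetric meta-path, e.g. color -> color, ranks images that share
# many color-layer neighbors as similar:
score = pathsim("A", "C", ["color", "color"])
```

In an actual retrieval setting, one would combine scores from several such meta-paths (mixing color, texture, and shape layers) to rank candidate images against a query; the single-path score above only shows the mechanics.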

Rights and permissions
Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.