In the Internet of Things (IoT) environment, user-service interaction data are often stored on multiple distributed platforms. Recommender systems therefore need to integrate the distributed user-service interaction data across different platforms to make comprehensive recommendation decisions, during which user privacy is likely to be disclosed. Moreover, as user-service interaction records accumulate over time, they significantly reduce recommendation efficiency. To tackle these issues, we propose a lightweight and privacy-preserving service recommendation approach named SerRecL2H. In SerRecL2H, we employ Learning to Hash (L2H) to encapsulate sensitive user-service interaction data into less-sensitive user indices, which makes it possible to identify users with similar preferences efficiently for accurate recommendations. We then validate the feasibility of SerRecL2H through extensive experiments on the popular WS-DREAM dataset. Comparative analysis demonstrates that our proposal surpasses other competitive approaches in terms of recommendation accuracy and efficiency while protecting user privacy.
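The core idea of hashing user-service interaction data into compact, less-sensitive indices can be sketched as follows. This is a minimal illustration only: random hyperplane projections stand in for the learned hash functions of L2H, and the toy interaction matrix is hypothetical, not WS-DREAM data.

```python
import numpy as np

def hash_users(interactions, n_bits=32, seed=0):
    """Map user-service interaction vectors to short binary indices.

    Random hyperplane projections stand in for the learned hash
    functions of L2H: users with similar interaction vectors tend
    to receive similar bit patterns, while the raw (sensitive)
    interaction values are never shared directly.
    """
    rng = np.random.default_rng(seed)
    planes = rng.normal(size=(interactions.shape[1], n_bits))
    return (interactions @ planes > 0).astype(np.uint8)

def similar_users(codes, query_idx, radius=8):
    """Return indices of users within a Hamming radius of the query."""
    dist = np.count_nonzero(codes != codes[query_idx], axis=1)
    return [i for i, d in enumerate(dist) if i != query_idx and d <= radius]

# Toy interaction matrix: rows are users, columns are services.
X = np.array([[0.9, 0.8, 0.1, 0.2],
              [0.8, 0.9, 0.2, 0.1],   # similar preferences to user 0
              [0.1, 0.2, 0.9, 0.8]])  # different preferences
codes = hash_users(X)
print(similar_users(codes, 0))
```

Because neighbor search happens in short binary codes rather than full interaction histories, lookup stays cheap even as records accumulate.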

In the digital era, social media platforms play a crucial role in forming user communities, yet the challenge of protecting user privacy remains paramount. This paper proposes a novel framework for identifying and analyzing user communities within social media networks, with an emphasis on privacy protection. In detail, we implement a social media-driven user community finding approach with hashing, named MCF, to ensure that the extracted information cannot be traced back to specific users, thereby maintaining confidentiality. Finally, we design a set of experiments to verify the effectiveness and efficiency of the proposed MCF approach by comparing it with existing approaches, demonstrating accurate community detection under stringent privacy standards. This research contributes to the growing field of social network analysis by providing a balanced solution that respects user privacy while uncovering valuable insights into community dynamics on social media platforms.
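The idea of grouping users through hashes rather than raw profiles can be sketched with MinHash signatures, a classical stand-in for the hashing step of MCF (not the paper's exact construction). The user names and interest sets below are purely illustrative.

```python
import hashlib
from collections import defaultdict

def minhash_signature(items, n_hashes=8):
    """Compact, hard-to-invert signature of a user's interest set.

    Users with overlapping interests tend to collide on signature
    values, while the signature alone does not reveal the items,
    which is the privacy property MCF aims for.
    """
    return tuple(
        min(hashlib.sha1(f"{i}:{x}".encode()).hexdigest() for x in items)
        for i in range(n_hashes)
    )

def bucket_communities(profiles, band=2):
    """Group users whose signatures agree on a leading band of hashes."""
    buckets = defaultdict(list)
    for user, items in profiles.items():
        buckets[minhash_signature(items)[:band]].append(user)
    return [g for g in buckets.values() if len(g) > 1]

# Hypothetical users: "ann" and "bob" share interests, "eve" does not.
profiles = {"ann": {"folk", "indie"}, "bob": {"folk", "indie"}, "eve": {"metal"}}
print(bucket_communities(profiles))  # [['ann', 'bob']]
```

Only hash signatures ever need to be compared across the platform, so community membership can be computed without exchanging users' raw interest data.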

The increasing number of available Web Application Programming Interfaces (APIs) in various service sharing communities has enabled software developers to develop the multimedia mashups they are interested in quickly and conveniently. In this situation, a multimedia mashup with complex functionalities can be achieved by composing a set of pre-selected Web APIs. However, due to the diversity of APIs in terms of development organization, programming language, invocation interface, etc., it is often difficult to determine beforehand the compatibility between the APIs selected by multimedia mashup developers, especially when the developers have little background knowledge of APIs, which significantly decreases the success rate of subsequent multimedia mashup development. In response to this challenge, we propose a subgraph matching-based compatible API composition recommendation method, called SubMCWACR. The advantage of SubMCWACR is that it can directly search for API subgraphs that not only meet the functional requirements of the multimedia mashup but are also compatible with each other, thus boosting the effectiveness of multimedia mashup development. Through extensive experiments on a real dataset crawled from the Web API sharing platform ProgrammableWeb.com, we have demonstrated that our proposed recommendation method achieves significant improvements in terms of recommendation precision and compatibility compared with other competitive API recommendation methods.
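The search problem — find API sets that cover the mashup's required functions and are pairwise compatible — can be sketched as a clique check on a compatibility graph. This is a brute-force simplification of SubMCWACR's subgraph matching, with hypothetical API names in the ProgrammableWeb style.

```python
from itertools import combinations, product

def compatible_compositions(candidates, compatible):
    """Enumerate API compositions that cover every required function
    and are pairwise compatible (i.e., form a clique in the
    compatibility graph).

    candidates: {function: [api, ...]} -- APIs offering each function
    compatible: set of frozenset({api_a, api_b}) -- compatible pairs
    """
    results = []
    for combo in product(*candidates.values()):
        apis = set(combo)  # one API may cover several functions
        if all(frozenset(pair) in compatible
               for pair in combinations(apis, 2)):
            results.append(sorted(apis))
    return results

# Hypothetical mashup needing a maps API and a payment API.
candidates = {"maps": ["GoogleMaps", "OpenStreetMap"],
              "payment": ["Stripe", "PayPal"]}
compatible = {frozenset({"GoogleMaps", "Stripe"}),
              frozenset({"OpenStreetMap", "PayPal"})}
print(compatible_compositions(candidates, compatible))
# [['GoogleMaps', 'Stripe'], ['OpenStreetMap', 'PayPal']]
```

Exhaustive enumeration is exponential in the number of required functions, which is why the actual method relies on subgraph matching over the API graph rather than brute force.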

Finding more specific subcategories within a larger category is the goal of fine-grained image classification (FGIC), and the key is to find local discriminative regions of visual features. Most existing methods use traditional convolutional operations to achieve fine-grained image classification. However, traditional convolution cannot extract multi-scale features of an image, and existing methods are susceptible to interference from image background information. Therefore, to address these problems, this paper proposes an FGIC model (Attention-PCNN) based on a hybrid attention mechanism and pyramidal convolution. The model feeds the multi-scale features extracted by the pyramidal convolutional neural network into two branches capturing global and local information respectively. In particular, a hybrid attention mechanism is added to the branch capturing global information in order to reduce the interference of image background information and make the model pay more attention to the target region with fine-grained features. In addition, the mutual-channel loss (MC-LOSS) is introduced in the local information branch to capture fine-grained features. We evaluated the model on three publicly available datasets: CUB-200-2011, Stanford Cars, and FGVC-Aircraft. The results show that Attention-PCNN outperforms state-of-the-art methods.
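The two building blocks — multi-scale (pyramidal) convolution and attention-based reweighting — can be sketched in a few lines of numpy. This is a didactic stand-in, not the Attention-PCNN architecture: the filters are random rather than learned, and the attention is a simple softmax over per-scale responses.

```python
import numpy as np

def conv2d(img, kernel):
    """Plain 'valid' 2D convolution, loop-based for clarity."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def pyramidal_features(img, sizes=(3, 5, 7), seed=0):
    """One response map per kernel size -- the multi-scale idea behind
    pyramidal convolution, with random filters standing in for
    learned ones."""
    rng = np.random.default_rng(seed)
    return [conv2d(img, rng.normal(size=(k, k))) for k in sizes]

def channel_attention(feature_maps):
    """Softmax weights over per-scale mean activations: a minimal
    stand-in for attention that emphasizes informative responses
    and suppresses background."""
    scores = np.array([m.mean() for m in feature_maps])
    w = np.exp(scores - scores.max())
    return w / w.sum()

img = np.random.default_rng(1).random((12, 12))  # toy grayscale image
maps = pyramidal_features(img)
print([m.shape for m in maps], channel_attention(maps))
```

Using several kernel sizes in parallel is what lets the network respond to discriminative parts at different spatial scales, which a single fixed kernel size cannot do.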

The rapid development of the internet has ushered the real world into a “media-centric” digital era where virtually everything serves as a medium. Leveraging the new attributes of interactivity, immediacy, and personalization afforded by online communication, folklore has found a broad avenue for dissemination. In particular, online social networks have become a vital channel for propagating folklore. Drawing on social network theory, we devise a comprehensive approach known as SocialPre. Firstly, we utilize embedding techniques to capture users’ low-level and high-level social relationships. Secondly, by applying an automatic weight assignment mechanism based on the embedding representations, multi-level social relationships are aggregated to assess the likelihood of a social interaction between any two users. Experiments demonstrate SocialPre’s ability to distinguish different social groups. In addition, we delve into potential directions of folklore evolution, thus laying a theoretical foundation for future folklore communication.
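The aggregation step — combining per-level embedding similarities through automatically assigned weights into one interaction-likelihood score — can be sketched as follows. The softmax weighting here is a plausible stand-in for SocialPre's automatic weight assignment mechanism, not the paper's exact formula.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def link_likelihood(levels_u, levels_v, level_scores):
    """Aggregate per-level embedding similarities into one score.

    levels_u / levels_v: one embedding per relationship level
                         (e.g., low-level and high-level relations)
    level_scores: raw importance scores; a softmax turns them into
                  normalized weights (a sketch of automatic
                  weight assignment).
    """
    w = np.exp(level_scores - np.max(level_scores))
    w = w / w.sum()
    sims = np.array([cosine(u, v) for u, v in zip(levels_u, levels_v)])
    return float(w @ sims)

# Toy 2-D embeddings for two users at two relationship levels.
a, b = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(link_likelihood([a, b], [a, b], np.array([0.0, 0.0])))  # 1.0
```

Because the weights are derived from the embeddings' scores rather than hand-tuned, the relative importance of low-level versus high-level relations adapts to the data.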

Air pollution is a severe environmental problem in urban areas. Accurate air quality prediction can help governments and individuals make proper decisions to cope with potential air pollution. As a classic time series forecasting model, the AutoRegressive Integrated Moving Average (ARIMA) model has been widely adopted in air quality prediction. However, because of the volatility of air quality and the lack of additional context information, i.e., the spatial relationships among monitoring stations, traditional ARIMA models suffer from unstable prediction performance. Although some deep networks can achieve higher accuracy, they require massive training data and incur heavy computation and time costs. In this paper, we propose a hybrid model to simultaneously predict seven air pollution indicators from multiple monitoring stations. The proposed model consists of three components: (1) an extended ARIMA to predict matrix series of multiple air quality indicators from several adjacent monitoring stations; (2) the Empirical Mode Decomposition (EMD) to decompose the air quality time series data into multiple smooth sub-series; and (3) the truncated Singular Value Decomposition (SVD) to compress and denoise the expanded matrix. Experimental results on the public dataset show that our proposed model outperforms state-of-the-art air quality forecasting models in both accuracy and time cost.
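Of the three components, the truncated SVD step is the easiest to illustrate in isolation: keeping only the top singular components compresses the station-by-indicator matrix and removes low-energy noise. The sketch below shows just that step (the ARIMA and EMD components are omitted), on a hypothetical toy matrix.

```python
import numpy as np

def truncated_svd_denoise(matrix, rank):
    """Reconstruct a matrix from its top `rank` singular components.

    This is the compression/denoising role truncated SVD plays in
    the hybrid model: small singular values, which mostly carry
    noise, are discarded.
    """
    U, s, Vt = np.linalg.svd(matrix, full_matrices=False)
    return U[:, :rank] * s[:rank] @ Vt[:rank]

# Toy rank-1 "stations x indicators" matrix plus small noise.
rng = np.random.default_rng(0)
clean = np.outer(np.arange(1.0, 5.0), np.arange(1.0, 4.0))
noisy = clean + 0.01 * rng.normal(size=clean.shape)
denoised = truncated_svd_denoise(noisy, rank=1)
print(np.abs(denoised - clean).max())
```

Because the underlying pollutant pattern is low-rank (adjacent stations behave similarly), a small `rank` preserves the signal while shrinking both storage and downstream computation.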

With the ever-increasing number of natural disaster warning documents in document databases, such databases are becoming an economical and efficient way for enterprise staff to learn about and understand natural disaster warnings by searching for the necessary text documents. Generally, a document database can recommend a mass of documents to enterprise staff by analyzing their precisely typed keywords. In practice, these recommended documents place a heavy burden on enterprise staff to review and select from, as the staff have little background knowledge of natural disaster warnings. Thus, they often fail to retrieve and select the appropriate documents to achieve their goals. Considering these drawbacks, in this paper we propose a fuzzy keyword-driven Natural Disaster Warning Document retrieval approach (named NDWDkeyword). Through text-description mining of documents and fuzzy keyword searching, the retrieval approach can precisely capture the staff’s target requirements and then return the necessary documents. Finally, a case study is presented to explain the retrieval approach step by step and demonstrate the effectiveness and feasibility of our proposal.
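The fuzzy keyword idea — matching documents even when staff mistype or only approximately remember a term — can be sketched with the standard library's string similarity. This is a minimal stand-in for NDWDkeyword's retrieval machinery, with hypothetical document snippets.

```python
from difflib import SequenceMatcher

def fuzzy_retrieve(query_keywords, documents, threshold=0.7):
    """Rank documents by how well their text matches possibly
    misspelled keywords.

    For each keyword, take its best similarity against the words of
    a document; similarities above `threshold` count toward the
    document's score, so a typo like "flod" still matches "flood".
    """
    scored = []
    for doc_id, text in documents.items():
        words = text.lower().split()
        score = 0.0
        for kw in query_keywords:
            best = max(SequenceMatcher(None, kw.lower(), w).ratio()
                       for w in words)
            if best >= threshold:
                score += best
        if score:
            scored.append((doc_id, round(score, 3)))
    return sorted(scored, key=lambda pair: -pair[1])

# Hypothetical warning-document snippets; note the misspelled query.
docs = {"d1": "flood warning issued for the river basin",
        "d2": "quarterly traffic report"}
print(fuzzy_retrieve(["flod", "warning"], docs))
```

Tolerating near-miss spellings is exactly what relieves staff with little domain background from having to type the precise warning terminology.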