A hybrid collaborative filtering algorithm based on user preferences and item features is proposed. A thorough investigation of collaborative filtering (CF) techniques preceded the development of this algorithm. The proposed algorithm improves the user-item similarity approach by extracting item features and assigning each feature a weight that reflects its contribution to the item. User preferences for the different item features are obtained from the users' evaluations of the items. Providing recommendations according to these preferences and features is expected to improve the accuracy and efficiency of the recommendations and to alleviate the data sparsity problem. In addition, the approach is expected to reveal the latent semantics of the user evaluation model, which helps explain the recommendation results and further increases accuracy. A portion of the MovieLens data set was used to conduct a comparative experiment between the proposed algorithm and two baselines, namely item-based collaborative filtering and item-feature-based collaborative filtering, with the mean absolute error (MAE) used for performance testing. The experimental results show that the proposed preference-feature-based personalized recommendation algorithm significantly improves the accuracy of rating predictions compared with the two previous approaches.
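As a rough illustration of the idea rather than the authors' implementation, the sketch below shows one way a user's preferences over item features, an item's feature vector, and per-feature weights might be combined into a rating prediction, with MAE used for evaluation. All names (predict_rating, mean_absolute_error, the feature weights, and the toy data) are hypothetical.

```python
import numpy as np

def predict_rating(user_pref, item_features, feature_weights):
    """Predict a rating from a user's preference for each feature,
    the item's feature vector, and per-feature importance weights.
    All inputs are NumPy vectors of the same length (assumed layout)."""
    weighted = feature_weights * item_features        # emphasize informative features
    return float(user_pref @ weighted / (weighted.sum() + 1e-9))

def mean_absolute_error(true_ratings, predicted_ratings):
    """MAE over test ratings, the metric named in the abstract."""
    true_ratings = np.asarray(true_ratings, dtype=float)
    predicted_ratings = np.asarray(predicted_ratings, dtype=float)
    return float(np.mean(np.abs(true_ratings - predicted_ratings)))

# Toy example: 3 item features (e.g., genres), ratings on a 1-5 scale.
user_pref = np.array([4.5, 2.0, 3.0])        # preference per feature, learned from past ratings
item_features = np.array([1.0, 0.0, 1.0])    # binary feature membership of the item
feature_weights = np.array([0.6, 0.1, 0.3])  # assumed per-feature importance

print(predict_rating(user_pref, item_features, feature_weights))   # -> 4.0
print(mean_absolute_error([4.0, 3.5], [3.8, 3.9]))                 # -> 0.3
```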
The Internet of Things (IoT) implies a worldwide network of uniquely addressable, interconnected objects communicating via standard protocols. The prevalence of IoT is bound to generate large amounts of multisource, heterogeneous, dynamic, and sparse data. However, IoT offers little practical benefit without the ability to integrate, fuse, and glean useful information from such massive amounts of data. Accordingly, to prepare for the imminent influx of connected things, data fusion can be used to manipulate and manage such data in order to improve processing efficiency and provide advanced intelligence. To obtain intelligence of acceptable quality, diverse and voluminous data have to be combined and fused, so it is imperative to improve the computational efficiency of fusing and mining multidimensional data. In this paper, we propose an efficient partition-based multidimensional fusion algorithm for IoT data. The basic idea is to partition the dimensions (attributes): a big data set with many dimensions is transformed into a number of smaller data subsets that can be processed easily. Then, based on this partitioning, the discernibility matrices of all data subsets are computed using rough set theory to obtain their core attribute sets, from which a global core attribute set is determined. Finally, attribute reduction and rule extraction are applied to obtain the fusion results. The correctness and effectiveness of the algorithm are demonstrated by proving several theorems and by simulation.
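As a minimal sketch of the rough-set step only (not the paper's algorithm, which relies on its proved theorems to combine the partition results), the code below builds the decision-relative discernibility entries of a toy decision table, extracts its core attributes, and simply unions the cores of partitioned subtables as a stand-in for the global core. All function names and the toy data are hypothetical.

```python
from itertools import combinations

def core_attributes(objects, cond_attrs, decision_attr):
    """Core attribute set of a decision table via the decision-relative
    discernibility matrix: an attribute is in the core if some pair of
    objects with different decisions differs on that attribute alone.
    `objects` is a list of dicts mapping attribute name -> value."""
    core = set()
    for x, y in combinations(objects, 2):
        if x[decision_attr] == y[decision_attr]:
            continue  # only pairs with different decisions matter
        differing = {a for a in cond_attrs if x[a] != y[a]}
        if len(differing) == 1:
            core |= differing  # singleton entry => indispensable attribute
    return core

def global_core(subtables, decision_attr):
    """Union of per-partition cores; each subtable is an (objects, attrs) pair
    obtained by partitioning the attribute set (a simplifying assumption)."""
    result = set()
    for objs, attrs in subtables:
        result |= core_attributes(objs, attrs, decision_attr)
    return result

# Toy decision table with two condition attributes and one decision attribute.
data = [
    {"temp": "high", "humidity": "low",  "alarm": "yes"},
    {"temp": "low",  "humidity": "low",  "alarm": "no"},
    {"temp": "high", "humidity": "high", "alarm": "yes"},
]
print(core_attributes(data, ["temp", "humidity"], "alarm"))  # -> {'temp'}
```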
The Internet of Things emphasizes the concept of objects connected with each other, which includes all kinds of wireless sensor networks. An important issue is reducing the energy consumption of sensor networks, since sensor nodes are always energy constrained. Deploying thousands of wireless sensors in an appropriate pattern can simultaneously satisfy the application requirements and reduce the network's energy consumption. In this work, a number of sensor nodes were deployed to record temperature data, and the recorded data were then used to predict the temperatures at some of the sensor nodes using linear programming. These predictions make it possible to reduce the node sampling rate and to optimize the node deployment, thereby reducing the sensors' energy consumption; the method can also compensate for temporarily disabled nodes. The main task is to design the objective function and determine the constraint conditions of the linear program. Results from real experiments show that the method successfully predicts the values of unknown sensor nodes and optimizes the node deployment, and that the optimized deployment reduces the sensor network's energy consumption.
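The abstract does not give the exact objective function or constraints, so the following is only an assumed formulation: the unknown node's temperature is modeled as a convex combination of neighbouring nodes, and the weights are fitted by minimizing the total absolute error over historical readings, which can be expressed as a linear program. The sketch uses scipy.optimize.linprog; the function name fit_prediction_weights and the toy data are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def fit_prediction_weights(neighbor_hist, target_hist):
    """Fit non-negative weights (summing to 1) that predict the target node's
    temperature from neighbouring nodes by minimizing total absolute error
    over historical samples. Decision variables are [w, e], where e are
    per-sample error bounds, so the L1 fit becomes a linear program."""
    X = np.asarray(neighbor_hist, dtype=float)   # shape (T, K): T samples, K neighbours
    y = np.asarray(target_hist, dtype=float)     # shape (T,)
    T, K = X.shape
    c = np.concatenate([np.zeros(K), np.ones(T)])          # minimize sum of errors e
    A_ub = np.vstack([np.hstack([X, -np.eye(T)]),          #  X w - e <= y
                      np.hstack([-X, -np.eye(T)])])        # -X w - e <= -y
    b_ub = np.concatenate([y, -y])
    A_eq = np.concatenate([np.ones(K), np.zeros(T)]).reshape(1, -1)  # weights sum to 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (K + T), method="highs")
    return res.x[:K]

# Toy history: 4 time steps, 2 neighbour nodes, target roughly their average.
neighbors = [[20.1, 21.9], [19.8, 22.2], [20.5, 21.5], [20.0, 22.0]]
target = [21.0, 21.0, 21.0, 21.0]
w = fit_prediction_weights(neighbors, target)
print(w, np.dot([20.2, 21.8], w))   # predicted temperature for a new reading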
The Kepler general-purpose GPU (GPGPU) architecture was developed to directly support GPU virtualization and to make GPGPU cloud computing more broadly applicable by providing general-purpose computing capability in the form of on-demand virtual resources. This paper describes a baseline GPGPU cloud system built on Kepler GPUs, with the aim of exploring the hardware's potential while improving task performance. It elaborates a general scheme that divides the whole cloud system into a cloud layer, a server layer, and a GPGPU layer, and it illustrates the hardware features, task features, scheduling mechanism, and execution mechanism of each layer. The paper thus provides a better understanding of general-purpose computing on a GPGPU cloud.
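As a purely illustrative toy model of the three-layer scheme (not the system described in the paper), the sketch below routes a task from the cloud layer to the least-loaded server, which in turn queues it on its least-loaded GPU. All class names and the load metric are assumptions.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Task:
    name: str
    kernels: int              # number of GPU kernels the task launches

@dataclass
class Gpu:
    gpu_id: int
    queued_kernels: int = 0   # crude load metric used for scheduling

@dataclass
class Server:
    server_id: int
    gpus: List[Gpu] = field(default_factory=list)

    def dispatch(self, task: Task) -> Gpu:
        """Server layer: place the task's kernels on the least-loaded GPU."""
        gpu = min(self.gpus, key=lambda g: g.queued_kernels)
        gpu.queued_kernels += task.kernels
        return gpu

@dataclass
class Cloud:
    servers: List[Server] = field(default_factory=list)

    def submit(self, task: Task) -> str:
        """Cloud layer: route the task to the server with the least total load,
        which then schedules it onto one of its GPUs (GPGPU layer)."""
        server = min(self.servers,
                     key=lambda s: sum(g.queued_kernels for g in s.gpus))
        gpu = server.dispatch(task)
        return f"{task.name} -> server {server.server_id}, GPU {gpu.gpu_id}"

cloud = Cloud(servers=[Server(0, [Gpu(0), Gpu(1)]), Server(1, [Gpu(0)])])
for t in [Task("matmul", 4), Task("fft", 2), Task("sort", 1)]:
    print(cloud.submit(t))
```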