The rise of machine learning applications at the network edge demands real-time prediction with limited resources, requiring models that are both computationally lightweight and accurate. Moreover, edge devices may learn poorly because they hold too few samples. Federated learning enables such devices to train machine learning models without sharing their private data. Meta-learning algorithms, and few-shot learning in particular, are well suited to federated environments with highly personalized and decentralized training data, thanks to their fast adaptation and good generalization to new tasks. Despite recent advances, it remains unclear how metric-based meta-learning methods, attractive for their simplicity, can be developed to improve model learning ability and accuracy in a federated environment. This paper introduces ResFed, a federated meta-learning method tailored for few-shot classification. The approach leverages a pre-trained model and applies data augmentation within the federated meta-learner, leading to favorable performance. The experimental results show that our approach, designed for limited-data scenarios in federated environments, significantly improves convergence speed and accuracy. The accuracies obtained on the CIFAR-100 and Omniglot datasets are 77.44% and 98.24%, respectively. Additionally, compared to alternative methods, there is a notable reduction in resource costs, ranging from 0.4 to 0.61, across diverse scenarios.
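The abstract does not detail ResFed's internals, but the metric-based few-shot classification it builds on can be illustrated with a minimal prototypical-classifier sketch. In this hedged NumPy example, the embeddings stand in for features from a pre-trained backbone (as the abstract mentions a pre-trained model); all names and values are illustrative, not the paper's actual method:

```python
import numpy as np

def prototypes(support_emb, support_lbl, n_classes):
    """Mean embedding per class, computed from the few-shot support set."""
    return np.stack([support_emb[support_lbl == c].mean(axis=0)
                     for c in range(n_classes)])

def classify(query_emb, protos):
    """Assign each query to the nearest class prototype (Euclidean distance)."""
    d = ((query_emb[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return d.argmin(axis=1)

# Toy 2-way 2-shot episode with 2-D "embeddings" (stand-ins for the
# features a pre-trained backbone would produce).
support = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
labels = np.array([0, 0, 1, 1])
protos = prototypes(support, labels, 2)
pred = classify(np.array([[0.05, 0.1], [4.9, 5.2]]), protos)  # -> [0, 1]
```

In a federated setting, each client would run such episodes locally on its private support sets, with only model parameters exchanged during aggregation.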


Network switches in the data plane of Software Defined Networking (SDN) rely on an elementary process in which enormous numbers of packets, representing large volumes of data, are classified into specific flows by matching them against a set of dynamic rules. This basic process accelerates data handling: instead of processing packets one by one, the corresponding actions are performed on whole flows of packets. In this paper, we first address the limitations of a typical packet classification algorithm, Tuple Space Search (TSS). We then present a set of scenarios for parallelizing it on different parallel processing platforms, including Graphics Processing Units (GPUs), clusters of Central Processing Units (CPUs), and hybrid clusters. Experimental results show that the hybrid cluster is the best platform for parallelizing packet classification algorithms, delivering an average throughput of 4.2 million packets per second (Mpps). That is, the hybrid cluster built by integrating the Compute Unified Device Architecture (CUDA), the Message Passing Interface (MPI), and the OpenMP programming model classified 0.24 million packets per second more than the GPU cluster scheme. Such a packet classifier satisfies the processing speed required by programmable network systems intended to communicate big medical data.
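To make the TSS baseline concrete, here is a minimal sequential sketch of the idea: rules are grouped by their tuple of prefix lengths, and each tuple gets its own hash table probed with the packet header masked to those lengths. This is an illustrative simplification (real TSS also orders matches by rule priority and covers more header fields), not the paper's parallel implementation:

```python
import ipaddress
from collections import defaultdict

class TupleSpaceSearch:
    """Sketch of Tuple Space Search over (src, dst) IPv4 prefixes."""

    def __init__(self):
        # (src_prefix_len, dst_prefix_len) -> {masked header key -> action}
        self.tables = defaultdict(dict)

    def add_rule(self, src_cidr, dst_cidr, action):
        src = ipaddress.ip_network(src_cidr)
        dst = ipaddress.ip_network(dst_cidr)
        t = (src.prefixlen, dst.prefixlen)
        key = (int(src.network_address), int(dst.network_address))
        self.tables[t][key] = action

    def classify(self, src_ip, dst_ip):
        s = int(ipaddress.ip_address(src_ip))
        d = int(ipaddress.ip_address(dst_ip))
        # Probe each tuple's hash table with the header masked to that
        # tuple's prefix lengths; first hit wins in this simplified sketch.
        for (sl, dl), table in self.tables.items():
            smask = ((0xFFFFFFFF << (32 - sl)) & 0xFFFFFFFF) if sl else 0
            dmask = ((0xFFFFFFFF << (32 - dl)) & 0xFFFFFFFF) if dl else 0
            hit = table.get((s & smask, d & dmask))
            if hit is not None:
                return hit
        return "default"

tss = TupleSpaceSearch()
tss.add_rule("10.0.0.0/8", "192.168.1.0/24", "forward:port1")
tss.add_rule("0.0.0.0/0", "10.0.0.0/8", "drop")
```

Because each tuple's table can be probed independently, the lookup loop is a natural target for the GPU/CPU-cluster parallelization the paper studies.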

Energy management in smart homes is one of the most critical problems for Quality of Life (QoL) and for preserving energy resources. A closely related issue is environmental contamination, which threatens the world's future. Green-computing-enabled Artificial Intelligence (AI) algorithms can provide impactful solutions to this problem. This research proposes using a Recurrent Neural Network (RNN) variant known as Long Short-Term Memory (LSTM) to show how cloud/fog/edge-enabled prediction of a building's energy consumption can be performed. Four features, namely electricity power, heating power, cooling power, and total power in an office/home in cold-climate cities, are considered in the study. Based on the collected data, we evaluate the LSTM approach for forecasting these features for the next year, predicting energy consumption and monitoring the model's performance online under various conditions. Toward implementing the AI predictive algorithm, several existing tools are studied. The results, generated through simulations, are promising for future applications.
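A key preprocessing step in any LSTM forecaster of this kind is framing the multivariate series as supervised (input window, target) pairs. The sketch below shows that windowing in NumPy; the four channels are placeholders for the paper's features (electricity, heating, cooling, total power), and the data here is synthetic, not the study's dataset:

```python
import numpy as np

def make_windows(series, lookback, horizon=1):
    """Frame a (T, n_features) series as LSTM-ready supervised pairs:
    X has shape (samples, lookback, n_features); y holds the value
    `horizon` steps after the end of each window."""
    X, y = [], []
    for t in range(len(series) - lookback - horizon + 1):
        X.append(series[t:t + lookback])
        y.append(series[t + lookback + horizon - 1])
    return np.array(X), np.array(y)

# Synthetic stand-in: 100 time steps of the four power channels.
rng = np.random.default_rng(0)
series = rng.random((100, 4))
X, y = make_windows(series, lookback=24)
```

The resulting `X` matches the (batch, timesteps, features) input layout expected by common LSTM layers, and the same windowing can be reused for rolling (online) evaluation of the deployed model.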