The O-RAN Alliance recently introduced a disaggregated Open Radio Access Network (O-RAN) framework for next-generation telecommunication networks. This disaggregated architecture is open, automated, software-defined, and virtualized, and it supports advanced technologies such as Artificial Intelligence/Machine Learning (AI/ML). This intelligent architecture enables programmers to design and customize automated applications according to business needs and to improve quality of service in fifth generation (5G) and Beyond 5G (B5G) networks. Its disaggregated, multivendor nature gives new startups and small vendors the opportunity to participate and to provide low-cost hardware and software solutions, keeping the market competitive. This paper presents the disaggregated and programmable O-RAN architecture with a focus on automation, AI/ML services, and applications built around the Flexible Radio access network Intelligent Controller (FRIC). We schematically demonstrate reinforcement learning, external applications (xApps), and the automation steps required to implement this disaggregated O-RAN architecture. The goal of this paper is to implement an AI/ML-enabled automation system for software-defined disaggregated O-RAN, which monitors, manages, and performs AI/ML-related services, including model deployment, optimization, inference, and training.
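To make the reinforcement-learning control loop described above concrete, the following is a minimal illustrative sketch, not the paper's implementation: a tabular Q-learning agent in the role of an xApp that selects a resource-allocation action per reporting interval and learns from a reward. The `SimulatedRicEnv` class, its three load states, and the three actions are all hypothetical stand-ins for the real RIC/E2 interface.

```python
import random

class SimulatedRicEnv:
    """Toy stand-in for a RIC telemetry/control interface (hypothetical):
    3 observed load states, 3 resource-allocation actions."""
    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.state = 0

    def step(self, action):
        # Reward is highest when the chosen allocation matches the load state.
        reward = 1.0 if action == self.state else -0.1
        self.state = self.rng.randrange(3)  # next observed load level
        return self.state, reward

def train_xapp(episodes=2000, alpha=0.2, gamma=0.9, eps=0.1, seed=0):
    """Epsilon-greedy tabular Q-learning over the toy environment."""
    env = SimulatedRicEnv(seed)
    rng = random.Random(seed)
    q = [[0.0] * 3 for _ in range(3)]  # Q[state][action]
    state = env.state
    for _ in range(episodes):
        if rng.random() < eps:
            action = rng.randrange(3)  # explore
        else:
            action = max(range(3), key=lambda a: q[state][a])  # exploit
        next_state, reward = env.step(action)
        # Standard Q-learning update rule
        q[state][action] += alpha * (
            reward + gamma * max(q[next_state]) - q[state][action]
        )
        state = next_state
    return q

q_table = train_xapp()
# Greedy policy learned per load state (typically matches the state here).
policy = [max(range(3), key=lambda a: q_table[s][a]) for s in range(3)]
print(policy)
```

In the full architecture, the environment step would be replaced by telemetry and control exchanges with the RAN, and the trained policy would be deployed, monitored, and retrained by the automation pipeline the paper describes.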
The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).