Journal Home > Volume 7, Issue 2




AI/ML Enabled Automation System for Software Defined Disaggregated Open Radio Access Networks: Transforming Telecommunication Business

Sunil Kumar
Institute for Communication Systems, University of Surrey, Guildford, GU2 7XH, UK

Abstract

The OpenAirInterface (OAI) alliance recently introduced a new disaggregated Open Radio Access Networks (O-RAN) framework for next-generation telecommunications networks. This disaggregated architecture is open, automated, software defined, and virtualized, and it supports advanced technologies such as Artificial Intelligence/Machine Learning (AI/ML). This intelligent architecture enables programmers to design and customize automated applications according to business needs and to improve quality of service in fifth generation (5G) and Beyond 5G (B5G) networks. Its disaggregated, multivendor nature gives new startups and small vendors the opportunity to participate and to provide low-cost hardware and software solutions, keeping the market competitive. This paper presents the disaggregated and programmable O-RAN architecture, focusing on automation, AI/ML services, and applications built with the Flexible Radio access network Intelligent Controller (FRIC). We schematically demonstrate reinforcement learning, external applications (xApps), and the automation steps required to implement this disaggregated O-RAN architecture. The goal of this paper is to implement an AI/ML-enabled automation system for software-defined disaggregated O-RAN that monitors, manages, and performs AI/ML-related services, including model deployment, optimization, inference, and training.
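The closed loop described above (monitor network state, infer an action, then train and update the model) can be illustrated with a minimal sketch. This is not the paper's implementation: the environment, class names (ToySliceEnv, QLearningXApp), and the toy reward are hypothetical stand-ins for an RL-based xApp that allocates resource blocks to slices.

```python
# Illustrative sketch only: a tabular Q-learning "xApp" closing the
# monitor -> infer -> train loop on a toy slice-allocation environment.
# All names and the reward model are hypothetical, not from the paper.
import random

class ToySliceEnv:
    """Toy environment: state = observed load level (0-2); action = PRB share choice."""
    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.state = 0

    def step(self, action):
        # Reward is highest when the allocation matches the current load.
        reward = 1.0 if action == self.state else -0.5 * abs(action - self.state)
        self.state = self.rng.randint(0, 2)  # next monitored load level
        return self.state, reward

class QLearningXApp:
    """Tabular Q-learning agent standing in for an AI/ML-enabled xApp."""
    def __init__(self, n_states=3, n_actions=3, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
        self.q = [[0.0] * n_actions for _ in range(n_states)]
        self.alpha, self.gamma, self.eps = alpha, gamma, eps
        self.rng = random.Random(seed)

    def act(self, s):
        if self.rng.random() < self.eps:          # exploration
            return self.rng.randrange(len(self.q[s]))
        return max(range(len(self.q[s])), key=lambda a: self.q[s][a])  # inference

    def learn(self, s, a, r, s2):                 # training / optimization step
        best_next = max(self.q[s2])
        self.q[s][a] += self.alpha * (r + self.gamma * best_next - self.q[s][a])

env, xapp = ToySliceEnv(), QLearningXApp()
s = env.state
for _ in range(5000):   # closed control loop: monitor -> infer -> train
    a = xapp.act(s)
    s2, r = env.step(a)
    xapp.learn(s, a, r, s2)
    s = s2

# After training, the greedy policy should match each load level to its allocation.
policy = [max(range(3), key=lambda a: xapp.q[st][a]) for st in range(3)]
```

In a real deployment the environment is replaced by telemetry from the RAN (e.g. over the near-real-time RIC's interfaces), and the trained model is packaged and deployed as an xApp rather than run in-process.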

Keywords: Artificial Intelligence (AI), Reinforcement Learning (RL), Open Radio Access Networks (O-RAN), Flexible Radio access network Intelligent Controller (FRIC), external Applications (xApps), Machine Learning (ML), sixth generation (6G)


Publication history

Received: 14 June 2023
Revised: 13 October 2023
Accepted: 09 November 2023
Published: 22 April 2024
Issue date: June 2024

Copyright

© The author(s) 2023.

Rights and permissions

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
