Toward low-complexity neural networks for failure management in optical networks

Zar Khan, Lareb; Sgambelluri, Andrea; De Marinis, Lorenzo; Sambo, Nicola
2025-01-01

Abstract

Machine learning (ML) continues to show its potential and efficacy in automating network management tasks, such as failure management. However, as ML deployment considerations broaden, aspects that go beyond predictive performance, such as a model’s computational complexity (CC), gain significance, as higher CC incurs higher costs and energy consumption. Balancing high predictive performance with reduced CC is therefore an important aspect that needs more investigation, especially in the context of optical networks. In this work, we focus on reducing the CC of ML models, specifically neural networks (NNs), for the use case of failure identification in optical networks. We propose an approach that exploits the relative activity of neurons in NNs to reduce their size (and hence their CC). This approach, referred to as iterative neural removal (INR), iteratively computes neurons’ activity and removes neurons with no activity until a predefined stopping condition is reached. We also propose a second approach, referred to as guided knowledge distillation (GKD), that combines INR with knowledge distillation (KD), a well-known technique for NN compression. GKD inherently determines the size of the compressed NN, without requiring the suboptimal manual selection or time-consuming optimization strategies of traditional KD. To quantify the effectiveness of INR and GKD, we evaluate them against pruning (another well-known NN compression technique) in terms of impact on predictive performance and reduction in CC and memory footprint. For the considered scenario, experimental results on testbed data show that INR and GKD are more effective than pruning in reducing CC and memory footprint.
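The abstract describes INR only at a high level: compute per-neuron activity, remove neurons with no activity, repeat until a stopping condition is met. The paper's actual implementation is not reproduced here; the following is a minimal PyTorch-style sketch under assumed choices — activity measured as mean post-ReLU activation on a validation batch, and a stopping condition of "no inactive neurons remain". All function names and defaults are illustrative, not the authors' code.

```python
# Minimal sketch of INR (iterative neural removal). Assumptions (not from the
# paper): activity = mean ReLU output per neuron on validation data; stopping
# condition = no inactive neurons remain. Shown for one hidden Linear layer
# feeding a following Linear layer.
import torch
import torch.nn as nn


def neuron_activity(hidden: nn.Linear, x_val: torch.Tensor) -> torch.Tensor:
    """Assumed activity metric: mean post-ReLU activation per neuron."""
    with torch.no_grad():
        return torch.relu(hidden(x_val)).mean(dim=0)


def drop_inactive(hidden: nn.Linear, nxt: nn.Linear, keep: torch.Tensor):
    """Rebuild the layer pair, keeping only rows/columns of surviving neurons."""
    n_keep = int(keep.sum())
    new_hidden = nn.Linear(hidden.in_features, n_keep)
    new_hidden.weight.data = hidden.weight.data[keep].clone()
    new_hidden.bias.data = hidden.bias.data[keep].clone()
    new_nxt = nn.Linear(n_keep, nxt.out_features)
    new_nxt.weight.data = nxt.weight.data[:, keep].clone()
    new_nxt.bias.data = nxt.bias.data.clone()
    return new_hidden, new_nxt


def inr(hidden: nn.Linear, nxt: nn.Linear, x_val: torch.Tensor, max_iters: int = 20):
    """Iterate: measure activity, remove dead neurons, stop when none are left."""
    for _ in range(max_iters):
        keep = neuron_activity(hidden, x_val) > 0.0
        if keep.all():  # stopping condition: no neurons with zero activity
            break
        hidden, nxt = drop_inactive(hidden, nxt, keep)
        # (the paper would presumably fine-tune here before re-measuring)
    return hidden, nxt
```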
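GKD is described as combining INR with knowledge distillation so that the student size falls out of INR rather than manual search: INR shrinks a trained teacher to fix the student architecture, then the student is trained with distillation. The abstract does not give the distillation objective; the sketch below uses the standard soft-target KD loss (Hinton et al.) purely as an assumed placeholder, with illustrative temperature and mixing defaults.

```python
import torch.nn.functional as F


def kd_loss(student_logits, teacher_logits, labels, T: float = 2.0, alpha: float = 0.5):
    """Standard KD objective: temperature-softened KL term plus hard-label CE.
    T and alpha are illustrative defaults, not values from the paper."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```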
Files in this record:

Toward_low-complexity_neural_networks_for_failure_management_in_optical_networks.pdf
Access: authorized users only (copy available on request)
Type: Pre-print/Submitted manuscript
License: Creative Commons (not specified)
Size: 1.49 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11382/582042
Note: the displayed metadata have not been validated by the university.

Citations
  • PMC: not available
  • Scopus: 0