Towards Trustworthy AI

Rossolini, Giulio (first author)
2025-01-01

Abstract

The rapid advancement of AI, particularly of deep neural networks (DNNs), has prompted the research community to confront complex safety and security challenges, which must be carefully addressed to ensure the correct integration of AI algorithms into human-centric systems. AI threats range from intentionally crafted samples, such as adversarial perturbations or real-world adversarial objects, to unexpected out-of-distribution samples. These threats raise numerous questions about the security vulnerabilities and safety requirements of the models and applications under analysis. Accordingly, it is crucial to thoroughly understand and design testing methodologies and mitigation strategies that account for the specific aspects and requirements of each application scenario.
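As a concrete illustration of the "intentionally crafted samples" mentioned above, the sketch below implements the fast gradient sign method (FGSM), a standard white-box technique for generating adversarial perturbations. It is a minimal example for illustration only, not code from this book: the model, labels, and epsilon budget are placeholder assumptions.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=8 / 255):
        # x: batch of images scaled to [0, 1]; y: integer class labels.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        # One signed-gradient step of size epsilon, clamped back to the
        # valid pixel range, yields the perturbed (adversarial) input.
        x_adv = x + epsilon * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()

An input perturbed this way is often imperceptibly different to a human observer yet can flip the model's prediction, which is precisely what makes such samples a security concern for DNN-based systems.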
Year: 2025
ISBN: 9791280111739
Files in this record:

File: 126-Libro manoscritto-328-1-10-20251229.pdf
Access: open access
Type: publisher's PDF (PDF Editoriale)
License: public domain
Size: 4.79 MB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11382/584092