
On the Robustness of Pruning Algorithms to Adversarial Attacks

Tajwar, Muhammad
2023/2024

Abstract

Pruning is a machine learning technique used to simplify models, reduce overfitting, and improve efficiency. It reduces a model's complexity by removing certain components. In neural networks, traditional weight pruning sets the smallest-magnitude weights to zero, effectively eliminating their contribution to the network's output.

Structural pruning takes this concept further by removing not just individual weights but entire neurons, connections, or layers. This changes the network architecture itself, potentially yielding a model that is more efficient, easier to understand, and simpler to implement on hardware. The challenge lies in striking the right balance: removing enough components to gain efficiency without sacrificing too much model performance.

In this setting, a dependency graph helps determine which parts of a neural network can be removed without disrupting the remaining architecture. The graph captures how the network's operations and layers rely on each other. By examining these dependencies, we can identify nodes or connections whose removal would not affect the overall data flow, or would only minimally impact the model's performance. This makes dependency graphs a valuable tool for optimizing structural pruning.

The effect of pruning on adversarial robustness cuts both ways. On one hand, a pruned network could be more robust against adversarial attacks: its reduced complexity might limit the avenues an attacker can exploit, and the increased interpretability could help in identifying and understanding potential vulnerabilities. On the other hand, pruning could make a model more vulnerable if important defensive features are pruned away, and the change in data flow and dependencies introduced by pruning could open up new vulnerabilities.
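As an illustration of the traditional weight pruning described above, the following sketch zeroes out the smallest-magnitude weights of a layer using a binary mask. This is a minimal, hypothetical example, not code from the thesis; the layer shape and the sparsity level are assumptions chosen for the demonstration.

```python
import torch
import torch.nn as nn

def magnitude_prune(layer: nn.Linear, sparsity: float) -> torch.Tensor:
    """Zero out the smallest-magnitude weights of a linear layer.

    `sparsity` is the fraction of weights to remove, assumed in (0, 1).
    Returns the binary mask so it can be re-applied after later updates.
    """
    weight = layer.weight.data
    # Threshold at the k-th smallest absolute value, k = sparsity * numel.
    k = max(1, int(sparsity * weight.numel()))
    threshold = weight.abs().flatten().kthvalue(k).values
    # Keep weights strictly above the threshold; zero the rest.
    mask = (weight.abs() > threshold).float()
    layer.weight.data *= mask
    return mask

layer = nn.Linear(128, 64)
mask = magnitude_prune(layer, sparsity=0.5)
print(f"Remaining nonzero weights: {int(mask.sum())} / {mask.numel()}")
```

Note that the pruned weights are only set to zero; the tensor keeps its shape, which is why this form of pruning does not by itself speed up dense hardware.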
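Structural pruning, by contrast, must respect inter-layer dependencies: removing hidden neuron i of one linear layer means deleting row i of its weight matrix and bias, and also column i of the following layer's weight matrix. The sketch below makes this dependency explicit for a two-layer MLP; it is a simplified illustration of the idea, with layer names and sizes chosen arbitrarily.

```python
import torch
import torch.nn as nn

def prune_hidden_neurons(fc1: nn.Linear, fc2: nn.Linear, idxs: list):
    """Remove hidden neurons `idxs` shared by fc1 (producer) and fc2 (consumer).

    The dependency: dropping output i of fc1 forces us to also drop
    input column i of fc2, otherwise the shapes no longer match.
    """
    keep = torch.tensor([i for i in range(fc1.out_features) if i not in set(idxs)])
    # Build smaller layers and copy over the surviving rows/columns.
    new_fc1 = nn.Linear(fc1.in_features, len(keep))
    new_fc1.weight.data = fc1.weight.data[keep]     # drop rows (outputs)
    new_fc1.bias.data = fc1.bias.data[keep]
    new_fc2 = nn.Linear(len(keep), fc2.out_features)
    new_fc2.weight.data = fc2.weight.data[:, keep]  # drop columns (inputs)
    new_fc2.bias.data = fc2.bias.data.clone()
    return new_fc1, new_fc2

fc1, fc2 = nn.Linear(16, 8), nn.Linear(8, 4)
fc1, fc2 = prune_hidden_neurons(fc1, fc2, idxs=[1, 5])
x = torch.randn(2, 16)
print(fc2(torch.relu(fc1(x))).shape)  # torch.Size([2, 4])
```

For architectures with branches, residual connections, or shared weights, tracking these couplings by hand becomes error-prone, which is exactly what the dependency graph described above automates; libraries such as Torch-Pruning build such a graph for arbitrary models.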
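To make the robustness trade-off discussed above measurable, one common baseline is to compare accuracy under the fast gradient sign method (FGSM) before and after pruning. A minimal sketch follows, assuming a classifier `model`, a labelled batch `(x, y)` with inputs in [0, 1], and a perturbation budget `eps`; all of these names and values are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def fgsm_accuracy(model, x, y, eps=0.03):
    """Accuracy under a one-step FGSM perturbation of L-infinity radius `eps`."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Perturb each input one step in the direction that increases the loss,
    # then clamp back to the assumed valid input range [0, 1].
    x_adv = (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()
    with torch.no_grad():
        preds = model(x_adv).argmax(dim=1)
    return (preds == y).float().mean().item()
```

Running the same evaluation on the dense model and on its pruned counterpart turns the question of whether pruning helps or hurts robustness into a direct measurement.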
2023-10-16
Files in this record:
888394-1274637.pdf
Open access
Type: Other attached material
Size: 3.33 MB
Format: Adobe PDF

Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14247/4221