Explainable AI: on explaining forests of decision trees by using generalized additive models
De Zan, Martina
2021/2022
Abstract
In recent years, decision support systems have become more and more pervasive in our society, playing an important role in our everyday life. But these systems, often called black-box models, are extremely complex, and it may be impossible to understand or explain how they work in a human-interpretable way. This lack of explainability is an issue: ethically, because we have to be sure that our system is fair and reasonable; practically, because people tend to trust more what they understand. However, substituting the black-box model with a more interpretable one in the decision-making process may be impossible: the interpretable model may not work as well as the original one, or the training data may no longer be available. In this thesis we focus on forests of decision trees, which are particular cases of black-box models. In fact, trees are interpretable models, but forests are composed of thousands of trees that cooperate to take decisions, making the final model too complex to comprehend. In this work we show that Generalized Additive Models (GAMs) can be used to explain forests of decision trees with a good level of accuracy. GAMs are linear combinations of single-feature or pair-feature models, called shape functions. Since shape functions are only one- or two-dimensional, they can be easily visualized and interpreted by users. At the same time, shape functions can be arbitrarily complex, making GAMs as powerful as other, more complex models.
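The abstract describes GAMs as linear combinations of one- and two-dimensional shape functions, i.e. predictions of the form β0 + Σ_i f_i(x_i) + Σ_{i<j} f_ij(x_i, x_j), fitted so that they reproduce the forest's behavior. The sketch below only illustrates this general idea; the dataset, the scikit-learn random forest, and the ExplainableBoostingRegressor from the interpret package used as the GAM are assumptions for illustration and are not taken from the thesis, whose actual data and fitting procedure may differ.

```python
# Minimal sketch: distilling a forest of decision trees into a GAM surrogate.
# Assumes interpret's ExplainableBoostingRegressor as the GAM implementation;
# the thesis's own method may be different.
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from interpret.glassbox import ExplainableBoostingRegressor

X, y = fetch_california_housing(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Black-box model: a forest of decision trees.
forest = RandomForestRegressor(n_estimators=500, random_state=0)
forest.fit(X_train, y_train)

# GAM surrogate trained to mimic the forest's predictions, using one shape
# function per feature plus a small number of pairwise interaction terms.
gam = ExplainableBoostingRegressor(interactions=5, random_state=0)
gam.fit(X_train, forest.predict(X_train))

# Fidelity: how closely the GAM reproduces the forest on held-out data (R^2).
print("fidelity R^2:", gam.score(X_test, forest.predict(X_test)))

# Each shape function is one- or two-dimensional, so it can be plotted and
# inspected (e.g. via interpret's dashboard).
explanation = gam.explain_global()
```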
File | Type | Access | Size | Format
---|---|---|---|---
846036-1230828.pdf | Other attached material | Open access | 2.84 MB | Adobe PDF
Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/20.500.14247/7234