Regional, Lattice and Logical Representations of Neural Networks
02/Sep/2025
Speaker:
Sandro Preto, ScD
Institution:
Center for Mathematics, Computing and Cognition, Federal University of ABC (UFABC), Brazil
Language: EN
Type: Attending seminar
Description:
A possible path to the interpretability of neural networks is to (approximately) represent them in the regional format of piecewise linear functions, where regions of the input space are associated with linear functions that compute the network outputs. In this talk, we present an algorithm that translates feedforward neural networks with ReLU activation functions in the hidden layers and truncated identity activation functions in the output layer into such regional representations. We also empirically investigate the complexity of the regional representations produced by our method for neural networks of varying sizes. Lattice and logical representations of neural networks may be derived from their regional representations, and we address these representations as well.