A possible path to the interpretability of neural networks is to (approximately) represent them in the regional format of piecewise linear functions, where each region of inputs is associated with a linear function computing the network outputs on that region. In this talk, we present an algorithm that translates into this format feedforward neural networks with ReLU activation functions in the hidden layers and truncated identity activation functions in the output layer. We also empirically investigate the complexity of the regional representations our method produces for neural networks of varying sizes. Lattice and logical representations of neural networks may be derived from their regional representations, and we address these representations as well.
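To illustrate the regional idea (this is a minimal sketch, not the talk's algorithm): in a ReLU network, each input determines an activation pattern, i.e., which hidden units are active; on the region sharing that pattern, the network coincides with a single affine function. The weights below are hypothetical, chosen only for demonstration.

```python
import numpy as np

# Hypothetical tiny network: 2 inputs -> 3 ReLU hidden units -> 1 linear output.
W1 = np.array([[1.0, -1.0],
               [0.5,  2.0],
               [-1.0, 1.0]])
b1 = np.array([0.0, -1.0, 0.5])
W2 = np.array([[1.0, -2.0, 0.5]])
b2 = np.array([0.3])

def network(x):
    """Forward pass: ReLU hidden layer, identity output."""
    h = np.maximum(W1 @ x + b1, 0.0)
    return W2 @ h + b2

def region_affine(x):
    """Return the affine map (A, c) valid on the linear region containing x.

    The activation pattern s records which ReLU units are active at x;
    on the whole region with that pattern, the network equals
    A @ x + c with A = W2 diag(s) W1 and c = W2 (s * b1) + b2.
    """
    s = (W1 @ x + b1 > 0).astype(float)   # activation pattern at x
    A = W2 @ (s[:, None] * W1)            # W2 diag(s) W1
    c = W2 @ (s * b1) + b2
    return A, c

x = np.array([0.7, -0.2])
A, c = region_affine(x)
# On x's region, the extracted affine map reproduces the network output.
assert np.allclose(A @ x + c, network(x))
```

A full regional representation would enumerate all feasible activation patterns together with the polyhedral regions they carve out of the input space; the sketch only recovers the affine piece at a single sample point.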