Neural network interpretation using descrambler groups [Physics]

The lack of interpretability and trust is a much-criticized feature of deep neural networks. In fully connected nets, the signaling between inner layers is scrambled because backpropagation training does not require perceptrons to be arranged in any particular order. The result is a black box; this problem is particularly severe...
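The "scrambling" the abstract refers to comes from the permutation symmetry of hidden units: reordering the perceptrons in an inner layer, together with the matching rows and columns of the surrounding weight matrices, leaves the network's input-output function unchanged, so training has no reason to settle on any particular ordering. The minimal sketch below (plain NumPy, not the paper's descrambler-group method) illustrates that symmetry for a toy fully connected net; all names and sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer fully connected net: 4 inputs -> 8 hidden (ReLU) -> 3 outputs.
W1 = rng.normal(size=(8, 4))   # hidden x input
b1 = rng.normal(size=8)
W2 = rng.normal(size=(3, 8))   # output x hidden
b2 = rng.normal(size=3)

def forward(x, W1, b1, W2, b2):
    h = np.maximum(0.0, W1 @ x + b1)   # hidden-layer activations
    return W2 @ h + b2

# Randomly permute the hidden units, and permute the weights consistently:
# rows of W1 and b1, columns of W2.
perm = rng.permutation(8)
W1p, b1p = W1[perm], b1[perm]
W2p = W2[:, perm]

x = rng.normal(size=4)
out_original = forward(x, W1, b1, W2, b2)
out_permuted = forward(x, W1p, b1p, W2p, b2)

# The two networks compute the same function, so the ordering of
# inner-layer signals carries no intrinsic meaning.
assert np.allclose(out_original, out_permuted)
print(out_original, out_permuted)
```

Because every such permutation yields an equally valid network, the learned inner representations look arbitrary to a human observer; that is the black-box problem the paper's descrambling transformations are meant to address.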
Source: Proceedings of the National Academy of Sciences · Category: Science · Tags: Physics, Physical Sciences · Source Type: Research