An invariants based architecture for combining small and large data sets in neural networks

Authors Roelant Ossewaarde, Stefan Leijnen, Thijs van den Berg
Published in Proceedings of BNAIC/BeneLearn 2021.
Publication date 10 November 2021
Research groups Artificial Intelligence
Type Lecture


We present a novel architecture for an AI system that allows a priori knowledge to be combined with deep learning. In traditional neural networks, all available data is pooled at the input layer. Our alternative neural network is constructed so that partial representations (invariants) are learned in the intermediate layers; these can then be combined with a priori knowledge or with other predictive analyses of the same data. Because learning is more efficient, smaller training datasets suffice. In addition, because this architecture allows the inclusion of a priori knowledge and interpretable predictive models, the interpretability of the entire system increases while the data can still be processed by a black-box neural network. Our system uses networks of neurons, rather than single neurons, to represent approximations (invariants) of the output.
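The combination step described above can be sketched in a few lines: a small subnetwork produces an invariant (partial) representation in an intermediate layer, and that representation is concatenated with a priori features before the output layer, rather than pooling everything at the input. This is a minimal numpy illustration under assumed dimensions, not the authors' implementation; all names and sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def invariant_block(x, w1, w2):
    """A small subnetwork (a 'network of neurons') whose output serves as
    an invariant: a partial representation of the raw input."""
    return relu(relu(x @ w1) @ w2)

# Hypothetical sizes: 8 raw features, a 4-dim invariant, 3 a priori features.
x = rng.normal(size=(5, 8))        # batch of raw data
prior = rng.normal(size=(5, 3))    # a priori knowledge / external model output

w1 = rng.normal(size=(8, 16))
w2 = rng.normal(size=(16, 4))
inv = invariant_block(x, w1, w2)   # learned invariant, shape (5, 4)

# Combine the learned invariant with the a priori features at an
# intermediate stage, instead of pooling all data at the input layer.
combined = np.concatenate([inv, prior], axis=1)  # shape (5, 7)
w_out = rng.normal(size=(7, 1))
y_hat = combined @ w_out           # final prediction, shape (5, 1)
```

In a trained system the weights of the invariant block and the output layer would be learned; the point of the sketch is only where the a priori knowledge enters the network.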

Contributors to this publication

Language English
ISBN/ISSN URN:ISBN:0-2799-2527-X
Key words Interpretability, Neural Network architecture, A priori knowledge
Page range 748-749

Roelant Ossewaarde | Researcher | Intelligent Data Systems
  • Researcher
  • Research group: Artificial Intelligence