A statistical analysis of neural computation
This paper presents an architecture and learning algorithm for a feedforward neural network implementing a two-pattern (image) classifier. By treating the input pixels as random variables, a statistical binary hypothesis (likelihood ratio) test is implemented. A linear threshold separates p[X|H_0] and p[X|H_1], minimizing a risk function. In this manner, a single neuron is modeled as a binary symmetric channel (BSC) whose crossover probability ε is given by the overlapping tails of the conditional pdfs. A single layer of neurons is viewed as a parallel bank of independent BSCs, which is equivalent to a single effective BSC representing that layer's hypothesis-testing performance. A multilayer network is viewed as a cascade of BSC channels, which again collapses into a single effective BSC.
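The two reductions described above can be sketched numerically. This is an illustrative sketch, not the paper's implementation: it assumes Gaussian class-conditional pdfs with equal priors (so the likelihood-ratio test reduces to a midpoint threshold and ε is a Gaussian tail probability), and the function names `threshold_error` and `cascade` are hypothetical.

```python
import math

def threshold_error(mu0, mu1, sigma):
    # Equal-prior likelihood-ratio test between two Gaussian hypotheses
    # (assumed distributions, for illustration) reduces to a midpoint
    # threshold; the error probability epsilon is the pdf tail beyond it,
    # i.e. the Q-function Q(|mu1 - mu0| / (2 sigma)).
    z = abs(mu1 - mu0) / (2.0 * sigma)
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def cascade(eps_list):
    # A cascade of BSCs collapses into one effective BSC: a net bit flip
    # occurs when an odd number of stages flip.  Folding pairwise,
    # eps_eff = e1*(1 - e2) + e2*(1 - e1) for two stages, applied repeatedly.
    eff = 0.0
    for e in eps_list:
        eff = eff * (1.0 - e) + e * (1.0 - eff)
    return eff

# A single neuron's effective crossover probability for well-separated classes:
eps = threshold_error(0.0, 2.0, 1.0)   # Q(1) ≈ 0.159

# Two such neurons in cascade degrade to a single effective BSC:
eps_eff = cascade([0.1, 0.1])          # 0.1*0.9 + 0.9*0.1 = 0.18
```

Note the fold also reproduces the limiting behavior of a BSC cascade: any stage with ε = 0.5 pins the effective channel at 0.5, i.e. all information is destroyed.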
© 1994 IEEE.