Keywords: Learning Systems; Pattern Recognition; Graph Theory; Recurrent Neural Networks
Issue Date:
2008
Publisher:
Institute of Information Theories and Applications FOI ITHEA
Abstract:
When Recurrent Neural Networks (RNNs) are to be used as Pattern Recognition systems, the
problem to be considered is how to impose prescribed prototype vectors
ξ^1, ξ^2, ..., ξ^p as fixed points. In the classical
approach, the synaptic matrix W is interpreted as a sort of sign correlation matrix of the
prototypes. The weak point of this approach is that it lacks the appropriate tools to deal
efficiently with the correlation between the state vectors and the prototype vectors: the capacity of the net is very
poor, because one can only know whether a given vector is adequately correlated with the prototypes or not,
but not its exact degree of correlation. The interest of our approach lies precisely in the fact that it
provides these tools. In this paper, a geometrical vision of the dynamics of states is explained. A fixed point is
viewed as a point in the Euclidean plane R^2. The retrieval procedure is analyzed through the statistical frequency
distribution of the prototypes. The capacity of the net is improved and the spurious states are reduced. In order to
clarify and corroborate the theoretical results, an application is presented together with the formal theory.
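As a minimal sketch of the classical approach the abstract refers to, the following code builds the synaptic matrix W as a Hebbian correlation (outer-product) matrix of bipolar prototypes and checks that each prototype is a fixed point of one synchronous sign update. The function names, the 1/n scaling, and the zero-diagonal (no self-coupling) convention are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def synaptic_matrix(prototypes):
    """W as a correlation (sum-of-outer-products) matrix of the prototypes."""
    X = np.asarray(prototypes, dtype=float)  # shape (p, n), entries in {-1, +1}
    n = X.shape[1]
    W = X.T @ X / n                          # Hebbian rule: (1/n) sum of outer products
    np.fill_diagonal(W, 0.0)                 # no self-coupling (a common convention)
    return W

def update(W, x):
    """One retrieval step: the sign of the local field W x."""
    return np.where(W @ x >= 0, 1, -1)

# Two orthogonal bipolar prototypes, chosen only for illustration.
prototypes = [[1, -1, 1, -1], [1, 1, -1, -1]]
W = synaptic_matrix(prototypes)

# Each prototype should be left unchanged by the update, i.e. be a fixed point.
for xi in prototypes:
    xi = np.array(xi)
    print(np.array_equal(update(W, xi), xi))  # prints True for each prototype
```

With only sign information available after the update, the network can report whether a state agrees with a prototype, but not how strongly it is correlated with it, which is the limitation the paper addresses.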