Welcome to Dan Morris's CG102 final project...


Java-Based Observation of the Hopfield Network

The Applet itself and a detailed description follow. The accompanying paper is also available at :
Also note that this takes a minute to load, but it really will load. I swear.

From here on I will assume you have the Applet properly loaded, and will refer to various GUI components. What follows is a description of each component, with the specifics of the network implementation where appropriate. Examples, conclusions, and interesting observations are interspersed throughout the discussion. If you just want to skip to the primary conclusions, look for the PROJECT CONCLUSION fields; you can't miss them.

If you're having trouble loading the Applet, all of the following information and more, along with a screen shot, is available at :


I. The input fields :

PROJECT CONCLUSION #1 : if I were really implementing a character recognition algorithm, a simple neural network would be inadequate, and a great deal of image parsing would be required (well beyond the simple scaling employed here). Furthermore, even if I were going to implement a neural-net-based OCR procedure, a Hopfield net would likely not be the most efficient choice. End project conclusion #1.

PROJECT CONCLUSION #2 : The system is much better at recognizing a figure that has had considerable noise applied to it than it is at discerning a hand-drawn shape. In fact, it is quite good at reconstructing noisy patterns that are unrecognizable to the human eye, but is completely incompetent at building a figure that differs in shape from the prototype, even for figures easily recognizable to a literate reader.

As a quick example, try drawing a simple figure (my favorite is an 'x' that is 5 squares from corner to corner). Click 'scale' (because it looks nicer that way), then click 'train'. Then set the noise field to 20% and click the 'noise' button. You would likely never recognize the figure in front of you as an 'x'. However, clicking 'propagate' should fully reconstruct the figure.

This is, of course, a silly demonstration, because only one figure has been stored. But it does show that on some level, the network is capable of reconstructing badly damaged prototypes that are beyond human recognition (while still being really bad at handwritten characters). Similar properties are indeed demonstrated for larger training sets. Perhaps this network would thus be well-suited to recovering damaged typewritten characters during conversion to digital text (fuzzy-looking faxes are a great example).
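The train / noise / propagate cycle above can be sketched in a few lines of Java. This is a hypothetical illustration of the standard Hopfield recipe (Hebbian outer-product training, hard-threshold updates), not the applet's actual code; the class and method names are invented here:

```java
import java.util.Arrays;
import java.util.Random;

// Hypothetical sketch of the 'train' / 'noise' / 'propagate' cycle using a
// textbook Hopfield net. NOT the applet's code; all names are invented.
public class HopfieldSketch {
    final int n;          // number of units (one per grid cell)
    final double[][] w;   // weight matrix: symmetric, zero diagonal

    HopfieldSketch(int n) {
        this.n = n;
        this.w = new double[n][n];
    }

    // 'Train': Hebbian outer-product rule over a +1/-1 pattern.
    void train(int[] p) {
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                if (i != j) w[i][j] += p[i] * p[j];
    }

    // 'Noise': flip each cell independently with the given probability.
    static int[] addNoise(int[] p, double prob, Random rng) {
        int[] q = p.clone();
        for (int i = 0; i < q.length; i++)
            if (rng.nextDouble() < prob) q[i] = -q[i];
        return q;
    }

    // 'Propagate': asynchronous threshold updates, sweep by sweep, until the
    // state stops changing (capped so the demo always terminates).
    int[] propagate(int[] state) {
        int[] s = state.clone();
        for (int sweep = 0; sweep < 100; sweep++) {
            boolean changed = false;
            for (int i = 0; i < n; i++) {
                double h = 0;
                for (int j = 0; j < n; j++) h += w[i][j] * s[j];
                int next = h >= 0 ? 1 : -1;
                if (next != s[i]) { s[i] = next; changed = true; }
            }
            if (!changed) break;
        }
        return s;
    }

    public static void main(String[] args) {
        // A 5x5 'x': +1 on both diagonals, -1 elsewhere.
        int[] x = new int[25];
        Arrays.fill(x, -1);
        for (int i = 0; i < 5; i++) { x[i * 5 + i] = 1; x[i * 5 + (4 - i)] = 1; }

        HopfieldSketch net = new HopfieldSketch(25);
        net.train(x);

        // Flip five cells deterministically (addNoise with a Random would
        // mimic the 'noise' button); with one stored pattern, any corruption
        // of fewer than half the cells is guaranteed to be repaired.
        int[] noisy = x.clone();
        for (int f : new int[]{0, 7, 11, 18, 24}) noisy[f] = -noisy[f];
        System.out.println(Arrays.equals(net.propagate(noisy), x)); // prints "true"
    }
}
```

With a single stored pattern the recovery is exact, which is why the one-figure demo above looks so impressive; with more patterns stored, spurious attractors appear and recovery degrades.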

II. The output fields :

III. The animation fields :

IV. The pattern set fields :

V. The file fields :

VI. The correction trial fields :

VII. Nonlinearization to binary elements :
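The idea named in this section can be illustrated briefly: a Hopfield unit's state is kept binary by passing its continuous weighted input through a hard sign threshold. A minimal hypothetical snippet (not taken from the applet):

```java
// Hypothetical illustration: squashing a continuous activation into a binary
// (+1 / -1) Hopfield state with a hard threshold at zero.
public class Binarize {
    static int toBinary(double activation) {
        return activation >= 0 ? 1 : -1;
    }

    public static void main(String[] args) {
        System.out.println(toBinary(0.3));   // prints "1"
        System.out.println(toBinary(-2.7));  // prints "-1"
    }
}
```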

VIII. Constraining weight symmetry :
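The constraint named here, w[i][j] == w[j][i], is what guarantees that asynchronous updates settle into a stable state (the symmetric weights admit a Lyapunov energy function). A hypothetical way to enforce it on an arbitrary matrix, not drawn from the applet's code, is to average the matrix with its transpose:

```java
// Hypothetical illustration: enforcing the Hopfield symmetry constraint
// w[i][j] == w[j][i] by averaging a weight matrix with its transpose.
public class Symmetrize {
    static double[][] symmetrize(double[][] w) {
        int n = w.length;
        double[][] s = new double[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                s[i][j] = (w[i][j] + w[j][i]) / 2.0;
        return s;
    }
}
```

Note that weights produced by the Hebbian rule are symmetric by construction, so this step only matters when weights are edited or learned by some other procedure.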

And so that's about it. An interesting exploration, though a disappointment with regard to character recognition. A task for graphics-types...
