Chapter 1. Introduction and Background
Figure 1.4: This figure (taken from Harm and Seidenberg (1999)) shows the basic
architecture of the model.
After training, the network computed the correct output for 97.3% of the words in the training corpus. In total there were 77 errors, 14 of which arose from incorrect coding by the experimenter. The errors were made mostly on low-frequency words, and 14 of them were regularisation errors, where an irregular word is pronounced as if it were regular (for example, pronouncing "pint" to rhyme with "mint"). However, an important criticism of this model was that it performed much worse on non-words than human subjects do (Coltheart et al., 1993). For example, on one set of non-words the network's performance was 59% correct, whereas humans typically score 94% correct. This is a problem because Seidenberg and McClelland (1989) claimed that the model was meant to read regular words, exception words and non-words at a level comparable to human subjects.
The Harm and Seidenberg (1999) Model
Harm and Seidenberg (1999)’s model builds directly on the previous model. There are several improvements to the previous model, with the main difference being the phonological output component (see figure 1.4).
Like the previous net, this one has a phonological output layer. The difference now is that the whole phonological output component is made up of the phonological units and a layer of so-called clean-up units. These clean-up units can be thought of as a second set