Chapter 2. The Simulations
Figure 2.1: A graphical overview of the PDP++ network used. The two input layers are
connected to the two hidden layers which are connected to the output. The two hidden
layers are connected to each other. This example is for the word desk fixated between
the letters ’e’ and ’s’. A unit with an activation of 1 is shown in yellow and a unit with an
activation of -1 in blue; intermediate activations are shown in intermediate colors.
hidden layers corresponding to the left and right hemisphere, respectively. The two hidden layers are connected to each other, and both are connected to the output layer (see figure 1.8 in chapter 1).
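The connectivity just described can be summarized in a small sketch. The layer names below are assumptions (the text does not name the layers), as is the restriction of each input layer to its corresponding hemispheric hidden layer, which the wording leaves open:

```python
# Sketch of the network topology described in the text.
# Assumed names; each entry maps a layer to the layers it projects to.
CONNECTIONS = {
    "left_input":   ["left_hidden"],            # left-hemisphere pathway
    "right_input":  ["right_hidden"],           # right-hemisphere pathway
    "left_hidden":  ["right_hidden", "output"], # hidden layers are mutually
    "right_hidden": ["left_hidden", "output"],  # connected; both feed output
}

# Both hidden layers project to each other and to the output layer.
assert "output" in CONNECTIONS["left_hidden"]
assert "left_hidden" in CONNECTIONS["right_hidden"]
```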
As in Shillcock and Monaghan (2003), both the orthographic input and the phonological output representations are slot-based. For the orthographic input, this means that there are four slots per input layer, one for each letter (since we are dealing with four-letter words). Each slot has twenty-six units, so that each unit in a slot corresponds to a single letter. Thus, the two input layers each have 104 units. When a word is presented to the network, the unit corresponding to the letter in each slot is activated. For those slots where no letter is presented, no unit is active. Both hidden layers have 100 units, which is enough for them to solve the task (Shillcock and Monaghan, 2003). The network was implemented in PDP++ (Dawson et al., 2001) and figure 2.1 shows the graphical ’display’ of the network. The figure shows a presentation of the word ’desk’ after training was completed. The network is able to ’read’ the