The split-fovea model of visual word
Chapter 2. The Simulations


/rEd/ with frequencies 0.443 and 0.473 respectively. Although these frequency values suggest that the competition would make the word hard to learn, both networks managed to learn it correctly. Interestingly, however, the control net learned the word as /rEd/ at all positions, whereas the fixation net learned it as /riid/, also at all positions. The reason that the two nets could learn different pronunciations for the same word might be the following. The bigram 'ea' appears with different phonemes in the training corpus, the most frequent being /ii/ (e.g. neat). This explains the /riid/ output of the fixation net. The combination 'ead', however, appears most often and with higher frequency as /Ed/ (e.g. dead), hence the /rEd/ output of the control net.

This is an interesting example that demonstrates a possible difference between the two networks. The control net has seen the word in each position an equal number of times, whereas the fixation net has seen it most often fixated at position 4 (rea - d). This would bias the network towards learning the combination 'ea' in 'read' as a bigram rather than as part of the trigram 'ead', and pronouncing it accordingly. This is because when the word is fixated at position 4, 'rea' is sent to the left hemisphere and only 'd' to the right hemisphere. Note that this network is still able to learn words like 'dead' (/dEd/), where there is only one possible phoneme; but when there are two different possibilities, the pronunciation /ii/ wins out.
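The frequency-based competition described above can be sketched in a few lines of code. This is an illustrative toy, not the thesis's actual networks: the grapheme-to-phoneme frequencies below are hypothetical (chosen only so that /ii/ is the most frequent realisation of 'ea' and /E/ of 'ead'), and the `preferred_vowel` helper simply picks the most frequent mapping available after the split.

```python
# Hypothetical grapheme-to-phoneme frequencies (illustrative values,
# not taken from the thesis's training corpus).
GP_FREQ = {
    "ea": {"ii": 0.6, "E": 0.4},    # 'ea' as in 'neat' vs. 'dead'
    "ead": {"E": 0.7, "ii": 0.3},   # 'ead' as in 'dead' vs. 'bead'
}

def split_at_fixation(word, split):
    """Letters before the split go to the left hemisphere,
    the rest to the right, as in the split-fovea input scheme."""
    return word[:split], word[split:]

def preferred_vowel(word, split):
    """Return the grapheme unit that survives the split and its
    most frequent phoneme mapping."""
    left, right = split_at_fixation(word, split)
    # If the whole 'ead' cluster stays within one hemifield, the
    # trigram mapping is available; otherwise only the bigram 'ea' is.
    unit = "ead" if ("ead" in left or "ead" in right) else "ea"
    freqs = GP_FREQ[unit]
    return unit, max(freqs, key=freqs.get)

# Splitting 'read' as 'rea' | 'd' leaves only the bigram 'ea' intact,
# so the /ii/ mapping wins; keeping 'ead' together favours /E/.
print(preferred_vowel("read", 3))  # -> ('ea', 'ii')
print(preferred_vowel("read", 4))  # -> ('ead', 'E')
```

The point of the sketch is only that the split location determines which orthographic unit is available to each hemisphere, which in turn determines which pronunciation the frequency statistics favour.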

2.2.7 Nonwords

Background. An important part of connectionist modelling of visual word recognition has traditionally been to test the network on a series of nonwords. This is important in order to show that the network has actually learned how to pronounce words, rather than having merely stored the correct outputs for the words in the training corpus without having learned anything more general about the words. A connectionist model needs to be able to read nonwords in order to justify the central claim of
