Fig. 1A, B Target motion and display conditions. A The trajectory of the target across the display screen, each dot representing one frame of the motion. The axes are approximately the visual angles subtended at the subject’s eye. B The four conditions used in experiment 1, as displayed to the subject. The subjects were always to follow the white target (hollow square) with their eyes, and the red square (shown hatched) was always the target for the cursor (large cross). The cursor was actively moved by the subject in tasks A and C, or it was passively moved in tasks B and D, following the previously recorded trajectory to maintain approximately similar retinal inputs
using distribution approximations from the theory of Gaussian fields. This characterisation is in terms of the probability that a region of the observed number of voxels (or bigger) could have occurred by chance [P(nmax>k)], or that the peak height observed (or higher) could have occurred by chance [P(Zmax>u)] over the entire volume analysed (i.e. a corrected P-value).
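The idea behind the corrected P-value can be illustrated with a Monte Carlo sketch: simulate many smooth Gaussian noise volumes and count how often the peak anywhere in the volume exceeds the threshold. The volume size, smoothness, threshold, and simulation count below are arbitrary illustrative assumptions, not the study's parameters; in practice SPM obtains P(Zmax>u) analytically from Gaussian random field theory rather than by simulation.

```python
# Monte Carlo illustration of a corrected P-value, P(Zmax > u), for the
# peak of a smooth Gaussian field. All parameter values are assumptions
# for illustration; SPM computes this analytically via random field theory.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
shape = (64, 64, 32)   # illustrative "volume" of voxels (assumed)
sigma = 2.0            # smoothing kernel in voxels, sets field smoothness
u = 4.0                # peak-height threshold (assumed)

n_sim = 100
exceed = 0
for _ in range(n_sim):
    field = gaussian_filter(rng.standard_normal(shape), sigma)
    field /= field.std()          # renormalise to unit variance
    if field.max() > u:           # did the global peak exceed u anywhere?
        exceed += 1

p_corrected = exceed / n_sim      # estimate of P(Zmax > u) over the volume
print(f"estimated P(Zmax > {u}) ~ {p_corrected:.2f}")
```

Because the maximum is taken over the whole volume, this probability is already "corrected" for the multiple comparisons implicit in searching every voxel.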
Experiment one. Cerebellar activation during co-ordinated eye and hand tracking
Subjects lay on the scanner bed with head restraint achieved by Velcro straps across their forehead and by an individually moulded bite bar covered with a vinyl polysiloxane dental plastic compound (Exafine, GC Intl., Tokyo, Japan). They viewed a back projection screen set across the scanner opening and positioned just above the subjects’ legs. A front-silvered mirror within the head coil allowed vision of the screen, and the distance from eye to
screen was about 1.5 m. A colour LCD VGA-projector (Sony VPH-12720 J) was used to display images on the screen and also to project text cues to the subjects between different tracking conditions: a single, large font word (“REST”, “MOVE” etc.) was displayed at the bottom centre of the screen for 4.4 s, indicating the changes between each tracking condition. In addition, the operator gave a verbal instruction to the subjects via pneumatic headphones at the same time as the visual instruction was displayed.
The target(s) were provided by a small light-coloured square moving in a slow, smooth and unpredictable trajectory against a black background (Fig. 1A). In all instances, the target for the eyes was a filled white square of 5×5 pixels, subtending about 0.73° at the subject’s eye, and moving within a frame subtending approximately 23×20° at the eye. The target waveform was the sum of four non-harmonically related sinusoids (0.125–0.55 Hz), chosen to satisfy approximately the 2/3 law for speed and curvature (Lacquaniti et al. 1983); average target velocity was 21.3°/s.
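A waveform of this kind can be sketched as follows. The specific frequencies, amplitudes, phases, and frame rate below are illustrative assumptions; the paper states only the 0.125–0.55 Hz band, the non-harmonic relation, and the design goal of approximating the two-thirds power law for speed and curvature.

```python
# Sketch of a 2-D target trajectory built from four non-harmonically
# related sinusoids per axis in the 0.125-0.55 Hz band. Frequencies,
# amplitudes, phases, and frame rate are illustrative assumptions.
import numpy as np

freqs_x = np.array([0.125, 0.21, 0.37, 0.55])   # Hz (assumed values)
freqs_y = np.array([0.14, 0.23, 0.41, 0.52])    # Hz (assumed values)
rng = np.random.default_rng(1)
phases_x = rng.uniform(0, 2 * np.pi, 4)
phases_y = rng.uniform(0, 2 * np.pi, 4)
amp = 4.0                                       # deg, illustrative scale

fps = 60.0                                      # frames/s (assumed)
t = np.arange(0.0, 60.0, 1.0 / fps)             # 60 s of motion

# Each axis is the mean of four sinusoids, giving a smooth,
# effectively unpredictable path within roughly +/- amp degrees.
x = amp * np.mean(np.sin(2*np.pi*freqs_x[:, None]*t + phases_x[:, None]), axis=0)
y = amp * np.mean(np.sin(2*np.pi*freqs_y[:, None]*t + phases_y[:, None]), axis=0)

# Average target speed (deg/s) from frame-to-frame displacement
speed = np.hypot(np.diff(x), np.diff(y)) * fps
print(f"mean speed = {speed.mean():.1f} deg/s")
```

The amplitude and frequency weights would then be tuned so that the resulting mean speed and curvature statistics approximate the two-thirds power law; the values above are not tuned to reproduce the 21.3°/s reported in the study.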
Movement of the subject’s hand was recorded by a modified computer mouse (PocketEgg, ELECOM, Japan), moved across a 20×20 cm urethane board and displayed on the screen as a light green cross 10×10 pixels in size. Hence, subjects performed a visual tracking task in which they attempted to track the target waveform with the cursor as accurately as possible. As explained in more detail below, there were actually two separate targets displayed on screen: one was a white “ocular” target to be followed with the eyes; the second was a hollow, light red square of 7×7 pixels, which was the target for the mouse-controlled cursor (Fig. 1B).
To monitor tracking performance in the manual tracking condition, we recorded the absolute spatial error between the cursor and the target accumulated over each 4.4 s, and also the total distance the mouse was moved every 4.4 s. Movement of the eyes could