Neural functioning. Neurons and neural circuits are oscillatory, with baseline oscillations that are modulated by influences from other neurons and neural circuits. Some kinds of neurons never fire at all, yet still modulate the activities of others. And evolution has created a virtual tool kit of temporal and spatial ranges of modulatory influences: from tiny and very fast gap junctions, to classical synapses, to volume transmitters that diffuse throughout a local population of neurons, to graded transmitter release that is not all or nothing, and so on (Bickhard & Terveen, 1995).
Such oscillatory and modulatory architectural principles are at least as powerful as classical conceptions: a limit case of one system modulating another is for the first system to switch the second on and off, and switches are sufficient for the construction of computers. They are more powerful in that they inherently provide timing, while Turing machines do not, and actual computers involve timing only in a biologically impossible form.
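The limit-case argument can be illustrated with a toy sketch (hypothetical, not from the source): one signal multiplicatively modulates an oscillator's amplitude, and when the modulating signal is pinned at its extremes of 0 and 1, graded modulation degenerates into exactly the kind of on/off switching from which classical computers can be built.

```python
import math

def modulated_oscillator(t, base_freq, modulation):
    """Oscillator whose amplitude is scaled by a modulating signal in [0, 1]."""
    return modulation * math.sin(2 * math.pi * base_freq * t)

# Graded modulation: the modulating system partially damps the oscillator.
half = modulated_oscillator(0.25, 1.0, 0.5)   # amplitude scaled to 0.5

# Limit case: modulation pinned at 0 or 1 reduces to an on/off switch.
off = modulated_oscillator(0.25, 1.0, 0.0)    # oscillator switched off
on = modulated_oscillator(0.25, 1.0, 1.0)     # oscillator fully on
```

The point of the sketch is only the containment relation: switching is the degenerate endpoint of modulation, so a modulatory architecture can do anything a switching architecture can, while also carrying graded and intrinsically timed influences.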
The idealization of neurons into threshold switches, so common in computational perspectives, or into simple activation-level transformers, as in connectionist models, is seriously unfaithful to what actually occurs in the brain (Bickhard & Terveen, 1995). The contrasting oscillatory and modulatory architecture of the interactive model is not logically forced, but it is forced by evolutionary considerations, and it is consonant with actual brain processes.
Irreversibility and normative scaling. One further biological foundation that I would like to consider arises from looking more carefully at the case of a computer controlling a robot. A robot can interact with the world, and so would seem to satisfy the
interactive condition. To be successful in its interactions, the computer would have to handle timing appropriately in some way, even if not in a biologically plausible way. If we suppose, for example, that among the robot's capabilities is the ability to detect when its batteries are running low, and to seek out power sources to replenish them when that occurs, then we would seem to have, in some minimal sense, a far-from-equilibrium system that is also minimally self-maintenant, and even recursively self-maintenant (because it can switch into and out of “power source seeking” much as the paramecium can switch into and out of “swimming”).
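This minimal sense of recursive self-maintenance can be sketched as a toy state machine (hypothetical names and thresholds, for illustration only): the robot switches its “power source seeking” mode on and off depending on battery charge, thereby maintaining the condition on which its own continued activity depends.

```python
class Robot:
    """Toy robot that switches 'power source seeking' on and off (hypothetical sketch)."""
    LOW, FULL = 20, 100  # assumed charge thresholds, in percent

    def __init__(self):
        self.charge = 100
        self.seeking = False

    def step(self):
        # Recursive self-maintenance in miniature: the robot maintains the
        # far-from-equilibrium condition (battery charge) on which its own
        # activity depends, by switching its seeking behavior on and off.
        if self.charge <= self.LOW:
            self.seeking = True
        if self.seeking:
            self.charge = min(self.charge + 30, self.FULL)  # found power, recharging
            if self.charge == self.FULL:
                self.seeking = False
        else:
            self.charge -= 10  # ordinary activity drains the battery

robot = Robot()
for _ in range(50):
    robot.step()
    assert robot.charge > 0  # switching keeps the system viable
```

The switching between modes parallels the paramecium's switching between tumbling and swimming; the question the text goes on to raise is whether this parallel is deep enough to ground emergent cognition.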
Could such a robot have emergent cognition? The contrast with the biological case arises in the fact that most of the robot's body is not far from equilibrium, cannot be self-maintained, and certainly cannot be recursively self-maintained. Conversely, the only part of the robot that is far from equilibrium, the battery, is not self-maintaining.