
THEORETICAL IMPLICATIONS OF MULTISENSORY NEURONS


 

 
 

Arturo Tozzi

MD, PhD, ASL Napoli 2 Nord, Napoli, Italy

 

ABSTRACT

Recent advances in neuronal multisensory integration shed new light on questions concerning the functional architecture of mind and the nature of mental states. 

The neocortex is a sum of multisensory neurons that form extensive forward, backward and lateral connections and collect, via “inner” receptors, progressively more elaborate and converging inputs from unimodal and heteromodal areas.  Our “plastic-connectionist” model provides a mechanistic explanation of cognitive architecture, taking into account genetic constraints, phenotypic factors and the dynamic interactions between the organism and its surrounding environment.

The inner receptors integrate the inputs into a novel output which holds a semantic, rather than syntactic, content.  The “theory of the inner receptors”, although reductionistic, takes into account the “cognitive” need for theoretical representational states and, by supplying an empirically adequate description of conceptualist models, provides a neuroanatomical framework for cognitive semantics.

Our senses do not exist in isolation from each other.  Perception, cognition and motor control are integrated already at very early levels of processing, in a densely coupled system with multisensory interactions occurring at all temporal and spatial stages.  Higher multisensory neurons are also able to re-evoke complex data in the absence of the original external stimuli, by comparing and integrating large amounts of semantic messages.

 

KEY WORDS

Multisensory integration, heteromodal cortex, pyramidal neurons, external receptor, epistemology, cognitive semantics. 

 

 

 

NEUROANATOMICAL BACKGROUND: MULTISENSORY NEURONS AND INNER RECEPTORS

 

Current advances in human neurosciences, in particular in multisensory integration, shed new light on questions concerning the status of the mental and its relation to the physical. 

Much research on the basic science of sensation asks what types of information the brain receives from the external world.  As an example we will go through the visual system, the best known and most important of the sensory systems in primates.

The retinal receptors are sensitive to simple, quantitative signals related to the external world.  The message is sent to the primary visual cortex V1[1].  Specific aspects of vision such as form, motion or colour appear to be segregated in different parallel pathways.  V1 projects to the associative areas termed “unimodal”, since they are influenced by a single sensory modality (in this case, vision).  The visual unimodal associative cortex (secondary visual cortex) is arranged in two processing streams, devoted to the control of action and to the perception of objects.  Processing channels, each simultaneously serving specialized functions and generating a topographically organized neural map of the receptive surface in the brain, are also present in the central auditory and somatosensory systems.

The message is then conveyed from unimodal areas to associative areas termed “heteromodal”, since they are influenced by more than one sensory modality (visual, auditory, somatosensory, and so on). 

A high-order heteromodal association area, the prefrontal cortex, receives highly processed inputs conveyed by all unimodal and heteromodal sensory association areas.

The neocortical sequential processing of information is apparently hierarchical, such that the initial, low-level inputs are transformed into representations and multisensory integration emerges at multiple processing stages (Werner and Noppeney 2010). 

We shall give an example of multisensory integration.  You are in Africa.  The forms, colours and movements recognized by your eyes converge in your visual unimodal areas, and you find yourself in front of a male lion (don’t worry, we are not talking about Wittgenstein’s lion) wandering in the jungle.  Wait.  It’s moving towards you…  Other sensations join your image in the heteromodal associative cortices: the roar, the quite strong odour, the warm wind, your feet trembling.  Once the messages have been processed, the memory system plays a key role in their storage and retrieval: you desperately dig out your past experiences, stored somewhere in your brain… Additional building blocks are integrated in high-order heteromodal association areas: you are frightened by the approaching lion… should you try to run away or turn your back on the animal?  Try to make and hold eye contact with the lion?  Try to appear larger?  Wave your arms and make noise?  Note that the whole process takes less than half a second.

Within the last decade, neuroscience has witnessed major advances, emphasizing the potentially vast underestimation of multisensory processing in many brain areas.  New mechanisms, such as the “unimodal multisensory neurons” (Allman et al. 2009), have been demonstrated.  In addition, multisensory interactions have been reported at system levels traditionally classified as strictly unimodal: both primary and secondary sensory areas receive substantial inputs from sources that carry information about events impacting other modalities (Hackett and Schroeder 2009). 

The current broad consensus is that the multimodal model is widely diffused in the brain and that most, if not all, higher- as well as lower-level neural processes are in some form multisensory.  Information from multiple senses is integrated already at very early levels of processing, and extensive forward, backward and lateral connections aid communication, in a densely coupled system with multisensory interactions occurring at all temporal and spatial stages (Klemen and Chambers 2012).

 

Differences among neocortical zones are mainly due to macro- and micro-circuitry, rather than to the pyramidal neurons’ intrinsic qualities, such as different neurotransmitters, fast- or regular-spiking, chattering, bursting and so on.  The pyramidal neurons of the various neocortical areas differ in their afferents.  In analogy with the McCulloch-Pitts neuronal model, every pyramidal neuron shares a functional feature: countless converging inputs activate a single output.  We term “inner receptor” the network of converging axonal inputs, microcircuitry and interneurons surrounding the pyramidal neurons.  We suggest that multisensory integration is not a feature of the pyramidal neurons, but of their inner receptors.  If the pyramidal neurons are passive “puppets” animated by the inner receptors, the inner receptor must be located in a place in which large amounts of data can be stored.  The best anatomical candidate is the arrangement of fibers, interneurons and microcircuitry surrounding the pyramidal neurons in layer IV, which receives afferents both from the sensory pathways and from other cortical areas.
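To make the McCulloch-Pitts analogy concrete, the following minimal sketch (with invented weights and threshold, not measured neuronal parameters) shows the single functional feature appealed to here: many converging inputs are collapsed into one all-or-none output.

```python
# Toy McCulloch-Pitts-style unit: many converging inputs, one output.
# Weights, threshold and input values are illustrative assumptions,
# not measured parameters of real pyramidal neurons.

def mcculloch_pitts_unit(inputs, weights, threshold):
    """Return 1 if the weighted sum of converging inputs reaches the threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# A single "pyramidal neuron" receiving five converging afferents
# (e.g., from unimodal and heteromodal sources).
afferents = [1, 0, 1, 1, 0]               # presence/absence of each input spike
synaptic_weights = [0.4, 0.2, 0.3, 0.5, 0.1]

output = mcculloch_pitts_unit(afferents, synaptic_weights, threshold=1.0)
print("single output:", output)           # countless inputs -> one all-or-none output
```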

The hypothetical pathway is shown in the Figure.  It is perhaps enough if we can sketch an account, whether or not it is correct in detail.  Step by step, the content of the single neuronal outputs turns out to be increasingly complex.  The wheel has come full circle when the highest polymodal areas send feedback messages to the primary sensory cortices.  From then on, a ceaseless exchange of feedback and feedforward messages takes place among cortical areas.  We emphasize the functional importance of the so-called “backward” connections.  The projections from higher areas to the multisensory neurons play an important role both in the maturation of multisensory integration and in complex brain functions, providing meaningful adjustments to the ongoing activity of a given area.  It has been argued that, contrary to common conjecture, backward connections might embody the causal structure of the world while forward connections only provide feedback about prediction errors to higher areas.  That is, anatomical forward connections are functional feedback connections (Klemen and Chambers 2012).  We will come back to this important topic soon.

 

 

 

INNER RECEPTORS AND CURRENT PHILOSOPHICAL ISSUES

 

Taking into account the recent experimental data, the “naturalized” statement that we cannot ask for any firmer ground than science itself on which to build epistemological claims[2], the lessons of evolutionism and Mach’s principle of economy of thought, we propose the following theoretical framework: the neocortex is a sum of multisensory neurons guided by “inner” receptors.  The latter receive progressively more elaborate and converging inputs from unimodal and heteromodal areas.  We will examine our model’s theoretical consequences for the functional architecture of mind, the nature of mental states and the theory of knowledge.

The inner receptor theory, at odds with current opinions, takes into account the content-bearing properties of physical neural networks.  Our framework “reduces” representational states to constructs of observables, thus providing a brute mechanistic, empiricist account of the neural processes of knowledge.  “We have been under the reign of Emergentism (and of nonreductive materialism, we add) since the early 1970s” (Kim 1999), and our model resembles a predictive/explanatory or representational/cognitive epistemological emergence, or a mereological ontological emergence.  That is not as easy as it sounds, though.  We treated the whole business of puppeteers (the inner receptors) controlling puppets (the pyramidal neurons) as a didactic scheme: in fact, the pyramidal neurons of the lower levels contribute with their axonal branching to the structure of the inner receptors of the higher levels.  Thus, we continue to accept the existence of inner receptors but come to see that they can be (partly) identified with neurons.  Inner receptors to a certain extent reduce to neurons.  It is a case of partial identity, of ontological reductionism, at least in the particular domain of cortical neural microanatomy.

 

 

Among the current “paradigms” of cognitive architecture[3], we favour the connectionist[4] one.  It has recently received further experimental corroboration.  Using diffusion tensor imaging to map connections in human brains, twelve highly interconnected bilateral hubs were identified in the hippocampus, thalamus, putamen and in the precuneus, superior frontal and superior parietal cortices (Van den Heuvel and Sporns 2011).  The high interconnectivity suggests that these hub nodes function as an integrated unit and are important for communication between cortical areas.
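As a purely didactic illustration of what it means to single out “highly interconnected hubs”, the sketch below builds a toy graph of brain regions and flags as hubs the nodes whose degree exceeds the graph’s mean degree; the regions and edges are invented and do not reproduce the connectome data of Van den Heuvel and Sporns (2011).

```python
# Minimal sketch: identify "hub" nodes in a toy brain-region graph by degree.
# The regions and edges are illustrative, not the actual human connectome.

from collections import defaultdict

edges = [
    ("precuneus", "thalamus"), ("precuneus", "hippocampus"),
    ("precuneus", "sup_frontal"), ("precuneus", "sup_parietal"),
    ("thalamus", "putamen"), ("thalamus", "sup_frontal"),
    ("hippocampus", "sup_parietal"), ("putamen", "sup_frontal"),
    ("V1", "sup_parietal"),
]

degree = defaultdict(int)
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

# Call a node a "hub" if its degree exceeds the mean degree of the graph.
mean_degree = sum(degree.values()) / len(degree)
hubs = sorted(node for node, d in degree.items() if d > mean_degree)
print("hubs:", hubs)
```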

However, our multisensory approach, which we term “plastic connectionism”, differs considerably from connectionist theories and also embraces features of other models.  The inner receptors are able to work not only at the brain’s low-level processes (such as perception, motor control and simple mnemonic processing), like neural networks and “fuzzy” control systems, but also at the so-called high-level processes (such as planning and reasoning), like “classical” models of artificial intelligence.

We admit with connectionism that hierarchical systems are a central feature of modern theories about the brain, but we argue that the sensory modalities form a broadly interconnected network with multiple forward, backward and lateral connections rather than, as previously assumed, a system of parallel sensory streams that converge at higher levels.  The key difference is whether the hierarchy comprises distinct and parallel streams that converge at higher levels, or whether there are dense interactions between all sensory streams, even at lower levels (Klemen and Chambers 2012). 

Against psychological nativism, we do not postulate the complete innateness of cognitive functions and concepts.  We hold to the “interactionist consensus” that all mental features depend on both genetic and phenotypic factors.  Just as evolution allows mutational forces to modify the genome and provides large degrees of freedom to phenotypic and epigenetic factors, the brain allows the synapses to associate with few restrictions, insofar as probabilistic, genotypic and developmental constraints permit.  We do not view the neural machinery as entirely genetically fixed or as a deterministic Turing machine, but as a plastic device which dialectically interacts with the surrounding environment.  As stated above, we support a model that assumes rich crossmodal interconnectedness.  Our hypothesis, based on dynamic inner receptorial systems which are the true “puppeteers” of both our sensations and our brain processing, requires a neural machinery operating in embodied interaction with the structured environment in which the organism is embedded, in touch with the “counter-revolution” to the cognitive revolution.  Simulations of multisensory interactions at the neuronal level indicate that the simple effect of multisensory convergence is sufficient to generate multisensory properties like those of biological multisensory neurons (Lim 2011): this implies a degree of genetic constraint.  Nonetheless, the multisensory responses can almost never be predicted on the basis of a simple linear addition of their unisensory response profiles: the interactions may be superadditive, subadditive, or show inverse effectiveness, and they may change during an experiment (Krueger et al. 2009, Perrault et al. 2011).  This demonstrates that “multisensory experiences” are a plastic capacity of a network which interacts with the environment in a “diachronic” process via feedforward-feedback circuitry, thus allowing animals to improve their reactions to immediate dangers and to adapt to rapidly varying events of the external world.  Indeed, a deeper investigation of the two visual pathways of the secondary cortex (the “where” system and the “what” system) shows that their interconnections in temporal heteromodal areas form distributed “dialectical” networks and provide strong evidence for a common and simultaneous neural substrate for action and perception, “praxis” and “theory”.  Thus, our theory suggests that the brain is richly interconnected at all processing levels.  Functional interplay between the senses occurs at all anatomical levels of the processing hierarchy and across a varied time course.  Only rarely do certain conditions engender the predominant overall direction of information processing described by functionalism and connectionism.
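The superadditive/subadditive distinction invoked above can be made explicit with a small worked sketch.  One common convention expresses multisensory enhancement as the percentage gain of the combined response over the best unisensory response, and classifies the interaction against the sum of the unisensory responses; the firing rates below are invented for illustration.

```python
# Illustrative comparison of a multisensory response with its unisensory components.
# Firing rates (spikes/s) are invented numbers, not recorded data.

def enhancement_index(multi, uni_a, uni_b):
    """Percent enhancement of the multisensory response over the best unisensory one."""
    best_unisensory = max(uni_a, uni_b)
    return 100.0 * (multi - best_unisensory) / best_unisensory

def additivity(multi, uni_a, uni_b):
    """Classify the interaction relative to the sum of the unisensory responses."""
    if multi > uni_a + uni_b:
        return "superadditive"
    if multi < uni_a + uni_b:
        return "subadditive"
    return "additive"

visual, auditory = 8.0, 5.0            # unisensory firing rates
for multisensory in (20.0, 10.0):      # two hypothetical combined responses
    print(f"multi={multisensory:5.1f}  "
          f"enhancement={enhancement_index(multisensory, visual, auditory):6.1f}%  "
          f"{additivity(multisensory, visual, auditory)}")
```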

 

“Puppets and puppeteers” allow us to place metaphor within our cognitive theory.  If a metaphor is a linguistic expression in which at least one part of the expression is transferred from one domain of application (the source domain), where it is common, to another (the target domain) in which it is unusual (Bailer-Jones 2002), then there are formal analogies with the function of the inner receptors.  A metaphor is likely produced by the convergence, on a group of multisensory neurons, of both literal and figurative messages.  The heteromodal integration, aiming to gain through the metaphor an insight that no literal paraphrase could ever capture, links together in a linguistic expression the domain of intuitive, common-sense understanding of the “literal” and the domain of application of the “figurative”.

Nevertheless, the peculiar nature of multisensory integration leads to a radical rethinking of mental representation and allows us to go so far as to draw a more radical conclusion.  Our framework, which we term the “multisensory semantic theory”, builds a bridge between representationalism and anti-representationalism, between the functionalist computational conception of mind and the Chinese room argument.  The theory of inner receptors, although mechanistic, takes into account the “cognitive” need for theoretical representational states and, by supplying an empirically adequate description of conceptualist theories, provides a neuroanatomical framework for cognitive semantics.  Can a single output contain more information than each of its countless inputs?  The (positive) answer to this crucial question has important theoretical consequences for cognition, concerning both information and knowledge.  Against Fodor’s functionalism, we do not maintain that thinking and other mental processes consist primarily in computational operations on the “syntax” of representations.  The inner receptor is the possible link between the “semantic” notion of mental content and the computational notion of functional architecture.  Our framework does not exclude that the layer IV networks can operate via computational models (e.g., Boolean logic), but it gives the countless inputs the opportunity to solve non-computable functions (impossible for a universal Turing machine) and to “fabricate” a cognitive process.  The inner receptors extract the salient, meaningful information from different sensations and “melt” it into a novel message, which holds a “semantic” content.  The novel message’s representational content, rather than a neopositivist syntactic axiomatic proposition in first-order logic, is a (realistic) semantic notion equated to an abstract, theoretic model of the real world, an idealized and schematized representation of convergent data.  Consider the classical “leg of the table”.  The leg “appears to us at first a single, indivisible whole.  One name designates the whole”.  But “the apparently indivisible thing is separated into parts” (Mach 1885: 41): the word “leg” is not strictly a literal source domain, because it already contains the “merging” and the integration of several features, such as the colour, the shape, the hardness, the related evoked feelings.  Paraphrasing Saussure, sensations and their meanings are relationally defined by comparing and contrasting their meanings with one another.  For instance, the sound images for, and the conception of, a book differ from the sound images for, and the conception of, our leg.  The arrangement of the inner receptor lets us hypothesize that our “naïve” thoughts are semantic, not syntactic, and that in our mental processes the metaphor precedes the axiomatization and the symbols of first-order predicate logic.  The original semantic relations between symbols are followed by formal syntactic rules, and logical thinking can then be established.  This gives further support to the negative answer given by Church’s theorem to Hilbert’s Entscheidungsproblem, which asked for an algorithm that decides the validity of any statement of first-order logic.  Scientific data confirm that heteromodal integration is “semantic” and show which cortical networks are most likely to mediate these effects, with temporal cortices being more responsive to semantically congruent, and frontal cortices to semantically incongruent, audio-visual stimulation (Doehrmann and Naumer 2008).
Other papers suggest that multisensory facilitation is associated with posterior parietal activity as early as 100 ms after stimulus onset; when participants are required to classify multimodal stimuli into semantic categories, multisensory processes extend into cingulate, temporal and prefrontal cortices 400 ms after stimulus onset (Diaconescu et al. 2011).  Like the connectionists, our framework suggests that the schemes are learned via exposure to data but, against the connectionists, it is also able to account for linguistic competence.

 

Inner receptors have astonishing connections with Gärdenfors’ (2011) tenets of cognitive semantics, although with a few exceptions, and provide a bridge between realistic semantics (the meaning of an expression is something out there in the world) and cognitive semantics (meanings are expressions of mental entities, independent of truth).

The first tenet states that meaning is conceptualization in a cognitive system (not truth conditions in possible worlds).  We share the view that “meanings are in the head”, but we do not agree that “meaning comes before truth and is independent of truth” (see our theory of knowledge below).

The second tenet states that cognitive models are perceptually determined, and experimental data fully agree.  Meaning is not independent of perception or of bodily experience or, we add, of peripheral or inner receptors. 

The third and fourth tenets are intriguing.  They state that semantic elements are based on spatial or topological rules (forming a Gärdenfors “conceptual space”, similar to Langacker’s “domains”) (Gärdenfors 2000, Kuhn 2003), not on symbols, and that cognitive models are not propositions but primarily image schemas, which are transformed by metaphoric and metonymic operations (Gärdenfors 2011).  This bears a resemblance to the anatomical arrangement and function of our inner receptor.  Experimental data provide strong evidence for a complex interdependency between spatial location and temporal structure in determining the ultimate behavioural and perceptual outcome associated with a paired multisensory stimulus (Stevenson et al. 2012).  This experience of time and of two- or three-dimensional spatial configurations, and the expressions pertaining to higher levels of conceptual organization, can be explained by a mapping of their components onto the inner receptors.  Langacker’s image schemas (“imagistic concepts which are abstracted from pre-conceptual bodily experience, function as constituents of more complex notions, and provide the structure projected metaphorically to more abstract domains”) bear a close resemblance to multisensory integration.

Blending is a mainstay of cognitive semantics, applicable to a wide variety of experiences, both linguistic and non-linguistic (Langacker 2008).  Blending refers to a “mental” space configuration in which elements of two input spaces are projected into a third space, the blend, which thus contains elements of both but is distinct from either one (Fauconnier 1997).  If we take this space to be not “mental” but “anatomical”, the blend turns into the inner receptor.  Rather than just combining predicates, both the semantic model and the inner receptor blend behaviour from multiple (mental or anatomical) spaces and explain “apparently” emergent phenomena.
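If the blend is read as a data structure rather than a mental space, the operation can be caricatured as follows; the two input spaces and their features are invented for illustration (a variant of the familiar “the CEO steers the company” blend).

```python
# Toy sketch of "blending": elements of two input spaces projected into a third
# space that contains features of both but is distinct from either.
# The feature sets are illustrative only.

input_space_1 = {"domain": "ship", "agent": "captain", "path": "ocean route"}
input_space_2 = {"domain": "business", "agent": "CEO", "goal": "profit"}

blend = {
    "agent": input_space_2["agent"],        # taken from input space 2
    "action": "steers",                     # emerges in the blend itself
    "path": input_space_1["path"],          # taken from input space 1
    "target_domain": input_space_2["domain"],
}
print(blend)   # a new configuration distinct from either input space
```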

 

 

Multisensory neurons receiving convergent inputs from multiple sensory modalities (independent sources of information, or “cues”) integrate information from different senses to profoundly increase their sensitivity to external events (Krueger et al. 2009).  When several cues are available, combining them facilitates the detection of salient events and reduces perceptual uncertainty (Fetsch et al. 2010).  Even if researchers believe that the primary benefit of multisensory neurons is to guide behaviour, they are also able to perform complex brain activities[5].  In view of Avenarius’ principle of minimum expenditure of effort, the higher brain activities may be explained by a sole factor: the inner receptor.  Gödel’s suggestion of abstract terms converging more and more towards infinity in the sphere of our understanding takes us straight to the prefrontal cortex’s multisensory convergence.  Operating in a frame of highly sophisticated multisensory neurons and concentrating a massive amount of data in a relatively small space, the prefrontal cortex is involved in orientation, in the law of prior entry[6], in decision making on the basis of current and past knowledge, in the planning and sequencing of actions, and in personality, while at the same time other cerebral circuits supply “flavours” such as affective or emotional significance.

 

Our mind has two ways to cope with causal inference, both involving multisensory integration.

First, the response to external and internal stimuli.  Inferring which signals have a common underlying cause, and hence should be integrated, represents a primary challenge for a perceptual system dealing with multiple sensory inputs.  The brain is able to efficiently infer the causes underlying our sensory events.  Experiments demonstrate that we use the similarity in the temporal structure of multisensory cue combinations to infer from correlation the causal structure as well as the location of causes (Parise et al. 2012).  This capacity is not limited to conscious, high-level cognition; it is also performed continually and effortlessly in perception (Körding et al. 2007), which suggests that causality is inferred through a behavioural mechanism.  In cases of abnormal multisensory experience (visual-auditory spatial disparity), an atypical multisensory integration has been demonstrated (Stein et al. 2009).  We can thus hypothesize that the inference, once “fixed” in our inner receptors, forms a sort of “niche construction” which constitutes common sense and, as constructivism states, refuses to accept changes.  Incidentally, growing evidence suggests that multisensory integration is achieved by Bayes-optimal integration (Klemen and Chambers 2012)[7].  If we hypothesize that inner receptors act like Bayesian agents and base their probabilistic choices on both previous individual and evolutionary experience, then the inductive-probabilistic Bayesian approach could explain, despite its limits (the problem of old evidence, too wide a role for subjective opinion), how the brain confirms its beliefs.
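Bayes-optimal integration of two cues can be sketched with the textbook case of two Gaussian estimates of the same quantity: each cue is weighted by its reliability (inverse variance), and the combined estimate is never less precise than either cue alone.  The numerical values below are illustrative assumptions, not a model of any specific experiment.

```python
# Minimal sketch of Bayes-optimal (maximum-likelihood) integration of two cues,
# each modelled as a Gaussian estimate of the same quantity (e.g., a location).
# The means and variances below are invented for illustration.

def integrate(mu_a, var_a, mu_b, var_b):
    """Reliability-weighted combination of two Gaussian cues."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_b)   # weight of cue A
    w_b = 1 - w_a                                  # weight of cue B
    mu = w_a * mu_a + w_b * mu_b                   # combined estimate
    var = 1 / (1 / var_a + 1 / var_b)              # combined (reduced) variance
    return mu, var

visual = (10.0, 1.0)     # (mean location in degrees, variance) of the visual cue
auditory = (14.0, 4.0)   # the noisier auditory cue

mu, var = integrate(*visual, *auditory)
print(f"combined estimate: {mu:.2f} deg, variance {var:.2f}")
# -> biased toward the more reliable visual cue, with lower variance than either cue
```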

Second, abstraction.  Truth conditions cannot be considered crucial for this kind of representational content.  Our brain is able to create its own representational endowment or potential without recourse to external objects or states of affairs, such as causal antecedents or evolutionary history (Grush 2002: 286).  How can our novel multisensory output abstract both causal and non-causal events and introduce new multifaceted elements into the thinking process?  The answer rests on the inner receptors, which also contain storage and retrieval systems (Stein et al. 2009).  Possibly, heteromodal neurons are able to re-evoke complex data even in the absence of the original external stimulus, by comparing and integrating many novel semantic messages.  Through inner receptors, the brain “looks without seeing, listens without hearing” (Leonardo da Vinci).  As stressed above, each inner receptor receives inputs not only from the external world and the primary and secondary cortices, but also from the highest cortical areas.  The role of the latter, something similar to the back-propagation of neural networks, could be crucial in activating cortical multisensory neurons of lower levels even in the absence of external stimuli.  Data from the literature suggest this possibility.  Two important temporal epochs have been described in the visual, auditory and multisensory responses to stimuli: an early phase during which there are weak or no unisensory responses yet a defined multisensory response, and a late period after which the unisensory responses have ended and the multisensory response remains (Royal et al. 2010).

 

Do “pure” abstractions, such as ideas, scientific theories and religiosity, exist outside our mind?  Are the fundamental principles of all knowledge the starting point of the investigation, or its final result?  Must science yield results?  We have been taught by the last century that there are no simple answers to complex questions, and we are aware that there must be pragmati(cisti)c explanations provided in an efficacious manner (Machamer 2002).  We must not fail to remember the lessons of Darwinism.  Although evolutionary approaches have been the subject of criticism (paraphrasing Jacobs 2009, researchers have had little difficulty in identifying potential advantages that might explain why a characteristic evolved in the way that it did, but so far they have had less success in demonstrating whether any single set of circumstances may provide a general explanation), the survival value of biological processes is self-evident, consistent with common scientific belief, with biosemantics and so on.  The usefulness and effectiveness of abstract thoughts are more noticeable when integrated in an evolutionistic framework, because they increase evolutionary fitness.  Religion is an example[8].

 

 

A shift in conceptualization is evident in a theory of knowledge based on multisensory integration: it is no longer the default position to assume that perception, cognition and motor control are unisensory processes.  Our senses do not exist in isolation from each other and, in order to fully understand epistemology, the processes in question must be studied in a naturalistic, multisensory context (Klemen and Chambers 2012).

Is knowledge innate or acquired?  The problem is difficult to tackle, because in adults it is no longer possible to recognize the data as a priori or a posteriori: once the process takes place, things keep starting over and over again from square one, in a ceaseless feedback-feedforward process involving the whole neocortex.  The only way to respond to this ancient question is by looking at multisensory integration in the first weeks after birth.  Contrary to the expectations of current developmental theories, which stress innate knowledge and neglect the potential contributions of experience in guiding the acquisition of sensations, it has recently been demonstrated that the process of multisensory integration is absent in the superior colliculus of the newborn brain (Stein and Rowland 2011, Royal et al. 2010).  At birth, multisensory integration resembles a tabula rasa.  It does not exist at all.  In the early weeks and months, a convergent processing occurs: the first information from the external world progressively reaches the unimodal sensory cortices, the heteromodal cortices and the higher associative cortices.  The neurons develop their capacity to engage in multisensory integration, which determines whether the stimuli are to be integrated or treated as independent events, only gradually during postnatal life, as a consequence of three causes: early sensory experience, extensive experience with heteromodal cues and, above all, the maturation of cooperative interactions between the superior colliculus and the association cortex (Burnett et al. 2007, Stein et al. 2009).  The ability of a superior colliculus multisensory neuron to integrate its heteromodal inputs is not a “given”.  It depends on the presence of influences from the cortex, specifically required by the neurons to engage in multisensory integration (Stein et al. 2009)[9].  Without such experience, neurons become responsive to multiple sensory modalities but are unable to integrate different sensory inputs (Xu et al. 2012).

Another finding provides further evidence that infants learn representations from real-world experience.  Concepts of objects as enduring and complete across space and time have been documented in infants within several months after birth (Johnson et al. 2003).  It is once again worthy of note that each inner receptor, the “puppeteer” of the system, receives inputs not only from the external world and the unimodal cortices, but from the higher areas too.  The latter projections could play an important role in the early development of mental representations.  The above-mentioned experiment (Johnson et al. 2003) also demonstrates that very brief training facilitated the formation of object representations in 4- and 6-month-old infants, while older infants received no additional benefit from training, most likely because they entered the task already capable of forming robust object representations under these conditions.

 

An astonishing piece of evidence shows that perceptions (in our case, colours) are not produced by the central nervous system, but by… an underestimated player: the peripheral receptor.  Phenomenal knowledge is an intrinsic property of the peripheral receptor itself.  Just as the inner receptors are fundamental for the higher cerebral activities, the peripheral receptors have a prominent functional role in the sensory pathways.  The appearance of a new dimension of sensory experience has been astoundingly demonstrated, based solely on gene-driven changes in receptor organization[10].

Knowledge is acquired, and an inductive process is required along the whole visual pathway, starting straight away in the first stations of the visual system and continuing throughout the whole neocortex.  The human retina transmits data to the brain at a rate of about 10 million bits per second (McLean et al. 2006), while the memory capacity of the brain is tentatively estimated at 1 to 1,000 terabytes[11]: given the huge amount of afferent data, only the information recognized as noteworthy can be preserved, through a mechanism of convergence.  We agree with Mach (1885) when he states that objects are “combinations or complexes of sensations”: a hint of “multisensory integration”?
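The need for convergence can be made vivid with a back-of-the-envelope calculation.  The waking hours and lifespan below are assumptions chosen only to illustrate the order of magnitude; the data rate and the memory range are the estimates cited above.

```python
# Back-of-the-envelope: raw retinal throughput versus estimated brain storage.
# Waking hours and lifespan are illustrative assumptions; the data rate and
# memory range are the estimates cited in the text.

RETINAL_RATE_BITS_PER_S = 10e6        # ~10 million bits per second (McLean et al. 2006)
WAKING_HOURS_PER_DAY = 16             # assumption
YEARS = 80                            # assumption

seconds_awake = WAKING_HOURS_PER_DAY * 3600 * 365 * YEARS
lifetime_bits = RETINAL_RATE_BITS_PER_S * seconds_awake
lifetime_terabytes = lifetime_bits / 8 / 1e12

print(f"raw visual input over a lifetime: ~{lifetime_terabytes:,.0f} TB")
print("estimated brain capacity: ~1 to 1,000 TB")
# Even the upper capacity estimate falls short of the raw input,
# so convergence and selection of salient information are unavoidable.
```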

 

In (partial) accordance with Giere’s perspectival realism, we agree that our senses capture only bits and selected aspects of reality, representing the world from particular and limited points of view[12].  From a distinctively human perspective, the external world is filled with a (metaphorical) anti-intuitive “dark matter”.  However, a realistic fraction, albeit incomplete, reaches our brain from outside.  The type of realism suggested by our multisensory semantic theory implies no isomorphism between reality and appearance.  This time, it fits better with realistic than with cognitive semantics.  Our “curtailed” sensations, experiences and knowledge, although never providing accurate or complete representations, supply successful adaptive responses: we do not perceive the truth of things, but the utility of things.  “A truth is like a map, which does not copy the ground, but uses signs to tell us where to find the hill, the stream and the village” (Russell 2005).  We agree with situated cognition that learning can be seen in terms of an individual’s increasingly effective performance across situations rather than in terms of an accumulation of knowledge, since what is known is co-determined by the agent and the context.  Langacker (2008) too emphasizes the dynamic nature of our concepts and the process of constructing them (more an activity than a result).  The meaning of an expression has to do with how the listener understands the expression, not with its truth.  Meaning is a product of minds functioning interactively via an interconnected brain model, and our mental constructions or conceptualizations derive from our embodied experience as physical and social creatures in the world.  Paraphrasing Saussure, multisensory elements can be defined in terms of their function rather than their inherent qualities, and they have a social nature in that they provide a larger context for the analysis, determination and realization of their message.  Langacker’s account is in accordance with our “dialectical” view of the inner receptor.  The multisensory experience is a plastic capacity of the network interacting with the environment in a dynamic pathway.

Our senses can be aided and enhanced both by technological devices (thermography, PET and fMRI, telescopes) and by cultural developments (social networks, globalization).  A medium affects the society in which it plays a role not by the content delivered over the medium, but by the characteristics of the medium itself (McLuhan 1964).  If a change in a peripheral or inner receptor modifies our perception of the external world, likewise a change in the environmental, technological or cultural fields modifies our cognitive faculties and our perception of the world.  Media themselves, not the content they carry, should be the focus of knowledge (McLuhan 1964).  Recent studies demonstrate that neurons retain sensitivity to heteromodal experience well past the normal developmental period for acquiring multisensory integration capabilities.  Experience surprisingly late in life was found to rapidly initiate the development of multisensory integration, independently of an alert brain.  Use is a major factor driving the plasticity of cortical processing and cortical maps (Lissek et al. 2009), whereby the neurons learn a multisensory principle based on the specifics of experience and can apply it to new incoming stimulus conditions (Yu et al. 2010).

 

In our mind, materialism and behaviourism meet idealism and constructivism.  Experimental data agree with materialism that we proceed from things to sensation and thought.  In addition, the behaviourist notion of polysynaptic reflex pathways (linking afferent sensations and efferent motor responses via the neocortex) is still extant, of course within limits.  “Our behavior is shaped in response to stimuli in our environment, and the environment that we know is created in the brain from our senses: sight, sound, smell, taste, touch, pain and the sensation of body movements” (Amaral 2000).  On the other side, idealism claims that objects do not exist “without the mind” and that “without subject there is no object”.  We claim instead: “without subject there is no subjective representation of the object”.  And, from the subject’s point of view, what is more important than his own account?  Truth depends on perspective.  “A colour is a physical object when we consider its dependence upon its luminous source; regarding, however, its dependence upon the retina, it becomes a psychological object, a sensation.  Not the subject, but the direction of our investigation, is different in the two domains” (Mach 1885: 14-15).

 

 

We would like to bring this discussion to an end with a methodological remark.

It has been suggested that theoretical conclusions cannot be drawn from experiments.  There are epistemically significant differences between observing without interfering and setting up an experiment, not to mention the problems of informational, causal and covariational accounts.  In our case, the responses of multisensory neurons are traditionally characterized solely in terms of a set of empirical principles.  The lack of knowledge about heteromodal integration at the neuronal level is due to the difficulty, for biological experiments, of manipulating and testing the connectional parameters that define convergence.  How, then, can we be sure that the experiments deliver the true mechanisms of multisensory integration?

In our view, a scientific-phenomenological approach is to be supported.  As we stated above, our epistemological claims are not derived from philosophical skepticism, but from experimental results in neuroscience.  On the other hand, promising empirical advances are forthcoming.  Cutting-edge methods have been proposed, from oscillatory phase coherence to multi-voxel pattern analysis as applied to multisensory research at different system levels; furthermore, the yet-to-be-realized TMS-adaptation paradigm holds promise for distinguishing overlapping unimodal populations from those that are multimodal (Klemen and Chambers 2012).

Last but not least, two computational frameworks that account for multisensory integration have recently been established (Magosso et al. 2008, Ohshiro et al. 2011).  They may in the future provide considerable experimental support and a unifying computational account of the features of multisensory neurons.
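To give a flavour of such frameworks, the sketch below caricatures a divisive-normalization account: the summed cross-modal drive passes through an expansive nonlinearity and is divided by a normalization term, which yields superadditive combinations for weak stimuli and subadditive ones for strong stimuli.  All parameters are assumed, and the normalization pool is simplified to the unit’s own drive; this is not a reimplementation of the published models (Magosso et al. 2008, Ohshiro et al. 2011).

```python
# Schematic sketch in the spirit of divisive-normalization models of multisensory
# integration (cf. Ohshiro et al. 2011). Parameters are assumed, and the
# normalization pool is simplified to the unit's own drive.

def unit_response(visual, auditory, w_v=1.0, w_a=1.0, sigma=1.0, r_max=50.0, n=2):
    """Square the summed cross-modal drive, then divide by a normalization term."""
    drive = (w_v * visual + w_a * auditory) ** n
    return r_max * drive / (sigma ** n + drive)

for v, a in [(0.2, 0.2), (5.0, 5.0)]:                 # weak vs strong stimulus pair
    uni_sum = unit_response(v, 0.0) + unit_response(0.0, a)
    multi = unit_response(v, a)
    regime = "superadditive" if multi > uni_sum else "subadditive"
    print(f"v={v}, a={a}: multisensory={multi:.1f}, "
          f"sum of unisensory={uni_sum:.1f} -> {regime}")
```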

 

 

 

 

 

BIBLIOGRAPHY

 

1.      Allman BL, Keniston LP, Meredith MA (2009) Not just for bimodal neurons anymore: the contribution of unimodal neurons to cortical multisensory processing. Brain Topogr 21(3-4):157-67

2.      Amaral DG (2000) The Anatomical Organization of the Central Nervous System.  In: Kandel ER, Schwartz JH, Jessell TM (eds) Principles of Neural Science.  McGraw-Hill Companies

3.      Bailer-Jones (2002) Models, Metaphors and Analogies.  In: Machamer P, Silberstein M (eds) The Blackwell guide to the philosophy of science. Blackwell, Oxford

4.      Burnett LR, Stein BE, Perrault TJ Jr, Wallace MT (2007) Excitotoxic lesions of the superior colliculus preferentially impact multisensory neurons and multisensory integration. Exp Brain Res 179:325–338

5.      Diaconescu AO, Alain C, McIntosh AR (2011) The co-occurrence of multisensory facilitation and cross-modal conflict in the human brain. J Neurophysiol 106(6):2896-909

6.      Doehrmann O, Naumer MJ (2008) Semantics and the multisensory brain: how meaning modulates processes of audio-visual integration.  Brain Res 25;1242:136-50

7.      Evans EM (2001) Cognitive and contextual factors in the emergence of diverse belief systems: creation versus evolution. Cogn Psychol 42: 217–266

8.      Fauconnier G (1997) Mappings in Thought and Language.  Cambridge University Press 

9.      Fernandez AA, Morris MR (2007) Sexual Selection and Trichromatic Color Vision in Primates: Statistical Support for the Preexisting-Bias Hypothesis.  Am Nat 170(1):10-20

10.   Fetsch CR, DeAngelis GC, Angelaki DE (2010) Visual-vestibular cue integration for heading perception: Applications of optimal cue integration theory. Eur J Neurosci 31(10):1721–1729

11.   Gärdenfors P (2000) Conceptual Spaces: The Geometry of Thought.  The MIT Press, Cambridge.

12.   Gärdenfors P (2011) Six Tenets of Cognitive Semantics.  https://www.ling.gu.se/~biljana/st1-97/tenetsem.html.

13.   Goodman RB (2005) Pragmatism: Critical Concepts in Philosophy, Volume 2.  Taylor & Francis, Routledge Ed, New York p. 345. 

14.   Grush R (2002) Cognitive Science. In: Machamer P, Silberstein M (eds) The Blackwell guide to the philosophy of science. Blackwell, Oxford 

15.   Hackett TA, Schroeder CE (2009) Multisensory integration in auditory and auditory-related areas of Cortex.  Hear Res 258(1-2):72–79 

16.   Holldobler B, Wilson EO (2002) Formiche - storia di un’esplorazione scientifica.  Adelphi, Milano 

17.   Jacobs GH, Williams GA, Cahill H, Nathans J (2007) Emergence of Novel Color Vision in Mice Engineered to Express a Human Cone Photopigment. Science 315,23:1723-25

18.   Jacobs GH (2009) Evolution of colour vision in mammals. Phil Trans R Soc B 364: 2957-67 

19.   Johnson SP, Amso D, Slemmer JA (2003) Development of object concepts in infancy: Evidence for early learning in an eye-tracking paradigm. Proc Natl Acad Sci USA 100(18):10568-73

20.   Kapogiannisa D, Barbeya AK, Sua M, et al. (2009) Cognitive and neural foundations of religious belief. PNAS 106,12: 4876–4881   

21.   Kim J (1999) Making Sense of Emergence.  Philosophical Studies Volume 95, (1-2): 3-36

22.   Klemen J, Chambers CD (2012) Current perspectives and methods in studying neural mechanisms of multisensory interactions. Neurosci Biobehav Rev 36(1):111-133

23.   Körding KP, Beierholm U, Ma WJ, et al. (2007) Causal inference in multisensory perception. PLoS One 26;2(9):e943

24.   Krueger J, Royal DW, Fister MC, Wallace MT (2009) Spatial receptive field organization of multisensory neurons and its impact on multisensory interactions.  Hear Res 258(1-2):47–54 

25.   Kuhn W (2003) Why Information Science needs Cognitive Semantics- and what it has to offer in return.  Written while visiting the Meaning and Computation Laboratory Department of Computer Science and Engineering University of California at San Diego. DRAFT Version 1.0, March 31 

26.   Langacker RW (2008) Cognitive Grammar- A Basic Introduction. Oxford University Press, New York

27.   Lissek S, Wilimzig C, Stude P et al (2009) Immobilization impairs tactile perception and shrinks somatosensory cortical maps. Curr Biol 19(10):837-42

28.   Mach E (1885, 1896) Analysis of the sensations.  The Open Court Publishing Company, Chicago

29.   Machamer P (2002) A brief historical introduction to the philosophy of Science.  In: Machamer P, Silberstein M (eds) The Blackwell guide to the philosophy of science. Blackwell, Oxford, p. 13

30.   Magosso E, Cuppini C, Serino A, Di Pellegrino G, Ursino M (2008) A theoretical study of multisensory integration in the superior colliculus by a neural network model.  Neural Netw 21(6):817-829.

31.   Mancuso K, Hauswirth WW, Li Q, et al. (2009) Gene therapy for red-green colour blindness in adult primates. Nature 461, 7265:784-7 

32.   McLean J, Freed MA, Segev R et al (2006) How much the eye tells the brain. Curr Biol 25;16(14):1428-34

33.   McLuhan M (1964) Understanding Media: The Extensions of Man. McGraw Hill, New York

34.   Nieuwenhuys R, Voogd J, van Huijzen C (2008) The Human Central Nervous System.  Springer, Heidelberg

35.   Xu J, Yu L, Rowland BA, Stanford TR, Stein BE (2012) Incorporating cross-modal statistics in the development and maintenance of multisensory integration. J Neurosci 15;32(7):2287-98

36.   Ohshiro T, Angelaki DE, DeAngelis GC (2011) A normalization model of multisensory integration.  Nat Neurosci 14(6):775-82

37.   Parise CV, Spence C, Ernst MO (2012) When correlation implies causation in multisensory integration.  Curr Biol 10;22(1):46-9

38.   Peirce CS (1892, 2011) Philosophical Writings of Peirce. Buchler J (eds), Courier Dover Publications, p. 344

39.   Perrault TJ Jr, Stein BE, Rowland BA (2011) Non-stationarity in multisensory neurons in the superior colliculus.  Front Psychol 2:144

40.   Royal DW, Carriere BN, Wallace MT (2009) Spatiotemporal architecture of cortical receptive fields and its impact on multisensory interactions. Exp Brain Res 198(2-3):127-36

41.   Royal DW, Krueger J, Fister MC, Wallace MT (2010) Adult plasticity of spatiotemporal receptive fields of multisensory superior colliculus neurons following early visual deprivation- Restor Neurol Neurosci 28(2):259-70

42.   Stein BE, Perrault TJ Jr, Stanford TR, Rowland BA (2009) Postnatal experiences influence how the brain integrates information from different senses. Front Integr Neurosci 3,21

43.   Stein BE, Rowland BA (2011) Organization and plasticity in multisensory integration: early and late experience affects its governing principles.  Prog Brain Res 191:145-63 

44.   Stevenson RA, Fister JK, Barnett ZP, Nidiffer AR, Wallace MT (2012) Interactions between the spatial and temporal stimulus factors that influence multisensory integration in human performance. Exp Brain Res. Mar 24, Epub ahead of print

45.   The Chimpanzee Sequencing and Analysis Consortium (2005) Initial sequence of the chimpanzee genome and comparison with the human genome.  Nature 437: 69-87

46.   Yoder L (2009) Explicit Logic Circuits Discriminate Neural States. PLoS ONE 4(1): e41-54 

47.   Yu L, Rowland BA, Stein BE (2010) Initiating the development of multisensory integration by manipulating sensory experience. J Neurosci 7;30(14):4904-13

48.   Van den Heuvel MP, Sporns O (2011) Rich-club organization of the human connectome.  J Neurosci 31(44):15775-86

49.   Werner S, Noppeney U (2009) Superadditive responses in superior temporal sulcus predict audiovisual benefits in object categorization. Cereb Cortex 20(8):1829-42   

50.   Werner S, Noppeney U (2010) Distinct functional contributions of primary sensory and association areas to audiovisual integration in object categorization. J Neurosci 17;30(7):2662-75