Some science behind the scenes

Sight

Filtering by the eyes

Electromagnetic radiation is classified into several types according to the frequency of its wave; these types include (in order of increasing frequency and decreasing wavelength): radio waves, microwaves, terahertz radiation, infrared radiation, visible light, ultraviolet radiation, X-rays and gamma rays. The chart below shows the types of waves and their wavelengths.


A very small window of frequencies is sensed by the eye; this is what we call the visible spectrum, or light. 

Electromagnetic radiation with a wavelength between approximately 400 nm and 700 nm is detected by the human eye and perceived as visible light. As you can see from the chart, this is a tiny proportion of the total radiation which ‘surrounds’ us. Natural sources produce EM radiation across the spectrum, so already we can see that what we perceive is not what exists.
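The visible band can be stated either in wavelength or in frequency, since the two are linked by c = wavelength × frequency. A minimal sketch of that conversion (the function name is mine, for illustration only):

```python
# Convert the approximate limits of visible light (400-700 nm)
# into frequencies using c = wavelength * frequency.
C = 299_792_458  # speed of light in metres per second

def wavelength_to_frequency_thz(wavelength_nm):
    """Return the frequency in terahertz for a wavelength given in nanometres."""
    wavelength_m = wavelength_nm * 1e-9
    return C / wavelength_m / 1e12

violet_edge = wavelength_to_frequency_thz(400)  # roughly 749 THz
red_edge = wavelength_to_frequency_thz(700)     # roughly 428 THz
```

So the entire visible window spans only about 428–749 THz, a narrow slice of a spectrum that runs from a few hertz to beyond 10^19 Hz.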

Designation                       Frequency              Wavelength

ELF   extremely low frequency     3 Hz to 30 Hz          100,000 km to 10,000 km
SLF   superlow frequency          30 Hz to 300 Hz        10,000 km to 1,000 km
ULF   ultralow frequency          300 Hz to 3,000 Hz     1,000 km to 100 km
VLF   very low frequency          3 kHz to 30 kHz        100 km to 10 km
LF    low frequency               30 kHz to 300 kHz      10 km to 1 km
MF    medium frequency            300 kHz to 3,000 kHz   1 km to 100 m
HF    high frequency              3 MHz to 30 MHz        100 m to 10 m
VHF   very high frequency         30 MHz to 300 MHz      10 m to 1 m
UHF   ultrahigh frequency         300 MHz to 3,000 MHz   1 m to 10 cm
SHF   superhigh frequency         3 GHz to 30 GHz        10 cm to 1 cm
EHF   extremely high frequency    30 GHz to 300 GHz      1 cm to 1 mm
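The band designations above are simply labels for decade-wide slices of frequency, so classifying any frequency is a straightforward lookup. A sketch, using the boundaries from the table (the function and table names are mine):

```python
# Band boundaries in hertz, taken from the designation table above.
BANDS = [
    ("ELF", 3, 30),
    ("SLF", 30, 300),
    ("ULF", 300, 3_000),
    ("VLF", 3_000, 30_000),
    ("LF", 30_000, 300_000),
    ("MF", 300_000, 3_000_000),
    ("HF", 3e6, 30e6),
    ("VHF", 30e6, 300e6),
    ("UHF", 300e6, 3e9),
    ("SHF", 3e9, 30e9),
    ("EHF", 30e9, 300e9),
]

def band_for(frequency_hz):
    """Return the band designation for a frequency in hertz,
    or None if it falls outside the table."""
    for name, low, high in BANDS:
        if low <= frequency_hz < high:
            return name
    return None
```

An FM radio station at 100 MHz, for example, falls in the VHF band, while visible light (hundreds of terahertz) lies far beyond the top of this table.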


The ‘foveal’ system of the human eye is the only part of the retina that permits full visual acuity; the rest does not, meaning any image we get is also not ‘accurate’. Our foveal vision is very slow (only 3 to 4 high-quality telescopic images per second), meaning we cannot pick up everything that happens. Peripheral vision is very inaccurate but also very fast (up to 90 images per second). So again we are not seeing all there is to see.

We also do not see what other animals, insects and so on ‘see’. For example:

Many animals are capable of perceiving some of the components of the polarisation of light. This is generally used for navigational purposes, since the linear polarisation of sky light is always perpendicular to the direction of the sun. This ability is very common among the insects, including bees, which use this information to orient their communicative dances. Polarisation sensitivity has also been observed in species of octopus, squid, cuttlefish, and mantis shrimp. In the latter case, one species measures all six orthogonal components of polarisation, and is believed to have optimal polarisation vision.

A dog's visual acuity [clearness of vision] is poor, but its visual discrimination for moving objects is very high, and some dogs are able to recognise their owners from distances of up to a mile. Dogs are also better at seeing objects in low light than we are: they have very large pupils, a high density of rods in the fovea, and an increased flicker rate. Like most mammals, dogs are ‘dichromats’ and have colour vision equivalent to red-green colour blindness in humans. Dichromacy occurs when one of the cone pigments is missing and colour is reduced to two dimensions. So we see more colours than a dog.
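One crude way to picture colour being ‘reduced to two dimensions’ is to collapse the red and green channels of a colour into a single value, leaving blue alone. This is only an illustrative sketch of dichromacy, not a physiologically accurate model of dog vision:

```python
def simulate_dichromat(rgb):
    """Crudely mimic red-green dichromacy by collapsing the red and
    green channels to their average, leaving blue untouched.
    Illustrative only: real dichromatic vision involves cone response
    curves, not a simple channel average."""
    r, g, b = rgb
    red_green = (r + g) / 2
    return (red_green, red_green, b)
```

Under this collapse, a vivid red (255, 0, 0) and a vivid green (0, 255, 0) both become the same muddy (127.5, 127.5, 0): one whole axis of colour distinction is gone.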

Nor do we ‘see’ what plants ‘see’.

Plants can sense, evaluate and respond to light quality, quantity, direction and duration with a highly sophisticated suite of photoreceptor pigments. The ability to sense the environment is so important that in Arabidopsis about 25% of its 25,000 genes are involved in signalling, reception and communication.

The light signal is received by three different sets of photoreceptive pigments. There are five versions of the phytochrome pigment, which sense light at the red end of the spectrum, between 600 and 750 nm. Two cryptochromes, which are blue-light receptors sensitive between 320 and 500 nm, are involved along with the phytochromes in the general entrainment of the circadian system to the daily light/dark cycle. A third set of photoreceptors, the phototropins, are also sensitive to light between 320 and 500 nm, but these pigments are not thought to be involved in the circadian system.
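The three pigment families and their sensitivity windows described above can be sketched as a simple lookup: given a wavelength, which families respond? (The names and structure here are mine; the ranges come from the text.)

```python
# Sensitivity ranges in nanometres, as described in the text above.
PHOTORECEPTORS = {
    "phytochromes": (600, 750),   # red end of the spectrum
    "cryptochromes": (320, 500),  # blue-light receptors
    "phototropins": (320, 500),   # blue-sensitive, not circadian
}

def pigments_sensitive_at(wavelength_nm):
    """Return the pigment families whose sensitivity range
    covers the given wavelength."""
    return [name for name, (low, high) in PHOTORECEPTORS.items()
            if low <= wavelength_nm <= high]
```

Blue light at 450 nm is thus picked up by both cryptochromes and phototropins, while red light at 700 nm is seen only by the phytochromes, and green light around 550 nm falls in a gap between the stated ranges.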

There are almost certainly more photoreceptors to be uncovered.

So what we can see from all these examples is that our sensors are geared towards only certain signals, and huge quantities of information are excluded at the sensors themselves, before we even take into account what happens during the processing of that information. We can also see that we do not perceive what other living organisms perceive; their view of the world may be wholly different from ours, their ‘reality’ is not our ‘reality’. Ours, just like theirs, is a manufactured world – in this case manufactured by the limitations or design of our sensory system.

Now let me look at how our eyes work – how that information is ‘processed’.

The processing software

Hermann von Helmholtz was one of the first to study the human eye to determine such things as visual acuity. He concluded that the eye was, optically, rather poor, meaning that the information gathered via the eye wasn’t sufficient to enable us to draw many of the conclusions we do about the ‘Reality’ we memorise. He concluded that vision could only be the result of some form of unconscious inference: a matter of processing and manipulating this ‘incomplete data’.

For a while, studies were based on the idea that we infer things from previous experience and from various assumptions about the nature of the world. The study of visual illusions (cases where the inference process goes wrong) has provided some insight into what sort of assumptions the visual system makes.

More recently, however, it is the use of computational models of vision that has had more success in explaining visual phenomena, and these have largely superseded previous theories. Such computational models of visual perception – software, in other words – have been developed for Virtual Reality systems, and are closer to real-life situations as they account for the sorts of events and activities which occur in everyday life.

Let me put this another way. Computer scientists are gradually producing software [albeit somewhat crude at the moment] that can simulate the way we see. To me this is key. Images are not processed by ‘the brain’, which is only the hardware; they are processed by software. We use software to see. Not computer software, obviously, but biological software.

Everything is a hallucination…

When our eyes are closed we may experience a dream or a vision. Dreams and visions are in essence the same thing – an experience in which there is no ‘leakage’ from our sense of sight, no combination of sensory information with composer-constructed information. The dream or vision becomes our reality.

When our eyes are open, images, sounds, and sensations from the composer may be overlaid on what our senses are relaying to us. If we define these to be ‘hallucinations’, then a hallucination is also no different from a dream or a vision in the way it is constructed. The same function is at work. The only difference is that there is ‘leakage’ of sensory information, which gets combined with the product of the composer.

Since both the input from our sensory systems and input from the composer are identical in their format and construction, and we have no way of knowing how much leakage is occurring either from our senses or from our composer, there is no real way of knowing what is ‘real’ and what is not ‘real’.

We may be hallucinating most of the time and never know.

Overall

What can we conclude from this short summary of sight and how it works?

  • First, we see only a fraction of the EM spectrum; we can detect most colours [unless we have colour blindness or a similar condition], but our light sensitivity is low and we miss images. Much of what exists is actually invisible to us.
  • Secondly, the information gathered by the eye is actually not sufficient to enable us to operate without some form of software to process that information and to deduce and infer various things from the poor-quality information we do receive. Much information is, as a result, filtered further. We may not even ‘see’ what we don’t expect.
  • Thirdly, we do not ‘see’ what other creatures or living things ‘see’; our ‘reality’ from sight is not their ‘reality’.
  • Fourthly, we never know whether what we see is coming from the external world or from our composer. We may be seeing things that others cannot see because the composer has added them in. Since few of us actually discuss what we are seeing, we could be hallucinating most of the time and never know.
  • Finally, we are unable to see anything that is not of a comparable size to us.

Overall, we do not see Reality. The actual world may be nothing like the world we ‘see’.

The Sutra of Hui-Neng – Grand Master of Zen [translated by Thomas Cleary]

 

Everything has no reality

We do not see reality thereby

If you see reality

That is a view,

 not reality at all

 

 

 

Henri Bergson – Matter and Memory

 

Psychology has accustomed us to assume the elementary sensations corresponding to the impressions received by the rods and cones of the retina. With these sensations it goes on to reconstitute visual perception. But in the first place, there is not one retina, but two; so that we have to explain how two sensations, held to be distinct, combine to form a single perception corresponding to what we call a point in space.
