Common steps and sub-activities

Learning - understanding and recognition

What we have as a result of the Extraction process is a series of Perceptions which have some relevance to what we want to do, what we need to know and where we are in our development.

These Perceptions are simply a pure sequence of activity, emotion and sensory information. If they contain language of any sort, it is simply part of the overall scene with no meaning attached as yet; like a billboard in a photo or a name over a shop, it is just an image until it has been 'tagged'. We might think of this sequence of action as a sort of film clip with extra data, the thing a dog or cat might see: because it has no language, no tags are attached to the things, and the sequence is simply a set of images and impressions.

This function of understanding and recognition attaches tags to things and gives them meaning; in a sense, it takes all this raw data and breaks it down into its constituent parts. In order to do this we have to be able to recognise images, so recognition is a key process of understanding. Recognition as a function is a huge area in its own right, and complete books have been written about the subject. Recognition is likely to be the first function in the process, followed by tagging. I think we also do some further selection at this stage.

In order to provide an example of how I think this stage works I will use a person going into a shop.

Recognition

We start with a picture and then break this picture down into a series of occurrences that describe the activity taking place.

OCCURRENCES                      TAGGED OCCURRENCES
Thing A verb Thing B             Person A goes into Shop B
Thing A verb Thing C             Person A talks to Person C
Thing C verb Thing E Thing D     Person C takes Chocolate Bar E from Shelf D
Thing A verb Thing C Thing F     Person A gives Person C Money F
Thing C verb Thing A Thing E     Person C gives Person A Chocolate Bar E
Thing A verb Thing B             Person A leaves Shop B
Thing A verb Thing E             Person A eats Chocolate Bar E

This is recognition; it gives us the occurrences shown in the left-hand column of the table.
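Purely to illustrate what this stage produces, the occurrence sequence above can be written down as a small data structure. This is a minimal sketch in Python; the class and field names are invented for the example and are not part of the process being described.

```python
# A minimal sketch of the recognition output, before any tagging.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Occurrence:
    """One recognised event: things taking part in an as yet unnamed action."""
    subject: str                      # e.g. "Thing A"
    verb: str = "verb"                # placeholder until tagging supplies a name
    objects: List[str] = field(default_factory=list)

# The untagged sequence from the left-hand column of the table
scene = [
    Occurrence("Thing A", objects=["Thing B"]),
    Occurrence("Thing A", objects=["Thing C"]),
    Occurrence("Thing C", objects=["Thing E", "Thing D"]),
    Occurrence("Thing A", objects=["Thing C", "Thing F"]),
    Occurrence("Thing C", objects=["Thing A", "Thing E"]),
    Occurrence("Thing A", objects=["Thing B"]),
    Occurrence("Thing A", objects=["Thing E"]),
]
```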

Tagging

In order to understand what is going on, we have to be able to give names or symbolic tags to occurrences of things. We are born with 'learning software' which enables us to do this.

Most tagging is fairly straightforward in principle. Knowing and giving names to occurrences of things is the first stage of learning and is what children do. We have a sort of store of name tags and sounds obtained from adults, which gradually builds up from what we are told and later read. What we do is attach name tags to images, words or sounds to images, so this process is a form of image and name pattern matching.
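As a hedged sketch of this idea, the shop example can be reduced to a lookup: a store of name tags we have been given, applied to the raw things recognition produced. The 'images' are stood in for here by the labels Thing A, Thing B and so on, and the matching is reduced to simple substitution, which is only a caricature of what really happens.

```python
# Illustrative only: a small 'tag store' of names learnt from adults,
# and a tagging step reduced to image/name pattern matching by substitution.
tag_store = {
    "Thing A": "Person A",
    "Thing B": "Shop B",
    "Thing C": "Person C",
    "Thing D": "Shelf D",
    "Thing E": "Chocolate Bar E",
    "Thing F": "Money F",
}

def tag(occurrence: str) -> str:
    """Swap each recognised thing for the name tag we were given for it."""
    for thing, name in tag_store.items():
        occurrence = occurrence.replace(thing, name)
    return occurrence

print(tag("Thing A verb Thing B"))   # -> "Person A verb Shop B"
```

Verbs are named in the same way; once the action has its tag as well, we arrive at 'Person A goes into Shop B'.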

But we are, of course, very dependent on adults and our other sources of information to attach the right tags to the right things. A bright child will look for anomalies, discrepancies or contradictions in what it is told, but a simple, gullible, innocent or not so bright child may simply accept everything it is told.

The moment we start to tag things we also start to apply cultural rules to the occurrences, things we have learnt and assume. We assume that the thing we see is a 'person', we assume the thing we see is a 'shop', we assume the thing we see is a 'chocolate bar', we assume the thing we see is 'money', and so on. By pattern matching of images we make assumptions about what things are. Maybe the thing isn't a shop; maybe it is just a building with a big plate glass window and a name above the door. Maybe the thing isn't a chocolate bar but a packet of cigarettes; it just matches in appearance something we have seen before that was called a chocolate bar. Maybe the person did not hand over money; perhaps he handed over some tokens that simply looked like money. Maybe the person isn't talking to the other person; maybe he is actually singing.

We make a lot of assumptions when the cognitive software tries to make sense of something it is seeing, tasting, smelling and so on. Much of the way we process things is by pattern matching against objects which have been named for us before, trying to detect similarities and to spot similar patterns of behaviour. As children, much of the world is named for us by adults. When we learn language, we are not really learning language as such; we are learning which sounds [and then which words] match which objects and behaviours.
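The risk in this sort of closest-match labelling can be sketched roughly as below. The feature numbers and stored names are entirely made up for the example; the only point is that a nearest-match rule will happily call a packet of cigarettes a chocolate bar if the stored patterns happen to lie that way.

```python
# A rough sketch of tagging as pattern matching against things named before.
# Feature values are invented purely for illustration.
known_things = {
    "chocolate bar":        (0.90, 0.20, 0.10),   # shape, wrapper, size (made up)
    "packet of cigarettes": (0.85, 0.25, 0.12),
    "bank note":            (0.10, 0.70, 0.30),
}

def nearest_match(features):
    """Return the previously named thing whose stored pattern is closest."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(known_things, key=lambda name: distance(known_things[name], features))

# A new observation that is actually a packet of cigarettes, but whose
# appearance happens to sit closer to the stored 'chocolate bar' pattern:
print(nearest_match((0.88, 0.21, 0.10)))   # prints "chocolate bar"
```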

So to improve this process we have to recognise three things: we must never tag if we do not know; tagging of observations can simply reinforce a naming system we have inherited from adults who were not up to the job; and we may need to completely re-examine our tagging system, practising naming objectively and not subjectively.
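One way of expressing 'never tag if we don't know' within the same sketch is to refuse a name whenever even the best match is not close enough. The threshold below is arbitrary and purely illustrative.

```python
# Continuing the pattern-matching sketch: decline to tag weak matches.
def cautious_match(features, known, threshold=0.01):
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(known, key=lambda name: distance(known[name], features))
    if distance(known[best], features) > threshold:
        return "untagged - to be re-examined"
    return best

store = {"chocolate bar": (0.90, 0.20, 0.10)}
print(cautious_match((0.90, 0.20, 0.10), store))   # -> "chocolate bar"
print(cautious_match((0.50, 0.50, 0.50), store))   # -> "untagged - to be re-examined"
```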

Selection of observations

I think extraction and understanding are cyclical activities. We use a first level of extraction of Perceptions, tag them and then perform a second level of extraction based on the same criteria.
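As a rough sketch only, with invented function and parameter names, the cycle might be pictured like this: extract what seems relevant, tag it, then extract again from the newly tagged material using the same criteria, until nothing new emerges.

```python
# Illustrative sketch of the extract-tag-extract cycle described above.
def learning_cycle(perceptions, relevant, tag, max_rounds=3):
    """perceptions: raw material; relevant: selection test; tag: naming step."""
    knowledge = []
    for _ in range(max_rounds):
        selected = [p for p in perceptions if relevant(p)]   # extraction
        tagged = [tag(p) for p in selected]                  # understanding/recognition
        new = [t for t in tagged if t not in knowledge]
        if not new:                                          # nothing further to learn
            break
        knowledge.extend(new)
        perceptions = tagged                                 # feed results back in
    return knowledge

facts = learning_cycle(
    perceptions=["thing a near thing b", "irrelevant noise"],
    relevant=lambda p: "thing" in p,
    tag=lambda p: p.replace("thing a", "Person A").replace("thing b", "Shop B"),
)
# facts -> ["Person A near Shop B"]
```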

We are likely to have literally millions and millions of tagged thoughts.

A computer is able to record and store all this sort of information without ever needing to select from the database, but a person's processor is nowhere near as capable as a computer's, so what we do is delete information we believe is irrelevant.

How and why we do it is entirely personal. It depends on the person's capacity to remember, but it also tends, as we have seen, to be based on their current objectives, obligations and so on. Huge swathes of perfectly relevant information may, as a result, be 'deleted', never committed to the knowledge base. Furthermore, as we get older we may reject perfectly sound information because it does not match our preset belief systems, our perceptions of what the universe is and what we need.
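This selection step could be caricatured as a relevance filter capped by personal capacity. The scoring rule and the capacity figure below are invented for illustration; the point is only that whatever falls outside current objectives, or beyond capacity, is never committed.

```python
# Illustration only: 'deletion' as a relevance filter limited by capacity.
def commit_to_knowledge_base(tagged_thoughts, objectives, capacity=100):
    """Keep only thoughts that appear relevant to current objectives,
    up to what this particular mind can hold; the rest are never committed."""
    relevant = [t for t in tagged_thoughts
                if any(obj in t for obj in objectives)]
    return relevant[:capacity]

kept = commit_to_knowledge_base(
    ["buy chocolate", "shop opening times", "cloud shaped like a ladder"],
    objectives=["chocolate", "shop"],
    capacity=2,
)
# kept -> ["buy chocolate", "shop opening times"]; the cloud is never committed
```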

But we always have Perceptions to fall back on, because thoughts are not deleted. The very act of creating these tagged images is recorded, so if we go back to our Perceptions we will find them there within the Perception log, where they stay as unclassifiable observations to be handled later if more observations of this sort suddenly start to appear. So all is not lost; we just have to learn how to go back and retrieve observations that may be in the Perception log.
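A last sketch of this point, with names invented for the example: the Perception log records everything, the knowledge base only what selection lets through, so set-aside observations remain retrievable.

```python
# Sketch: nothing perceived is ever deleted, only left uncommitted.
perception_log = []          # everything ever perceived
knowledge_base = []          # only what selection lets through

def perceive(observation, relevant):
    perception_log.append(observation)           # always recorded
    if relevant(observation):
        knowledge_base.append(observation)       # only sometimes committed

def retrieve_unclassified():
    """Go back to the Perception log for observations we once set aside."""
    return [p for p in perception_log if p not in knowledge_base]

perceive("cloud shaped like a ladder", relevant=lambda o: False)
perceive("price of chocolate", relevant=lambda o: True)
print(retrieve_unclassified())   # -> ['cloud shaped like a ladder']
```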
