Coupling between past and future

Prediction is the hardest intellectual problem. It seems that we would need to know everything about the Cosmos to predict the future accurately. I am not sure that even omniscience would be enough, because there is an intrinsic uncertainty in the Cosmos.

Having said that, it is important to point out that there is order and causality in the universe too. Intrinsic uncertainty is one thing and randomness is another. Do not confuse uncertainty with randomness!

Order and causality are firmly rooted in the Cosmic Mind aspect of Consciousness. Mind is a mechanism for causal expression. Cosmic Mind establishes the causal relationships. Therefore, the Cosmic Mind is also the ordering principle.

Cosmic Mind organizes the expression of Consciousness. This may seem to oppose the freedom-seeking motive of Consciousness, but on the contrary, Cosmic Mind maximizes the expression of Consciousness to accelerate the liberation process. The Cosmic Drama is very mysterious indeed! Consciousness allows itself to be confined in the first place, then seeks freedom from bondage through various mechanisms, as explained in “Definitions and Summary of Soul Monism.” I would like to remind you that Cosmic Mind is part of these “various mechanisms” of liberation.

Scientific theories are supposed to be predictive

Even though we can never know the Mind of God directly, we are developing better models of physical reality. In the context of science, the word “better” usually means having more predictive power.

Our scientific theories are predictive. That’s the most important requirement for a model to be accepted as a scientific theory.

Deterministic, statistical and probabilistic theories

In the case of classical mechanics, the theory predicts the motion (the evolution of the position) of an object. In the case of quantum mechanics, the theory predicts the probability of finding an elementary particle in a specified location. In the case of statistical mechanics, the theory predicts various averages over a collection of particles.

Classical mechanics is known as a deterministic theory. If the initial conditions are known, classical mechanics can predict the future position of an object precisely. Later developments in classical mechanics (chaos theory) showed that in many situations it is impossible to know the initial conditions precisely. Small errors in the measurement of the initial conditions are magnified as the system evolves according to the laws of classical mechanics. This brought us the concept of deterministic uncertainty.
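
To make this concrete, here is a minimal Python sketch (my own illustration, using the logistic map as a standard toy model of chaos; all numbers are arbitrary): two trajectories that start almost identically diverge completely within a few dozen steps.

```python
# Sensitive dependence on initial conditions: the logistic map x -> r*x*(1-x).
# Two trajectories starting 1e-10 apart diverge after a few dozen steps.
r = 4.0                      # fully chaotic regime of the logistic map
x, y = 0.3, 0.3 + 1e-10      # almost identical initial conditions

for step in range(60):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: |x - y| = {abs(x - y):.3e}")
```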

Quantum mechanics is known as a probabilistic theory. It cannot predict the future position of an elementary particle, but it can compute the probability of finding the particle in a specific location. The uncertainty concept in quantum mechanics is different from the uncertainty concept of classical mechanics. The uncertainty of quantum mechanics is intrinsic: it is not due to measurement errors but is a built-in feature of physical reality. The intrinsic uncertainty of quantum mechanics is also known as indeterministic uncertainty.
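
Here is a minimal Python sketch of what a probabilistic prediction looks like, using the textbook particle-in-a-box as an illustrative example (the box width and interval below are arbitrary choices of mine): the theory yields a probability density |ψ(x)|², and the prediction is the probability of finding the particle in an interval, never the position itself.

```python
import numpy as np

# Ground state of a particle in a box of width L: psi(x) = sqrt(2/L) * sin(pi*x/L).
# Quantum mechanics predicts Prob(a < x < b), not the position itself.
L = 1.0
dx = 1e-4
x = np.arange(0.0, L + dx, dx)
psi = np.sqrt(2.0 / L) * np.sin(np.pi * x / L)
density = psi**2                                  # probability density |psi(x)|^2

a, b = 0.25 * L, 0.75 * L                         # interval of interest (arbitrary)
mask = (x >= a) & (x <= b)
prob = np.sum(density[mask]) * dx                 # simple Riemann-sum integration
print(f"Prob({a} < x < {b}) = {prob:.4f}")        # ~0.818 for the ground state
```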

Statistical mechanics is an example of a statistical theory, where we no longer talk about single objects. In statistical mechanics we compute ensemble averages and variations from the average, and we predict the evolution of those averages. In statistical mechanics there are concepts similar to uncertainty, known by names such as “standard deviation,” “standard error,” and “variance.” Most theories of the social and economic sciences are statistical theories similar to the statistical mechanics of physics.
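
A minimal Python sketch of the statistical style of computation (the particle speeds below are simulated toy data, not a real ensemble):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# A toy "ensemble": speeds of 100,000 gas particles (arbitrary units).
# Statistical mechanics makes predictions about quantities like these averages,
# not about any single particle.
speeds = rng.rayleigh(scale=300.0, size=100_000)

print("ensemble average:", speeds.mean())
print("variance:        ", speeds.var())
print("std deviation:   ", speeds.std())
```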

Difference between statistical prediction and probabilistic prediction

Statistical models deal with ensemble averages; probabilistic models deal with individual probabilities. Don’t get me wrong, the statistical and probabilistic approaches are related, but this relationship is very subtle and most people get confused here.

Behind the probabilistic computations there is statistics too. The difference is that in the probabilistic approach the statistics are collected over time for the specified unit; we are essentially talking about a time average. In the “statistical” approach we are talking about the average over all the other units in the specified universe (the cross-sectional universe). Note the difference in terminology: “specified unit” vs. “specified universe.”
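
Here is a minimal Python sketch of the two kinds of averages, using a made-up panel of random-walk “units” (all numbers are arbitrary): the time average is computed along each unit’s own history, the cross-sectional average across all units at one moment.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# A panel of data: 50 "units" observed over 200 time steps.
# Each unit is a random walk with its own idiosyncratic drift.
n_units, n_times = 50, 200
drifts = rng.normal(0.0, 0.1, size=(n_units, 1))         # per-unit character
shocks = rng.normal(0.0, 1.0, size=(n_units, n_times))
panel = np.cumsum(drifts + shocks, axis=1)               # shape (units, times)

time_avg = panel.mean(axis=1)    # one number per unit: average over its history
cross_avg = panel.mean(axis=0)   # one number per time: average over all units

print("time average of unit 0:        ", time_avg[0])
print("cross-sectional average at t=0:", cross_avg[0])
```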

Horizontal and vertical attribute pairs

The distinction between the “specified unit” and the “specified universe,” or the distinction between the “time average” and the “cross-sectional average,” is very similar to the distinction between the “vertical attribute” and “horizontal attribute” concepts introduced in New Perspective on Unification.

Horizontal attributes are associated with collectivity and multiplicity. They are attributes of the group behavior. Horizontal attributes are about the cross-sectional universe.

Vertical attributes are about individuality, individual histories and individual characteristics. Vertical attributes are about the idiosyncratic behavior.

The “time average” is a vertical attribute whereas the “cross-sectional average” is a horizontal attribute.

Coupling between horizontal and vertical attribute pairs

Horizontal and vertical attributes are supposed to be orthogonal. The term “orthogonal” literally means “at right angles” (90 degrees); here, orthogonality is a measure of independence. If there were perfect orthogonality there would be no coupling between the attributes of the pair.

If the “time average” is a vertical attribute and the “cross-sectional average” is a horizontal attribute, then they are supposed to be orthogonal and therefore independent. If they were perfectly “horizontal” and “vertical,” there would be no coupling between them.

In reality, there is no perfect orthogonality. All horizontal/vertical pairs are coupled. There is a coupling between “time average” and “cross-sectional average” too.
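
A minimal Python sketch of how coupling can be quantified (the signals below are simulated purely for illustration): the correlation coefficient is essentially the cosine of the angle between two attributes, zero for perfect orthogonality and nonzero when the pair is coupled.

```python
import numpy as np

rng = np.random.default_rng(seed=2)

# Correlation as a measure of coupling: ~0 for orthogonal attributes,
# clearly nonzero when one attribute is partially driven by the other.
a = rng.normal(size=10_000)
independent = rng.normal(size=10_000)              # no coupling to a
coupled = 0.7 * a + 0.3 * rng.normal(size=10_000)  # partially driven by a

print("corr(a, independent):", np.corrcoef(a, independent)[0, 1])  # ~ 0
print("corr(a, coupled):    ", np.corrcoef(a, coupled)[0, 1])      # ~ 0.9
```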

What this means is that individual histories are reflected in the cross-sectional statistics, and the cross-sectional information has some bearing on the probabilities of certain future outcomes.

Prediction involves a translation from individuality to collectivity, then another translation from collectivity back to individuality.

Thinking in terms of distributions

One way to translate individuality into collectivity is to construct histograms (distributions). Most scientists think in terms of distributions. This is how we make sense of the world, but unfortunately we cannot predict the world this way. We cannot predict anything from a distribution alone, because a distribution hides the time dimension. As the name “histogram” indicates, distributions are about the past. There is very little predictive power in a histogram.
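
A minimal Python sketch of this point (using a simulated series): shuffling the time order of a series destroys its dynamics completely, yet leaves its histogram exactly unchanged.

```python
import numpy as np

rng = np.random.default_rng(seed=3)

# A histogram hides the time dimension: shuffling the time order of a series
# leaves its histogram exactly unchanged, so the histogram cannot "see" dynamics.
series = np.cumsum(rng.normal(size=1_000))    # a time series with real dynamics
shuffled = rng.permutation(series)            # same values, time order destroyed

bins = np.linspace(series.min(), series.max(), 21)
h1, _ = np.histogram(series, bins=bins)
h2, _ = np.histogram(shuffled, bins=bins)
print("identical histograms:", np.array_equal(h1, h2))   # True
```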

Hiding the time dimension

There are mathematical procedures that extract a static picture from a dynamic one. These procedures effectively hide the time dimension. I mentioned “thinking in terms of distributions” above. There are other mathematical procedures, known by names such as the FFT (Fast Fourier Transform) and the Phase-Space, that effectively hide the time dimension too.

The FFT procedure decomposes a time-varying signal into its frequency components, producing a spectrum. The FFT does not eliminate time; it simply hides it from view. Neurons in our bodies carry time-varying electrical signals (pulses). The brain, in effect, performs something like an FFT because it extracts a static picture from time-varying signals.
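
Here is a minimal Python sketch of the FFT in action (the signal is synthetic, with frequencies I made up): the time axis disappears and only the frequency content remains.

```python
import numpy as np

# FFT: decompose a time-varying signal into its frequency components.
# The spectrum is a static picture; the time axis disappears from view.
fs = 1000                                   # sampling rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)             # one second of samples
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

spectrum = np.abs(np.fft.rfft(signal))      # magnitude spectrum
freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)

# The two strongest components recover the 50 Hz and 120 Hz ingredients.
peaks = freqs[np.argsort(spectrum)[-2:]]
print("dominant frequencies (Hz):", np.sort(peaks))   # [ 50. 120.]
```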

The Phase-Space is a plane where the horizontal axis is some variable X and the vertical axis (X’) is the rate of change of that variable. The Phase-Space is similar to the FFT in the sense that it also hides the time dimension.
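
A minimal Python sketch of a Phase-Space picture (using a frictionless harmonic oscillator as an illustrative example): time is hidden, but the full range of behavior, a closed orbit, remains visible.

```python
import numpy as np

# Phase space: a variable X plotted against its rate of change X'.
# For a frictionless harmonic oscillator the orbit is a closed curve;
# time is hidden, but the range of possible behavior is fully visible.
t = np.linspace(0, 4 * np.pi, 2_000)
x = np.cos(t)                      # position X(t)
x_dot = np.gradient(x, t)          # numerical rate of change X'(t)

# On this orbit X^2 + X'^2 stays (approximately) constant: a circle in phase space.
radius = np.sqrt(x**2 + x_dot**2)
print("min/max phase-space radius:", radius.min(), radius.max())  # both ~ 1
```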

When we hide the time dimension we lose information about the past but gain some insight into the future. These insights are statistical and probabilistic in nature. We can never predict the future of a unit from a histogram, an FFT, or a Phase-Space picture, but we can learn about the ranges of its behavior.

Cycles and Trends

It is easier to make predictions if you can identify cycles or trends. If there are cycles or repetitions in time, one can devise procedures to extract a static mental picture of the process and become cognizant of its structures in the frequency domain and in the Phase-Space.

The FFT is very useful for identifying the most frequently occurring situation. A Phase-Space picture contains much more information than an FFT. As a tool for prediction, the Phase-Space is far superior to the FFT or the histogram.

Regression Models

I cannot explain regression models in a few sentences, but the idea is to fit a parametric model to the data. In other words, we try to explain the data with some mathematical function and then use that function to guess the future averages. Please note the emphasis on the word “averages.” Regression models are statistical models. No statistical model can predict individual behavior; statistical models make predictions about averages.
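
A minimal Python sketch of the regression idea (with simulated data and a trend I made up): fit a straight line to noisy observations, then use it to guess a future average.

```python
import numpy as np

rng = np.random.default_rng(seed=4)

# Regression: fit a parametric model (here a straight line) to noisy data,
# then use the fitted function to guess future AVERAGES, not individual points.
x = np.arange(50, dtype=float)
y = 2.0 * x + 5.0 + rng.normal(0.0, 10.0, size=50)   # linear trend + noise

slope, intercept = np.polyfit(x, y, deg=1)           # least-squares line fit
x_future = 60.0
print(f"predicted average at x={x_future}: {slope * x_future + intercept:.1f}")
```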

Overfitting

Regression models are great for analysis, but they are not so good for prediction. The common mistake is to chase the best fit. The best fit for the past data is not necessarily the best fit for the future. It is better to make the model purposefully wrong a little bit; in modern terminology, this is the idea behind regularization.
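
A minimal Python sketch of the overfitting trap (with simulated data; the polynomial degrees are arbitrary choices): the flexible model fits the past better and the future far worse.

```python
import numpy as np

rng = np.random.default_rng(seed=5)

# Overfitting: the "best fit" to past data can be the worst guide to the future.
# Fit polynomials of degree 1 and degree 9 to the first half of a noisy series,
# then score both on the unseen second half.
x = np.linspace(0, 10, 60)
y = 2.0 * x + rng.normal(0.0, 3.0, size=60)          # truly linear + noise

x_past, y_past = x[:30], y[:30]
x_future, y_future = x[30:], y[30:]

for degree in (1, 9):
    coeffs = np.polyfit(x_past, y_past, deg=degree)
    pred = np.polyval(coeffs, x_future)
    rmse = np.sqrt(np.mean((pred - y_future) ** 2))
    print(f"degree {degree}: future RMSE = {rmse:.1f}")
```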

Power Laws

There are many other mathematical procedures for extracting a static picture from dynamic (time-varying) signals. Power laws hide the time dimension as well; they represent a static picture of reality. Discovering power laws is a lot of fun. They call this new branch of statistics “econophysics” these days! But I have to tell you that power laws do not have much predictive power either.
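
A minimal Python sketch of how a power law is usually extracted (the exponent and noise level below are made up): in log-log space a power law becomes a straight line, and its exponent falls out of a linear fit.

```python
import numpy as np

rng = np.random.default_rng(seed=6)

# Power law: y = c * x**(-a) appears as a straight line in log-log space,
# so the exponent a can be estimated with a simple linear fit.
x = np.arange(1, 1001, dtype=float)
y = 50.0 * x**-1.7 * np.exp(rng.normal(0.0, 0.05, size=x.size))  # noisy power law

slope, intercept = np.polyfit(np.log(x), np.log(y), deg=1)
print(f"estimated exponent: {-slope:.2f}")   # ~ 1.7
```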

Quantum Mechanics Maximizes the Predictive Power

Let’s go back to physics. Quantum mechanics was developed by a handful of physicists in Europe in the span of a decade. Those lucky physicists developed a mathematical procedure that optimized the predictive power of the theory. I mentioned the problem of “overfitting” above. Quantum mechanics was developed in a way that avoids this “overfitting” problem. It is truly amazing how the collective wisdom figured this out. They were aided by nature, of course! The intrinsic uncertainty of nature provides just the right amount of error, thereby allowing quantum mechanics to avoid overfitting. Instead of focusing on “what happened,” quantum mechanics focuses on “what is possible.” This expanded viewpoint gives quantum mechanics its success in prediction.

Quantum Entanglement between past and future

There are arXiv papers discussing the quantum entanglement between past and future:

http://arxiv.org/abs/1003.0720
http://arxiv.org/abs/1101.2565
http://arxiv.org/abs/1206.6224

The article in Wired magazine on this subject is interesting too:

http://www.wired.com/wiredscience/2011/01/timelike-entanglement/

Coupling, Interaction, Correlation, Entanglement, Mixing

These are related concepts. Let’s use the word “coupling” in the most general sense to refer to coupling, interaction, correlation, entanglement, and mixing. Without coupling in the cross-sectional universe we would have no coupling between past and future, and therefore zero predictive power.
