The Nature of Comprehension

I would like to define “understanding” to be recognition of the truth to be assigned to a specific thought represented by a specific collection of defined concepts: i.e., I am using the word “concepts” to refer to the language elements required.

 

Part 1: The underlying problem.

Any individual’s knowledge of reality is entirely built on their personal perceptions (note that explanations by others are a very important component of those perceptions). That the actual perceptions arise from interpretations of earlier experiences is an issue seldom considered by the scientific community. For that reason I would like to avoid the word “perception,” as it can be seen as implying an actual “interpretation” of those experiences; the word “experiences” provides a much more objective reference to such interactions. We must first identify what it is we “think” we perceive before we can build any mental explanations and/or representations of the supposed source.

 

Every human (including the most brilliant scientist who has ever lived) can be seen as beginning life as a child born without a language. During his life he will experience many interactions with what he will eventually suppose to be reality. Language is the English term for the mechanism necessary to reference those experiences which stand behind his eventual understanding.

 

Any competent thinker must be aware of the fact that a secret code can represent all the information required to communicate any collection of ideas. Learning the meanings of the elements of that secret code is a problem essential to learning any language. (Bit codes used to represent letters on computers are an excellent example of this issue.) In the final analysis, comprehension itself is the very essence of learning.

Part 2: A universal representation of any language.

 

The collection of “concepts” expressible via any language can be listed by what is commonly referred to as a dictionary. This dictionary must be finite, as otherwise it could not be constructed. Given such a construct, each and every entry can be given a specific numerical index $x$ usable to refer to that specific concept. Using that collection of numerical indices, any experience can then be specified by a numerical expression $(x_1, x_2, \ldots, x_n)$. One may think in terms of English concepts such as words, spaces, punctuation, etc.
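As an illustration only, the indexing scheme just described can be sketched in a few lines of Python (the toy dictionary and the helper names encode and decode are invented for this example):

```python
# A toy "dictionary": every concept the language can express receives a
# unique numerical index.  (The concepts here are invented for illustration.)
dictionary = ["the", "cat", "sat", "on", "mat", " ", "."]
index_of = {concept: x for x, concept in enumerate(dictionary)}

def encode(elements):
    """Represent an expression as its numerical form (x1, x2, ..., xn)."""
    return tuple(index_of[e] for e in elements)

def decode(indices):
    """Recover the expression the indices refer to."""
    return "".join(dictionary[x] for x in indices)

expression = ["the", " ", "cat", " ", "sat", "."]
x = encode(expression)            # (0, 5, 1, 5, 2, 6)
assert decode(x) == "the cat sat."
```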

 

It should also be understood that the meaning of any specific element may very well be altered by the other specified indices within the set. That is the issue of “context,” a phenomenon present in almost all languages (note that the actual meaning of any language element often depends upon the context of the usage). The required dictionary is no more than another set of such entries.

That implies a very profound aspect of such a representation. Suppose that, given some specific index set $(x_1, x_2, \ldots, x_n)$, one creates a second index set (including a new dictionary) where every specific index $x_i'$ is exactly the original $x_i$ plus a given constant “c”. It must then be absolutely true that

$$(x_1 + c,\; x_2 + c,\; \ldots,\; x_n + c)$$

must refer to exactly the same thought as does

$$(x_1,\; x_2,\; \ldots,\; x_n).$$

It should be clear to the reader that $(x_1, x_2, \ldots, x_n)$ thus becomes an abstract representation of a thought in the scientist’s personal language. It should be clear that there can exist no thought conceivable by that scientist which cannot be expressed by the notation $(x_1, x_2, \ldots, x_n)$.
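The profound aspect above can be made concrete by continuing the same toy sketch (the constant c and the helper shift_representation are again invented): shifting every index by c, while constructing the correspondingly shifted dictionary, changes the code but not the thought.

```python
def shift_representation(indices, dictionary, c):
    """Build a second index set (and the required new dictionary) in which
    every index is exactly the original index plus the constant c."""
    new_dictionary = {x + c: concept for x, concept in enumerate(dictionary)}
    new_indices = tuple(x + c for x in indices)
    return new_indices, new_dictionary

c = 1000
shifted, shifted_dict = shift_representation(x, dictionary, c)
# The shifted code refers to exactly the same thought as the original.
assert "".join(shifted_dict[i] for i in shifted) == decode(x)
```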

Part 3: Opening up a possible universal mathematical representation.

 

Given the above notation, the scientist's understanding of his experiences (essentially his explanation of any or all aspects of reality) can be represented by $P(x_1, x_2, \ldots, x_n)$, where P stands for the probability he holds the specific represented thought to be true.

 

Note that the constraint imposed by "internal consistency" is a very simple issue under such a representation: the truth of the specified thought is a function of the explanation and cannot change except by changing either the actual "thought" being represented by the specified collection of indices or the "explanation" itself.

 

 

It follows that

$$P(x_1 + c,\; x_2 + c,\; \ldots,\; x_n + c) = P(x_1,\; x_2,\; \ldots,\; x_n),$$

as each probability specifies the presumed truth of exactly the same thought, merely in a different code. Were the above a valid mathematical expression, that result would appear to lead to the vanishing of another rather common mathematical expression.

 

 

 

$$\sum_{i=1}^{n} \frac{\partial}{\partial x_i} P(x_1,\; x_2,\; \ldots,\; x_n) = 0.$$

Anyone familiar with calculus will recognize this expression as exactly the definition of the derivative of P with respect to c when the partial of each $x_i + c$ with respect to c is unity (which it must be if c is the shift defined above in the conversion to $x_i + c$).
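That vanishing can be verified symbolically for any concrete shift-invariant P. A sketch using sympy, with an invented example function that depends only on differences of indices (and is therefore unchanged by the shift):

```python
import sympy as sp

x1, x2, x3, c = sp.symbols('x1 x2 x3 c')
# Any P built from differences of indices is unchanged by x_i -> x_i + c.
P = sp.exp(-(x1 - x2)**2) * sp.cos(x2 - x3)

shifted = P.subs([(x1, x1 + c), (x2, x2 + c), (x3, x3 + c)], simultaneous=True)
assert sp.simplify(shifted - P) == 0          # P(x + c) = P(x)

sum_of_partials = sum(sp.diff(P, v) for v in (x1, x2, x3))
assert sp.simplify(sum_of_partials) == 0      # dP/dc = sum_i dP/dx_i = 0
```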

Clearly the expression $(x_1, x_2, \ldots, x_n)$ could be interpreted as representing a set of n points on an x axis. In actual fact, such an interpretation would directly violate the original definitions given in Part 2 above.

First, in a standard mathematical function, the number of arguments does not vary, whereas in the approach presented here the number of arguments "n" clearly varies from thought to thought.

Second, if one is to interpret each index as representing a point on an "x" axis, the order of the elements in $(x_1, x_2, \ldots, x_n)$ will certainly be lost, and the order of elements is a significant issue in all languages of which I am aware.

Third, the subscript on x indicates the specific element index from the dictionary to be used. It should be clear that any specific index could be used more than once in a given expression. In such a case, the repeated elements would plot to exactly the same point: i.e., the existence of such repetitions would be totally lost.

Part 4: Actual conversion into commonly understood valid mathematical notation.

 

Considering the third problem first, the relevant loss of information is easily fixed by adding another coordinate to the representation. I will call that axis the “tau” axis. The existence of that axis allows any repeated elements to be plotted to different positions in the tau direction. Note that the introduction of $\tau$ has added ignorance to the representation. Including this $\tau$ axis requires a vector notation $\vec{x}_i = (x_i, \tau_i)$. That their positions can not be the same can be enforced by the requirement,

 

$$P(\vec{x}_1,\; \vec{x}_2,\; \ldots,\; \vec{x}_n) = 0 \quad \text{whenever} \quad \vec{x}_i = \vec{x}_j \;\; (i \neq j),$$

where $P$ is the function which yields the probability of the specific collection of entities.

Addition of ignorance can also solve the first problem specified above. All one need do is find the specific representation with the largest number of entries and add “unknown” entries to every known experience sufficient to yield a specific value of n large enough to cover all experiences. To evaluate the represented probability, the ignorance introduced must be handled by integrating any mathematical representation over all possibilities for these added arguments. This integration will have a net impact on the result (not an unreasonable consequence).
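Both repairs can be sketched together (Python; the placeholder UNKNOWN, the helper to_points, and the particular fixed n are illustrative assumptions): repeated indices receive distinct tau coordinates, and every expression is padded with unknown entries out to the common length n.

```python
UNKNOWN = None  # placeholder for an added "unknown" entry (pure ignorance)

def to_points(indices, n):
    """Convert an index expression into n points, each an (x, tau) pair.
    Repeated indices receive distinct tau values, so repetitions survive
    the plot; the expression is then padded with unknowns to length n."""
    occurrences = {}
    points = []
    for x in indices:
        tau = occurrences.get(x, 0)      # how many times x has appeared so far
        occurrences[x] = tau + 1
        points.append((x, tau))
    points += [UNKNOWN] * (n - len(points))  # added ignorance
    return points

# The two occurrences of index 5 now land at tau = 0 and tau = 1:
print(to_points((0, 5, 1, 5, 2, 6), n=8))
# [(0, 0), (5, 0), (1, 0), (5, 1), (2, 0), (6, 0), None, None]
```

Evaluating the represented probability then requires summing (in the continuous case, integrating) over every possible value of the UNKNOWN slots.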

 

That leaves the second problem, the order of the elements. That problem can be solved by adding another hypothetical axis orthogonal to both the x and $\tau$ axes. I will call that axis t because it clearly corresponds to what is commonly called time. All languages I am aware of have a temporal order given to their elements. Each and every expression can now be replaced by a “collection” of expressions of the defined form, within which the order of the actual elements is of no significance.

 

Note that once again ignorance has been added. In evaluating $P$, only the elements specified by a specific value of “t” are actually defined. The others are to be seen as added “unknown elements”: i.e., their possible values are essentially defined to be unknown, though, now that the expression can be seen as valid mathematics, there exist “possible values” consistent with the defined probability, and one must integrate over those possibilities.

 

Under the elements and extensions as defined, we have what could be a mathematical expression required to be valid for all internally consistent explanations of any phenomena. It should be quite obvious that the representation is beginning to resemble the common physics representation of a collection of points moving in time. However, note that t was not defined as a continuous variable. Essentially, t specifies that the element existed at a specific known time. We can allow t to become a simple continuous variable by presuming the actual position of the referenced point is unknown between those known times. (Just adding more ignorance to the problem.)

 

At this point, the indices used to indicate specific concepts have become two dimensional vector entities. Excellent reasons can be laid out to increase this dimensionality by adding two more dimensions to the representation, thus creating an excellent resemblance to modern physics.

 

Uncertainty (i.e., values for the undefined elements) has become the single most prevalent feature of this representation. However, the scientific perception of actual values in such a functional representation of the whole universe is chock full of such uncertainty.

Part 5: Clarifying the resemblance to valid mathematics to an exact match.

Anyone familiar with modern physics will tend to see $P(\vec{x}_1, \vec{x}_2, \ldots, \vec{x}_n, t)$ as essentially equivalent to the expression for the probability of a specific distribution of points changing in time. That idea presumes each and every element exists between the specified known times. In essence this is no more than an extension of the uncertainty introduced by the creation of the elements $\tau$, t and the fixed n.

 

The definition of a probability requires that P be positive definite and, furthermore, that the integral over all possibilities must be unity. This suggests that P should be set equal to $\vec{\Psi}^\dagger \cdot \vec{\Psi}$, where $\vec{\Psi}$ is a complex function and $\vec{\Psi}^\dagger$ is the complex conjugate of that function.

If the integral of $\vec{\Psi}^\dagger \cdot \vec{\Psi}$ over all arguments is finite, one may merely divide $\vec{\Psi}$ by the square root of that number, and “P” will then be bounded by zero and one for absolutely all valid functions $\vec{\Psi}$.
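Numerically, this is exactly the familiar normalization step. A sketch (numpy; the particular one-component psi is invented, standing in for the general $\vec{\Psi}$):

```python
import numpy as np

xs = np.linspace(-10.0, 10.0, 2001)
psi = (1 + 1j * xs) * np.exp(-xs**2)       # any square-integrable complex function

norm = np.trapz(np.abs(psi)**2, xs)        # integral of psi-dagger psi over all arguments
psi = psi / np.sqrt(norm)                  # divide psi by the square root

P = np.abs(psi)**2                         # a genuine probability density
assert np.isclose(np.trapz(P, xs), 1.0)    # it now integrates to unity
```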

The simplicity of that result is actually somewhat surprising. In modern physics, one begins with mathematical equations (based on one’s understanding of reality) which must be solved. For assorted reasons, the solution ends up being defined to be a complex function $\vec{\Psi}$. The product $\vec{\Psi}^\dagger \cdot \vec{\Psi}$ is then “interpreted” to be a probability, and much is made of the problems associated with the issue above (referred to as normalization).

 

In the approach given here, the issue is attacked from exactly the opposite direction: P was defined to be a probability and, if “normalization” is not possible, the function $\vec{\Psi}$ can not be a valid solution. A somewhat different but much more satisfying resolution of the underlying issue.

 

Setting $P = \vec{\Psi}^\dagger \cdot \vec{\Psi}$, the original algebraic constraint that the sum of the differentials of P with respect to all n arguments must vanish requires a rather simple constraint,

$$\sum_{i=1}^{n} \vec{\nabla}_i \vec{\Psi} = i \vec{k}\, \vec{\Psi}.$$

Direct substitution will confirm that this constraint, together with its complex conjugate, constitutes a solution to the above equation: the $i\vec{k}$ and $-i\vec{k}$ terms cancel identically in the sum of the partials of $\vec{\Psi}^\dagger \cdot \vec{\Psi}$. At this point, the representation of the language being used to describe our experiences has begun to strongly resemble our current mathematical representations of a quantum mechanical universe.
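That direct substitution can be carried out symbolically. A one-component sketch (sympy; the amplitude A and the specific exponential form are illustrative assumptions, chosen to satisfy the constraint with two arguments):

```python
import sympy as sp

x1, x2, k = sp.symbols('x1 x2 k', real=True)
A = sp.Symbol('A', positive=True)

# A function obeying the simple constraint (sum of partials = i*k*Psi):
Psi = A * sp.exp(sp.I * k * (x1 + x2) / 2)

lhs = sp.diff(Psi, x1) + sp.diff(Psi, x2)
assert sp.simplify(lhs - sp.I * k * Psi) == 0

# Its conjugate obeys the conjugate constraint, so the sum of the
# partials of P = Psi-dagger Psi vanishes identically:
P = sp.simplify(sp.conjugate(Psi) * Psi)   # reduces to A**2, a constant
assert sp.diff(P, x1) + sp.diff(P, x2) == 0
```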

 

If we give an alternate name to what we have defined to be “elements” and call them “particles”, the identity becomes close to complete. This should not be thought of as a serious alteration of the original hypothesis. Clearly this proposed view of “supposed reality” must include particles of graphite on a sheet of paper, particles of chalk on a blackboard or even ink patterns within a published book. These and all the other existing elements together must constitute our understanding of reality.

 

At this point, the final mathematical constraint on $\vec{\Psi}$ can be explicitly written. First I will introduce some anti-commuting operators (elements which change sign when commuted) plus some related matrix elements. The resultant mathematics is as follows,

$$[\alpha_{ix}, \alpha_{jx}] \equiv \alpha_{ix}\alpha_{jx} + \alpha_{jx}\alpha_{ix} = \delta_{ij},$$

and,

$$[\alpha_{i\tau}, \alpha_{j\tau}] = \delta_{ij}, \qquad [\beta_{ij}, \beta_{kl}] = \delta_{ik}\delta_{jl},$$

and

$$[\alpha_{ix}, \beta_{kl}] = [\alpha_{i\tau}, \beta_{kl}] = 0, \qquad \text{with } \vec{\alpha}_i \equiv \alpha_{ix}\hat{x} + \alpha_{i\tau}\hat{\tau}.$$

One can then assert that a properly Fourier transformed $\vec{\Psi}$ exists which must obey

$$\left\{ \sum_i \vec{\alpha}_i \cdot \vec{\nabla}_i + \sum_{i \neq j} \beta_{ij}\, \delta(\vec{x}_i - \vec{x}_j) \right\} \vec{\Psi} = K \frac{\partial}{\partial t} \vec{\Psi},$$

as it amounts to little more than a Fourier transformed assertion that 0 + 0 = 0.

Part 6: The differences between this result and modern physics.

 

Clearly the mental picture here is somewhat different from the standard mental model presented in modern physics. Probably the single most disturbing factor is the existence of that tau axis. Under the picture presented here, the value assigned to $\tau$ can not be known. That brings up an interesting thought.

 

Heisenberg's uncertainty principle (an issue central to modern quantum mechanics) asserts that exact measurement of both position and momentum for any physical particle can not be made. That clearly implies that, if the position in the tau direction can not be known at all, the momentum in the tau direction must be quantized.

 

That idea suggests that every particle should have a quantized value associated with its momentum in the tau direction. Since the magnitude of velocity in the above representation is essentially an open, undefined parameter with no direct impact, we could simply define it to be a constant. This leads to some interesting consequences.

If a specific particle is at rest in the x direction, its velocity in the tau direction must be that constant just defined. This implies that its momentum in the tau direction must have a specific quantized value: i.e., it is rather obvious that this quantized momentum corresponds quite well with rest mass. That fact is reinforced by noting that a specific particle moving at the above defined velocity in the x direction would have to have zero velocity in the tau direction, implying its momentum in the tau direction must vanish. Thus only massless entities could travel at that defined speed in the x direction.
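One way to make that identification concrete (a sketch, assuming the fixed speed just defined is the familiar constant c and writing the quantized tau momentum as mc): every entity moves at speed c through the (x, tau) space, so

$$v_x^2 + v_\tau^2 = c^2.$$

Identifying $p_\tau = mc$ with rest mass, the magnitude of the total momentum gives

$$E^2 = (p_x c)^2 + (p_\tau c)^2 = (p_x c)^2 + (m c^2)^2,$$

which is just the familiar relativistic energy-momentum relation: a particle at rest in x ($p_x = 0$) has $E = mc^2$, while an entity with vanishing tau momentum ($m = 0$) must move at speed c in the x direction.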

Further analysis yields the fact that the above conceptual structure not only conforms to modern quantum mechanics but also perfectly conforms to special relativity. It is a little more difficult to show, but the structure also conforms to general relativity. (Defining an accelerating frame which encompasses the entire universe is not a trivial matter.)

Finally, this description of reality (which makes no presumptions as to what reality actually is) corresponds almost exactly to modern physics.  I can show a number of subtle differences which I think are worth looking into, one of which I will explain here.

An important “subtle difference” is the failure of electromagnetic interactions to cancel exactly. If electromagnetic interactions are created by photon exchange (which must be quantized), then an interaction with a positive charge can not exactly cancel an interaction with a negative charge. This fact leads to a subtle consequence: a single interacting charge (interacting with an extremely large collection of supposedly canceled charges) will end up following a crooked path.

 

That crooked path ends up appearing to propagate at a lower velocity than the velocity defined earlier: i.e., apparent velocities of entities approaching extremely large collections of charged bodies will decline. That is, the apparent speed of light must decline in the vicinity of large objects. This effect turns out to exactly duplicate gravitational effects without curved space.

Richard D. Stafford, Ph.D. in Theoretical Physics, Vanderbilt University, 1971