His papers are quite readable by any motivated physics fan; there’s not too much math, but you probably need at least a little general familiarity with current theories about the way black holes relate to entropy (like in Hawking’s books):
Inspirational. Here’s how I summarize the metaphysics.
Form = energy
Form > space
Form > time
Information = form
Information = energy
Fractals = n-dimensional form.
Thoughtforms = fractals.
Thoughtforms = trans-tempero-spatial energetic n-dimensional information.
Here’s where it gets tricky. Self-referential.
Learning these things about mind, information and context ... affects them all. Can you imagine? Information travels faster than the speed of light. In experiments of quantum entanglement, it is instantaneous. In this, it is like gravity. The two are related, clearly. The TV goes “crack” with big thoughts like these. So I know.
A-temporality creates significant challenge, and opportunity, through participatory feedback. To be aware that information is outside of time is to be with it thus. Awareness of this fact, as concept, thus leads to amplification of the information (and projection) through time. Signal strength is not, primarily, time or space dependent. It is associative, as the last article above suggests. Concepts, cares, affinities ... forms ... attract… or repel. So, once one has associated the foregoing concepts (associativeness, atemporality, trans-spatiality) of information and awareness themselves with one’s own awareness and information, one’s relation to information is forever changed. Technically, only then does one’s relation and being toward information become ... what it had always been.
This is why I had to write. The future is waiting for us.
So. There is more, of course.
To be aware of the trans-temporal phenomenology of information and awareness as such ... is to prepare to surf the manifold. One might do it for sport or fun (later), or simply out of fear of causing some disaster (early in the process).
That is, the implication is that to be attentive (and presumably even not) ... is to be in such a way so as to create the very wave that one rides. It is to be the leaf in the wind ... that determines the course of the wind itself.
Specifically, if guided, one might be in time and awareness (if one is aware and active) in such a way that one at once 1) adapts to thought stream so as to enable this present to have been created in the past by what I do now, even as I reflect on the past and perhaps wish it (or this) to have been different; and 2) aims for immediate resolution of the past, present and future pathologies of this cross-roads that required me to do this precarious balancing act and obstruct my more immediate objectives, and; 3) facilitates movement toward the future where my original first-order goals actually lie, and where this balancing will also not be necessary - post-paradox (simply because the tempero-informatic metaphysics are then known as commonplace, if nothing else).
We are the chronon.
I recommend Barbour, “The End of Time”, for a good lay-accessible but quite cutting-edge view of the physics of time.
An interesting viewpoint on the subject of time is the field of study created by Charles Muses that he dubbed “chronotopology”. Probably the best introduction to the topic is the introduction Muses wrote for another book, Communication, Organization, and Science by Jerome Rothstein. A partially transcribed copy exists here: Time and its Structure by C.A. Muses.
This fills in the missing pages between xi and xxx:
Modern studies in communication theory (and communications are perhaps the heart of our present civilization) involve time series in a manner basic to their assumptions. A great deal of 20th century interest is centering on the more and more exact use and measurement of time intervals. Ours might be epitomized as the Century of Time - for only since the 1900s has so much depended on split-second timing and the accurate measurement of that timing in fields ranging from electronics engineering to fast-lens photography.
Another reflection of the importance of time in our era is the emphasis on high speeds, i.e. minimum time intervals for action, and thus more effected in less time. Since power can be measured by energy-release per time-unit, the century of time becomes, and so it has proved, the Century of Power.
To the responsible thinker such an equation is fraught with profound and significant consequences for both science and humanity. Great amounts of energy delivered in minimal times demand:
a) extreme accuracy of knowledge and knowledge-application concerning production of the phenomena
b) full understanding of the nature and genesis of the phenomena involved; since at such speeds and at such amplitudes of energy a practically irrevocable, quite easily disturbing set of consequences is assured.
That we have mastered (a) more than (b) deserves at least this parenthetical mention. And yet there is a far-reaching connection between the two, whereby any more profound knowledge will inevitably lead in turn to a sounder basis for actions stemming from that knowledge.
No longer is it enough simply to take time for granted and merely apportion and program it in a rather naively arbitrary fashion. Time must be analyzed, and its nature probed for whatever it may reveal in the way of determinable sequences of critical probabilities. The analysis of time per se is due to become, in approximate language, quite probably a necessity for us as a principal mode of attack by our science on its own possible shortcomings. For with our present comparatively careening pace of technical advance and action, safety factors, emergent from a thorough study and knowledge of this critical quantity ‘time,’ are by that very nature most enabled to be the source of what is so obviously lacking in our knowledge on so many advanced levels: adequate means of controlling consequences and hence direction of advance.
Chronotopology (deriving from Chronos + Topos + Logia) is the study of the intra-connectivity of time (including the inter-connectivity of time points and intervals), the nature or structure of time, if you will; how it is contrived in its various ways of formation and how those structures function, in operation and interrelation.
It is simple though revealing, and it is practically important to the development of our subject, to appreciate that seconds, minutes, days, years, centuries, et al., are not time, but merely the measures of time; that they are no more time than rulers are what they measure. Of the nature and structure of time itself investigations have been all but silent. As with many problems lying at the foundations of our thought and procedures, it has been taken for granted and thereby neglected - as for centuries before the advent of mathematical logic were the foundations of arithmetic. The “but” in the above phrase “investigations have been all but silent” conveys an indirect point. As science has advanced, time has had to be used increasingly as a parameter (in the phase spaces of statistical mechanics) or explicitly.
Birkhoff’s improved enunciation of the ergodic problem(3) actually was one of a characteristic set of modern efforts to associate a structure with time in a formulated manner. Aside from theoretical interest, those efforts have obtained a wide justification in practice and in terms of the greater analytic power they conferred. They lead directly to chronotopological conceptions as their ideational destination and basis.
The discovery of the exact formal congruence of a portion of the theory of probability (that for stochastic processes) with a portion of the theory of general dynamics is another significant outcome of those efforts. Such a congruence constitutes more than a mere suggestion that probability theory has been undergoing, ever since its first practical use as the theory of probable errors by astronomy, a gradual metamorphosis into the actual study of governing time-forces and their configurations, into chronotopology. And the strangely privileged character of the time parameter in quantum mechanics is well known - another fact pointing in the same direction.
Now Birkhoff’s basic limit theorem may be analyzed as a consequence of the second law of thermodynamics, since all possible states of change of a given system will become exhausted with increase of entropy(4) as time proceeds. It is to the credit of W.S. Franklin to have been the first specifically to point out that the second law of thermodynamics “relates to the inevitable forward movement which we call time”;(5) not clock-time, however, but time more clearly exhibiting its nature, and measured by what Eddington has termed an entropy-clock.(6) When we combine this fact with the definition of increase of entropy established by Boltzmann, Maxwell, and Gibbs as progression from less to more probable states, we can arrive at a basic theorem in chronotopology:
T1 The movement of time is an integrated movement towards regions of ever-increasing probability.
Corollary: It is thus a selective movement in a sense to be determined by a more accurate understanding of probability, and in what ‘probability’ actually consists in any given situation.
This theorem, supported by modern thermodynamic theory, indicates that it would no longer be correct for the Kantian purely subjective view of time entirely to dominate modern scientific thinking, as it has thus far tended to do since Mach. Rather, a truer balance of viewpoint is indicated whereby time, though subjectively effective too, nevertheless possesses definite structural and functional characteristics which can be formulated quantitatively. We shall eventually see that time may be defined as the ultimate causal pattern of all energy-release and that this release is of an oscillatory nature. To put it more popularly, there are time waves.
An operator we will frequently use in these discussions is
m( ), standing for ‘the measure of...’
It is part of the definition of this operator that
m(F1) = F2,
where F2 and F1 are two functions or types of variation now to be defined further, F2 being explicitly given but F1 not necessarily so.
F1 we may term the primary function i.e. the basis of the observed variation-pattern for a given phenomenon or set of phenomena, whose behavior we can partially or perhaps even fully record, but whose complete nature is still unknown; while F2 is the secondary or measure-function, which we hypothesize and use as a substitute for the direct knowledge of F1, in order to measure or comprehend it scientifically. We do this and may do so, since the variation of F2 simulates, to a greater or lesser degree, the observed and recorded variation-patterns of F1. (7) All scientific mathematical laws are but measure functions. Clearly, there will exist some difference (as indicated by such greater or lesser degree of simulation) between m(F1) and F1; that is, in general,
m(F)-F != 0
That is, the measure is to some extent arbitrary or “ignorant” in the technical communicational sense.
Usually it is enough, for the satisfaction of the criteria of accuracy and predictability that
 m(F)/F = K,
where K is what may be called a functional constant, denoting an operational proportion, i.e. a predictable variation pattern of the distortion or relative disorganization when proceeding from F to m(F). (8) And K^-1 indicates the inverse operational proportion that directs the transition from m(F) back to F. We may thus write
F = K^-1[m(F)].
Before proceeding further (and it will presently be apparent that these considerations are vitally preliminary to chronotopology), it is correct and advantageous to inquire as to within what limits a given m(F) can be regarded as a feasible, usable, and valid measure of F.
The ideal case is given by K = 1 in the relation m(F)/F = K, or
m(F) = F
Setting m(F)=y and F=x, the ideal condition is represented by the equation
y = x,
which in turn represents graphically a straight line passing through the origin with a slope of +45 degrees to the X-axis.
Now in a given instance of m(F) and F, if K != 1, then y does not equal x but some function of x instead; that is,
y = f(x)
And to compare the given m(F) with the ideal of measure, we compare the equations y=x and y=f(x). This comparison is facilitated materially by some considerations of analytic geometry.
1.) Wherever the first derivative f’(x) = y’, that is, wherever f(x) is parallel to the line y=x, it is indicated that the measure is functioning perfectly, K having then been reduced to a scalar constant. (9)
2.) Wherever f’(x) = -(y’)^-1, that is, wherever f(x) is perpendicular to the line y=x, it is thereby indicated that m(F) has nothing in common with F and that it is under those conditions an unusable measure, since its projection in such cases on the line y=x (which represents the ideal condition of m[F]=F) is zero. At places of such perpendicularity the function m(F) as a measure of F does not reproduce F but yields only an arbitrary pattern, an unrelated distribution of variation. (10)
3.) When f(x) is inclined 45 degrees to y=x, it indicates that the measure m(F) is halfway between its perfect phase and its worthless phase. That is, wherever the 45 degrees mutual inclination occurs, m(F) is at its limits of usability and has reached the end of its margin of safety. Thus when y=x and the tangent to f(x) become more than 45 degrees apart m(F) becomes progressively more useless as a measure.
Hence, most measures (we are now speaking of functions or even entire theories, and not simply of scalars) have boundaries of usability that should be specified. It thus might well be, to take one instance, that in certain regions of phenomena the standard physical definition of force as m*s/(t^2) (where m is mass, s distance, and t time) will no longer obtain as a usable measure and will have to be replaced by a better, in the sense of a more comprehensive and more accurate one which will not contradict any true result obtained by use of the former measure. Approaches to such conditions are already known to exist to some extent in regions of so-called static equilibrium of forces, of potential energy; and in such regions the measure m(Force)=m*s/(t^2) becomes somewhat artificial and inapplicable to a noticeable degree. This observation also relates to those domains, termed unstable equilibrium, where negative entropy is retained.
4.) Therefore, the range of usability of m(F) is represented by those portions of f(x) which lie between pairs of successive points on f(x) where the inclination of the tangent to f(x) to the line y=x is 45 degrees. The area between f(x), y=x, and the perpendiculars from such points dropped on y=x, represents the distortion of the measure, and shows how that distortion varies within the range of usability. These statements are made more susceptible of numerical application by employing the equation of the tangent line to f(x) at (x1, y1).
The situation m(x) = e^x is interesting in that, since e^x approaches tangency to the line y=0 as x -> -infinity, and parallelism to the line x=0 as x -> +infinity, both of which lines are inclined 45 degrees to y=x, e^x thus proves usable throughout the range -infinity < x < +infinity. (11) However, there is a smaller region within this range, which can be termed the range of the most accurate usability or the region of nearest approach.
[Illustration, page xx: graph comparing y=e^x and y=x]
In some cases this region is identical with the entire range of usability. In this case it is not; but it may be practicably defined for y=e^x as the region bounded by AB, BB’, B’A’, and A’A in the figure, where P(0,1) is the point of nearest approach. The point of nearest approach is a generalization of the points of tangency and intersection. Its calculation and some of its relevant applications are best reserved for another occasion. Suffice it to say that in the present context it relates to places of greatest kinship between m(F) and F; and at points of nearest approach f(x) is parallel to y=x, the significance of which has already been discussed.
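To make the 45-degree usability test concrete, here is a minimal numerical sketch (my own illustration, not part of Muses’ text) that checks the measure m(x) = e^x against the ideal line y = x using the arbitrariness measure K_a = arctan[f'(x)] - pi/4 from note (10) below; the sample points and the finite-difference derivative are arbitrary choices.

```python
import math

def f(x):
    """The measure-function under test: m(x) = e^x."""
    return math.exp(x)

def k_a(f, x, h=1e-6):
    """Arbitrariness measure from note (10): K_a = arctan(f'(x)) - pi/4.
    f'(x) is estimated with a central finite difference."""
    slope = (f(x + h) - f(x - h)) / (2 * h)
    return math.atan(slope) - math.pi / 4

def usable(f, x):
    """Usability: the tangent to f stays within 45 degrees of y = x,
    i.e. |K_a| < pi/4."""
    return abs(k_a(f, x)) < math.pi / 4

for x in (-3.0, -1.0, 0.0, 1.0, 3.0):
    print(f"x = {x:5.1f}   K_a = {k_a(f, x):+.4f}   usable: {usable(f, x)}")
```

The output shows K_a drifting toward -pi/4 for large negative x and toward +pi/4 for large positive x without reaching either bound, matching the claim that e^x is usable over the whole real line while approaching its limits of usability at the extremes, with the nearest approach near P(0,1).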
With the material presented since the introduction of the operator m( ) we have now sketched a general analytic means for the evaluation of hypotheses, theories, definitions, and concepts, all of which fall under the scope of the generalized measure operator. The problem of mathematical stability is related to the problem of the usability of a measure; and mathematical instability is related to the progressive collapse of a measure beyond its margins of safety, or beyond the limits of its range of usability. Further details are not demanded by our present theme, and we return to the relation
m(F)/F = K.
Developing the matter in a somewhat different direction, we can regard the existence of K as an indication of the arbitrariness of the measure. That is,
[1c] K = m_a[m(F)],
m_a[m(F)] denoting the measure of arbitrariness of m(F).
We can assign a value to this arbitrariness by defining its measure as some usable measure of K, for instance the K_a discussed in note (10) below. Then, the arbitrariness being a, we have, for any particular place in m(F)
m(a) = K_a
and as we have seen, if
m(a) -> 0
K -> constant,
and then of course, m(F) = F, by [1b].
This ideal condition does not ordinarily hold, and in general a != 0.
However, the previous ideal result does refer to an essential tendency and as such a necessary condition for all increasingly usable measures, an important concept. That is, any such measure (e.g. a hypothesis or theory, regarded as a fact-measure, as some m(F)), as its use proceeds in time, is modified progressively so that its a -> 0. Without such a time-modification any measure with an initial K != 1 degenerates more or less rapidly in its significance, i.e. becomes decreasingly usable as observations continue, and is finally unusable.
We shall find that for a strict application of this chronotopological criterion it is not enough that a -> 0; it must approach zero not asymptotically, but more convergently than that. The difference between two types of thinking must now be specifically stated. Where a -> 0 asymptotically, we have the normal process of historical development of understanding of a subject or field. That is, the arbitrariness becomes successively smaller, but never zero. In terms of Mr. Rothstein’s approach, there always remains some noise (non-information) in the circuit. Thus, m(F) != F and the concept is never quite true to its object, and hence wherever actual elements in F operate within the distortion region of m(F), mistakes are made. These mistakes are the discrepancies between F, as determined from m(F) with K != 1, and F as determined when K = 1. F_(K=1) is indirectly revealed by the analysis of mistakes.
Hence, such discrepancies lead to a better approximation, to a better measure, to a lower a, to a K more nearly 1. Thus, the existence of a discrepancy, which may be represented as
[m(F)_(K != 1) - m(F)_(K = 1)],
operates on m(F)_(K != 1) in such a way that the ratio
m(F)/F -> 1 more nearly; that is,
m(F)/F -> p/q, where p -> q.
For simplicity’s sake let us assume that p and q are positive integers, with p varying in value and q constant. This entire action and process recurs in the usual historical development of the understanding of a given subject in such a fashion that p -> q asymptotically. Thus we effectively have a hyperbolic accuracy-function if we assume the simplest type of asymptotic approach, above stated.
The accuracy of this process of recurrent self-correction without ever attaining complete correctness is that of a divergent series with an asymptote.(12) It is not unconditionally divergent, as it would be without homeostasis of some sort, which is directly related to purpose-maintenance. Intelligence is maintenance of purpose by adaptation of the means at hand to the attaining of that purpose. But
represents a series-process that reaches no limit at all; and if regarded as the deviations of an unusable m(F), it can be said to oscillate to and fro about F in larger and larger amplitudes as the number of data (occurrences) increases with time. The series
represents a similarly unintelligent process with limits imposed on its maximum distortion through some set of controlling conditions. But tendencies toward convergence within the context of this discussion denote the presence of some teleological force pattern, including intelligence.
Normal historical development is not divergent or randomly distributed in its K-change series, but it is asymptotic. This fact suggests that certain limit-ratios obtained from the process itself might enable the formulation of a more accurate measure of it than could be gained by consideration of its merely general character.
The way understanding in a given field historically develops, to take as an example the Fibonacci series (to which we shall have later occasion to refer), is by way of the successively more accurate approach of F_n / F_(n-1) to r as n -> infinity, where F_n is the nth term of the series, and r the limit of the ratio of two successive terms. Thus, the Fibonacci series being 1, 1, 2, 3, 5, 8, 13, 21, 34, ..., we have as ratios
5/3, 8/5, 13/8, 21/13, 34/21, ... -> r,
where r = (1 + sqrt(5))/2, possessing the important property r + 1 = r^2, r^2 + r = r^3, ..., r^n + r^(n-1) = r^(n+1).
F_n / F_(n-1) = m(r),
we can write for this measure K, as per the relation m(F)/F = K above,
K = (F_n / F_(n-1)) / r,
and for any assignable n, however large, the distortion relation
K != 1
m(r) != r
will always hold, even though m(r) is an increasingly usable measure as n increases.
In addition to the above results, this Fibonacci ratio-series proves to be still more comprehensive as a measure-function for the historical development of understanding in any field, since each successive F sub n / F sub (n-1) has a value alternately above and below that of r; but the swings become smaller and smaller in amplitude as n increases. Thus the graph of the F-ratio series is a damped oscillation about the value r, first erring toward one extreme and then toward the other, approaching the limit by a process of continuing reaction, endlessly extended. This is a familiar historical phenomenon.
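As a quick check of this damped oscillation, here is a small sketch (mine, not from the transcription) that lists successive ratios F_n / F_(n-1), the distortion K = m(r)/r, and whether each ratio falls above or below r = (1 + sqrt(5))/2.

```python
import math

r = (1 + math.sqrt(5)) / 2  # the limit of the ratio of successive terms

# The Fibonacci series 1, 1, 2, 3, 5, 8, 13, ...
fib = [1, 1]
for _ in range(12):
    fib.append(fib[-1] + fib[-2])

for n in range(2, len(fib)):
    m_r = fib[n] / fib[n - 1]   # the measure m(r) = F_n / F_(n-1)
    K = m_r / r                 # distortion relation K = m(r)/r, never exactly 1
    side = "above" if m_r > r else "below"
    print(f"n={n:2d}  m(r)={m_r:.6f}  K={K:.6f}  ({side} r)")
```

Each ratio errs alternately above and below r with ever smaller amplitude, while K approaches but never equals 1, which is exactly the asymptotic, never-completed approximation described above.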
But the process of discovery, as distinguished from the normal historical trial-and-error development in any field, is not asymptotic or even convergent in the ordinary sense. For, turning again to the F-ratio series, discovery consists not in trying to obtain r by successive oscillating approximations, each attached to a ‘probable error’; but discovery succeeds in grasping the law of the F-series itself, which law yields the exact truth of the matter; namely in this case that:
F_n / F_(n-1) -> r
(n -> infinity)
where r has the exact value of (1+ sqrt(5))/2.
The difference between F_n / F_(n-1) (however great n be) and r is the difference between (a) logical guessing and (b) complete and full understanding of a subject with mastery of its fundamental explanation. Since F_n / F_(n-1) can never equal r, they are what may quite correctly be described as a process dimension apart. A statement of full comprehension or discovery, as distinct from its preceding period of increasingly better inductive approximation, is a statement of an immediate convergence, not by the rule for approximating, but by the law of the series itself and the structure of its limit, which govern the rule for approximation by including it as one implication. Discovery thus is related to the limit of a convergent series and becomes a kind of immediate convergence, bringing to an end some otherwise infinite series of never-ending approximations.
There is a science of certainties, just as there is the more familiar science of approximations.
It is now possible to appreciate the important distinction between existence theorems and what may be termed reason theorems. That something is true in mathematics is like an observation in physics. But the explicit understanding of why (and not merely formally how) it is true constitutes a reason theorem. Every fundamental discovery in the history of scientific thought has enlarged some phase of some existence theorem into a reason theorem. We cannot remain satisfied in mathematics (nor in logic or physics) with the mere ‘that’-type of proof or with existence proofs. Having them, we must then uncover also the reason for the existence of each existence theorem, and the relation between those reasons. Then we not only know, but we know why we know. We understand. Among recent great mathematicians, Hilbert called this process insight, without analyzing it further.
When we turn to chronotopological analysis as such, we find that
a) Time has a structure;
b) That structure is chartable and calculable; one of the primary statements in chronotopology being the description of the time interval t (=t2 - t1), as
t = integral from g_1 to g_2 of (1 + S''^2)^(1/2) dg
where g is the change effected, measured in some appropriate phase space, and S is the scope or range of change, measured in some kind of released energy quanta; the functional relation between S and g being periodic and S’’ being the second derivative of S.
c) The structure of time can be used as a basis for prediction and control;
d) All phenomena are ultimately formed of and possess cyclical components, keyed to the wave-nature of time.
e) Situations contain quite general laws of development, growth, change, and cessation, and these laws can be formulated, leading to practical and powerful tools which may be descriptively termed for the present as chronavigation and chronocontrol.
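Item (b) above defines the interval t as an integral of (1 + S''^2)^(1/2) over the change g. Purely to illustrate the arithmetic of that formula (the periodic form S(g) = sin g, the limits of integration, and the numerical method are my own assumptions; Muses specifies none of them), the interval can be evaluated numerically:

```python
import math

def S(g):
    """Assumed periodic scope-function; the text only says the relation
    between S and g is periodic, so sin(g) is a stand-in."""
    return math.sin(g)

def S2(g, h=1e-4):
    """Second derivative S'' estimated by central differences."""
    return (S(g + h) - 2 * S(g) + S(g - h)) / (h * h)

def interval(g1, g2, steps=10_000):
    """t = integral from g1 to g2 of sqrt(1 + S''(g)^2) dg, trapezoid rule."""
    dg = (g2 - g1) / steps
    total = 0.0
    for i in range(steps + 1):
        g = g1 + i * dg
        weight = 0.5 if i in (0, steps) else 1.0
        total += weight * math.sqrt(1.0 + S2(g) ** 2)
    return total * dg

print(interval(0.0, 2 * math.pi))  # the interval over one full period of the assumed S(g)
```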
It is to be concluded even from daily observation that there are oscillations of many different temporal wavelengths operating upon us, both directly and indirectly. Citing a few, there are the various hormonal cycles, the encephalographic ‘brain waves,’ the autonomic periodicities governing heart-beat and breath-rhythm; various historical and socio-economic typological recurrences such as inflation and deflation, stability and instability; personal periodicities of moods, and the climatic and astronomical cycles of the earth’s circumsolar revolution, our galaxy’s rotation, and many more. Waves of probability, or, more concretely put, periodic likelinesses of occurrence, are now known to furnish the chartable dynamic laws of the subatomic world. There is a common reason for all this, and that reason is part of the nature of time and specifically, of its wave-nature.
Some of the results obtainable directly or as a by-product of chronotopological analysis can be summarily outlined. Interestingly enough they fall for the most part in the fields of number theory, including Diophantine analysis; complex function theory; and certain aspects of morphology, both physical and biological.
Thus we find that Bode’s Law, so often cited and yet not understood, finds its basis in chronotopological considerations.
Actually, three series are involved,
Series I: 4+3n, n=0, 1, 2
(Mercury through Earth)
Series II: 4+3(2^(n+1)), n=0, 1, 2, 3, 4
(Earth through Saturn, using Ceres, the largest asteroid, for the orbit between Mars and Jupiter)
Series III: 4+3(n+1)2^5, n=0, 1, 2, 3, 4 (13)
(Saturn and beyond)
yielding with surprising accuracy the mean solar distances of the planets in terms of tenths of Astronomical Units. When it is not recognized that there exist these three series, the ordinary use of Bode’s Law breaks down after Uranus. However, series III yields distances of 29.2 A.U. and 38.8 A.U. for Neptune and Pluto respectively, which answer to the observed values of 30.1 and 39.5.
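The three series can be tabulated directly from the formulas above; this small sketch (the planet labels in the comments simply repeat the parenthetical notes) reproduces the distances in tenths of an Astronomical Unit.

```python
# Bode's Law as three series (distances in tenths of an A.U.)
series_I   = [4 + 3 * n for n in range(3)]                 # n = 0, 1, 2
series_II  = [4 + 3 * 2 ** (n + 1) for n in range(5)]      # n = 0 .. 4
series_III = [4 + 3 * (n + 1) * 2 ** 5 for n in range(5)]  # n = 0 .. 4

print("Series I   (Mercury through Earth):", series_I)     # [4, 7, 10]
print("Series II  (Earth through Saturn): ", series_II)    # [10, 16, 28, 52, 100]
print("Series III (Saturn and beyond):    ", series_III)   # [100, 196, 292, 388, 484]

# Series III in A.U.: 29.2 for Neptune and 38.8 for Pluto (observed 30.1 and 39.5),
# plus the trans-Plutonian term 48.4 of note (13).
print([d / 10 for d in series_III])
```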
The distances of the planets are, of course, related to the periods through the Newtonian gravitational potential, manifesting as Kepler’s third law. However, it remains to be seen whether there exist, as in the case of the distances, any integral numerical series in the periods.
We are afforded a clue to the answer of this question by the special positions held by the Earth and Saturn in the distance series. We shall hence heuristically assume the periods of Earth and Saturn to be our units, noting that we are seeking tendency-laws rather than minutely deterministic ones, which in any event scarcely if ever are to be met with in problems involving natural form or design. But tendencies do exist in nature, and these may be sharply defined and very useful in relation to their numerical bases and the reasons for those bases in nature. The principle is pervasive and fundamental enough to warrant this verbal underscoring.
(3) Which may be briefly summarized as the problem of obtaining some analytic connection between time and the patterns of possible change of a statistically analyzable set-up composed of many heterogeneously moving elements.
(4) Entropy: In one respect a poorly contrived word, yet the only one we possess in standard usage for the concept. Its word-form is positive, but it stands for a negative concept, that of irrevocable thermodynamic degeneration with time: that a burnt match can’t be ‘unburned’ is an illustration of the law of increase of entropy. Ultimately, the central fact of entropy is involved with the inability to turn radiant energy into potential, radiable energy again.
(5) Physical Review 1, xxx, 766, June 1910
(6) The Nature of the Physical World, p 63ff. Cambridge University Press, 1928.
(7) The connection between the calculus of variations and measure theory lies in these considerations, and arises out of them.
(8) In the somewhat more artificial, but perhaps more familiar terms of set theory, K is a transformation from F onto m(F); that is, K is a transformation whose domain is in F and whose range is in m(F).
(9) At those points where f(x) and y=x are not only parallel but also tangent, K=1.
(10) At points where f(x) might intersect y=x perpendicularly (or nearly perpendicularly), we obtain what can be called points of meaningless identity; that is, places where isolated values of m(F) are identical with those that would be afforded by a perfect measure, while the pattern of variation of m(F) immediately surrounding such points shows it to be totally unrelated to that of F. Points of meaningless identity, unsuspected as such, generate possibilities for fallacious conclusions as to the worth of m(F) as a valid measure. Many untenable hypotheses and theories are of such nature. Such pitfalls are distinguished and avoided by the test of ascertaining the over-all pattern of mutual variation of m(F) and F which surrounds such regions. This pattern is given by K. It is practical to observe that if we take K_a = arctan[f’(x)] - pi/4, then K_a = m(K), since pi/4 is the angle of inclination of y=x. If K_a = 0, f(x) and y=x are parallel; if K_a = pi/4, they are 45 degrees apart; and if K_a = pi/2, they are perpendicular.
(11) This demonstratedly wide range of usability of the exponential function has a connection with its wide and versatile use as a measure in many concrete applications. The equilateral hyperbola vies with it in range of usability. This is not surprising since that curve and log_e(x) are intimately related: the integral of (1/x) dx = log_e(x); and we are generally cognizant of the widespread use of the logarithmic function as a measure of various phenomena. Log_e(x) itself also has a range of usability over 0 < x < +infinity. On the other hand, the circle x^2 + (y-6)^2 = 9 under the circumstances of this discussion would be usable only in the range of arc between the points (0,3), (3,6) and between (0,9), (-3,6), for example.
(12) E.g. a series based on the area under a hyperbola, which area is infinite but not unconditionally so.
(13) Trans-Plutonian term, yielding a most probable mean solar distance of 48.4 A.U., corrected to 48.9 A.U. by extrapolation from neighboring deviations from observed values.
The above transcription ends mid page xxx, sadly still a bit shy of the start of the next transcribed section at page lviii.
Hence, to assert that such existence theorems constitute full logical satisfaction is another example of the general reductive fallacy, which we mention now in a different context because of its deep-seated prevalence, and the consequent necessity of understanding it in order to be aware of it under Protean guises. That fallacy consists essentially in a process of unilaterally exclusive reasoning, whereby inadequate premises lead to conclusions which result in reducing the field of the subject matter to less than it in fact is. Such conclusions are naturally false beyond the narrow and arbitrary barriers of the more or less rigid conceptions that spawned them in the first place, and misleading even within those barriers as suggesting there is nothing outside them to consider.
The reductive fallacy is a besetting one. A recent example of such thinking is the persistent attempt to interpret the non-Doppler red-shift in the ad hoc terms of a speculative, violently expanding universe, the only alleged proof for which lies in the very red-shift under observation, and the only example of which consists in the alleged expansion. This is a case of reducing the intrinsic implications of the observations, and then imputing other implications in order to obtain conformity with some otherwise unverified assumptions arising out of the hitherto current fashion of general relativity theory.
Actually, the logical economy of the observational situation in question would direct the attention of a dispassionate inquirer to a much more economic conclusion: that here we have an implication of a type of hysteresis. Consideration of the elastic hysteresis of a wave, computed from the energy of the wave in connection with its medium of transmission, shows that both the amplitude and frequency of any unsustained radiation wave must decrease with time on account of hysteresis losses of energy. In the case of light, this hysteresis loss can be shown to be of the same order of magnitude as the observed red-shift.(36) By the nature of elastic hysteresis we would expect such losses to be accelerated as the energy of the spent wave approaches zero. This has proved to be the case. On July 14, 1956, Drs M Humason and A R Sandage of the Mt Wilson and Mt Palomar observatories reported a non-linear increase in the red-shift for the most distant galaxies the great 200-inch telescope could reach. This fact they interpreted under the old exploded-universe hypothesis as a fantastic increase in speed by the giant nebulae, instead of a natural and inevitable accelerating energy-loss of the light wave.
Not only does the understandable and easily verifiable hysteresis phenomenon explain the non-Doppler nebular red-shift, but it enables us to calculate the maximum diameter of a physical cosmos as 3.5 x 10^9 light years. (37)
There are interesting relations between the elastic hysteresis of a light wave and its electromagnetic momentum and mass, for which there is not space here. Suffice it to say that the horizon of logical inquiry and scientific thought in general is significantly extended by putting aside reductive thinking, as such thinking ends in actually tending to hinder inquiry and discovery.
The process of the historical development of ideas has proceeded, as we said before, like the successive approximations of the final ratio of successive terms of a Fibonacci series, first veering to one side of the true answer and then to the other. There is a definite oscillation of injustice and error in history and in the fashions of ideas, at one period swinging to the right of issues and in a following period to the left, within a given area of discussion. In our times the pendulum of scientific fashion has swung almost the gamut toward formal reductivism. We are confronted with an anti-intuitive mathematics, a behavioristic approach in the social sciences, and a consequent laxity in thinking about origins, the reasons for the choice of axioms being allowed to remain obscure-often, unfortunately, in the prejudicial interest of a minutely formalistic exhibition of the necessary conclusions following upon the favored assumptions. All this not infrequently accompanies a preoccupation with merely notational minutiae and tableaux of ad hoc and already prearranged axioms, dangerously nearing intellectual decadence.
Mr. Rothstein is one of a growing number of refreshing and felicitous exceptions to a general reductivist tendency and tropism of twentieth century thinking today. He fails to be so only where he forgets that in the process-too often overlooked-underlying the choice of certain axioms and the consequent rejection of others lies the nub and node of the inherent problems encountered in various methodological theories. Occam’s razor helps in eliminating logical expendables, but still the ultimate means of admitting or rejecting axioms, assumptions, premises, or hypotheses remains as it ever has been: fact, as known through experience, with no arbitrary limits of meaning imposed upon experience. The clauses are essential.
The very value of his contribution makes mandatory the mention of some safeguards against specious pitfalls. Thus when Mr. Rothstein attempts, on the grounds that his assumptions are not competent to deal with them, to exclude feelings from meaningful experience (p.56), he becomes Procrustean and subject to the intellectual ankylosis he himself laments in his preface. For he implies in the discussion that his assumptions are sufficient for logical inquiry, although the fact of his omission would deny such sufficiency.
If he or any other non-commercial writer did not have distinct feelings-quite aside from knowledge-as to a given subject, he would not be writing about it. Indeed he would not have set about acquiring his specialized knowledge in the first place without the drive and incentive of some feeling in regard to it. Scientific curiosity itself contains a goodly share of affective components. The search for and appreciation of harmony in nature that Mr. Rothstein so well speaks of is perhaps the most potent factor behind all the greatest scientific discoveries - and that factor is principally a feeling. Even the phobia against considering the scientific importance of feelings is itself a feeling. Far from being irrelevant to logic, feelings are the prime basis in all human activity-including science-for selection among alternatives of otherwise equally acceptable premises. And premises underlie all logical demonstration.
Simply because something is not susceptible to measurement with the means we have developed, or is not commensurate with some favorite criterion of knowing that we have advanced, we cannot say that it does not exist or that it has no meaning. One of the greatest and most recurrent fallacies in twentieth century thinking has been to equate non-measurability with non-existence or insignificance, in an attempt to relegate the allegedly nonmeasurable to the limbo of unreality whither, however, it stubbornly will not go.
The author himself (p.29) terms absolute simultaneity “meaningless within the framework of the empirically verifiable,” following at this point the phenomenological relativist view that began with Ernst Mach, who so sacrificed objectivity that in the face of fact and reason he soberly alleged and preferred to believe centrifugal forces were effects of the fixed stars rather than to see them for what they are: derivative components of the rotational forces whence they arise. But rotation did not fit into Mach’s phenomenological relativity, as it has always proved a sore point for reductive thinking of a relativistic nature. Likewise, the statement above quoted is a fallacious notion similar to that consigning the presently unmeasurable to non-existence or insignificance.
Actually it is not difficult to demonstrate the necessary existence and occurrence of absolute objective simultaneity, regardless of any sensory illusions arising from using a signal of finite speed as a measurement communicator for mutually moving bodies. If two wheels, A and B, whose axes lie in a common horizontal plane, are rotating at different speeds, there are at all times two points, P_a and P_b, on the respective peripheral edges at a maximum perpendicular distance from the axial plane on any given side of it. The existence of an absolute simultaneity follows simply from the continuous nature of the wheels’ rotation and their circular shape. There is always a highest vertical point on both wheels, and in fact a continuous succession of absolute simultaneities. Indeed a constantly changing complexion of continuously occurring absolute simultaneities is an essential characteristic of any dynamic system.
A similar situation is presented in the case of mutual motion among two or more bodies, said by reductivist extremists to be impossible to specify among the bodies and hence meaningless and effectively non-existent. However, let us consider two manned rockets passing by one another, each ship equipped with a speedometer and a televised screen-image showing the instrument panel of the other ship. Now if both rockets are proceeding at uniform velocities and approach each other from opposite directions, a third and external observer might well not possess enough data to determine the vector components and decide who is approaching whom. But the individual pilots do have enough data to ascertain this fact unambiguously.
An inaccessibility of data cannot be set down as empirical non-existence of those data; (38) otherwise scientific method is abandoned for mere semantic jugglery. The logical situation is similar to that surrounding the commonly used phrase “equally likely,” e.g. p.21 of this volume: “… some physical magnitude has a value between 0 and u, all values being equally likely,” italics ours. In all such uses of the phrase what is actually meant is “equally likely so far as we know at the time of the measurement”; for greater knowledge always refines probability data and provides more specific and differentiated probability values that nullify the previous allegation of “equally likely,” which was based only on our then greater ignorance. (39) “Equally likely” possesses more entropy and hence less information than a more specified probability pattern. With more information, an entropy decrease is mandatory as per the author’s own exposition.
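The point that “equally likely” carries more entropy, and hence less information, than a more specified probability pattern can be checked directly with Shannon’s entropy formula; a minimal sketch, in which the two distributions are arbitrary illustrative choices and not taken from the text:

```python
import math

def shannon_entropy(p):
    """H(p) = -sum p_i * log2(p_i), in bits."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

equally_likely = [0.25, 0.25, 0.25, 0.25]   # "equally likely so far as we know"
more_specified = [0.70, 0.15, 0.10, 0.05]   # refined by further measurement

print(shannon_entropy(equally_likely))  # 2.0 bits, the maximum for four outcomes
print(shannon_entropy(more_specified))  # about 1.32 bits: lower entropy, more information
```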
Thus, in the empirical realm of human observation a two-valued logic is far less realistic than a three-valued system containing a ‘Yes’, a ‘No’, and an immense ‘Maybe’ reaching over a vast number of data, each tagged with a thus-far-ascertained probability. On the other hand, a two-valued logic becomes far more appropriate for data within a strictly limited and specifically defined universe of discourse, or, more generally put, for data on which full information is available or accessible; in other words, for informationally closed systems. (40) It is worth noting in this connection that pure noise also exists only for informationally closed systems, and may be alternatively defined as irrelevancies with respect to those systems. Otherwise, there is no such thing as “pure” noise in an informational sense. Radio static may tell us about the ionosphere or even the stars, as in radio astronomy. Any noise in any case tells us something about what is causing the noise, if we allow our interpretational framework to remain open. And thus noise metamorphoses into information. In the context of this discussion the statement on p.16 that “no information can be extracted from pure noise” is not unqualifiedly meaningful, as pure noise does not exist except in some specifically restricted sense and in a closed system. It is the reductive fallacy again.
The logic of the situation is similar with respect to the observation (P.19) that Riemannian geometry has been shown to give a more “perspicacious description of the physical world” than Euclidean geometry. This is not quite so. The shortest distance between two points is still a straight line in our physical universe, provided we ourselves impose no special limitations as to the paths over which measurements may be made. An arc of a great circle, for instance, is the shortest distance between two points, A and B, on a sphere if we restrict ourselves to paths on its surface. But if we allow ourselves to pass through the surface, the shortest distance is the chord AB. The unrestricted geodesic in any space is always a straight line.
A specious notion-and one which tends to reflect upon some of Mr. Rothstein’s finest insights-is that progress is always assured by more “fitting into the ‘system’” (p.59) or “adjustment,” a shibboleth promoted by John Dewey on the basis of reductive Darwinian psychological dogmas. One reason words like “adjustment” and its cognates have become so popular in our time is that they provide a superficially impressive verbal screen for an age seriously depleted in values, the reacquiring of which is its major problem. This is a problem which cannot be solved by the mass methods now so in fashion; for its solution integrally involves the increase of the dignity and integrity of the individual as such.
Adjustment in itself is a meaningless term until we are clearly told what it is to which adjustment is being advocated. A successful criminal may also be “well adjusted.” No pseudo-magical formula of “more adjustment” (which in some situations, as in brain-washing, can mean actual degradation) amounts to more than empty or invidious words unless those values and aims are revealed to which that adjustment is exhorted. Thus the aim of the deindividualized and soulless twentieth century mass-state is 100% adjustment-to those few who rule the state. In fact, the characteristic negative entropy associated with human activity can be interpreted as the creative and responsible opposite of “adjustment,” which is simply going with the tide. “To adjust” has come to mean “to conform” in the context of Deweyian and post-Deweyian mass-ethos and its political counterpart, the oligarchically controlled total state, whether it call itself capitalist or communist.
In this sense the USA and the USSR, for all their vaunted opposition, appear rather, by a common denominator of de-individualization, to be approaching each other psychologically; and hence in terms of the effect upon their component “cogs"-the forgotten individualities of their peoples. The situation is rapidly becoming that of the nonsense verse
Back to back they faced each other,
with its ominous sequel of a surprise weapon
Drew their swords and shot each other.
Yet quite aside from atomic fission or fusion, the nation or civilization that kills the individuality of its members, and worse, anaesthetizes them to the fatal amputation, has thereby slain itself by its own deadening hand-and is doomed.
There is no cure for this tragedy once it is upon us; but there is a preventative. A very important aspect of negative entropy is involved, which the author adumbrates when he very ably writes, with admirable acumen (pp.34-35):
‘What do we mean by an organization? First of all, organization presupposes the existence of parts, which, considered in their totality, constitute the organization. The parts must interact. Were there no communication between them, there would be no organization, for we would merely have a collection of individual elements isolated from each other. Each element must be associated with its own set of alternatives. Were there no freedom to choose from a set of alternatives, the corresponding element would be a static, passive cog rather than an active unit contributing to the organization in an essential way. Such an element can be called structural, as distinguished from the active or organizational element.’
We have italicized the essence of this excellent definition, as without that essence we have merely a rigid structure: ossification, but not organization.
Curiously enough, in the very next paragraph the author loses sight of the vital distinction he had just so patly described, and says
‘On the other hand, it is possible that the coupling between elements is so strong that only one complexion is possible. In this case ... organization is said to be maximal. All elements are then “cogs.” --
thus misleadingly using “maximal organization” to refer to ossification, i.e. when “all elements are then ‘cogs.’” This confusion after so telling an initial analysis springs only from having neglected an essential attribute of organization he had almost first grasped; namely, the existence of centers of energy concentration. A wider generalization is now possible: energy concentration is the hallmark of all forms of negative entropy, just as energy dissipation or scattering distinguishes positive entropy. The energy distribution patterns so characteristic of the presence of negative entropy are but the consequences and contingent reflexes of those centers of concentration. A still wider form of generalization is that some form of energy concentration must be the basis for any kind of order; just as energy scattering is the basis of disorder. It should be carefully noted that energy distribution, which involves energy concentration, is diametrically different from mere scattering or dissipation. We are now able to define the very fundamental concept of order in precise terms, as essentially involving energy concentration of some kind.
A point-center hence becomes the simplest form and source of order, as we already demonstrated by a different route on page fifty. The individual is thus the source of societal order, not the state. To ignore or, worse, dispense with him, the social atom, is to bring eventual and inevitable disorder upon society-the death of total entropy.
Recalling some previous distinctions, we can say now that rotation and angle would be logically more primarily related to negative entropy, just as radial extension and line would be to positive entropy. Our previous findings in regard to the imaginary and to periodicity serve to confirm that conclusion. The interesting question of what would physically constitute mathematically imaginary entropy can now be approached. For we realize at least that it must be characterized by some energic operation, which when applied to its own result leads to a concentration of energy (negative entropy) just as i^2=-1. If applied twice more in like manner, positive entropy must result: i^4=1. The simplest physical counterpart of such an operation is the rotation of the driving wheel of a locomotive cylinder piston, the piston being attached in the usual eccentric fashion. Such rotation requires 180 degrees for compression (negative entropy increased), while a further 180 degrees completes the expansion phase (positive entropy increased). Imaginary entropy thus refers to the energy state of a fundamental periodic operation resulting in alternation of negative and positive entropy increases. But this is precisely the operation of a wave. Thus imaginary entropy is the characteristic entropy of any energy wave and thus of all radiant energy, as well of all energic cycles.
Returning to the nature of negative as distinct from positive entropy, there is something we still should consider. It may be called the problem of the water-skaters, insects that move constantly on the surface of a flowing stream to keep themselves at comparative rest with regard to its banks. Their motions represent a perpetual fight against the current of the stream. Unlike the leaf floating downstream, they are not adjusted to the prevailing current but are set against it. They are centers of negative entropy maintaining their own form of organization in the face of an opposing force. They have solved their problem against the attrition of positive entropy. On the other hand, the floating leaf exhibits positive entropy.
Flowing with the prevailing tide or current because of not being able to do otherwise is an entropy-increasing process; just as rowing upstream, for instance, reveals an entropy-decreasing activity. Housecleaning is another form of increasing negative entropy, since the direction of positive entropy is toward a household’s becoming disorderly and dusty. Auto maintenance and repair are likewise entropy-decreasing. The central problem of geriatrics is to decrease biological entropy by slowing down the rate of increase of positive entropy in the human body. If we may term G the complex of geriatric or life-prolonging factors, then we have as a measure G = -(d^2 E_b)/(dt^2), where E_b is the biological entropy and t is time. All stresses and strains, both physical and otherwise, including psychophysiological anxieties and tensions, are entropy-increasing factors, and may be analyzed by the methods and concepts possessing the logical structure of thermodynamics. On these levels thermodynamics merges into chronotopology, as is also evident in light of all the preceding pages.
The point to be kept in mind here is that conforming to an externally imposed system of prevailing tendencies more often than not actually represents an increase of entropy, and not organization at all. Thus what may appear as one hundred percent coupling is in itself not a criterion of zero much less negative entropy, and the situation must be analyzed further. A penetrating consideration of negative entropy takes one to very subtle and profound atmospheres of thought and conception. As Mr. Rothstein finally senses in summation (pp.109-110):
‘Each synthesis seems to open the door to a higher kind of experience, which in turn permits higher syntheses. In spite of the solid “material” or “mechanical” basis of the discussion, there was always something eluding mechanical description.’
On page 81 this statement is found further specified:
‘In fact, if one were to define the spiritual as that which transcends the material, he would be forced to conclude that the material always bears within it the seeds of the spiritual. One could accept this as an alternative to saying we are “mere machines,” and with good justification.’
The question may be regarded as one which asks, “Do you include consciousness as an attribute of matter, or not?” If the answer be “Yes,” then the answerer cannot hold with mechanism; if the answer be “No,” then he is faced with an intolerable, because ever disconnected dualism, denied by the observed intimate connection of matter and consciousness. The facts of the matter-and Mr Rothstein senses it on page 81-point in the direction of a more sophisticated restoration of the hylozoistic thought of the ancients, Greece borrowing from older Egypt in this respect. It is not quite enough to say that Goedel’s theorem ends the contest between vitalism and mechanism “in a tie” (ibid.); for vitalism was never quite as dogmatic as mechanism, which claimed all was mechanistic. Vitalism at least admitted mechanisms as instrumentalities or vehicles of vitality. In this respect the contest is proving to be not quite a tie, in favor of the vitalistic view containing a greater proportion of correct elements than the merely mechanistic one.
On the other hand, the author believes that the correct approach to a universal language is through the programming code of an electronic computer (p.93). He goes so far as to extol the eventual “poetry” of such a language (pp.99-100). However, the very essence of poetry is the spontaneous creation, from hitherto known elements, of new images and meanings, fraught with the feelings that generated those combinations. There is no room in such a process for an electronic computer. For someone to suppose there might be shows prima facie how quickly deep can become the inroads upon logic that a reductive viewpoint and outlook can make.
A common fallacy about such computers is that they can create anything new. In point of fact they are either gigantic electro-mechanical tautologies, regurgitating only what has first been thought out and predigested for them in the human mind, and then encoded in them; or they are predesigned magnifiers of original human effort, as are levers or pulleys in a physical sense. Their results are no more mysterious or awe-inspiring than that the letter “A” appears on a paper when a varitypist presses the key marked “A.” The machine thought no more to print “A” than the computer thought to produce its result. The computer certainly could not “draw conclusion” (p.93) though it could separate appropriate from inappropriate statements by selection from a built-in set of possibilities according to a built-in set of rules.
The answers the computers yield have all been preinsured by previous human thinking-the only thinking involved in the entire situation. What is distinctive about such machines is their speed and power of performance, thus enabling the production of results otherwise unobtainable in practice, just as a weaving machine can weave many more rugs in a few months than a human artisan could in a lifetime.
To set computing machines on a higher intrinsic value-level than an abacus is an error fatal to a true evaluation of things. Yet that error is very liable to be made by anyone habituated by reductive blinders to hold the inventive gifts of man to his world above the giver, in a gigantic underlying inferiority complex, strangely often coupled with a lack of intellectual humility as perhaps a sort of aggressive overcompensation. There is more scientific mystery in the tiniest rosebud than in the greatest electronic computer that will ever be made. We use the future tense advisedly.
Fortunately, Mr. Rothstein’s quality of intellect does not allow him to be entirely happy with his occasionally reductivist framework. He shows a basic healthiness in circumventing or bettering it on many occasions. This fact proves his worth as a creative thinker, and as such, his real value to the reader, who will find a wealth of new and interesting material to explore.
The essential fuzziness of time may be the limiting factor for a gravitational-wave detector in Germany.
Poets have long believed the passage of time to be unavoidable, inexorable and generally melancholic. Quantum mechanics says it is fuzzy, ticking along at minimum intervals within which the notion of time is meaningless. And Craig Hogan claims he can ‘see’ it, in the thus far unexplained noise of a gravitational-wave detector. “It’s potentially the most transformative thing I’ve ever worked on,” says Hogan, director of the Center for Particle Astrophysics at the Fermi National Accelerator Laboratory in Batavia, Illinois. “It’s actually a possibility that we can access experimentally the minimum interval of time, which we thought was out of reach.”
In a classical view of the world, space and time are smooth. The minimum scales at which, according to quantum mechanics, the smoothness breaks down (the Planck length and time) can be derived from other quantities, but they have not been tested experimentally, nor would they be, given their impossibly small size.
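As a rough illustration, the Planck length and time follow from dimensional analysis of the reduced Planck constant, Newton’s constant, and the speed of light; a minimal sketch, using rounded constant values:

```python
import math

# Rounded SI values of the constants involved (assumed here for illustration)
hbar = 1.054571817e-34   # reduced Planck constant, J*s
G    = 6.67430e-11       # Newton's gravitational constant, m^3 kg^-1 s^-2
c    = 2.99792458e8      # speed of light, m/s

# The only combinations of hbar, G and c with dimensions of length and time
planck_length = math.sqrt(hbar * G / c**3)   # ~1.6e-35 m
planck_time   = math.sqrt(hbar * G / c**5)   # ~5.4e-44 s

print(f"Planck length ~ {planck_length:.2e} m")
print(f"Planck time   ~ {planck_time:.2e} s")
```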
Yet if Hogan’s ideas are right, noise associated with this fundamental fuzziness should be prominent at GEO600, a joint British and German machine operating near Hannover, Germany, that is searching for gravitational waves. These waves are thought to arise during events such as the massive cosmic collisions of black holes and neutron stars. Confirmation of the idea, which could come as experimental upgrades to GEO600 are put in place over the coming year, would be a big step towards a verifiable quantum theory of gravity, a long-sought unification of quantum mechanics (the physics of the very small) with general relativity (the physics of the very big). Hogan outlines his predictions in a paper published on 30 October in Physical Review D.
Of course, theorists are full of extraordinary ideas that never pan out, so physicists at GEO600 are treating Hogan’s ideas with a healthy dose of scepticism. “To me as an experimentalist, this all seems a bit like black magic,” says Karsten Danzmann, principal investigator for GEO600, and director of the Max Planck Institute for Gravitational Physics. “It seems a bit far-fetched and artificial. But if it’s true, it’s Nobel-prize-winning stuff.”
Hogan says that the holographic noise could be responsible for about 70% of the noise that GEO600 is recording but cannot yet account for. Danzmann says it’s “intriguing” that this noise just happens to be the right magnitude and shape to account for most of the ‘mystery’ noise that his team has been unable to identify for a year now.
The predictions are based on a lower-dimensional view of spacetime: two spatial dimensions, plus time. Spacetime would be a plane of waves, travelling at the speed of light. The fundamental fuzziness of the waves, on the order of the Planck length and time, could be amplified in large systems such as gravitational-wave detectors. The third spatial dimension of the macroscopic world would be encoded in information contained in the two-dimensional waves. “It’s as if, in the real world, we are living inside a hologram,” says Hogan. “The illusion is almost perfect. You really need a machine like GEO600 to see it.”
According to Hogan, the ‘holographic’ noise is more likely to be seen in certain detectors, because the fuzziness gets translated into noise only in the plane of the underlying wavy two-dimensional fabric of spacetime. GEO600 is less sensitive to gravity waves than are detectors such as those in LIGO (Laser Interferometer Gravitational-Wave Observatory), two similar, large L-shaped detectors in Washington and Louisiana. But Hogan says GEO600 is more sensitive to holographic noise, because its power is locked in a beamsplitter that amplifies the peculiar transverse quality of the fuzziness.
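A common back-of-the-envelope way to see why a Planck-scale effect could register in a hundreds-of-metres instrument is to treat the transverse position jitter as a random walk of Planck-length steps accumulated over one light-crossing of an arm. This is only a heuristic sketch, not Hogan’s actual calculation, and the 600 m figure is simply GEO600’s arm length used for illustration:

```python
import math

planck_length = 1.616e-35   # m
arm_length    = 600.0       # m, GEO600 arm length, used only for illustration

# Heuristic: N ~ arm_length / planck_length steps of size planck_length,
# so the accumulated rms jitter scales as sqrt(planck_length * arm_length)
rms_jitter = math.sqrt(planck_length * arm_length)
print(f"heuristic rms jitter ~ {rms_jitter:.1e} m")   # ~1e-16 m, vastly larger than the Planck length itself
```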
The idea for an essentially holographic Universe has gained traction in recent years, as string theorists have found ways to trim the 10 dimensions that their theories call for. A decade ago, Juan Maldacena, now of the Institute for Advanced Study in Princeton, New Jersey, put forward the idea that most of the 10 dimensions can be reduced when the information is encoded, like a hologram, in three or four basic dimensions. “The ideas of holography in string theory are extremely well accepted,” says Gary Horowitz of the University of California, Santa Barbara. He adds, however, that Hogan’s ideas about holography don’t use conventional starting points. “There is reason to be somewhat sceptical. I don’t find the theoretical motivation totally convincing.”
But Hogan’s predictions are striking and specific enough to get the attention of the GEO600 staff. Hogan will travel to Hannover to work with GEO600 scientists such as Harald Lück, who is leading an effort to double the sensitivity of the machine by the end of 2009. That should mean that the instrumental noise also drops. But if most of the noise remains, then it could be a sign that it is due to holographic noise, which would be fundamental, and pervasive throughout the Universe. “If the noise is still there, we have to be serious” about the observations, says Lück.
“Time” is the most used noun in the English language, yet it remains a mystery. We’ve just completed an amazingly intense and rewarding multidisciplinary conference on the nature of time, and my brain is swimming with ideas and new questions. Rather than trying a summary (the talks will be online soon), here’s my stab at a top ten list partly inspired by our discussions: the things everyone should know about time.
1. Time exists. Might as well get this common question out of the way. Of course time exists — otherwise how would we set our alarm clocks? Time organizes the universe into an ordered series of moments, and thank goodness; what a mess it would be if reality were completely different from moment to moment. The real question is whether time is fundamental or emergent. We used to think that “temperature” was a basic category of nature, but now we know it emerges from the motion of atoms. When it comes to whether time is fundamental, the answer is: nobody knows. My bet is “yes,” but we’ll need to understand quantum gravity much better before we can say for sure.
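To see how an apparently basic quantity can be emergent, temperature can be recovered from the average kinetic energy of simulated gas atoms, since for an ideal monatomic gas the mean kinetic energy is (3/2) k_B T. A minimal sketch, with an argon-like atomic mass assumed purely as an example:

```python
import math
import random

k_B    = 1.380649e-23   # Boltzmann constant, J/K
m_atom = 6.63e-26       # mass of an argon atom, kg (example gas, assumed)
T_true = 300.0          # temperature used to generate the velocities, K

# Draw velocity components from the Maxwell-Boltzmann distribution at T_true
sigma = math.sqrt(k_B * T_true / m_atom)
n_atoms = 100_000
kinetic = [
    0.5 * m_atom * sum(random.gauss(0.0, sigma) ** 2 for _ in range(3))
    for _ in range(n_atoms)
]

# No single atom "has" a temperature; it emerges from the average motion:
# mean kinetic energy = (3/2) k_B T for an ideal monatomic gas
T_emergent = (2.0 / 3.0) * (sum(kinetic) / n_atoms) / k_B
print(f"recovered temperature ~ {T_emergent:.1f} K")   # close to 300 K
```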
2. The past and future are equally real. This isn’t completely accepted, but it should be. Intuitively we think that the “now” is real, while the past is fixed and in the books, and the future hasn’t yet occurred. But physics teaches us something remarkable: every event in the past and future is implicit in the current moment. This is hard to see in our everyday lives, since we’re nowhere close to knowing everything about the universe at any moment, nor will we ever be — but the equations don’t lie. As Einstein put it, “It appears therefore more natural to think of physical reality as a four dimensional existence, instead of, as hitherto, the evolution of a three dimensional existence.”
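One way to make “implicit in the current moment” concrete: for a deterministic system, the present state plus the equations of motion fix the evolution in both directions. A purely illustrative sketch with a one-dimensional harmonic oscillator and a time-reversible integrator:

```python
def leapfrog(x, v, dt, steps, k=1.0, m=1.0):
    """Evolve a 1-D harmonic oscillator; the leapfrog scheme is time-reversible."""
    for _ in range(steps):
        v += 0.5 * dt * (-k / m) * x
        x += dt * v
        v += 0.5 * dt * (-k / m) * x
    return x, v

x0, v0 = 1.0, 0.0
x1, v1 = leapfrog(x0, v0, dt=0.01, steps=1000)    # evolve "into the future"
x2, v2 = leapfrog(x1, v1, dt=-0.01, steps=1000)   # run the same laws backwards
print(x2, v2)   # recovers ~(1.0, 0.0): the past was encoded in the present state
```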
3. Everyone experiences time differently. This is true at the level of both physics and biology. Within physics, we used to have Sir Isaac Newton’s view of time, which was universal and shared by everyone. But then Einstein came along and explained that how much time elapses for a person depends on how they travel through space (especially near the speed of light) as well as on the gravitational field they are in (especially near a black hole). From a biological or psychological perspective, the time measured by atomic clocks isn’t as important as the time measured by our internal rhythms and the accumulation of memories. That happens differently depending on who we are and what we are experiencing; there’s a real sense in which time moves more quickly when we’re older.
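The two physical effects can be put in numbers with the standard kinematic and weak-field gravitational dilation factors; a small sketch with illustrative inputs:

```python
import math

c = 2.99792458e8    # speed of light, m/s
G = 6.67430e-11     # gravitational constant, m^3 kg^-1 s^-2

def kinematic_rate(v):
    """Proper-time rate of a clock moving at speed v, relative to one at rest."""
    return math.sqrt(1.0 - (v / c) ** 2)

def gravitational_rate(M, r):
    """Rate of a static clock at radius r from mass M (Schwarzschild), relative to one far away."""
    return math.sqrt(1.0 - 2.0 * G * M / (r * c ** 2))

print(kinematic_rate(0.87 * c))              # ~0.49: ticks at roughly half speed
print(gravitational_rate(1.989e30, 1.0e7))   # ~0.99985 at 10,000 km from a solar mass
```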
4. You live in the past. About 80 milliseconds in the past, to be precise. Use one hand to touch your nose, and the other to touch one of your feet, at exactly the same time. You will experience them as simultaneous acts. But that’s mysterious — clearly it takes more time for the signal to travel up your nerves from your feet to your brain than from your nose. The reconciliation is simple: our conscious experience takes time to assemble, and your brain waits for all the relevant input before it experiences the “now.” Experiments have shown that the lag between things happening and us experiencing them is about 80 milliseconds. (Via conference participant David Eagleman.)
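A rough arithmetic check of why that buffer is needed, using order-of-magnitude nerve path lengths and a typical myelinated-fibre conduction speed (both figures are only illustrative assumptions; real values vary widely):

```python
# Illustrative assumptions: nerve path lengths for a tall adult and a
# single representative conduction speed
foot_to_brain_m  = 1.6
nose_to_brain_m  = 0.1
conduction_speed = 50.0   # m/s

lag_foot = foot_to_brain_m / conduction_speed   # ~0.032 s
lag_nose = nose_to_brain_m / conduction_speed   # ~0.002 s
print(f"arrival difference ~ {(lag_foot - lag_nose) * 1000:.0f} ms")
# An ~80 ms assembly buffer is ample to absorb this spread before "now" is experienced
```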
5. Your memory isn’t as good as you think. When you remember an event in the past, your brain uses a very similar technique to imagining the future. The process is less like “replaying a video” than “putting on a play from a script.” If the script is wrong for whatever reason, you can have a false memory that is just as vivid as a true one. Eyewitness testimony, it turns out, is one of the least reliable forms of evidence allowed into courtrooms. (Via conference participants Kathleen McDermott and Henry Roediger.)
6. Consciousness depends on manipulating time. Many cognitive abilities are important for consciousness, and we don’t yet have a complete picture. But it’s clear that the ability to manipulate time and possibility is a crucial feature. In contrast to aquatic life, land-based animals, whose vision-based sensory field extends for hundreds of meters, have time to contemplate a variety of actions and pick the best one. The origin of grammar allowed us to talk about such hypothetical futures with each other. Consciousness wouldn’t be possible without the ability to imagine other times. (Via conference participant Malcolm MacIver.)
7. Disorder increases as time passes. At the heart of every difference between the past and future — memory, aging, causality, free will — is the fact that the universe is evolving from order to disorder. Entropy is increasing, as we physicists say. There are more ways to be disorderly (high entropy) than orderly (low entropy), so the increase of entropy seems natural. But to explain the lower entropy of past times we need to go all the way back to the Big Bang. We still haven’t answered the hard questions: why was entropy low near the Big Bang, and how does increasing entropy account for memory and causality and all the rest? (We heard great talks by David Albert and David Wallace, among others.)
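The claim that there are more ways to be disorderly than orderly can be made concrete by counting arrangements; a toy example with coin flips, taking entropy as the log of the number of microstates:

```python
from math import comb, log

n_flips = 100

# Count microstates (distinct arrangements) for two macrostates
ordered_ways    = comb(n_flips, 0)             # all tails: exactly 1 arrangement
disordered_ways = comb(n_flips, n_flips // 2)  # half heads: ~1e29 arrangements

# Boltzmann-style entropy S = ln(W), in units where k_B = 1
print(log(ordered_ways), log(disordered_ways))   # 0.0 versus ~66.8
```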
8. Complexity comes and goes. Other than creationists, most people have no trouble appreciating the difference between “orderly” (low entropy) and “complex.” Entropy increases, but complexity is ephemeral; it increases and decreases in complex ways, unsurprisingly enough. Part of the “job” of complex structures is to increase entropy, e.g. in the origin of life. But we’re far from having a complete understanding of this crucial phenomenon. (Talks by Mike Russell, Richard Lenski, Raissa D’Souza.)
9. Aging can be reversed. We all grow old, part of the general trend toward growing disorder. But it’s only the universe as a whole that must increase in entropy, not every individual piece of it. (Otherwise it would be impossible to build a refrigerator.) Reversing the arrow of time for living organisms is a technological challenge, not a physical impossibility. And we’re making progress on a few fronts: stem cells, yeast, and even (with caveats) mice and human muscle tissue. As one biologist told me: “You and I won’t live forever. But as for our grandkids, I’m not placing any bets.”
10. A lifespan is a billion heartbeats. Complex organisms die. Sad though it is in individual cases, it’s a necessary part of the bigger picture; life pushes out the old to make way for the new. Remarkably, there exist simple scaling laws relating animal metabolism to body mass. Larger animals live longer, but they also metabolize more slowly, as manifested in slower heart rates. These effects cancel out, so that animals from shrews to blue whales have lifespans with just about an equal number of heartbeats — about one and a half billion, if you simply must be precise. In that very real sense, all animal species experience “the same amount of time.” At least, until we master #9 and become immortal.
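The cancellation can be checked with rough numbers; the heart rates and lifespans below are only illustrative order-of-magnitude figures, not measured data:

```python
# Illustrative order-of-magnitude figures: (resting heart rate in beats/min, lifespan in years)
animals = {
    "shrew":      (800, 2),
    "house cat":  (150, 15),
    "human":      (60, 70),    # humans overshoot the trend somewhat
    "blue whale": (10, 80),
}

minutes_per_year = 60 * 24 * 365

for name, (bpm, years) in animals.items():
    beats = bpm * minutes_per_year * years
    print(f"{name:10s} ~{beats / 1e9:.1f} billion heartbeats")
# Despite huge spreads in heart rate and lifespan, the totals all land within
# roughly a factor of a few of a billion beats
```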