Revisiting the mysterious #137

The number 137 is suggested by some to be of importance in fundamental physics. The late, great Richard Feynman is known for saying that all serious physicists should have it nailed to their office wall. So how do I tackle this 137 from the simplest perspective possible? Well, to start with, I realize that I have no idea. Even if Feynman’s solution is simple elegance in its own right, it is too complex for my very restricted know-how in math and physics. I will die well before I’m close to getting familiar with the basics upon which his solution stands. I must find another way or let it be. Since I seem unable to let things be, I do it my way. As always, I do this knowing I’m putting a dunce cap on my head.

In this article I skip the necessary background details and go straight to Feynman’s solution. In doing so, I probably miss the whole point from a formal perspective, but my own point is not to be “correct” in a formal way. The point is to play around with it and generate fun ideas. If any of them turn out to be of “real” value to anyone, that’s a bonus.

If you’re at all interested in 137, you probably know the background better than I do, so let’s go head-on to the solution:

The Solution: It will here be shown that this problem has a remarkably simple solution confirming Feynman’s conjecture. Let P(n) be the perimeter length of an n-sided polygon and r(n) the distance from its centre to the centre of a side. In analogy with the definition π = C / 2r, we can define an integer-dependent generalization π(n) of π as

π(n) = P(n) / (2 r(n)) = n tan(π / n).

Let us define a set of constants {α(n1, n2)}, dependent on the integers n1 and n2, as

α(n1, n2) = α(n1, ∞) π(n1 × n2) / π,    (*)

where α(n1, ∞) = cos(π / n1) / n1. The numerical value of α, the fine structure constant, is given by the special case n1 = 137, n2 = 29. Thus α = α(137, 29) = 0.0072973525318… The experimental value is αexp = 0.007297352533(27), where (27) is ± the experimental uncertainty in the last two digits.
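Since the quoted solution is only elementary trigonometry, it is easy to check numerically. Here is a minimal Python sketch of the formulas above; the function names are mine, not from the quoted text:

```python
import math

def pi_n(n):
    """Polygon generalization of pi: pi(n) = n * tan(pi / n)."""
    return n * math.tan(math.pi / n)

def alpha(n1, n2):
    """alpha(n1, n2) = alpha(n1, inf) * pi(n1 * n2) / pi,
    where alpha(n1, inf) = cos(pi / n1) / n1."""
    alpha_inf = math.cos(math.pi / n1) / n1
    return alpha_inf * pi_n(n1 * n2) / math.pi

a = alpha(137, 29)
print(f"alpha(137, 29) = {a:.13f}")   # ~0.0072973525318
print(f"1 / alpha      = {1 / a:.9f}")  # ~137.036
```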

Here’s my image of it:
[Image: 137monopole]


Instead of a polygon, I use my fundamental unit’s 2D surface extension, which is disc-like and has a smooth perimeter. This makes “the center of a side” mean “anywhere on the only side there is, which is a boundary/limit to the unit’s inherent force”.
Instead of taking the perimeter length as a given, we assume the action of “2r” to be what generates it.
Letting both contribute equal magnitudes, we evaluate them as .5 each, so two halves are what cause the effect of one whole.
If we know 2r to be 1, we divide C by 1 and get 3.
So the π = C / 2r becomes kind of circular:
3 is the same as when divided by/shared by its two halves.
The two parents share the same kid…
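Just to state the arithmetic of the lines above explicitly, here is a trivial sketch; the variable names are my own:

```python
# Two "parent" halves generate the unit 2r, and C is taken as the pre-Pi value of 3.
half_a, half_b = 0.5, 0.5
two_r = half_a + half_b   # 1: the "kid" shared by the two parents
C = 3                     # the circumference, taken as a whole of 3
print(C / two_r)          # 3.0: dividing by the two halves changes nothing
```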
Now we can freely speculate in what way these values relate, what C really is, how time and space fit in the image and whatnot.
My point is that the most basic unit of measurable reality, the one most fundamental unit, may be the result of two lesser parents. These two are perhaps not empirically detectable on their own, because they do not appear on their own. Perhaps they are never on their own.

I suggest it takes 2 of these units to become what we think of and measure as empirical facts.
I suggest the above image is of a half wavelength, and that this is the essence of a singularity/monopole.

Side note: this is before C becomes less than 3 and before 3 multiplies so as to generate the additional decimals of Pi.



The content of mathematics

I have recently had an interesting conversation about the usefulness of mathematics in trying to describe the fundamentals of reality. I have found that this question has had great minds occupied for ages. One of my favourite minds is that of Kurt Gödel, so I had a quick look at his take on this issue. I was delighted to find that he shared a vital aspect of my position: that mathematics is very real and not to be mistaken for “just abstractions” or “just conventions”. Math may be both conventional and abstract, but it is not “just” that, as if it were a “lesser” version of the real reality of concrete physics.

The reason I consider mathematics as being closely corresponding to the physical reality is a simple one; human minds are what generate that which we call “mathematics”, and since human minds are themselves generated by physically real events, there has to be this correspondence. In other words, assuming an observation is of reality, and the observer also being a reality, then it follows that the response of the observer to this observation, for example a mathematical formulation, is also of reality.
Bottom line – Everything in reality is equally real, including that which we sometimes define as “abstract”. Only superficially is math unrelated to that which it is believed to describe. On a fundamental level, it by necessity corresponds to exactly that, but indirectly so, via the human correspondence. The bridge between the object observed and the math of the object is the human mind. And only ignorance of what a human mind is can lead us to believe there is a true disconnection between physical reality and the math we generate to describe it. That is not Gödel’s reason for defending the real content of math, but we arrive at the same conclusion. Not only does Gödel say math is valid in this sense, but he goes further to say that math does not necessarily need empirical correspondence to be true in its own right.
The last statement is very important and worthy of consideration. It opens the door to mathematics that can probe reality deeper, or from a different perspective, than empirical observation is able to do. If that is so, we should be aware of which kind of math is used, and of what distinguishes an empirical math from an un-empirical one.
As I will argue further, we must learn what makes some aspects of reality real in an empirical sense, and what aspects might be defined as inherently un-empirical. This goes down to the nature of mind and its limit of awareness. To be empirical, objects/observables are required to come into the mind’s awareness. If they don’t, we simply don’t know them. This is not to say they don’t affect us. That would be a mistake, because most of what affects us is out of our awareness. But it means that we cannot build a theory about it, because a theory must be of something explicitly defined, or else we have knowledge without knowing what it is that we know. Creative perhaps, but not so useful.

A Philosophical Argument About the Content of Mathematics

The bold extraction of philosophical observations from mathematical facts—and, of course, the converse—was Gödel’s modus operandi and professional trademark. We present below an argument of this type, from version V of Gödel’s draft manuscript “Is Mathematics a Syntax of Language?”, though it also appears in the Gibbs lecture.

The argument uses the Second Incompleteness Theorem[1] to refute the view that mathematics is devoid of content. Gödel referred to this as the “syntactical view,” and identified it with Carnap. Gödel defined the syntactical view in the Gibbs lecture as follows:

The essence of this view is that there is no such thing as a mathematical fact, that the truth of propositions which we believe express mathematical facts only means that (due to the rather complicated rules which define the meaning of propositions, that is, which determine under what circumstances a proposition is true) an idle running of language occurs in these propositions, in that the said rules make them true no matter what the facts are. Such propositions can rightly be called void of content. (Gödel 1995, p. 319).

Under this view, according to Gödel:

…the meaning of the terms (that is, the concepts they denote) is asserted to be man-made and consisting merely in semantical conventions. (Gödel 1995, p. 320)

A number of arguments are adduced in the Gibbs lecture against the syntactical view. Continuing the last quote but one, Gödel gives the main argument against it:

Now it is actually possible to build up a language in which mathematical propositions are void of content in this sense. The only trouble is 1. that one has to use the very same mathematical facts (or equally complicated other mathematical facts) in order to show they don’t exist.

The mathematical fact Gödel is referring to is the requirement that the system be consistent. But consistency will never be intrinsic to the system; it must always be imported “from the outside,” so to speak, as follows from the Second Incompleteness Theorem, which states that consistency is not provable from within any system adequate to formalize mathematics.

The paper “Is Mathematics a Syntax of Language?” is an extended elaboration of just this point. It is more specific, both as to the characterization of the syntactic view and as to its refutation.

In version V of it, Gödel identifies the syntactical view with three assertions. First, mathematical intuition can be replaced by conventions about the use of symbols and their application. Second, “there do not exist any mathematical objects or facts,” and therefore mathematical propositions are void of content. And third, the syntactical conception defined by these two assertions is compatible with strict empiricism.

As to the first assertion there is a weak sense in which Gödel agrees with it, insofar as he notes that it is possible to arrive at the same sentences either by the application of certain rules, or by applying mathematical intuition. He then observes that it would be “folly” to expect of any perfectly arbitrary system set up in this way, that “if these rules are applied to verified laws of nature (e.g., the primitive laws of elasticity theory) one will obtain empirically correct propositions (e.g., about the carrying power of a bridge)…” He terms this property of the rules in question “admissibility” and observes that admissibility entails consistency. But now the situation has become problematic:

But now it turns out that for proving the consistency of mathematics an intuition of the same power is needed as for deducing the truth of the mathematical axioms, at least in some interpretation. In particular the abstract mathematical concepts, such as “infinite set,” “function,” etc., cannot be proved consistent without again using abstract concepts, i.e., such as are not merely ascertainable properties or relations of finite combinations of symbols. So, while it was the primary purpose of the syntactical conception to justify the use of these problematic concepts by interpreting them syntactically, it turns out that quite on the contrary, abstract concepts are necessary in order to justify the syntactical rules (as admissible or consistent)…the fact is that, in whatever manner syntactical rules are formulated, the power and usefulness of the mathematics resulting is proportional to the power of the mathematical intuition necessary for their proof of admissibility. This phenomenon might be called “the non-eliminability of the content of mathematics by the syntactical interpretation.”

Gödel makes two further observations: first, one can avoid the above difficulty by founding consistency on empirical induction. This is not a solution he advocates here, though as time passed, he would now and then note the usefulness of inductive methods in a particular context. His second observation is that empirical applicability is not needed; it is clearly unrelated to the weaker question of the consistency of the rules.

From http://plato.stanford.edu/index.html


Re: math in physics

This is a Q&A from the site Ask a Mathematician which I think is informative of the problem we’re facing in trying to understand the reality of physics in general, and perhaps General Relativity in particular.

Dear Mathematician, given Gödel’s theorem of incompleteness, is it possible for a complete theory of physics to come with a math that is complete, and still be true in all its statements?
I’m thinking the requirement for a complete formal system S to, by necessity, include “gaps” could pose a problem for physics, since physicists seem die-hard about the math being totally flawless.
For instance, the concept of a singularity as an initial state pre-Big Bang seems rather accepted in most of physics, but in math it means “undefined” or “Dunno”. How awkward it would be if the very foundation of every physical event, every cool equation and theory, could not be described by physics for as long as physics (a) requires the attachment of well-defined math and/or (b) rejects the notion of a math saying “Dunno”.
Especially if that physical singularity was never broken, and therefore in effect still is a singularity. After all, logic has it that a true singularity has nothing external to it which can break or divide it, right?
To me, it seems reasonable that a theory in physics, supposed to cover Everything, must be unable to cover itself. That is if we a priori assume the theory to actually exist as an aspect of this Everything. Were it “outside” of Everything, it could get the complete picture, but that would question its ecological validity I guess.
Isn’t this the actual physics of Gödel’s brilliant idea concerning self-reference? So, complete math = incomplete physics and incomplete math possibly complete physics?
Then the answer:
Most physicists have a healthy understanding of where math sits in relation to physics: if it works, use it.

For a physicist, singularities don’t mean “the end of science”, they mean “try something else”. There’s a post here that talks about singularities in physics.

Physics can be described very well using math, and every math system is incomplete, so ultimately we can expect that there are likely to be things about the universe that are likewise true-but-unprovable.

Hope that helps!

-Physicist

My thoughts
– If we don’t know what we’re working on, how can we tell if the math is actually working? Of course, in applied physics, as in engineering, this is a valid statement. That’s the pragmatic perspective.
But if we are about to hack the foundation of “what works”, stopping at “what works” is not good enough.
– If singularities mean “try something else” to a physicist, that means s/he must disregard the Penrose–Hawking singularity theorems, which prove that singularities are essential to General Relativity. If the notion of a singularity shows up, the physicist is encouraged to “try something else”. That seems an awkward approach, to disregard the very core of General Relativity.
– Wouldn’t it be a creative challenge to figure out whether the incompleteness of math, the incompleteness of physics and the fact that human cognition is based on self-reference are in some way related to the nature of singularity? Wouldn’t it be nice to have a math that was also self-referent?
The last point about self-referent math is obviously a paradox. After all, the power of math is more like the opposite of self-reference. Math is designed for correspondence with objects that are not mathematical. If self-referent, math would probably end up entangled in circular functions that say nothing about the reality it is supposed to describe.
Perhaps that’s not a problem? Actually, that’s what I assume to be a possible way out of incompleteness in theory. Try this thought:
A singularity can be pictured mathematically as .5 + .5 = 1
As such, a singularity is both 1 and not 1.
It is not 0, nor is it 2.
It is 1 integer and 2 fractions.
It is both 1 absolute and 2 relatives.
It is of two faces where one face is Dual and the other face is Singular.
The Single face is same as the Dual face.
The Dual face is of Sameness united.
I suggest we do not try “something else”, but that we try harder to be creative with what we’ve got.
What we’ve got is 1. That’s the smallest quantity of unification, perhaps the only possible.
Math begins with 1 and not fractions. Without 1 in the first place, there is no one from which fractions can be measured and counted.
So while math works fine in our current universe, we can assume the post-initial-state singularity to be correctly described by the use of 1. If it weren’t, then math would not correspond as well as it obviously does. Assuming that leads us to contemplate in what way This One can be understood as equal to Those Halves. We must be careful not to analyse the .5’s as if they were 1 divided in 2.
The equation here does not say 1/2 = .5
It says that if we do it backwards, beginning with the current standard of 1, then we end up missing half the initial point of singularity. This is what we normally do, and that’s why we end up in uncertainty.
I am saying that we must avoid breaking apart what has once been unified, or we will lose a vital aspect of reality as it is. Instead we should ask ourselves what halves would be required so as to be the same as 1. The trick here is to resist the mind’s habit of manipulating the data to build its own understanding of it. The mind has a strong tendency to mean everything and every thing. It cuts up input and conceptualizes it as either this or that.

1 or 0
Big or small
Here or there
Dimensional or nondimensional
Absolute or relative
Objective or subjective
Particle or wave
Position or velocity
Discrete or continuous
Right or wrong
Cause or effect
Finite or infinite
Body or mind
Local or nonlocal
Space or time
Electric or magnetic
Singular or dual
Self or no-self
Bounded or free
Surface or bulk
X or Y
Real or imaginary
Me or you
… ad infinitum

This is the requirement for intelligence to reason about reality. It has to do this, or it cannot tell one thing from another. No definitions are possible without this a priori reconfiguration of input, and without definitions we cannot reason verbally/intellectually at all. Intellectual discourse is impossible without separating This from That. In order to enable gradients, the mind also operates in opposites/polarities. By that, it can picture a scale with 2 extreme values and then place anything related to these extremes somewhere in between. Can you imagine science or philosophy conducted without this being done?
If you can, please leave a comment and tell me how.
To round this brief pointer off, I will borrow a few quotes from one of the truly great minds, quotes that might perhaps lend at least some air of credibility to the above. Not as in trying to use Henri Poincaré as proof of me being “right”, but to show the mindset that must be cultivated if we are to make progress in our shared understanding of what This is. The minds are just means to the end of knowledge.
Analyse data just so far as to obtain simplicity and no further.

Mathematics has a threefold purpose. It must provide an instrument for the study of nature. But this is not all: it has a philosophical purpose, and, I daresay, an aesthetic purpose.

Mathematics is the art of giving the same name to different things.

A monopole wavefunction

In various threads on Facebook I have had the pleasure of briefly discussing the general idea of a monopole singularity as being half space + half time, and of two corresponding .5 values as a possible ground for universal parity/duality of measurables/dimensions. The idea is that a pair of fundamental half values is required to make up 1 elementary unit of empirical reality as we know it. This concept of parity is not to be confused with the inherent duality of a singularity. An empirical parity as a wavefunction will instead be comprised of 2 dualities/monopoles in phase sequence. That would be 1 wave packet of electromagnetism with its 2 values of electricity and 2 values of magnetism. Each fundamental duality contributes its own polarity and extended charge surface, making one force unit the power of 2 monopoles. Note that these monopoles are not, as in math, nondimensional points, but real poles with two ends, moving towards or from the central mean. That is, the Y-axis in the complex image will change uniformly on each side (positive/up and negative/down) of Y = 0. This is as important as having the corresponding X-values do the same. If X goes from .5 to 1, then −X goes from −.5 to −1. The image is of a spheroid-like entity which continuously shape-shifts between being a magnetic extreme (Y at ±1), via an electromagnetic middle state (X at ±.5, Y at ±.5), to the other extreme of an electric X at ±1.
As Y it is a linear pole with rotation but no charge.
As X it is a circular 2D surface with no polarization, but likely to have a “hole” at its absolute center.
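To make this shape-shifting concrete, here is a minimal numeric sketch. The linear parametrization and the names are my own assumptions, purely illustrative; nothing here is a derived result:

```python
import numpy as np

# One hypothetical parametrization of the shift described above:
# the X pair and the Y pair trade magnitude so that they always sum to 1.
#   s = 0   -> magnetic extreme:        Y = ±1,  X = 0
#   s = 0.5 -> electromagnetic middle:  Y = ±.5, X = ±.5
#   s = 1   -> electric extreme:        Y = 0,   X = ±1
def pole_values(s):
    x = s          # magnitude of the X pair (charge-surface extension)
    y = 1.0 - s    # magnitude of the Y pair (pole half-length)
    return (+x, -x), (+y, -y)

for s in np.linspace(0.0, 1.0, 5):
    (xp, xm), (yp, ym) = pole_values(s)
    print(f"s={s:.2f}  X: {xp:+.2f}/{xm:+.2f}  Y: {yp:+.2f}/{ym:+.2f}")
```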

This entity cannot be accurately described by a static image, complex or simple doesn’t matter. I try to describe it conceptually, but as a reality it cannot be observed. The reason is, I suggest, that its total wavelength relates to a phantom harmonic, a missing fundamental of 1. It simply does not “wave” as it would have to if it were subject to empirical observation. Waves are in this picture a property that comes instantly with the breaking of this monopole/singularity. They do not emerge as the final result of a process, but appear in an instant as one monopole/singularity breaks in two. The process would instead be of the oscillations which must be what breaks it. I believe that can be described by hydrodynamics. Until further notice, I assume it will snap back from full extension/surface and cut itself in 2 when the equator “inverts”. Perhaps a very small perturbation of the great circle is enough to have that region speed up its rotation as it comes closer to the unit’s absolute axis of rotation. Then the relative great circles of this 8-like unit might start a relative expanding phase.


This is to be understood as food for thought, not as an attempt at formally correct description. It’s a hypothesis. That’s all.

I had another image on this, but can’t get it attached. Also, I’m supposed to do other duties than dwell on the origin of the universe. Relatives are calling and I should respond properly.


[Image: monopolebreakingidea]

Singularity evaluated

This just came to mind:

The relevant values of a singularity are:

.5 and 1.5 

.5 is its wavelength

1.5 is its frequency

That’s why the speed of light is 3. The singularity must first break in 2 before the light goes on. When broken (in a Big Bang-ish event) we get the corresponding values:

1 wavelength

3 frequency

This Mother Duality relates to 1 Planck Length. It is data from 2 surfaces side by side. Each unit surface is half a Planck Length in diameter. That makes its radius .25 “long”. Using the pre-Pi ratio of 3 from the first section, a circle of r = .25 has a circumference of 2 · 3 · .25 = 1.5, and 2 of those make the magic number 3.
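As a quick check of the numbers in that paragraph, here is a tiny sketch. It assumes, as the side note in the first section does, that the pre-Pi ratio of circumference to diameter is exactly 3; with the ordinary π the two perimeters sum to about 3.14 instead:

```python
import math

radius = 0.25  # half a Planck Length in diameter -> r = .25

def circumference(r, ratio):
    """Circumference using a given circumference-to-diameter ratio."""
    return 2 * ratio * r

# The pre-Pi assumption from the first section: C / 2r = 3.
one_surface = circumference(radius, 3)
print("one surface :", one_surface)      # 1.5
print("two surfaces:", 2 * one_surface)  # 3.0

# For comparison, the same figure with the ordinary pi:
print("with real pi:", 2 * circumference(radius, math.pi))  # ~3.1416
```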

This is why 3 is the speed at which light travels 1 Planck Length in 1 Planck Time. Singularity is simply half a Space in half a Time. Obviously enough, light as such a single quantized package of 2 singularities does not travel at all. It is all of space at once. Time as we know it is not part of this picture.

The speed of light is of zero time. Speed is a measure of the distance covered, at once, by object X. 3 is the total perimeter distance of 2 flat surfaces with radii .25. Those are the values of a bosonic duality made up of 2 monopoles/singularities.

But do the math based on these values alone and you will soon get the numbers of time popping up. I will try it myself and see what happens. I expect no surprises but only familiar numbers of the time domain.

EDIT!
It can be simplified to .5 wavelength and .5 frequency. Each phase relates to radius .5.
When frequency, the .5 is half the length of a linear pole.
When wavelength, the .5 is distance from zero point to perimeter of the circular 2D surface.
The initial state singularity therefore alternates between .5 frequency/spin/magnetism and .5 wavelength/extension/charge.
It begins with 1/2 time and 1/2 space, from a relative perspective, that is. From an absolute standpoint that’s an absurd statement. It is better to say the initial state singularity is an Alternating Force of One.