A simple note on SpaceTime.

Date: February 18, 2019.
Version: 2.6
By: Albert van der Sel.
Status: Not ready yet.
Remark: Please refresh the page to see any updates.



Fig 1: Just an illustration of some views on SpaceTime.





Physics is in movement all the time. Theories on "SpaceTime/Matter/Energy/The Universe" continue
to evolve, and once in a while, a revolutionary insight emerges.

Of course, this is a simple note, and not a very professional or deep article.

However, I would like to picture some relevant ideas about Relativity, Kaluza Klein theories,
some Cosmological models, the metric in general, strings and quantum gravity, the holographic principle,
and AdS/CFT related theories.

This is all great stuff !
But this simple text, could possibly be a bit.., well..., too simple....
If so: Sorry for that ! (don't look me up, and punch me on the nose...).

But indeed, it will be simple, even with lots of classical stuff as well.

Now, let us find out, how far I will succeed here.

Chapters 0 and 1, can be viewed as "preliminaries" before we actually step into the theories
as sketched above. However, in Chapter 1, things already become quite serious.


Main Contents (proposed for now):

0. A few preliminaries before we start...
1. A tiny bit of math and some concepts.
2. A few words on SpaceTime in Relativity.
3. A few words on Planck's length, and Planck's time.
4. A few words on Kaluza Klein Theory, and related.
5. A few words on "The Dirac Sea".
6. A few words on Black Hole models.
7. A few words on Quantum Gravity (Loop Quantum, Strings).
8. A few words on Quantum Information and Physics.
9. A few words on Entanglement and SpaceTime (theories of Susskind, Maldacena, Verlinde and others).
10. A few words on AdS/CFT related theories.


Chapter 0. A few preliminaries before we start...

Here you find a mix of facts or features which I think might be important to know,
before we start with the main text (as of Chapter 1).
Maybe it's a bit of a strange setup for a note, but... you know the Author....

0.1 A few words on modern ideas on the beginning of the Universe:

Long ago, the original Big Bang theory assumed the existence of an infinitely dense,
and extremely energetic, "singularity", which at some point "exploded" and evolved into the Universe
as we observe it today.
However, the model was not very appealing to physicists, partly due to the rather peculiar singularity.
Why was it there? What was there before the "singularity"? These, and many other questions, obviously
came up with this model.

Sure, different models came up all the time throughout history. But in the late '70s and early '80s,
a new promising theory was developed (Guth, Linde et al.). It's called the "Inflationary Universe".

I am not going to tell you that you will find the full theory here.
I have good reasons for that. I think it is useful to list here, say..., "the populist" version
of that (rather complex) theory.
For now, that is enough, since I would like to draw your attention to an initial quantum fluctuation,
and show that an alternative for a "singularity" is possible.
So, I hope you understand that this is only the start of this note.

Thus, one simple approach is the following:

To measure something in space, we like to visualise it in a coordinate system.
This system may have x-, y-, and z axes. And a fourth, like time "t", might be added too.
In such coordinate system, we may visualize objects or events.

But now, just suppose the axes shrink and shrink, until there is no spatial distance anymore, at all.
Absolutely nothing. In such a case, you might say "there is nothing".
Not even time, since the axis representing time shrank too, until it is "gone" as well.
There is no reference framework, whatsoever.
Also, there is nothing you could call "inside" or "outside", since any form of reference is not present.

In Quantum Mechanics, many remarkable theorems go around. Two of them, are called
the "Heisenberg uncertainty principles". One of those uncertainty principles, is key in this text.
It's the Energy/Time uncertainty principle, expressed as:

ΔE Δt ≥ ℏ/2     (equation 1)

This expression means many things, but it also means that it is ok to "borrow or create" a certain amount of Energy (ΔE)
for a certain small amount of time (Δt), as long as it is returned (to nothing), so that this equation remains valid.
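Just as a playful illustration (my own back-of-the-envelope sketch, so the numbers are only indicative),
we can estimate how long an amount of "borrowed" Energy ΔE may exist, using Δt ≈ ℏ/(2ΔE).
For example, for the rest energy of a virtual electron-positron pair:

```python
# Rough sketch: the maximum lifetime of a "borrowed" energy Delta-E,
# from the uncertainty relation Delta-E * Delta-t >= hbar/2,
# so Delta-t ~ hbar / (2 * Delta-E).

HBAR = 1.0545718e-34      # reduced Planck constant, in J*s
EV = 1.602176634e-19      # 1 electronvolt, in Joules

# Energy "borrowed" to create a virtual electron-positron pair:
# twice the electron rest energy of about 511 keV.
delta_E = 2 * 511e3 * EV  # in Joules

delta_t = HBAR / (2 * delta_E)
print(f"Delta-E = {delta_E:.3e} J, max Delta-t = {delta_t:.3e} s")
```

The result is a lifetime on the order of 10^-22 seconds: extremely short indeed, which is exactly why such particles stay "virtual".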

Most physicists see this principle as very fundamental, and intrinsic, to Nature, independent of which mathematical
or physical framework one may use. Some say it's a consequence of the wave mechanics or Operator theory used.
But, I think that most physicists see it as a principle of Nature.

Today, we still observe such effects. For example, the "virtual particles" in the Vacuum are believed to exist,
where those virtual particles may "pop up" into existence (out of nothing), for a very short time, and destroy
each other very shortly after. These are also called the micro "quantum fluctuations".
It's also the cause of the well-known and observable "Casimir effect", and the "Lamb shift" in atomic spectra.

Back to our state of "nothingness". It now is assumed that a quantum fluctuation produced Energy.
This might even be an extremely small amount of Energy. However, once Energy exists, a precursor SpaceTime
must exist as well (due to various reasons).

This intimate relation of Mass/Energy and SpaceTime will be studied later, but for now, you should take
for granted that once Energy exists, it must follow that some structure of SpaceTime must exist as well.
By the way, from equation 1, there is a relation between Energy and time (or SpaceTime).

Now, the model will deviate a bit from the original model of the Inflationary Theory:

Once Energy exists, it may increase exponentially, as long as an equal amount of Negative energy is stored in SpaceTime,
so that the overall Balance equates to Zero.
It's tempting to look at the corresponding increasing SpaceTime as an expanding sphere. However, there is
no real "outside" of this early Universe, so you cannot maintain such a picture as being realistic.
However, if you visualize it as a sphere (or something) anyway, it does not hurt at all.

It has indeed been assumed for a long time that the present-day "cosmological constant" represents an amount
of Energy stored everywhere in SpaceTime. This could then correspond to the negative balanced energy as
referred to above. Unfortunately, the "cosmological constant" is assumed to be positive, which is a bit
in contradiction with the model depicted above.

However, no matter how flawed this simple explanation is, it represents a very simple explanation of
how the Universe came into existence, without reference to an unexplainable "singularity".

Of course, the "real model" has various phases while the Universe cooled off, where all sorts of events took place,
like "re-heating" after inflation, the period where fundamental particles appeared, like the "quark-lepton" phase
(preceding the phase where protons and neutrons were formed), and still later,
where primarily H atoms came into existence (etc.., etc..).
The real Inflationary period, preceding all those phases, is often sketched to occur in the time interval
of about 10^-38 sec to about 10^-32 sec, which is extremely short indeed.

At some point in that early phase, again "smaller" quantum fluctuations are presumed to be the source of
"disturbances", which were the seed for the formation of Galaxies at much later phases.

Many say, that after the Inflationary period, the expansion rapidly went into a more traditional type
of Big Bang. There are indeed strong motivations to keep the initial Inflationary period in place, like:

-avoiding the singularity
-the Horizon problem
-the seeming flatness and isotropy of the Universe

Yes, all of the above is certainly a bit flawed, and presented way too simply. Of course, a better description
of the Inflationary model can be found all over the Internet.
I made an attempt too, in another note. If you want to try that, then use this link.
For example, it also shows why inflation solves the so-called "Horizon problem".

By the way, there are quite a few physicists and astronomers who like the Inflationary Universe model.
But there are quite a few opponents too.

I must also add that more interpretations and theories on the Birth of the Universe exist.
First, there are supporters of a "multiverse" model (multiple Universes).
Secondly, so-called Brane World models evolve all the time. Just recently, some authors have proposed the G-theory,
as the "mother" of M-theory, specifically "designed" to explain the origin of the Universe.
We will see about that later on.

The next sections are some remarks on a few facts or quantities in Physics, which are needed for the next chapters.

Of course, here I only like to highlight some quantities or properties, which I think I will use
in the Chapters hereafter. At least then, they have been introduced a bit.

0.2 Impression of the Large scale structure of the Universe:

No doubt you have at least a general idea about the structure of our own spiral Galaxy (the Milky Way),
as astronomers see it today. There exists an immense number of such Galaxies, throughout the
observable Universe.

I invite you, to take a look at a nice illustration, available in this link.

It shows a large number of galaxies (only as dots), grouped like in large "filaments", with large "voids"
in between. The view in that picture, should illustrate the clusters of galaxies in a domain of about 1 billion lightyears.

The site of that link allows you to zoom in or zoom out. What becomes apparent is the almost
Swiss-cheese-like structure of filaments, sheets, and voids.

As to why such a "Large scale structure" exists, there are still quite some uncertainties.
One common thought is that "Dark Matter" is mainly responsible for such structure. However, some modern
insights deny the existence of Dark Matter (like Verlinde's), due to newer approaches to viewing "Gravity".
But I am certainly not saying that a majority of physicists and astronomers have left the Dark Matter hypothesis.

At this point, very conclusive answers are still missing.

It's of course one of the active terrains of Astronomy. It's impossible to keep up with the scientific
articles from that community. Every day, if you would read samples of them, you fall from one
amazement into the other.

Anyway, I found it important that you have an idea of the "swiss cheese" structure, of the distribution
of Galaxies in our Universe, and also that you know that very conclusive answers seem to be missing.

If you google words like "galaxies filaments voids", you get lots of fascinating links
dealing with that subject. Great stuff for sure !

0.3 What is entropy?

It's not easy to define "entropy" in such a way that it's a good definition which applies throughout
all disciplines of Physics.

That being said, what at least comes reasonably close to it, are the following statements:

-Entropy (S) is a measure of the number (W) of microstates that "sits" behind a particular macrostate.

or:

-Entropy (S) is a measure for the number (W) of degrees of freedom of a system.

However, Entropy is not equal to, or literally the same as, the number W.

Example:

(1): Suppose you have a system (like a particle), which has 4 different internal states (W=4).
Now, suppose that number of internal states increases to 8 (W=8). That's quite a difference compared
to the former situation.

(2): Now, suppose you have again a system (like a particle), which has 1000 different internal states.
Suppose that number of internal states increases to 1004. That's not much of a difference compared
to the former situation.

In both cases, the number of internal states increased by 4. However, only in the first case
was the increase really relevant, while in the second case, it was rather minimal.

Entropy is not equal to the number of states "W" (or "degrees of freedom", or "subcomponents" etc..).
It must be a number, which properly reflects the degree of information provided by the various states.
But it must also be a number, which properly reflects the change of the degree of information,
when the number of possible microstates changes.

In both examples (1) and (2), the number of microstates changed by 4. But only in the first case,
for the system as a whole, was the change rather dramatic.

If you are familiar with the logarithmic function, this type of behaviour is exactly reflected by
such a function. A logarithmic function based on the number "e" is the so-called natural logarithmic function.
In many physical processes, the natural logarithmic function "ln(x)" describes such
processes, for example in the decay of radioactive materials.

The function ln(x) is exactly what we need here. If you look up the ln(x) function in some math book,
you will see that it starts out with a rather steep slope, which gradually gets less and less steep.

Of course, math books use "x" as the variable, while we have used W as such a variable.
But here too, ln(W) works. The slope gets less and less steep as W increases.

An equation which relates the Entropy "S", to the number of microstates "W" (or degrees of freedom) is:

S = ln(W)   (equation 2)

Note:
In thermodynamics and chemistry, the usual equation is:

S = kB ln(W)   (equation 3)

where kB is Boltzmann's constant.

An example from IT, or datacommunication/network theory, might help a bit to illustrate entropy
in that field.

If you would have 4 bits, you can create 16 different combinations out of them, like
0001, 0010, 0011 etc...
You might say that the number of microstates is 16, thus W=16.
In this case, S=log(W)=log(16)=4
(with the log function here based on base "2").

Is the number S=4 of any value? Does it express the amount of information which underlies it?
Yes. The number 16 itself does not say much. We have 4 entities, each of which is the source
of 2 degrees of freedom. Thus 2^4 = 16 is the total number of possible microstates.
Thus, the number S=4 nicely sums this up.
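If you like, examples (1) and (2), and the 4-bit case, can be checked with a tiny sketch
(plain Python, purely illustrative):

```python
import math

def entropy(W, base=math.e):
    """Entropy S = log(W), per equation 2 (natural log by default)."""
    return math.log(W, base)

# Examples (1) and (2): the same increase of 4 microstates
# matters a lot for W=4 -> 8, but hardly at all for W=1000 -> 1004.
print(entropy(8) - entropy(4))        # ln(8) - ln(4) = ln(2), about 0.693
print(entropy(1004) - entropy(1000))  # only about 0.004

# The 4-bit example: W = 2**4 = 16 microstates, so S = log2(16) = 4.
print(entropy(16, base=2))            # 4.0
```

So the logarithm indeed "dampens" a change of 4 microstates when W is already large, exactly as argued above.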

Maybe, at this point, you do not see the relevance of such an entity as Entropy. But in a while,
you will see how useful it actually is.
Especially in Quantum information, Information and Black Holes, and other modern theories,
it's a key element.

0.4 A few words on Elementary Particles.

For a long time now, many physicists have said that there should actually (ideally) be one sort of
elementary particle, and one sort of fundamental interaction between those particles.

This could be true, at extremely high Temperatures (or Energy). Actually, as the Universe is today,
it's a very cold place, even in places like the centre of the Sun. A temperature of, say, 20 million degrees
is still very, very, very far from the energy where unification of particles and forces (may) occur.

So, what happened in the far past? The idea is that while the Universe started to cool down, in various phases
several so-called "symmetry breakings" occurred, ultimately leading to the state of the Vacuum as it is now,
with the various elementary particles and forces as we know them today.

It's important to have a good idea on what the most important ideas are, with respect to forces and particles,
as it is now. Indeed, that is what we are going to study here (in just a few words).

In the course of the second half of the last century, experimental techniques improved (e.g. particle accelerators),
more and more particles were discovered, and/or theoretically conjectured based on experimental data.
At some point, there were hundreds of them.

Due to a variety of theories, like QED, QCD and others, a nice model has emerged, in which the "true"
elementary particles have been identified and categorized. It's absolutely fascinating how the model also uses
virtual particle exchange, or creation, for interactions and shieldings. Let's see how this works, according to the models.

By the way, what we see as an elementary particle should not have any subcomponents, which obviously
could then be considered to be more fundamental or elementary.
For example, the nucleus of an atom "contains" protons and neutrons. However, such a particle is not
an elementary particle, since overwhelming evidence exists that they "are built" from three "quarks".
Then finally, such a quark is indeed considered (by most physicists, but not all) to be truly elementary.

Fig 2: My own Jip and Janneke figure, illustrating how the proton and neutron are built from quarks.




A Proton carries the unit of electric Charge. If you would infer from the figure above that quarks
must carry integer multiples of 1/3 of the unit of charge, then you are right.
Actually, when we view the different hadrons (see below), quarks have 1/3, 2/3 or the negatives thereof,
of the unit of charge.

Elementary particles can be "typed" by their quantum numbers, like Charge, Spin, Strangeness and a few others,
and other properties such as parity.
However, there are more obvious ways to discriminate between fundamental types:

Classifications.

1. Bosons and Fermions:

Fermions:

What we usually see as "real matter" (permanent matter) is built from fermions. This is a sort of
collective name (or type) for the different flavours of quarks, and all the sorts of leptons (like the electron).
Indeed, there are several flavours of quarks, and there are several flavours of leptons, of which the electron
is the most familiar one.
So, for example, a proton is a fermion too. It's built from 3 "sorts" of quarks.

All particles with "half-integral" spin (1/2, 3/2, 5/2 etc..) are Fermions.

Bosons:

A boson is often interpreted as to be a "force carrier". In most cases, it's a virtual particle
involved in the interaction of two particles and/or fields.
Again, in most cases, it exists as long as the interaction lasts.

Formally, it's not always a virtual particle, since it is defined to be an entity with
an integer spin value (0, 1, 2 etc..).
Taking that leading principle of integer spin value literally, even some rare nuclei can be considered as bosons.
Some entities of the "meson" type (see below) have an integer spin too, and thus qualify as bosons.

However, the most general interpretation is that bosons are involved with interactions, and simply
may "pop up" to "effect" that interaction. So, for example, a virtual photon is created and exchanged
in the electromagnetic interaction between two charged particles, like two electrons (in QED/QFT theories).

listing 1:

⇒ The elementary fermions are:

quarks: {up, charm, top, down, strange, bottom} (not complete)
leptons: {electron, muon, tau, electron neutrino (neutrino), muon neutrino, tau neutrino}

I am afraid that the listing above is still not complete. For example, we still need to account
for "anti-particles".


⇒ The bosons are:

Gauge bosons (force carrier): {gluon, photon, Z boson, W bosons}
scalar boson: {Higgs (and possibly other, still undetected, Higgs-type bosons)}

Photon: force carrier for the electromagnetic interaction
W-boson and Z-boson: weak nuclear force (like nuclear decay)
gluon: strong nuclear force (binding quarks)

You may wonder about the elusive "graviton", as the force mediator for gravity.
Indeed, it is generally viewed as the boson involved with gravity.

As said before, the fermions listed above are elementary.
However, lots of particles have been found which are "composed" of quarks. The terms "Hadron" and "Baryon"
will often be encountered too. For example, our friends the neutron and proton are "hadrons" (in the subclass Baryons).

Although fermions and bosons are the fundamental types of particles, many articles will once in a while refer to "hadrons",
"baryons", and "mesons". Don't worry: these are all composite particles.

Fig 3: My own Jip and Janneke figure, illustrating elementary particles, and types of composite particles.




Hadrons is a generic term to denote matter that's built from quarks, so hadrons are composite particles,
and it is a general term. Here are the two subclasses:

⇒ Baryons are, for example, our well-known protons and neutrons, which are "three quark combinations".
⇒ Mesons (like the pion, the Kaon family etc..) are intermediate-mass particles which are made up of a quark-antiquark pair.

However, mesons have an integer spin, and thus are (according to the definition) bosons.
Although composed of quarks, they are bosons.

About quantum numbers like "spin". You cannot be happy with the information provided so far on quantum numbers.
It's a tiny bit complicated. Charge seems not so bad, since electric charge can be coupled to macroscopic
events we can see, like a large charged sphere, which may produce a large electric spark.
Spin seems a tiny bit more mysterious. The most useful interpretation is that a rotating electric charge
creates a magnetic moment. The same principle applies to the spin of an elementary particle.
However, such an explanation is clearly not enough.

2. Particles (matter) and Anti-particles (anti-matter):

All fermions of Listing 1 have an anti-particle. Such an anti-particle has the opposite electric Charge.
So, for example, the electron has an anti-particle called the "positron".

So, it also follows that there are 6 anti-quarks, next to the quarks in Listing 1.
And indeed, there are 6 anti-leptons too.

Some hadrons, namely the mesons, can be seen as a certain quark, paired with some other anti-quark.

This way, it's possible that a certain hadron, a composite particle (built from two different quarks),
has an anti composite particle, because that composite anti-particle is built from the opposite quarks !

Indeed, for example for the proton, we have an anti-proton.

Mind you, if you look at the ordinary Hydrogen atom (one proton, plus an electron in an orbital), it's
even possible to have an anti-Hydrogen atom, made from an anti-proton and an anti-electron.

In general, an anti-particle entity "just by itself" is observed in high energy experiments with collisions,
cosmic rays etc.., but generally it is not "free" in Nature, since it will quickly annihilate
with its corresponding sister particle. This releases an enormous amount of energy.

It is true that extremely low quantities of anti-matter have been "stored" for short times in magnetic "traps".

Note: about 1 gram of anti-matter, annihilating with an equal amount of ordinary matter, would release the energy of a roughly 40-50 KiloTon nuclear device.
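That figure is easy to check with E = mc^2. A small sketch (the TNT equivalent of 1 kiloton,
about 4.184 x 10^12 J, is the usual convention):

```python
# Quick check of the annihilation-energy claim via E = m * c^2.
# 1 gram of anti-matter annihilates with 1 gram of ordinary matter,
# so 2 grams of mass in total are converted into energy.

C = 2.99792458e8          # speed of light, in m/s
KILOTON_TNT = 4.184e12    # energy of 1 kiloton of TNT, in Joules

mass = 2e-3               # kg (1 g anti-matter + 1 g matter)
energy = mass * C**2      # in Joules

print(f"E = {energy:.3e} J, about {energy / KILOTON_TNT:.0f} kiloton TNT")
```

The result lands at roughly 43 kilotons, so the order of magnitude in the note is right.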

3. What does not fully "fit in" the classifications above:

All of the above is not complete. Lots of physicists speak of so-called "resonances" too, and also
of classes of pseudo particles.
Resonances are particles with a very short lifetime, which decay very quickly. They can be associated with
high energy events.

At this particular point, it is not important for us to discuss this, as it would clutter this text.

4. What about "Dark Matter" (DM):

If we go back to, say, 20 to 10 years ago, DM was an extremely hot item. I'm not saying
that interest has collapsed. Far from it. However, alternatives have slowly made their way too, say,
in the last 15 years or so.

DM still is considered to be the primary candidate for a number of astronomical observations.

Of all matter in the Universe, the major percentage is formed by DM.
Note that we are not talking about "Dark Energy" (DE) here.

It's a strange situation. There still is no convincing clue as to what it really is.

And indeed: some physicists have turned away from DM, due to newer insights on Gravity.
Later we will see about that. But it might be so, that SpaceTime has more capacity
to "remember" Gravitation, which might be an alternative for DM.
I promise that this is certainly an important subject in later chapters, since it's supercool !

Why "Dark"? Since DM does not interact with ElectroMagnetic radiation (like photons), it is indeed
invisible, or nearly invisible. We simply cannot observe it directly.
However, it participates in Gravitational interactions, and that's why many think that DM
is responsible for many astronomical observations.

There were (or are) theories on the origin of DM: like the "axion" particle, the "Branon" from
some Brane World theories, or a supersymmetric partner particle; and some other theories
go around as well.

So, there were some important observations. Among them were the following examples:
  1. The velocity distribution of gas, dust, stars etc.. in the disk of our spiral Galaxy
    is too uniform (constant). Indeed, a more Kepler-like distribution was expected, and not a
    pretty uniform speed. It can be explained by a large amount of invisible mass in the Halo.

  2. Some objects (like Galaxies) seem to be mirrored in astronomical observations, as if
    a large Mass is bending the light from those objects. However, no visible source with a large
    Mass is detected. So, while in the past this was suspected to be caused by "cosmic strings", since about
    15 years (or so), the effect has been attributed to DM.

  3. Some member Galaxies in remote clusters have too high a velocity, based on the Mass of that cluster.
    One idea is thus that a large invisible mass, near or inside that cluster, is the cause of those observations.

If you would ask me, then the Swiss-cheese large scale structure, as was touched upon in section 0.2,
would really smell of DM, since it's really remarkable that the galaxy clusters are organized in filaments
and sheets, leaving voids between them.

However, I have seen a few articles in astronomical papers (also on arxiv), which essentially say
that the lack of mass in the "voids" sort of works like negative mass, on such a large scale.
That would indeed be a remarkable fact about Gravity.

The boys and girls working on DM, still have lots of work to do.

Quite a few "ideas" about DM have been constructed up to now. For example, Baryonic or non-baryonic DM,
Cold or Hot DM etc..
At least we know what Baryonic matter is. It's just the 3-quark stuff, like protons and neutrons,
thus meaning that Baryonic DM would be a lot of undetected brown dwarf stars, other undetected matter etc..

Non-baryonic matter could be stuff like Axions, Branons, neutrinos etc... Often such candidates
are collected under terms like "WIMP" (Weakly Interacting Massive Particle).

-All over the internet, articles around DM can be found. I would suggest Google-ing
on the terms "Dark Matter" and "arxiv", which leads you to scientific papers on DM.

-A while back, I created a simple note on DM too. Of course, it's not that great, but maybe
you would like to browse through it: if so, then use this link.

5. Still to do, and left-overs...:

We still need to see how the "forces" work, in a modern setting. Also, quite a few "properties"
of some elementary particles were not mentioned, like right- or left-handedness, and many more.
Also, a different sport like "superstrings" was not mentioned at all.
Yes, but lightweight discussions will follow in the next chapters.

Of course, section 0.4 was the absolute minimum of info on elementary particles. However, for the
remainder of this simple note, it's enough. So, we are able to "place" quarks, leptons, bosons etc..
a bit, if I mention them at other places in this note. I hope you understand what I mean...


0.5 A few words on Classical Theories.

It's quite impossible, but I will try to give some essentials of the true Classical theories, namely
classical Mechanics, and classical ElectroDynamics.

Then, in section 0.6, we will see some essentials of quantum theories, that is Quantum Mechanics,
and some essentials of QED/QFT and QCD.

I think, that when that is complete, we are armed to proceed to the next chapters.
Yeah..., let's do it !!!!!

For the classical theories: in Mechanics, it's Newton who laid the foundation. For ElectroDynamics,
it is Maxwell who made the framework coherent and even condensed it, in 20 fundamental equations,
which later could be captured in just 4 differential equations.

⇒ A few notes from Newtonian Classical Mechanics:

Of course, Mechanics is an enormously large subject. Aside from the original Newtonian formulation,
Lagrange and Hamilton later both changed the perspective a bit. Not really fundamentally different though,
but still important, since everyone talks about the "Lagrangian" and the "Hamiltonian".

The concepts here, will simply help us when we study more modern theories.

Let's browse through some of the monumental achievements of Sir Isaac Newton:

⇒ A very important notion in classical mechanics is that it operates very accurately on a "human scale". That is,
the chosen domain is not too small, like on an atomic scale, and the velocities are not too large.
Indeed, on a microscopic scale, we need Quantum Mechanics. And when velocities are a relevant fraction of
the speed of light (c), then Special Relativity provides the best solutions.

⇒ Another important notion in classical mechanics is that "the frame of reference" is a flat Euclidean 2-dimensional
or 3-dimensional space. That is, you could draw an x-axis, y-axis, and z-axis, all perpendicular to each other,
and study events in such a space. Such a coordinate system is often called a Cartesian coordinate system.
Of course, there is nothing "wrong" in choosing another type of coordinate system, like polar coordinates.

⇒ Time "t" is the same everywhere, and does not depend on location or speed.
The mass "m" of an object is a constant, and does not depend on location or speed.

⇒ First, Newton realised that if no forces operate on an object, then either it stays at rest, or it stays
permanently in uniform motion, that is, with a constant velocity "v". Sounds pretty logical.
This is "Newton's First law".

⇒ If a constant force "F" is applied to an object with mass "m", it will be subject to a constant acceleration "a".
Thus, per unit of time (say per second), the velocity "v" increases by a fixed amount.
It's expressed in the famous relation:

F = ma       (or F = m x a)

Intuitively, it's logical. If the Force F is larger, then so is the acceleration a, since the mass remains constant, obviously.
If the Force is lower, then so will be the acceleration.
Now, if instead the mass "m" is higher, there is more "resistance" to accelerate, and thus "a" will be lower.
With an equal magnitude of Force as in the example above, if the mass is lower, then "a" will be higher compared
to the example above.
This is "Newton's Second law".
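The reasoning above (a larger F means a larger a, a larger m means a smaller a) can be sketched
in a few lines (purely illustrative, of course):

```python
def acceleration(force, mass):
    """Newton's Second law rearranged: a = F / m."""
    return force / mass

# Same mass, larger force -> larger acceleration:
print(acceleration(10.0, 2.0))  # 5.0 m/s^2
print(acceleration(20.0, 2.0))  # 10.0 m/s^2

# Same force, larger mass -> smaller acceleration:
print(acceleration(10.0, 4.0))  # 2.5 m/s^2
```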

⇒ Momentum "p" is an important concept too. For a single object of mass "m", it corresponds to:

p = mv

It's the property of a moving object to effect a "punch" on another object, like in a billiard game.
Intuitively, it's logical. If "m" is larger, or "v" is larger, then "p" increases in a linear way.
If you have a system of multiple particles, like 2 particles, momentum is "conserved".
For example, if we have particle 1 with mass m1 and speed v1, and particle 2 with mass m2 and speed v2,
and they collide and move apart again (at any angle) having velocities u1 and u2, then it holds that:

m1v1 + m2v2 = m1u1 + m2u2
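As a small sketch, we can check this conservation law numerically for a head-on (1-dimensional) collision.
The formulas for u1 and u2 below are the standard elastic-collision results, which I assume here
without derivation; mind you, momentum conservation itself holds for any collision, elastic or not:

```python
def elastic_collision_1d(m1, v1, m2, v2):
    """Final velocities u1, u2 for a head-on elastic collision."""
    u1 = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    u2 = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return u1, u2

m1, v1 = 2.0, 3.0   # particle 1: mass 2 kg, speed 3 m/s
m2, v2 = 1.0, -1.0  # particle 2: mass 1 kg, moving the other way

u1, u2 = elastic_collision_1d(m1, v1, m2, v2)

# Total momentum before and after: m1*v1 + m2*v2 = m1*u1 + m2*u2
print(m1 * v1 + m2 * v2, m1 * u1 + m2 * u2)
```

Both printed numbers are the same, just as the conservation law demands.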

⇒ We can relate the momentum "p" to the Force "F" too.
If a force acts on a particle for a time Δt, then the momentum increases or decreases. If the Force acts in the direction
of the particle's velocity, then the velocity increases, and so does its momentum p=mv.

So, let the Force F operate for a time Δt. The velocity will thus increase from v_before to v_after.
Then we have:

F Δt  =  m (v_after - v_before)  =  m v_after - m v_before  =  m Δv

Thus:

F = m Δv/Δt = Δ(mv)/Δt = Δp/Δt

But as it always goes with differentials, if we make the Δ smaller and smaller, we arrive at the true derivative:

F = dp/dt

It's enough if you follow the main line of thought here. You don't need to remember anything.

⇒ Newtonian Mechanics, is also in a large part the study of "motion", that is, velocity "v", acceleration "a",
and distance travelled "r", and all relations between those entities.

If a particle, at t=0, is at a certain position "r(t=0)", and at t=1 at position r(t=1), then we know
how to calculate the velocity. For example, if you travel 150 miles in 3 hours then your (average) speed
was 50 miles/hour.
Indeed, it goes like "distance travelled / time needed". So:

v = Δr / Δt

But as it always goes with differentials, if we make the Δ smaller and smaller, we arrive at the true derivative:

v = dr/dt

Example:

Suppose the distance travelled r(t) is given by the function r(t) = ½at², then "v" can be calculated
by taking the derivative with respect to "t". This is:

v = dr/dt = d(½at²)/dt = at

Thus here the equation for v(t) is: v(t)=at
This is a motion having a constant acceleration "a". The velocity v(t) increases as t increases.
The distance travelled, r(t), is a parabola if you would plot r(t) as the "y", and t as "x", in a
common XY coordinate system.
So, if the object was first at rest, and the acceleration a = 5 m/s² starts at t=0, then at t=10, we have:

v = at = 5 x 10 = 50 m/s
r = ½at² = ½ x 5 x 100 = ½ x 500 = 250 m
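
The worked example above can be put in a few lines of Python, just as an illustration:

```python
# Sketch of the worked example above: constant acceleration a = 5 m/s²,
# starting from rest at t = 0, evaluated at t = 10 s.

def velocity(a, t):
    return a * t           # v(t) = a·t

def distance(a, t):
    return 0.5 * a * t**2  # r(t) = ½·a·t²

a, t = 5.0, 10.0
print(velocity(a, t))  # 50.0 (m/s)
print(distance(a, t))  # 250.0 (m)
```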

⇒ Energy and conservation of Energy.
Suppose we have an object with mass "m", traveling with speed "v". Then its classical Kinetic energy is:

Ek = ½mv²

This one is quite logical too. The more mass the object has, the more "energy stored in its movement" may be expected.
And of course, the faster it goes, the more energy can be associated with its motion.
You may wonder about the quadratic term. Note that the derivative of the kinetic energy with respect to the velocity
is exactly the momentum: d/dv(½mv²) = mv = p.
So, indeed, a change in Energy corresponds to a change in momentum. Therefore, the equation of the kinetic energy above
is indeed plausible.

An object may have "Potential Energy" too. We know about F=ma. Now, in the Earth's gravitational field, as we experience
on its surface, there exists a "downward" (gravitational) Force. The corresponding acceleration for a free object
is usually denoted by "g", instead of "a". So, in this case, F=mg.

Suppose you have a mass "m", resting on Earth's surface. Now, you lift it up a tower, which is 100m high.
Once at the top, the object has no kinetic Energy, since it does not move.
However, it has gained a "Potential Energy", due to its difference in height in Earth's gravitational field.
If you would let go of the object, it will of course accelerate downwards, gaining kinetic energy until it crashes
onto the surface. After that crash, the kinetic energy is converted into heat, and into the energy to deform the object
and the surface.

While it was in free fall, the kinetic energy increased, while its potential energy decreased.

This example is illustrative for how it works in Classical Mechanics. The kinetic energy EK, and the potential energy EP,
are "balanced". This is often rephrased as: "Energy is conserved".
In general, we may state that:

ETotal = EK + EP
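
As a small sketch of this balance (the mass, and the value g = 9.81 m/s², are assumed example values), we can check numerically that EK + EP stays constant during the fall from the 100 m tower:

```python
# Numerical illustration of energy conservation for the tower example:
# an object of mass m dropped from rest at height h0 = 100 m.
# At every instant, E_total = E_kinetic + E_potential stays the same.

g  = 9.81   # m/s², gravitational acceleration near Earth's surface
m  = 2.0    # kg, an arbitrary example mass
h0 = 100.0  # m, height of the tower

def energies(t):
    v = g * t                # speed after falling for time t (from rest)
    h = h0 - 0.5 * g * t**2  # remaining height
    ek = 0.5 * m * v**2      # kinetic energy
    ep = m * g * h           # potential energy
    return ek, ep

for t in (0.0, 1.0, 2.0, 3.0):
    ek, ep = energies(t)
    print(t, round(ek + ep, 6))  # always m·g·h0 = 1962.0 J
```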

⇒ Newton's law of Gravitation.

Especially in our solar system, that is, in how the Sun "attracts" planets (and the other way around),
Newton's law of Gravitation works remarkably well. However, General Relativity (sort of) replaced it, and
explains certain observations much better.

But Newton's law is still monumental, especially with masses of, say, like of the Earth, moon, Sun etc..

However, it's a general law, and should be in effect for very small masses too.
But at the atomic scale, and for elementary particles, it is fully negligible (unmeasurable).

Here it is. Suppose we have two masses M1 and M2, separated by distance "r":

F = G M1M2 / r²

Remarkable is, that the force that M1 exerts on M2, equals the force that M2 exerts on M1.
Also, the fact that the force has an "inverse square" relation with "r", is rather profound.

As you know, the Surface area of a sphere of radius "r" is A = 4πr².
If a "point source" emits "something" in all directions with equal intensity, then the "inverse square" relation must hold.
It's actually, possibly, a statement that Space is the same everywhere, and that there are no strange bends and twists around.
In fact, viewed from that angle, it is a geometrical property which sits behind it.

Just suppose we have a point source again. Suppose it radiates some sort of energy, or radiation, or is responsible
for a radial force field etc.. In all such cases, the intensity falls off with 1/r².

No matter at what radius from the point source you are (like r, 2r, 10r etc..), anytime you draw a sphere around it,
the Total intensity (flux) is the same. Only, all those spheres (with different radii like r, 2r, 10r etc..)
have different surface areas.

So, suppose we are on a sphere with radius "r". Suppose we select a small patch area of size "S".
Suppose further that the total flux through "S", is "I".

If we now move to a sphere of radius "2r", the corresponding patch area has inflated to "4S",
thus 4 times as large.

If we now move to a sphere of radius "3r", the corresponding patch area has inflated to "9S",
thus 9 times as large.
If we now take one patch of the original area "S" out of the "9S", we must conclude that the intensity
through "S" this time is 1/9 of the original intensity (when we considered a sphere of radius "r").

This argument has hopefully demonstrated the 1/r² law, which is used in Newton's law of Gravity,
but also in countless other cases, like the classical Electric field, or Electric force.
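
The sphere argument above can be sketched numerically (the total flux value is an arbitrary assumption):

```python
# Sketch of the sphere argument above: a point source with total flux I_total
# spreads that flux over a sphere of area 4πr², so the intensity per unit
# area falls off as 1/r².
import math

I_total = 1000.0  # some total flux, in arbitrary units

def intensity(r):
    """Flux per unit area at distance r from the point source."""
    return I_total / (4 * math.pi * r**2)

base = intensity(1.0)
print(intensity(2.0) / base)  # ≈ 0.25   -> 1/4 at twice the distance
print(intensity(3.0) / base)  # ≈ 0.1111 -> 1/9 at three times the distance
```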

Ok, above were just a few points from Classical Mechanics, which I liked to list here. Of course, it was not much,
but some of those considerations might be helpful later on.

Now, what is meant by the Lagrangian and Hamiltonian formulation of classical mechanics?
Let's see about that next. In both formulations, often a derivative is not fully written out using d/dt (or similar),
but simply by using a dot or accent, like in:

dr/dt = r'

A few words on the Lagrangian and Hamiltonian formulation:

The Newtonian formulation might be considered to be too focused on forces, masses, and velocities.
Especially when studying a system having multiple particles, it becomes very complex.

Furthermore, it is argued that it is probably tied too much to the Cartesian system.

Lagrange, and later Hamilton, changed the perspective quite a bit.

⇒ A few notes from Classical ElectroDynamics:


0.6 A few words on Modern Theories.


Chapter 1. A tiny bit of Math.

First, I started this doc using a certain methodology. However, recently I realized that some Physical Theories
cannot be explained (albeit at a high level), without some basic understanding of certain principles.
Indeed, I have a few of such principles in mind, so let's do those first.

Sections 1.1 up to 1.4 are centered around the "metric". Section 1.5 tries to say something useful on gauge symmetry.
Maybe not all sections are "fun" to read. If needed, you can skip some. But 1.1, 1.2, and 1.5 are quite essential.

When I look again at the stuff in this chapter (in hindsight), then I must admit that sections 1.3 and 1.4
are actually NOT so fantastic to go through. But I leave them in place, should somebody care to take a look.
Indeed, it's good enough to go through 1.1, 1.2, and 1.5.

1.1 A few words about the Metric:

Ultimately, my main goal of this section is to make the appearance of the "metric tensor" (the Riemannian metric)
a bit "plausible", or a bit acceptable. It's a very important concept in this note.

The fundamental metric (or metric tensor) may be written as:

ds² = gμν dxμdxν     (equation 1)

Do not worry about the upper and lower indices in that equation. It will become clear later on.

A "metric" is simply a rather expensive, and luxury word for "distance" in Space (or distance in SpaceTime).

The metric is important for any study, to begin with. For example, the equations of motion can be derived
from the metric for a given SpaceTime.

In a flat Euclidean Space, like an "ordinary" 3D Space (R3), you may draw a Cartesian coordinate system.
Basically, such a Cartesian coordinate system uses three perpendicular axes: the x-, y- and z-axis.
The whole purpose of such a coordinate system is to describe or pinpoint "points" in Space.

A point in such a Space might be denoted by (x, y, z). It's also possible to draw something that's called
a vector, from the Origin (center) of the coordinate system, to this random point (x, y, z).

Note 1:

If you like, you can try a simple introduction to vectors first. Only the first few pages
of the following link might be useful, here. If you like to try it, then use this link.
You can also use it for some illustrations of a flat R3 Space, and for some vector illustrations.


The fact that such a Euclidean Space is "flat", means this. Suppose you are on the x-axis, and you walk in
the +x direction. Your position might then be, as time passes, something like (1,0,0), then (2,0,0), etc..
Your position does not depend in any way on "y" or "z". That is, Δx, as you move, has no relation with changes
in "y" or "z": those do not happen at all. You will see this clearly in matrix form, in just a moment.

Distance in R3:

This is basically no more than applying the "Pythagorean theorem".

For example, in R3 we have the square of the distance between two points P=(x1, y1, z1), and Q=(x2, y2, z2):

|PQ|² = (x2 - x1)² + (y2 - y1)² + (z2 - z1)²

Here, implicitly, it is assumed in this discussion, that the Coordinate system, or any point in it, is fully described by a linear
combination of the basis (or unit-) vectors (1,0,0), (0,1,0), and (0,0,1).

When we would consider the distance from some point (x,y,z) relative to the Origin of our coordinate system, we may simply write:

ds² = dx² + dy² + dz²   (equation 2)

The "ds" most often represents "very small distance", as if we would only consider very small variations.
Therefore, in true flat Space, you may view equation 2 to be equivalent to:

s² = x² + y² + z²   (equation 3)

However, I will stick most often to the "ds" (and "dx" etc..) notation.
In general, in Rn, of dimension "n" (with "n" axes):

ds² = dx1² + . . . + dxn²

Now, equation 1 might be very intriguing. But here is a very simple equivalent equation for Euclidean flat Space R3:

Suppose we review equation 1 again, however this time from a flat Space, and using plain matrices/vectors. Then:

ds² =
┌ 1 0 0 ┐ ┌ x ┐   ┌ x ┐
│ 0 1 0 │ │ y │ ∙ │ y │
└ 0 0 1 ┘ └ z ┘   └ z ┘

  ┌ 1x+0y+0z ┐   ┌ x ┐
= │ 0x+1y+0z │ ∙ │ y │
  └ 0x+0y+1z ┘   └ z ┘

  ┌ x ┐   ┌ x ┐
= │ y │ ∙ │ y │
  └ z ┘   └ z ┘

= x² + y² + z²   (equation 4)

Let me explain this:

In equation 4, you see a matrix, then a column vector, then again a column vector.
Instead of expressing the vector (x, y, z) as a row vector, you may also express it as a column vector,
which is very "common practice" in Euclidean space.

First, I will let the matrix operate on the first column vector. If you apply the rules from vector calculus,
with this specific matrix, you will get exactly the same column vector again.
Then, what is left is no more than an "inner product" (scalar product) of (x, y, z) with itself.
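
The matrix computation of equation 4 can be checked with a few lines of Python (plain lists, no libraries; the point (2, 3, 6) is just an example):

```python
# A small sketch of equation 4: apply the flat-space metric matrix g to the
# column vector (x, y, z), then take the inner product with (x, y, z) itself.
# The result is exactly x² + y² + z².

g = [[1, 0, 0],
     [0, 1, 0],
     [0, 0, 1]]  # the flat (Euclidean) metric, as in equation 5

def mat_vec(m, v):
    """Matrix times column vector."""
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def dot(a, b):
    """Inner (scalar) product of two vectors."""
    return sum(x * y for x, y in zip(a, b))

v = [2.0, 3.0, 6.0]          # some example point (x, y, z)
ds2 = dot(mat_vec(g, v), v)  # (g·v) inner-product v
print(ds2)                   # 49.0 = 2² + 3² + 6²
```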

I must say that equation 4 already "resembles" equation 1. Why? Since "gμν" is a tensor object
(in a general sense), and in most cases, such a tensor can be identified with a matrix.
The "dxμ and dxν" expressions in equation 1 are general expressions for vectors,
similar to (x, y, z).

The specific matrix, in this case:

gμν = ┌ 1 0 0 ┐
│ 0 1 0 │
└ 0 0 1 ┘
    (equation 5)

This really is an expression, that we are dealing with flat Space. Suppose you move along the z-axis, no matter
what direction, then while you are moving, there is no change in your x- coordinate, or y- coordinate.

Now, suppose the z-axis, is curved. Then, while moving along z, there would be changes in "x" and "y" too!

The fact that the matrix above only has values in the diagonal elements (which are 1), is actually
the mathematical way of saying that we are in flat Space. And here it would be a flat 3D space.

In flat space, while traveling along a certain axis, there are no changes in the other coordinates.
In curved space, while traveling along a certain axis, there are changes in certain other coordinates.

A mathematical way to say "how" a certain coordinate changes, if you move along another axis, is taking
the partial differential. For example, to see how "z" would change due to a variation in "x",
we would write "∂ z / ∂ x".

This is rather similar to high school math, where you may have seen expressions like "dy/dx",
which also expresses how "y" would vary under variations of "x".

If we were not sure whether our 3D space is really flat, or possibly strangely curved in some way,
then our "metric tensor" might have non-diagonal elements, or elements which are not "just" simple constants.

As a sort of pseudo code, you might take a look at the equation below:

gμν = ┌ ∂x/∂x ∂x/∂y ∂x/∂z ┐
│ ∂y/∂x ∂y/∂y ∂y/∂z │
└ ∂z/∂x ∂z/∂y ∂z/∂z ┘
    (equation 6)

For example, ∂y/∂x would mean: what is the change in "y" due to variations in "x"?
In case of a flat Space, it would be "0". In case of some curved Space, it could have some non-zero value.

For example, ∂z/∂z would mean: what is the change in "z" due to variations in "z"?
This would be "1". The ratio of z/z, or dz/dz, is always "1".

So, in case of a flat Space, ∂x/∂x, ∂y/∂y, ∂z/∂z would all be "1",
while all other (non-diagonal) matrix elements would be "0", exactly as we see in equation 5.

The above still is not really completely equal to equation 1. This will come later.
But, I hope that the appearance of equation 1, is a bit more acceptable now.

Note 2:

It helps to have a certain understanding of Matrices, although for this note it is not absolutely required.
But I surely recommend it. I have a small note on Matrices too. You only need to browse through it.
If you like to try it, then use this link.

1.2 A few words about the Einstein notation and coordinate transformation:

If section 1.1 above helped in understanding the metric tensor, then that's really great !
This is so, since that object is very important in SpaceTime discussions.

Another important "thing" is the "Einstein notation".
Let's see what this is about.

Compact notations: an example:

A matrix may have "n" rows, and "m" columns. In many cases, "n=m", in which case it is called
a square matrix. Below we see an example of a 3x3 matrix "A":

A = ┌ a11 a12 a13 ┐
    │ a21 a22 a23 │
    └ a31 a32 a33 ┘
    (equation 7)

It's really true that mathematicians and physicists do want to minimize, or compactify, their representation
of mathematical objects. Believe it or not: the matrix above is often simply abbreviated by "aij".

In equation 7 above, we can see that the matrix consists of the elements a11, a12 etc.., all the way up to a33.
Note that there are always 2 indices needed to point to a certain matrix element. We need an "i" and a "j"
to exactly specify a certain element. Of course, the indices do not need to be denoted by "i" and "j".
You are free in your choice. They may also be denoted by Greek symbols like μ and ν.

However, it saves a lot of writing if everybody agrees that such a "difficult to write down" object,
like shown in equation 7, can simply be abbreviated by "aij".
Of course, somewhere in the context it must then be clear that both "i" and "j" run from 1 to 3.
Otherwise, it would not be clear that we are dealing with a 3x3 matrix, and not e.g. with a 5x5 matrix.

I hope you have tried the "hint" in note 2 above, in order to find out more on matrices (if you would need it).

The Einstein notation: an example:

The story below is (I hope) a nice intro into what is called the "Einstein notation".
It's really true that professional articles almost never fully write out mathematical objects;
instead, they use a "compact" notation (like e.g. the "Einstein notation").

We already have seen the metric in a flat Euclidean space R3:

ds² = dx² + dy² + dz²

Now, we are very curious as to how to express such a metric if we change our coordinate system, or, equivalently, switch
from one set of basis vectors to another set of basis vectors.

This procedure is no more than a "vehicle" to illustrate the Einstein notation.
So, the transformation itself is less important.

Often, folks choose an orthonormal coordinate system, like the Cartesian one, where the unit vectors are all perpendicular to each other.
However, in a general discussion, it is no requirement that the basis vectors are perpendicular, as long as they are "independent",
meaning that any point in Space can be described by a linear combination of those vectors.
But those basis vectors may have certain "angles" between them, and indeed, not necessarily 90 degrees.

Let's consider R3 again. Suppose we have two sets of basis vectors:

S1 = {v1,v2,v3}
S2 = {w1,w2,w3}

Suppose that the basis vectors are all independent. Then any vector in R3 may be expanded
as a linear combination of the vectors of S1, or S2.

So, if we have just "some" vector x, then for example x might be written as:

x = a v1 + b v2 + c v3.

Then, any of the basis vectors (say from S1), can be expressed as a combination of the vectors of the other set.
So, we may have:

v1 = a11w1 + a12w2 + a13w3
v2 = a21w1 + a22w2 + a23w3
v3 = a31w1 + a32w2 + a33w3     (equations 8)

Please take notice that the aij coefficients, form a "matrix". In this case, it's a square 3x3 matrix.

Now, if we consider two descriptions of some point in space, depending on the chosen coordinate system, we can describe that point
as for example (x, y, z), or (x', y', z'). So, in this case, (x, y, z) might be the representation of that point using set S1,
while (x', y', z') might be the representation of that point using the set S2.

(Note the "apostrophes" denoted by '.)

It's not hard to express the coordinates of one system in terms of the other one. For example:

x' = a11x + a12y + a13z
y' = a21x + a22y + a23z
z' = a31x + a32y + a33z     (equations 9)

In a condensed notation, mathematicians and physicists often use an expression like the one showed below.
In this case, a whole set of equations is simply captured in one simple expression:

x'i = Σj=1..3 aij xj   (equation 10)

Here, we have also generalized the coordinates. Instead of talking about x, y, z etc.., we simply use an index "i"
to denote the coordinates. So, something like xi, or x'i, will replace the different letters like x, y, etc..

Also, now we have a nice extension when we would talk about Rn, where i then ranges from 1 to n.

Also, Σ is a symbol that is used to denote a "summation". In the example above, we sum over "j".
So, each time we select a certain "i", we sum over the j's.

You can try it out yourself. For example for the second coordinate x'2, we would have:

x'2 = a21x1 + a22x2 + a23x3 = Σj=1..3 a2j xj

Equation 10 can even be written in a more "condensed" format. If it's trivial that the summation
is along a certain index (say for example "j"), then the summation symbol is often completely left out.
At first, it may appear somewhat strange, but it's heavily used in scientific articles.
Then, equation 10 becomes:

x'i = aij xj   (equation 11)

Note that the whole set of equations 9 is captured in equation 11.
Per "i", we have a sum over the "j's", resulting in all of the equations listed in (9).
This is also often called "the Einstein notation".

Again, in general, it must be evident that the summation would go along a certain index, otherwise it would be
somewhat obscure. You can check for yourself, just like above, that we indeed sum along "j".
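
The summation of equations 10 and 11 can be written out as two explicit loops (the coefficients aij and the coordinates xj below are arbitrary example numbers):

```python
# The summation convention of equations 10/11, written out as explicit loops:
# for each i, x'_i is the sum over j of a_ij * x_j. The Einstein notation
# simply hides the summation symbol Σ.

a = [[1, 2, 0],
     [0, 1, 3],
     [2, 0, 1]]  # some example transformation coefficients a_ij
x = [1, 1, 1]    # the original coordinates x_j

x_prime = []
for i in range(3):
    total = 0
    for j in range(3):       # this inner loop is the hidden Σ over j
        total += a[i][j] * x[j]
    x_prime.append(total)

print(x_prime)  # [3, 4, 3]
```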

We are still not at our "core" subjects, like Relativity, Kaluza-Klein theory, micro-Black Holes etc..
It's still just some preliminary theory, we are studying here, in this Chapter.

Let's now touch another subject, namely covariant and contravariant indices. For a metric, my feeling
is that it is "a bit" overrated. It is important of course, but not "world-shocking".

1.3 A few words about the covariant and contravariant indices:

I hope I do a reasonable job here. I think that it is not so very important to plough through
all the "nitty-gritty" of the equations listed below. It's good enough to follow the main theme.
All in all it is not "world-shocking", but it's indeed not very trivial stuff either.

1.3.1 Introduction:

The gμν in equation 1 is a tensor object. In this example, it uses two indices, here named μν.
Of course, there is nothing wrong with using "i" and "j" instead, as index identifiers. But in this case it is simply convention
to use μν (sort of).

Since there are two indices, it can be identified as a matrix. You probably know that in general we may have an "mxn" matrix,
using "m" rows and "n" columns. At the same time, "m" and "n" will function as "indices": use the "m"
to walk along the consecutive rows, and the "n" to walk along the consecutive columns of the matrix.

A vector uses only 1 index, like the (row) vector (v1, v2, .., vn), which is denoted
in Einstein notation simply by vi (where it is understood that i ranges between 1 .. n).

But what if we see this object?: Ti,j,k. This object uses three indices, and cannot be identified as
a matrix anymore. However, with some creativity, you may say that it can be associated with a 3D matrix, which looks
like a cube (if the max ranges of i, j, and k are the same): at one side we may have an "i x j" matrix, but there is also
"depth", due to the "k" index.
Or, you may also read it as a "stack" of 3x3 matrices.

So, mathematical objects are possible which are even "wider / more descriptive" (so to say) than a matrix.

But why the distinction between upper- and lower indices?
Let's stick to vectors for a moment. The discussion will also hold for tensor elements.

The qualifiers "covariant" and "contravariant" only apply to the components of vectors,
and thus we can also only talk about covariant and contravariant indices.

What does, say, a 4 dimensional vector look like? You may see some 4 dimensional vector like so:

V = (v1, v2, v3, v4)

which is a row vector.

It also may be notated as a column vector, like:

V =
┌ v1 ┐
│ v2 │
│ v3 │
└ v4 ┘

The difference is really subtle. In some cases, folks "couple" whether a vector is written
as a row- or column vector, to the ideas of covariant or contravariant components.
I say: take care! In some cases it holds, like with elements which are complex numbers.
But in general, this comparison does not hold.

A better, and more physical interpretation is this:

Suppose we change the basis of our coordinate system. What happens to the components of a vector?
Note that I am talking purely about components, like vi.

-We know that there exist row vectors and column vectors. Is it only the representation which is different?
Almost always: yes. But not 100% "always". For example, in Quantum Mechanics, they might have different
interpretations, and they are called Bra's and Ket's, in Dirac's vector formulation.

-Why a distinction between covariant and contravariant vectors? This is maybe a distinction
between a constructed vector and a physically observable vector.

A position in Space, or a velocity, can be viewed as physically observable vectors (contravariant).
A gradient of a scalar field, can be viewed as a vector construction (covariant).

This labeling of "physically observable" or "via construction" is not universally valid,
but I use it since it may help in the following discussion.
Also: the distinction is not about whether an object would be a "true" vector or not.

1.3.2. Covariant (constructed vector):

Suppose you have a scalar function defined on R3 Space. Since it's a scalar function,
the values of the function are simply numbers. Let's call the numbers "w".
So the function is w = ϕ(x,y,z). This is an R3 -> R function.

It could be a function that describes pressures in Space, or Temperatures, or whatever other sort of "pure" values.

If those pressures or temperatures are not constant in Space, then the values differ over various regions.
In such a case, it's possible to define a vector ∇ϕ, which represents the direction and magnitude of
the maximum change of ϕ(x,y,z) at a certain point (or at all points, actually).
If we now rotate our coordinate system, ϕ and ∇ϕ simply sweep along with the rotation.
If such behaviour happens, the vector is called covariant.

If we have such a vector, say "A", then it's components are notated with lower indices, like Ai.

1.3.3. Contravariant vector (directly physically observable vectors):

You might review a position in Space, or even in R2.
If in the plane, suppose you draw a vector (in whatever direction). Now you rotate the x- and y-axes
counterclockwise by, say, 60 degrees. To view the vector from this new perspective, you must
rotate the vector components the other way, by the same amount.
Take notice that I say: view the vector from this new perspective.
It's a nice exercise to try to visualize that mentally.

If we have such a vector, say "B", then it's components are notated with upper indices, like Bi.

It was just an agreement, some time ago, to use lower- and upper indices that way.

1.3.4. Again a remark on the metric tensor:

If we now take a look at the metric tensor again (equation 1):

ds² = gμν dxμdxν

Then we have two contravariant vectors (indices) listed, namely dxμdxν.

Einstein notation is in use, so here we have a sum of all spatial components, which looks like:

ds² = dx² + dy² + dz², if we are in flat Euclidean R3.

If we now have curved space, or some construction is in effect which determines the curvature of Space,
then we need the gμν object too,
to describe the full metric (that is, to account for all factors which have an effect on the distance).

1.3.5. The equations 9, 10, 11 written slightly "differently":

Let's repeat equations 9 again:

x' = a11x + a12y + a13z
y' = a21x + a22y + a23z
z' = a31x + a32y + a33z    

We have the transformed coordinates (x', y', z'), and the original coordinates (x, y, z).

Let's for example focus for a moment on the equation for x'.
If we now take the partial differential with respect to x:

∂x'/∂x = ∂/∂x  (a11x + a12y + a13z) = a11.

It's quite an amazing result. It just returned the element a11. However, the derivative itself
should not arouse your amazement.
Even from high school math we have similar results. Suppose you have the linear function y = 3x + 5; then dy/dx = 3.

Obviously, similar results hold for all the coefficients aij in equations 9. Thus:

aij = ∂x'i / ∂xj       (equation 12)

Similarly, using general indices "i" and "j", then equation 10 (which we saw before):

x'i = Σ aij xj  

can be written as:

x'i = Σ (∂x'i / ∂xj) xj     (equation 13)

When using Einstein notation, the summation symbol Σ is omitted.
Now, we need to link this result to the understanding of covariant and contravariant objects (like a vector).

We will see that the aij as expressed in equation 12, is no more than the projection
of the vector x' on the axes of the coordinate system in use, or, in other words, the projection
of the vector x' on the set of basis vectors in use.

Take a look at the figure below. Here we see R2, and two sets of basis vectors.

The set {e1, e2} is the usual set of orthonormal basis vectors, (1,0) and (0,1).

We have a Linear Mapping "L", which rotates e1 and e2 into a new coordinate system.
This mapping rotates counterclockwise, over an angle ϕ.

Fig 2: Rotation of a set of basis vectors, producing the mapped basis vectors




So, we have the sets:

S1 = {e1, e2}   which is shown in "red" in the figure above.
S2 = {L(e1), L(e2)}   which is shown in "blue" in the figure above.

If you take a look at L(e1), and you project this vector on the original basis vectors,
then you can see that L(e1) = cos(ϕ) e1 + sin(ϕ) e2.
Here, I assume that you are comfortable with the cos() and sin() functions.

If you need some help with sin() and cos() then you might want to look here.

It is thus easy to find:

L(e1) = cos(ϕ) e1 + sin(ϕ) e2.
L(e2) = -sin(ϕ) e1 + cos(ϕ) e2     (equations 14).

By the way, the matrix associated with the mapping L is:

┌ cos(ϕ)  -sin(ϕ) ┐
└ sin(ϕ)   cos(ϕ) ┘

Remember, in coordinate transformations, if you know the images of the original
basis vectors, expressed in the original basis, you immediately know the columns of the Matrix.

Let's now see what happens to the usual coordinates:

┌ x1' ┐     ┌ cos(ϕ)  -sin(ϕ) ┐ ┌ x1 ┐
└ x2' ┘  =  └ sin(ϕ)   cos(ϕ) ┘ └ x2 ┘

This leads to:

x1' = cos(ϕ) x1 - sin(ϕ) x2
x2' = sin(ϕ) x1 + cos(ϕ) x2     (equations 15)

Let's apply equation 12 to the first equation:

∂x1' / ∂x1 = ∂/∂x1 (cos(ϕ) x1 - sin(ϕ) x2) = cos(ϕ)

It's simply the length of the projection of L(e1) on the (original) x-axis, or e1.

This was not really new. I only wanted to "associate" the coefficient "aij" of equations 12 and 13 with the partial derivative,
and show (by example) that this is indeed true.
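
As a numerical sketch of equations 14 and 15 (the angle and the test vectors are arbitrary choices of mine), we can check that the coefficient of x1 in x1' is indeed cos(ϕ), and that a rotation preserves the length of a vector:

```python
# Numerical sketch of equations 14/15: rotate coordinates by an angle phi,
# check via a finite difference that ∂x1'/∂x1 = cos(phi), and check that
# the rotation preserves the length of a vector.
import math

phi = math.radians(60)

def rotate(x1, x2):
    """Apply equations 15 for the chosen angle phi."""
    x1p = math.cos(phi) * x1 - math.sin(phi) * x2
    x2p = math.sin(phi) * x1 + math.cos(phi) * x2
    return x1p, x2p

# Finite-difference estimate of the partial derivative ∂x1'/∂x1:
h = 1e-6
d = (rotate(1.0 + h, 2.0)[0] - rotate(1.0, 2.0)[0]) / h
print(abs(d - math.cos(phi)) < 1e-4)   # True: it equals cos(phi)

# Length is preserved under rotation:
x1p, x2p = rotate(3.0, 4.0)
print(round(math.hypot(x1p, x2p), 6))  # 5.0, same length as (3, 4)
```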

By the way, the difference between covariant and contravariant indices, written as coefficients, and thus
also in derivatives, was not shown here.
A good example would be to fully write down an example in polar coordinates, and then compare that with
an example in Cartesian coordinates.

All of the above is not sufficient to prove the statements below. However, I hope that you find
the two statements below quite plausible. That would really be enough to follow this note.

Definitions 1:

A vector A, or tensor of the first rank, is called contravariant if all of its components
transform (under rotation) as:

A'i = Σ (∂x'i / ∂xj) Aj     (equation 16)

A vector A, or tensor of the first rank, is called covariant if all of its components
transform (under rotation) as:

A'i = Σ (∂xj / ∂x'i) Aj     (equation 17)

Note that in many general discussions about vectors, you might write upper or lower indices for vector components.
However, strictly, the use of lower and upper indices is reserved for co- and contravariant vectors.

But if the distinction does not matter at all, you may sometimes see the usage of upper- or lower indices,
in various different textbooks or articles.
In Cartesian coordinates, all indices may be written as lower indices.

1.4 A few words on tensor operations:

You may call a simple number, a scalar (using 0 indices), a tensor of rank "0".

You may call a vector (using 1 index), as long as it adheres to one of the "definitions 1" above, a tensor of rank "1".

A tensor of rank 2 (using two indices) is an nxm matrix. However, not every nxm matrix is a tensor.
Indeed, similar to definitions 1, they need to conform to specific transformation rules.

A tensor of rank 3 (using three indices) looks like a cubic (nxmxl) matrix. However, not every nxmxl matrix is a tensor.
Indeed, similar to definitions 1, they need to conform to specific transformation rules.

The requirement that the object needs to adhere to specific transformation rules is not much more
than saying that their components (in a certain basis) are linear expressions in another basis, and thus
physically and mathematically consistent. And thus they are "meaningful".

Here are two examples of tensors of the second rank:

Example 1:

┌ 1 0 0 ┐
│ 0 1 0 │
└ 0 0 1 ┘

This tensor is the "metric tensor" of three dimensional flat Space. Yes, this is a very "unspicy" example.
However, it is a tensor.

Example 2:

┌ -xy  -y² ┐
└  x²   xy ┘

Is this a tensor of the second rank? I do not know by just looking at it. Indeed, not every matrix is a tensor.
However, this example is a tensor. You can find out by painstakingly investigating whether all 4 components comply
with the "transformation rules" for tensors of rank 2. These are quite similar to what we have seen in "definitions 1",
which hold for vectors.

You do not need to remember the "stuff" below, but the transformation rules for tensors of the second rank are:

Contravariant:

A'ij = Σ Σ (∂x'i / ∂xk) (∂x'j / ∂xl) Akl     (equation 18)

Covariant:

C'ij = Σ Σ (∂xk / ∂x'i) (∂xl / ∂x'j) Ckl     (equation 19)

(The summations go over "k" and "l").

There is also a "mixed" variant, with upper and lower indices, but I will skip that one.
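As a small hedged illustration (just a sketch; the rotation angle and the point below are arbitrary choices of mine), we can check numerically that the matrix of example 2 indeed obeys the contravariant rule of equation 18 under a rotation of the coordinates:

```python
import math

# Sketch: use a rotation by an arbitrary angle a as the coordinate transformation.
# A rank-2 contravariant tensor must obey equation 18:
#   T'^ij = sum_k sum_l (dx'^i/dx^k)(dx'^j/dx^l) T^kl
# For a rotation, the partial derivatives dx'^i/dx^k are simply the entries
# of the rotation matrix R.

def T(x, y):
    # the candidate tensor of example 2, as functions of the coordinates
    return [[-x*y, -y*y], [x*x, x*y]]

def transform(T_old, R):
    # apply the rank-2 contravariant transformation rule (equation 18)
    return [[sum(R[i][k] * R[j][l] * T_old[k][l]
                 for k in range(2) for l in range(2))
             for j in range(2)] for i in range(2)]

a = 0.73                               # arbitrary rotation angle
c, s = math.cos(a), math.sin(a)
R = [[c, s], [-s, c]]                  # x' = c*x + s*y ; y' = -s*x + c*y

x, y = 1.5, -0.4                       # arbitrary point
xp, yp = c*x + s*y, -s*x + c*y         # the same point in rotated coordinates

lhs = transform(T(x, y), R)            # components via the tensor rule
rhs = T(xp, yp)                        # the same functional form in (x', y')

for i in range(2):
    for j in range(2):
        assert abs(lhs[i][j] - rhs[i][j]) < 1e-12
print("example 2 transforms as a rank-2 tensor under rotations")
```

In other words: the transformed components are again -x'y', -y'², x'², x'y' in the primed coordinates, which is exactly what "being a tensor" requires.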

There also exist tensors with a rank higher than two, for example one using 3 indices,
which is then a tensor of rank 3 (often represented by a cubic matrix).
Mathematically, the rank can even be higher than 3, like for example a tensor of rank 5: T^ijklm.

A tensor used in physics, should have a clear purpose. Suppose you place a CO2 molecule
in some ElectroMagnetic field. The various charges inside the molecule, will all respond to the field,
and the "form" of the dipole moment can be quite complex. You cannot describe it with a vector (as you
would do, for example, for the position or velocity of a particle).

To get the induced dipole moment right, you probably need a rank 2 tensor (an nxn matrix, the "polarizability"),
describing the response in the various directions.

A tensor, which can be described by a matrix which shows only constant numbers, is probably less interesting.
The "something" it tries to describe, is terribly constant here.
On the other hand, if the elements are functions of the coordinates, it describes something very useful.

Further, some important operations are possible with tensors, like "inner multiplication",
or "contraction", "tensor product" and others. It's not important for this note.
When it is important at a certain occasion, we will deal with that at that particular time.

1.5 A few words on gauge symmetry / gauge invariance:

Above we have seen some theory around the "metric" and related stuff.

Although not quite formulated that way in the sections above, you have seen some transformations, which are also needed
to verify, or validate, our formulations. That is, if you, for example, rotate your lab, the laws of physics,
or how you describe a physical system using scalars, vectors or tensors, should not fundamentally change.
The same should hold for other Linear mappings (transformations), not only rotations.
It's possible that some elements in vectors and tensors change, but the description as a whole, must be
the same.

This section is rather similar, but from a different perspective, I think.

The core idea of "Gauge symmetry" is maybe hard to capture. Here are a few examples.
I think example 6 is the best one, while the others may be seen as a road to example 6.

Example 1:

Suppose you measure the acceleration of a charged particle through a potential difference between two plates,
where one plate is on a potential of 1050V, and the other plate is on a potential of 1000V.

Now, you repeat this experiment with a potential difference of two plates, where one is now 50V and the other 0V.
There is no difference in the observed acceleration and final velocity, in both experiments.

So, the acceleration and final velocity of that particle (the "observables"), are invariant under
a potential transformation (the addition of a constant V), as long as ΔV remains the same.

The example above is simple, but already very slightly "borders" on the core of gauge invariance.
Most people would say that this example is not truly describing gauge invariance.
And indeed, it is not. But the idea presented above is useful for what's in the next examples.

Example 2:

Here is another well-known example from QM.

Suppose you have a wavefunction description from Quantum Mechanics, denoted by ψ.
Introducing a phase shift factor as in:

ψ' = e^iθ ψ

then ψ' does not fundamentally change the observable that ψ represents.

ψ is a superposition of submode waves, which all go through the same shift, thereby not changing
the Quantum probabilistic behaviour.

Also, using a QM description of an observable, either using "ket" notation, or wavefunction notation,
two vectors that only differ by an overall phase will describe the exact same state.

Note: you might argue that in some physical setup, described by a differential equation with only one wave mode,
using exactly specified "initial conditions", the results found by using ψ or e^iθ ψ will be different.

This is indeed true. But still the fundamentals of the solutions are not really different except for a phase shift.
For QM, probabilities of eigenfunctions (subfunctions in the sum), are the entities which count.
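As a tiny numeric sketch (the amplitude and phase below are arbitrary values of mine, purely for illustration), the QM probability |ψ|² is indeed untouched by an overall phase factor:

```python
import cmath

# An overall phase factor e^{i*theta} does not change the probability |psi|^2.
psi = 0.6 + 0.8j                     # some arbitrary amplitude, |psi|^2 = 1
theta = 1.234                        # some arbitrary phase
psi_shifted = cmath.exp(1j * theta) * psi

# the modulus squared (the QM probability) is identical for both
assert abs(abs(psi)**2 - abs(psi_shifted)**2) < 1e-12
print(abs(psi)**2, abs(psi_shifted)**2)
```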

However, such an example also calls for what physicists call a "global symmetry" (like the example above), and a much
stronger one, called "local symmetry".
If namely that phase factor is defined for all (local) points in SpaceTime (x,t), where the observable (which ψ represents)
"is found", and that factor works in such a way that the observable does not change, then we have found
a deeper symmetry. In effect:

ψ'(x,t) = e^iθ(x,t) ψ(x,t).

However, if we study the evolution of ψ(x,t) or e^iθ(x,t) ψ(x,t), using the time dependent
Schrodinger equation, we need to analyse the situation somewhat further. It turns out that we need to add a
"transformation function" f(r,t) to the potential energy.

We are still not very close to gauge invariance. However, some concepts in this example, like the preservation
of the "observables", are important.

Example 3:

A "coordinate transformation" (as we have seen in the sections above), should keep physics the same,
in those coordinate systems. We have seen how vectors or tensors "transform", when described from
another coordinate system.

Some folks might say that this is not equal to (or an example of) "gauge invariance".
However, it's related to it, I think. Actually, it can be maintained that it is
an example of "gauge invariance". However, not all forms of "gauge invariance" can be viewed as
just simply some sort of coordinate transformation. So, in general they are not the same.

Example 4:

The classical intro to "gauge invariance" is by using classical ElectroDynamics.

Finally, we see an example of "gauge transformations" and "gauge invariance".

In the early 1920s, at least Fock saw (and indeed Lorentz and others as well),
that there exists a certain mathematical freedom in choosing the electromagnetic potentials in classical electrodynamics.

The electric field and magnetic field, are in principle determined (in a field perspective of the theory),
by a vector potential "A", and a scalar potential "φ".

There exists a function "χ" such that if we transform "A" and "φ" according to:

A' = A + ∇ χ

φ' = φ - 1/c ∂χ / ∂t

then the observable electric- and magnetic fields stay invariant.

It turns out that the Maxwell equations of classical electromagnetism for the electric and magnetic fields are invariant
under the transformations as listed above.

It may seem amazing that it works this way. But the mathematical entities which are in the arena of some theory
but are not the "observables" themselves, may undergo a "gauge transformation", while the basic theory itself
then often is called "gauge invariant".
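To make example 4 a bit more tangible, here is a hedged numerical sketch in one spatial dimension (the potentials φ, A and the gauge function χ below are arbitrary choices of mine, not from any textbook). The observable field E = -∂φ/∂x - (1/c) ∂A/∂t should not change under the transformation above:

```python
import math

c = 1.0      # units chosen such that c = 1
h = 1e-4     # finite-difference step

phi = lambda x, t: math.sin(x) * math.exp(-t)   # arbitrary scalar potential
A   = lambda x, t: math.cos(2*x) + 0.5*t        # arbitrary vector potential (1D)
chi = lambda x, t: x*x * math.sin(t)            # arbitrary gauge function

def ddx(f, x, t): return (f(x + h, t) - f(x - h, t)) / (2*h)
def ddt(f, x, t): return (f(x, t + h) - f(x, t - h)) / (2*h)

def E(phi_f, A_f, x, t):
    # the observable electric field in 1D: E = -dphi/dx - (1/c) dA/dt
    return -ddx(phi_f, x, t) - ddt(A_f, x, t) / c

# the gauge-transformed potentials: A' = A + dchi/dx, phi' = phi - (1/c) dchi/dt
phi_g = lambda x, t: phi(x, t) - ddt(chi, x, t) / c
A_g   = lambda x, t: A(x, t) + ddx(chi, x, t)

x0, t0 = 0.8, 0.3
assert abs(E(phi, A, x0, t0) - E(phi_g, A_g, x0, t0)) < 1e-6
print("E is unchanged by the gauge transformation")
```

The cancellation works because the mixed partial derivatives ∂²χ/∂x∂t and ∂²χ/∂t∂x are equal, whatever χ we pick.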

Example 5:

If you further analyze example 4, you still may wonder why you may "add" such a function as χ(r,t)
to those potentials.

-Mathematically it can be done, and if the end result delivers you a consistent formulation
(possibly even solving problems in the former formulation), then it's really "ok" to do so.

-The concept has been proven to be very successful in e.g. the formulation of the strong- and weak (nuclear) forces.

-As example 4 has touched upon, the EM Theory of Maxwell is invariant for a transformation
of the potentials, which governs the Electric- and Magnetic fields.

-The probabilities of a Quantum System are defined by the modulus of the wavefunction, which is not affected
by a phase factor. If that phase factor includes dependencies on space and time (r,t), and it is indeed so
that this phase factor has no physical significance, since the Quantum probabilities remain the same,
we may indeed speak of a "local gauge invariance".

-The observables themselves should not be modified; it is the other entities (like potentials) in the arena to which
the "gauge functions" are to be applied. This must result in the same observables.

There are many "analogues" from other sorts of observations.

Let's consider experiment "X".

Two particles, after following some path in SpaceTime, will interact at location r = r1 and t = t1,
resulting in certain observables.

The experiment is repeated, but this time with the application of a transformation χ(r,t) to some
entity in its environment, for each particle. The particles follow different paths,
but will interact again at location r = r1 and t = t1.

If again the same observables are detected, then χ(r,t) is probably indeed a "gauge transformation".
Thus, when exactly χ(r1,t1) holds (thus when r = r1 and t = t1),
then χ(r,t) either cancels out for the combined particle system, or there must exist a sort of
continuous symmetry in the description of the system.

Example 6:

Review example 1 again. In general, transforming the scalar potential (φ) and vector potential (A),
does not change the E (electric) and B (magnetic) fields. These latter fields
are our observable fields.
Ofcourse, there are certain rules for such transformation of φ and/or A. See example 4.

It's remarkable that φ and the vector potential A can be changed, without affecting E and B.
Changing φ and A (according to certain rules), is called a "gauge transformation".
It turns out that E and B, are thus "gauge invariant".

However, in some sense, a staggering result is shown (or likely to be true) in the Aharonov-Bohm experiment.

There is no need to show any details here, of such experimental setup.

In short, the experiment allows that interference of electron waves can be detected, and the gauge fields
φ and A, can be transformed (changed).

When there are no E and/or B fields, the interference pattern looks exactly as expected from waves.
However, remarkable effects occur if in another region φ and A are altered.

A fundamental question namely, is: what is more fundamental? The fields E and B, or the
scalar potential (φ) and vector potential (A)?
Again, multiple interpretations live among physicists. However, the dominant view is that φ and A
are more fundamental.

First, (the observables), namely the fields E and B are mathematically derived from φ and A.
This can be found from classical electrodynamics.

Secondly, one result from the Aharonov-Bohm experiment is that the phase of a moving charge can be changed by regions
of non-zero φ and A "elsewhere", even if the external E and B are both zero at or near the electron.

It's amazing. Here we have for example an electron, with a Quantum Mechanical description as a wave (which is
a valid thing to do). In that region, the electric field and magnetic field are "0".
However, in a box in another region the φ and A can be transformed. Care has been taken that
it cannot influence any E and B outside the box.
Still, the electron wave is influenced, by having a phase shift.

One design goal of the experiment, was to study the interaction of an electron wave with electromagnetic fields (E and B).
The remarkable effect described above, indeed suggests that φ and A are actually more fundamental
than the electromagnetic fields. However, some folks also say that the effect is a demonstration of the non-locality
of the electromagnetic fields.

However, many folks nowadays reason along the following lines. The potentials thus must be extended to "gauge fields",
and must be regarded as the most fundamental physical quantities.
Indeed, the observable E and B fields are thus less fundamental, but should be regarded as "gauge invariant".

It's another way to look at physical events. But a "structure" like the famous "Standard Model" in physics,
is actually largely built with the sort of vision as described above.

This is only a simple introduction. No doubt that we must return to this sort of vision, later in this note.

As a side note: formerly it was usually assumed that the potentials were just some mathematical "constructs" in the theory,
in order to arrive at a description of the true physical observables (the ElectroMagnetic fields E and B).
According to the findings above, it turns out that the potentials have more physical "substance"
than was formerly assumed.

Chapter 2. SpaceTime in Relativity.

The considerations in this section are still quite conservative in character.

However, we will also touch on a subject like "Lorentz violation" which is a very interesting field indeed.

Einstein produced two magnificent theories in the early 1900's: the "Theory of Special Relativity" (1905),
and the "Theory of General Relativity" (1916), often abbreviated as "STR" and "GTR".

Both theories are highly involved in discovering the properties of SpaceTime.
Ofcourse, both theories are absolutely monumental! I can only distill a few points from those theories,
which is what I am going to do here.

2.1 The common (Regular) 3D coordinate system:

Ofcourse we can visualize a three Dimensional Cartesian coordinate system, using an x-axis, y-axis, and z-axis, all
perpendicular to each other. Nothing special here. This is highschool math. In that "3D space",
points can be described by (x,y,z), where x, y, and z can take on any value.
The x, y, and z, are "spatial" meaning that they are also involved in something called a "metric",
which you often can relate to the fact that you are able to define a distance between points.

For example, between the points (x1,y1,z1) and (x2,y2,z2),
you can draw a linesegment, which also means that we can speak of the distance "d" between those two points.

Simply using the Pythagorean theorem, the distance "d" squared is:

d² = (x2 - x1)² + (y2 - y1)² + (z2 - z1)²

or, written with infinitesimal displacements:

ds² = dx² + dy² + dz²

By the way, in math nothing prevents you from using e.g. a 6 dimensional space, where points might be described
as a 6 tuple (in general an n-tuple) like (x1,y1,z1,u1,v1,w1).

As a matter of fact, we already have seen this in Chapter 1.

2.2 4D SpaceTime (Minkowski SpaceTime):

3D space and time, together form a 4D SpaceTime. But how to really define it in terms of, say, "points"
in such a space, just like we did above? First, I must say that 4D SpaceTime, is not just like adding
one extra spatial dimension to 3D Space (like if you would go from 2D space (x,y), simply by adding the z-axis,
in order to arrive to 3D Space).
No, it must have a time "t" related dimension. But, if we would simply use (x,y,z,t), then we would not be able
to get a "metric" as we saw above (like a distance "d" between points).

Now, in order to let the fourth coordinate relate to a spatial dimension, we can use "ct", where "c" is the
universally constant speed of light. A simple illustration might help: you know that if you bike 10 m/s,
then after 5 seconds, you have covered 5 x 10 = 50m. So with a constant speed, distance = speed x time.
In order to correctly identify points (actually "events", more on that later), we thus might use:

(x,y,z,ct) or, which is not much different, (ct,x,y,z), which you may also find in the literature.
The "order" is just simply a matter of convention. Let's use (ct,x,y,z).

In such a case, we are able to use a metric (like a "distance") between points in such SpaceTime:

ds² = -c²dt² + dx² + dy² + dz²

Note: there is a little issue with the story above. In the true metric "ds" (distance), the c²dt²
term must indeed be negative. (Or at least: most physicists follow that convention.)

This was (probably) first due to an observation of Poincare: we must use "ict" instead of "ct", where "i"
is the imaginary unit from Complex numbers. If we square "i", we get "-1".
So, following Poincare, our coordinates are (ict,x,y,z). However, coordinates in Minkowski SpaceTime
are more often written as (ct,x,y,z), and sometimes even as simply (t,x,y,z) (normalized in standard units).

It's important to understand that 4D Minkowski SpaceTime is really supposed to be different from just a 4D Euclidean Space.

Poincare argued that it's more fundamental to view a so-called "Lorentz transformation" as a rotation
of coordinates, which indeed works if "ict" is used (later more on this).
Today, it is not regarded as simply "make-up", but it is recognized that it does not directly impact
the most important purpose of Minkowski SpaceTime, namely to "relate" events in 4D SpaceTime.
By the way, you will see that the "minus" sign will be transferred to the matrix anyway.

The argumentation presented above on the "minus" sign, is not fully correct right now, and this will
be corrected somewhat later.

In order to "arrive" at a matrix equation, just as shown in section 1.1 on 3D Euclidean Space, let's try the following.
If needed, just take a look at section 1.1 again.

ds² = (ct x y z) ┌ -1 0 0 0 ┐ ┌ ct ┐
                 │  0 1 0 0 │ │  x │
                 │  0 0 1 0 │ │  y │
                 └  0 0 0 1 ┘ └  z ┘

    = (ct x y z) ┌ -ct ┐
                 │  x  │
                 │  y  │
                 └  z  ┘

    = -c²t² + x² + y² + z²

Again, we have a good "resemblance" to the form with the fundamental tensor, the metric tensor:

ds² = g_μν dx^μ dx^ν

Here, g_μν then corresponds to the matrix as shown above.
Since the matrix only contains non-zero diagonal elements, the metric is also often portrayed as "-+++".

Don't forget that Minkowski SpaceTime, as used in Special Relativity, is "flat".
You should be able to see that too from the matrix itself. It only has non-zero diagonal elements.
This principle also was explained in section 1.1.
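The matrix computation above can also be spelled out numerically. A small sketch (the event vector below is an arbitrary example of mine):

```python
# ds^2 = v . eta . v for the event vector v = (ct, x, y, z),
# with eta = diag(-1, 1, 1, 1), the flat Minkowski metric.
def interval(ct, x, y, z):
    eta = [[-1, 0, 0, 0],
           [ 0, 1, 0, 0],
           [ 0, 0, 1, 0],
           [ 0, 0, 0, 1]]
    v = [ct, x, y, z]
    return sum(v[i] * eta[i][j] * v[j] for i in range(4) for j in range(4))

# example: ct = 2, x = y = z = 1  ->  -4 + 1 + 1 + 1 = -1
print(interval(2.0, 1.0, 1.0, 1.0))  # -1.0
```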

Next, we will explore some main highlights of Special Relativity.

2.3 A few highlights of STR:

STR is mainly involved in "frames of reference" (coordinate systems) which move with a uniform or constant speed
with respect to each other.

Guided by reasonable assumptions, Einstein reasoned that:

-The laws of Physics should hold everywhere. The laws of physics are the same in any frame of reference.
-There is no preferred "direction" in true space. Or: Space itself is homogeneous and isotropic.

-And, what seems to be quite extraordinary: the speed of light (denoted by c) is constant, independent
of any frame of reference.

The last one is not so trivial. On a human scale, we know that if you are in a train, which
moves with a speed of 100 km/h with respect to the ground, and you are inside the train and shoot
an arrow with 100 km/h in the direction of movement, the speed of the arrow with respect to the ground
is 200 km/h. Likewise, if you drive a car with 70 km/h, and someone overtakes you with 72 km/h,
then for you the relative speed of the other car is only 2 km/h.

It's always simply a direct (vector) addition/subtraction of speeds (velocities).

Fig 3: Illustration of 2 frames of reference S and S', moving with constant speed.



Figure 3 illustrates this. An observer in S might think that he is stationary. Frame S' goes by,
with a speed of 20 m/s in the +x direction, relative to frame S.
Ofcourse, an observer in S' might think that it is he who is stationary, and that it is frame S
which moves in the -x direction with a speed of 20 m/s.
Let's return to the observer in S. If the observer in S' shoots an arrow with 30 m/s in the +x
direction (relative to S'), then the observer in S measures the speed of that arrow to be 50 m/s.

If you would replace the arrow, with any form of electromagnetic radiation, like
radiowave, radar, light etc.., then all observers, no matter which frame of reference, would
measure the same constant speed, namely the speed of light, which is universally constant.
This is highly remarkable, and will have profound implications to the structure of SpaceTime,
as seen by different observers in different frames of reference (different in the sense of speed
in some direction, like the x-axis).

In many articles, the speed of light ("c") is a central theme. However, visible light is just one of
the infinite manifestations of ElectroMagnetic (EM) radiation, which has an infinite spectrum
of frequencies (and energies).

So if you are in S', travelling with 30% of "c" with respect to S, and you turn on a laser pointing in the +x direction,
then observers in S and S' still will only measure the same constant speed of light (denoted by "c").

The following is not an adequate solution to the riddle of the constant speed of light.
There is a relation that couples "c" to 2 fundamental electrical and magnetic constants of the Vacuum, namely
ε0 (the "vacuum permittivity", or "permittivity of free space") and μ0 (the "vacuum permeability").
These constants say "something" about the capability/ability of the vacuum to permit electric- and magnetic fields.

c² = 1 / (ε0 μ0)

Viewed this way, and assuming ε0 and μ0 are constant throughout the Vacuum,
then c is constant too. Again, this is not adequate as a full explanation as to why c is constant
in all frames of reference.
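A quick numeric check of that relation (a sketch; the decimal values for ε0 and μ0 below are the CODATA values as I remember them, so treat them as approximate):

```python
import math

eps0 = 8.8541878128e-12   # vacuum permittivity, in F/m (approximate CODATA value)
mu0  = 1.25663706212e-6   # vacuum permeability, in H/m (approximate CODATA value)

# the relation c^2 = 1 / (eps0 * mu0)
c = 1.0 / math.sqrt(eps0 * mu0)
print(c)                  # approximately 2.99792458e8 m/s

assert abs(c - 299792458.0) < 100.0
```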

Let's go to the findings of Einstein in STR.

Suppose we have frames S and S' again. Suppose you are in S, which you think is stationary.
Ofcourse, you can specify Space coordinates in your frame, using (x,y,z). The time in your frame
is denoted by "t". While the x, y and z can vary ofcourse, you assume that t is the same throughout
your frame of reference. That's no more than a valid assumption.
Suppose you are located in the Origin of frame S, that is (0,0,0).

However, frame S' moves with speed "v" towards the +x direction, with respect to the (stationary) frame S.
An observer in S', uses the Spatial coordinates (x',y',z'), and time t'.

From a classical point of view, the times t and t' are exactly equal. This is also in correspondence
with all experiences in human life. The time in a plane is exactly equal to the time on the ground.
This is not exactly so in STR. However, the discrepancies will get clearer as v gets closer to c.
For now, we denote the time in S as t, and the time in S' by t' (although classically, they should be equal).

Classically, an observer in S' would say that the coordinates of S and S' relate in the following way:

x' = x - vt
y' = y
z' = z
t' = t

Since the relative movement of S and S' is only along the x-axis, it follows that y=y', and z=z'.

The set of equations above, is often referred to as a "Galilean Transformation".

Einstein further reasoned in the following way. If a light explosion would take place, then the spherical
wavefront would be seen as equal by all observers in any moving frame of reference.
It means for our observers in S and S', that:

Spherical wavefront described from S:

x² + y² + z² = (ct)² = c²t²

We can describe the spherical wavefront from the perspective of S' too. Then it will be:

Spherical wavefront described from S':

x'² + y'² + z'² = (ct')² = c²t'²

Both equations describe the same "distance" in Minkowski SpaceTime.

Thus:

x'² + y'² + z'² - c²t'² = d

x² + y² + z² - c²t² = d

But S' is moving in the +x direction only (as viewed from S). There is no reason
to expect "any effect" along the y and z directions. Sure, as you will see in a few minutes,
in the dimension in which we indeed have a "speed" ("x"), we will see a large effect.
But in the transverse directions, thus in this case the directions "y" and "z", there is no effect at all.
It's still reasonable to say that:

y' = y
z' = z

The distances in Minkowski spacetime as shown above, then reduce to:

x² = c²t²     (1)

x'² = c²t'²     (2)

Thus:

c²t'² - x'² = c²t² - x²     (3)

This is still the metric as we should use in Minkowski SpaceTime, but we were able to eliminate
the "y" and "z" coordinates.
Since (1) and (2), are the same distance in Minkowski SpaceTime, we were able to write down (3).

These equations can be solved, that is, express x' in terms of x and t, and express t' in terms
of x and t.

The math is not too hard, but a little too spacious to write down here.
You can take a look at one of my earlier notes, which says a little more on STR,
and indeed shows the derivation of the solutions.

If you are interested, then you might want to take a look at this note.

Below you will see the solutions for x' and t'. These are the famous "length contraction",
and "time dilation". It starts to "live" if you really see an example. That will be done below.
For now, let's first present the solutions for x' and t':
____________________________________________

x' = (x - vt) / √(1 - v²/c²)

y' = y

z' = z

t' = (t - (v/c²)·x) / √(1 - v²/c²)

____________________________________________

Do you see that, for example, t' is dependent on the speed "v" of S'?
From a classical viewpoint, that's absurd. However, from the deductions of Einstein,
it's really true. It simply means that the clocks in S and S' run at different rates.
An observer in S, will see that the clock in S' runs slower.
When you see a simple example, these conclusions will start to "live".

The solutions of Einstein, as presented above, simply were possible by postulating
that "c" is constant in any frame of reference, which already is "unclassical" by itself.

If we want, we can simplify the equations above, if we use the "gamma factor" γ, which is:

γ = 1 / √(1 - v²/c²)

In many articles, this γ is called "the Lorentz factor".

Since the γ factor is common among the transformation equations,
we may also write (for v along the x-direction):

____________________________________________

x' = γ (x - vt)

y' = y

z' = z

t' = γ (t - (v/c²)·x)

____________________________________________


The equations above, are called the "Lorentz Transformations" (for "v" along the x-direction).
Note that the "γ factor", to a high degree, determines the relativistic effect here.

Take a look at the first equation for x'. Note that if v is very low, then √(1-v²/c²) is practically "1".
Thus the equations converge to the Galilean Transformations for low speeds.
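The transformations are easy to play with numerically. Here is a sketch (units chosen with c = 1; the event coordinates and the speed below are arbitrary choices of mine), which also checks the invariance of c²t² - x² that we started from:

```python
import math

def lorentz(t, x, v):
    # Lorentz transformation for a boost with speed v along x, in units c = 1:
    #   t' = gamma * (t - v*x),  x' = gamma * (x - v*t)
    gamma = 1.0 / math.sqrt(1.0 - v*v)
    return gamma * (t - v*x), gamma * (x - v*t)

t, x, v = 5.0, 2.0, 0.6      # an arbitrary event, and an arbitrary boost speed
tp, xp = lorentz(t, x, v)

# the interval t^2 - x^2 must be the same in S and S' (see section 2.4)
assert abs((t*t - x*x) - (tp*tp - xp*xp)) < 1e-9
print(tp, xp)                # approximately 4.75 and -1.25
```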

Examples:

Example 1:

Suppose in S, we have a marked segment L0 = 1m, as a segment along the x-axis.
Suppose further, that frame S' is at rest too, just as S is, and they perfectly coincide.
In S', we have the same marked segment L', thus it has a length of 1m too. L0 and L' coincide too.

Now, suppose that "suddenly", S' moves with a constant speed of 0.7c along the +x direction.

What does the stationary observer in S measure for L', when S' moves with that speed?

Answer:

L' = √(1 - (0.7c)²/c²) · L0 = √(1 - 0.49) × 1 ≈ 0.714 m

So, according to the observer in S, L' has shrunk. In other words, the spatial dimension
along the direction of movement seems to be contracted.

Note that in this example, the speed "v" was extremely large. It's 70% of the speed of light,
which is extremely fast indeed. True relativistic phenomena will show up better if the speed
of the moving frame of reference is significantly above 0.1 c.

Example 2:

Suppose in S, we have L0 = 1m.
When S' is at rest, we have the same length L' = 1m.

Now S' moves along the x direction with (only) 1000 km/s, which is about 0.003c.

What length does a stationary observer in S measure for L', when S' moves with that speed?

Answer:

L' = √(1 - (0.003c)²/c²) · L0 = (practically) √1 × 1 = 1 m

With low speeds, say below 0.01 c, relativistic phenomena are hardly observed.
That's why classical Newtonian mechanics works great with speeds that are only small fractions of "c".

Indeed, with speeds below 0.01 c (where c is about 300000 km/s), the "world" looks fully classical again,
and that's why, on a human scale, classical Mechanics still works fine.
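Examples 1 and 2 can be put into one small sketch (rest length and speeds taken from the examples above):

```python
import math

def contracted_length(L0, beta):
    # length measured in S for a rod of rest length L0, moving with
    # speed v = beta * c along the direction of the rod
    return L0 * math.sqrt(1.0 - beta*beta)

print(contracted_length(1.0, 0.7))    # example 1: about 0.714 m
print(contracted_length(1.0, 0.003))  # example 2: practically 1 m
```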

"Length contraction", and "time dilation", have been experimentally confirmed at incredible precision.
For example, a clock on a satellite runs a bit slower, exactly as predicted by the theory.
As another example, the lifetime of some elementary particles is longer when they move with high velocity,
compared to Lab conditions.

This seems like a strange "flexibility" of Space. However, in SpaceTime (x,y,z,ct), it follows naturally
if the speed of light "c" is constant in any moving frame of reference.

Ofcourse, the material above, evidently, only represents just a tiny glimpse on "The Theory of Special Relativity".

2.4 The essential meaning: Lorentz symmetry and SpaceTime distance:

Above, we already have seen an example of the Lorentz metric (distance) in Minkowski SpaceTime:

ds² = -c²dt² + dx² + dy² + dz²

The minus sign in "-c²dt²" was not explained well above, but I can tell you
that the "extra" coordinate "ct" in fact should be "ict" (Henri Poincare, 1905), where "i" is the imaginary unit
from Complex number theory. If you square that, it will give rise to the "-" sign.
I don't think that the very details are very important to the discussion I like to present.

To let the equation above resemble more a "distance", or interval "Δs", we can rewrite it like:

Δs² = Δx² + Δy² + Δz² - c²Δt²

where Δ is a universally accepted symbol for a "small part", instead of infinitesimal qualifiers.

The equation means that the Lorentz distance (or Minkowski distance) between two "events" in SpaceTime
is invariant: it is the same for all (inertial) observers.

Since we speak of SpaceTime (Space, Time), points are better qualified as events (physical events),
that may take place, one later than the other. It's possible to connect these events by light.
Suppose one particle (particle 1) emits a γ photon, which may be absorbed by another
particle (particle 2) somewhere else in SpaceTime.

Since the distance in SpaceTime is invariant, you may sneakily contract a spatial component (say x), but then
the clock must run slower in order to keep the same distance between the two events.

Using that as a principle, and then applying some math, you will get the Lorentz transformations as
listed above.

You also might see that this framework enforces causality. It's not possible in this model that,
for example, particle 2 absorbs a γ photon before it was sent by particle 1 in the first place.

2.5 Lorentz violations:

In the above, we considered two frames of reference, S and S', where S' had a constant velocity
along the +x direction, relative to S.
Ofcourse, we could also have chosen a movement of S' along the -x direction, or along
the y-axis, or along the z-axis, or actually any direction in the coordinate system.

It would not have changed anything fundamentally.
The Lorentz transformations would still have the same format.

In STR, there is no preferred direction in SpaceTime, and no dependency on whatever coordinate
system is used. This is also called "Lorentz symmetry".

Note:

You will see later that physicists appreciate (or nearly demand) that a "concept" is rotationally invariant,
invariant under transformations, invariant under phase shifts, and invariant under a change of coordinate systems.
This holds all the more for "something" that might be called a "fundamental concept".


This symmetry, or "gauge invariance", is reflected in theories which are (sort of) written or re-written
using the Yang-Mills fundamentals (or ideas).

Some theoretical considerations....

-(1):

Is it really true that there exists no preferred "something" in SpaceTime?

Maybe there exists an extremely small bias towards some "direction", or energy potential
in the Vacuum, or "hidden" yet undetected field in the Vacuum, or even location in the Universe,
or even location in our own local Milky way etc.. etc..

It's difficult to say something truly useful on the above speculations.

But there are some anti-symmetrical things indeed.

If you would observe some special physical systems, with some particles having electrical charge and spin,
and "invert" the charges (so that + becomes -, and the other way around), or mirror them (in an actual mirror),
then sometimes surprising effects can be witnessed: violations of symmetry.

There are some fundamental forces in our world, like the Strong nuclear force, gravity etc..,
but something called the "weak interaction" displays, as many physicists believe, some un-symmetrical
"behaviour" indeed.

Contemplating on this sort of intel, together with the principles of STR, still has not resulted
in very clear statements.

-(2):

Einstein's STR uses a continuous, flat (not curved) 4D SpaceTime.

But what if the quantization of "Space" is true? Then, using the theory above, when a frame of reference
is (almost) infinitesimally close to "c", the quantization of Space must be "felt" in some way.
You can go very far in "length contraction", but what happens when you come so close to the scale
where Space quanta cannot be ignored anymore?

This sort of theoretical consideration has also led to the search for "Lorentz violations".
Many experiments have been performed, to a very high precision already, but no anomalies
have been detected yet.

The hope is that measurements of any possible violation might produce some insight
into which of the competing "Quantum Gravity" theories is best.

2.6 Again, the metric in Special Relativity:

We already have seen it before above. However, I have a few additional remarks.
Here is the metric again:

ds2 = ( ct x y z ) ┌ -1 0 0 0 ┐ ┌ ct ┐
                   │  0 1 0 0 │ │  x │
                   │  0 0 1 0 │ │  y │
                   └  0 0 0 1 ┘ └  z ┘

    = ( ct x y z ) ┌ -ct+0x+0y+0z ┐
                   │ 0ct+1x+0y+0z │
                   │ 0ct+0x+1y+0z │
                   └ 0ct+0x+0y+1z ┘

    = ( ct x y z ) ┌ -ct ┐
                   │   x │
                   │   y │
                   └   z ┘

    = -c2 t2 + x2 + y2 + z2  

The matrix above, is the metric tensor in flat Minkowski SpaceTime, as used in Special Relativity.
Do not forget that this matrix is sometimes denoted as "-+++" in various articles (as explained above).
Also, we know the origin of the "-1" in that matrix (also explained above).

You might sometimes see the element -c2 instead of -1, as the first element in the matrix.
As you can see from the equation above, it is indeed allowed to replace the element "ct" of both vectors
with simply "t", and thus "transfer" c2 to the first element of the matrix.

Lastly, if only the diagonal elements are non-zero, as is true in the metric above, a metric is also often
portrayed as "diag(list of diagonal elements)", which in STR then would be:

diag(-1, 1, 1, 1).

As you see, this is an easy way to describe a matrix whose only non-zero elements are on the diagonal.
In articles about Special Relativity, the notations "-+++", or "diag(-1, 1, 1, 1)", are often used.
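A tiny numerical check (Python with numpy; the event coordinates below are arbitrary, hypothetical numbers of mine) shows that the two conventions just mentioned, diag(-1, 1, 1, 1) acting on (ct, x, y, z), and diag(-c2, 1, 1, 1) acting on (t, x, y, z), return exactly the same ds2:

```python
import numpy as np

c = 3.0e8                              # speed of light (m/s), rounded
t, x, y, z = 2.0e-8, 4.0, 1.0, 2.0     # an arbitrary, hypothetical event

# Version 1: metric diag(-1, 1, 1, 1) acting on the vector (ct, x, y, z):
g1 = np.diag([-1.0, 1.0, 1.0, 1.0])
v1 = np.array([c * t, x, y, z])

# Version 2: the factor c2 "transferred" into the metric, diag(-c2, 1, 1, 1),
# now acting on (t, x, y, z):
g2 = np.diag([-c**2, 1.0, 1.0, 1.0])
v2 = np.array([t, x, y, z])

ds2_1 = v1 @ g1 @ v1
ds2_2 = v2 @ g2 @ v2
print(ds2_1, ds2_2)    # identical: both are -c2 t2 + x2 + y2 + z2
```

So moving c2 between the vectors and the matrix is purely a matter of bookkeeping.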

2.7 A few words on General Relativity:

General Relativity is too grand a Theory to discuss with any real depth in such a simple note as this one.
However, it is possible to distill a few main points.

General Description:

Einstein's Theory of General Relativity, is much more involved than Special Relativity.

One reason why it is called general, is because accelerated frames of reference are studied,
instead of "only" frames of reference moving with a uniform velocity.
In effect, all sorts of relative movements are considered.

One astounding finding was, that "gravity" is equivalent to acceleration.
The acceleration is then due to curved SpaceTime. Or, "gravity" is due to curved SpaceTime.

This was absolutely completely different from the common classical view before 1916,
where gravity is a Force, just like the electrical- or other known forces.

The "core" idea of GTR is Einstein's insight that SpaceTime is a
geometric object whose curvature is determined by the distribution of energy (or matter) "within" that region.
The curvature then determines how free objects will move in that curved SpaceTime.

Thus gravitational force is no longer a force in the classical Newtonian sense, but a mere
manifestation of the curvature of spacetime.


In a type of math which was later called "differential geometry", curvatures of spaces (manifolds)
were already explored by Gauss, Riemann, Christoffel, Cauchy, and too many other mathematicians to name here.
For some important theorems in that realm, we can go back to the years around 1850, or even earlier.
It is indeed incredible that this mathematical branch needed well over 100 years to develop into the mature
framework that is still intensely used by physicists today.

But Einstein too, relied heavily on "differential geometry" in the period he developed GR, from 1905 - 1916.

If you would consider some "manifold", like some 2D surface in 3D, it's possible to introduce
a tangent vectorfield "along" that surface, which describes the "rate of change" of how that surface
actually bends. It's a simple example, which hopefully you can visualize in Space.

An extension to a vectorfield, is a description using a tensor object. This mathematical object,
makes it possible to "express" more twists, in multiple directions, in any point.

An example of one of the field equations in GR:

A tensor is a very suitable mathematical object to capture the differences in twists and bends,
from a point to its neighbouring points.
It's therefore no wonder Einstein found a way to describe the curvature of SpaceTime, using
implementations of tensor objects.

This can be illustrated by one of his field equations, where Guv and Τuv are tensor objects:

Guv + guv Λ = 8 π Τuv

(where G=c=1, or geometrised/normalized units)

In the field equation above, the curvature of SpaceTime (Guv) is related to the mass-energy distribution
uv) which is present "in that neighbourhood".

It's absolutely remarkable, that this mathematical expression "links" mass-energy (or simply mass, or simply energy) to
curvature in SpaceTime.
It's a departure from classical Physics, where Gravity was considered to be a "force", just like
for example the Electric force.
But Einstein managed to link the curvature of SpaceTime, to mass-energy.

Now, if somehow it can be made plausible that a free object follows the curvature of SpaceTime,
then we are close to understanding how "mass / curved spacetime / path of an object",
all are connected by the Theory.

Why does a small free object follow the curvature in SpaceTime?

If you would think that it's a trivial question, then you must be a relative of Einstein.

If a particle is small, there is hardly any "feedback" into the "warped" SpaceTime, which itself is due
to some larger mass distribution "nearby".
So, a small test particle, "in some way", finds its path in curved SpaceTime. So, what is the path here?

If we would not consider a small object, then this object itself would significantly warp SpaceTime too,
which is covered by Einstein's GTR, but it's very complex.
It does not really have to be a small object, as long as it's small relative to the mass that curves SpaceTime
in the first place. It's a bit similar to Earth orbiting in the spacetime warped by the Sun.
The Sun is immensely more massive than Earth.

Short definition: In differential geometry, a "geodesic" is a generalization of the notion of a "straight line"
in "curved spaces".


Now, the question is thus equivalent to showing that:

The motion of a small test particle is completely determined by the bending of SpaceTime.

Some folks can prove it by using the equivalence of inertial mass and passive gravitational mass.
These two interpretations of "mass" have not been mentioned at all in this simple text.

Others can prove it by using the general equation of motion in curved SpaceTime.

It's not so very trivial. One idea is using the concept of parallel transport. You can consider a tangent vector
along the motion, or orthogonal to the motion. The motion is in curved SpaceTime, of course.
If the orientation of that vector does not change relative to the path of motion, then you stay on the geodesic.

If you are on a curved sphere (a surface) in R3, and you hold a stick exactly in front of you, and
you walk along a "great-circle" (a geodesic), the orientation of the stick (tangent vector) does not change.
So, if you go from the equator to the North Pole, and keep on going in a straight line (the great-circle), the
tangent vector does not change. However, if, while on the North Pole, you suddenly change direction, like turning left,
and then go back to the equator again, then there was a rather sudden disruption in the orientation
of the tangent vector. That does not correspond to the motion of a free particle moving in curved SpaceTime.

Relativity is a Theory using 4 dimensional SpaceTime:

Throughout section 1, it was hopefully clear that SpaceTime is 4-dimensional, which is reflected
for example in coordinates like (x,y,z,ct).

I like to stress that fact, since in section 4, Kaluza-Klein theory, which is a remarkable theory,
is an attempt to unify Einstein's GR, and the ElectroMagnetic (ElectroDynamics) Theory of Maxwell.
The arena where that seems to work, is a 5-dimensional SpaceTime, which is very remarkable.

The ideas in Kaluza-Klein, inspired many other Theories, even very modern ones.

However, Kaluza-Klein does not seem to fit well enough into, e.g., modern Yang-Mills concepts, and besides
that, Kaluza-Klein was more or less superseded by String-, M-, and Brane theories.

The metric in GR:

Today, it is still true that the field equations of Einstein in GR do not have one exact solution
fitting all situations. There exist "cases", like e.g. describing SpaceTime as you go near a Black Hole,
or when studying Cosmological problems. Then, exact solutions to the equations can be found.

For example, you might assume a "homogeneous and isotropic expanding Universe", in which model
the Friedmann–Lemaître–Robertson–Walker (FLRW) metric is an exact solution.

Or, as another example, you might study the Schwarzschild Spacetime, which is known in the study
of, for example, Black Hole's, viewed from GR.

Still, certain assumptions are always made, like a homogeneous and isotropic SpaceTime, a smooth SpaceTime,
and a very important one: the spherical symmetry of a spacetime.

Quite a few "metrics" have been proposed in GR, like the RW metric, the FLRW metric, and many more
exotic metrics.

Of course you want to see the matrix associated with the metric in GR. But, there is no general "one matrix".
Unless you would be happy with this:

┌ g11 g12 g13 g14 ┐
│ g21 g22 g23 g24 │
│ g31 g32 g33 g34 │
└ g41 g42 g43 g44 ┘

Given certain (very reasonable) assumptions, like spherical symmetry of a spacetime, and orthogonal coordinates,
we might have a matrix like:

diag( -c2eν, eλ, R2, R2sin2(θ) )

where ν and λ denote functions of "r".


Chapter 3. A few words on Planck's length, and Planck's time.

The "length of Planck", is an extremely small length, namely about 1.6 x 10-35 m.

Associated with this length are two other values, namely "Planck's time", and "Planck's mass".
Of those two, "Planck's time" is somewhat easier to understand, since it's the time needed for light to "traverse" Planck's length.

In order to get an appreciation of how small the "length of Planck" actually is, take
a look at the following figures:

-The Bohr radius, that is the classical radius of the Hydrogen atom is about: 5.3×10-11 m.
-The classical radius of a proton is about: 0.87 ×10-15 m.

If we compare Planck's length to the examples above, like the radius of a Hydrogen atom,
or what is often taken as the "classical" size of a proton, then we will really appreciate
how insanely small Planck's length actually is.
If you would "inflate" a proton to the size of the Sun, relatively speaking, you still could not even see Planck's length.

This length is formed from other Universal constants (like the speed of light and others),
but we will also see the theoretical basis from which this length was originally derived.

We have to be very careful on how exactly to interpret such a small length.
For example, not all physicists are convinced that those Planck values really represent fundamental
constants in Nature.

At the same time, it cannot be denied that "Quantum Gravity" theories take Planck's length
as a reference point, that is, a scale that represents the dimensions of Space quanta (spins, loops etc..).
So, especially theoretical physicists working in fields like String theories, Quantum Gravity, Cosmology etc..
interpret Planck's length as a fundamental building block, in some way.

Planck's length is the following:

lp = √ (ħ G / c3) = (about) 1.6 x 10-35 m.

where c is the speed of light, ħ is the so-called reduced Planck's constant, and G is the universal gravitational constant.
So, the length of Planck is "built" from very fundamental constants of physics.

The theoretical time required for light to cross a distance of 1 Planck length, is about 5.4 x 10-44 seconds.
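We can verify these numbers directly with a small Python sketch. The constant values below are the usual SI values, inserted here by me (they are not quoted from a specific table in this note):

```python
import math

# Standard SI values (CODATA-style), my insertions:
hbar = 1.054571817e-34   # reduced Planck constant (J s)
G    = 6.67430e-11       # gravitational constant (m^3 kg^-1 s^-2)
c    = 2.99792458e8      # speed of light (m/s)

l_p = math.sqrt(hbar * G / c**3)   # Planck length: sqrt(hbar G / c3)
t_p = l_p / c                      # Planck time: time for light to cross l_p

print(f"Planck length: {l_p:.3e} m")   # about 1.6e-35 m
print(f"Planck time:   {t_p:.3e} s")   # about 5.4e-44 s
```

Both printed values match the figures quoted above.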

How is Planck's length derived? Where does it come from?

We are not going to do much math in this text. But basically, if one would compress one of those other constants, namely "Planck's mass",
to the "Schwarzschild radius", which is the critical radius of a Black Hole, then one would arrive at Planck's length.
To be honest, we would need to consider the Compton wavelength as well, but we skip that here.

Note that the "Schwarzschild radius" is the radius where SpaceTime fully collapses (into something we are not fully sure of).
Usually, the Schwarzschild radius can be understood as the "border" of a black hole.

By the way: some modern ideas in physics around black holes, will certainly be a subject in this modest note.

Some physicists tie the Planck scale to a phenomenon called Quantum Fluctuations, where Energy "pops up" from
the Vacuum in the form of a particle-antiparticle pair, which quickly destroy each other again.

Now we may see why the "length of Planck" could be of significance for our discussion of the "Vacuum and SpaceTime".
Here are a few "suggestions":
  1. It's possibly the length where all regular, smooth, continuous SpaceTime principles do not apply anymore.
  2. It's possibly the scale of SpaceTime quanta.
  3. It's the scale where a compressed Planck Mass (1.22 ×1019 GeV/c2) will collapse into a black hole.
  4. It's possibly the length where Quantum Mechanics and Gravity might unite in a single theory.
    Some Quantum Gravity theories define loops or spins with such fundamental dimension.
  5. It's possibly the most basic "container" of information in "Quantum Information Theory".
  6. It's possibly the characteristic length of "strings" in Superstring theory.
  7. It's possibly the characteristic length, related to "Quantum fluctuations" in the Vacuum.
I can simply list all that stuff above, but then it simply has to be illustrated with some core concepts
of such theories. That is what I will try to do in the following chapters.

It's true that Physics is in full development, and a very definitive, complete, Theory is simply not present.

In the next sections, it's very important to give a quick overview of the fundamental themes that gradually
found their way into physics, like the Yang-Mills ideas, Gauge invariance, Quantum Mechanics, Quantum Field Theory,
the position of Relativity, Quantum Gravity, the Standard Model etc...

It's important to get a feel for that "stuff". Of course, it will not be in depth, and I could not ever
cover it in depth, since it takes an incredible amount of knowledge, and thus an incredible amount of time, to master.

In-depth studies indeed take years. But I am confident I am able to at least touch upon these subjects,
in order to convey a feel for the fundamental ideas behind those themes.

However, what appeared shortly after General Relativity, namely the Kaluza-Klein theory (around 1921),
gives a certain perspective on SpaceTime and unification. In that sense it's important.
So, I like to do that first.

Chapter 4. Kaluza-Klein.

4.1 Introduction:

The original theory of Theodor Kaluza is absolutely stunning! It appeared somewhere around 1920.
Kaluza had first sent his paper to Einstein, who took some time to study it. Later, Einstein
(sort of) helped Kaluza to publish it.

Kaluza "unified" Einstein's theory of General Relativity, and Maxwell's ElectroDynamics (electromagnetism), into
a 5 dimensional framework.

The crux is that by adding an extra dimension to the 4 dimensional metric (as is used in Einstein's equations),
it is possible to describe the motion of a charged particle as following a "geodesic" as well, very similar
to the original proposition of motion in Einstein's General Relativity.

-In General Relativity, the presence and amount of mass "M" (or energy), determines the curvature of 4D SpaceTime.
This in turn determines how a "testparticle" with mass "m" will move.

-If you think of it, a distributed charge "Q" somewhere in SpaceTime, will determine how a charged "testparticle",
with charge "q", will move.

Even these two simple lines of text, show that there is a degree of analogy between the effects of Mass and Charge,
in the surrounding SpaceTime, and on the motion of a "test particle".

In fact, the manifestation of "force" due to curvature of SpaceTime, called "gravity", now is aligned with the forces
which were classically associated with ElectroMagnetic fields. Most often, this unified "force" still is called gravity,
but we must understand that this "superforce" includes "gravity-" and the "ElectroMagnetic" force.
You may also see it as a 5 dimensional Gravity.

Please note that the qualification "force" is not exactly how we should see it in GR. Namely: acceleration and gravity
are (as consensus exists among physicists) indistinguishable.

Some folks name Kaluza's theory "the Fifth dimensional Relativity theory".

In retrospect, we can see why Kaluza was motivated to try to unite both fundamental theories.

However, the two theories differ quite significantly from a mathematical perspective.
Maxwell's field equations for the E and B fields (or the vector- and scalar potentials) are
rather regular linear differential equations, while Einstein's tensor equations are of another order.
Indeed, rather dissimilar.

Reconciling both worlds can only be done, using a new insight, namely that of the metric.

Then, in 1926, Oscar Klein proposed a modification to Kaluza's theory. In effect, he managed to align the theory
with Quantum Mechanics, by scaling and quantizing this extra dimension using Planck's constant.
This extra spatial dimension was quantized, and curved in such a way that space comes back on itself, like a circle.
In fact, this was the first introduction of a "compactified dimension".

Quite some time after both publications of Kaluza and Klein, the theory was renamed to the "Kaluza-Klein" theory.

4.2 Non technical description:

The study of the metric is often a starting point in the study of SpaceTime. For example:

- The equations of motion can be derived from the metric.

- Or, it can been seen that it has the consequence, that a testparticle must follow a certain trajectory, like
a geodesic in GR, in curved SpaceTime.

- Or, it follows from the metric that events can only be causally connected, like in STR.

Kaluza showed, using a 5 dimensional approach of Einstein’s GR, how both Electromagnetism and Gravity
could be treated in a uniform fashion, namely in the sense that both are described as parts of a five-dimensional metric.

So, a rather daring proposition then would be that our 5 dimensional metric looks like:

┌ g11 g12 g13 g14 ω15 ┐
│ g21 g22 g23 g24 ω25 │
│ g31 g32 g33 g34 ω35 │
│ g41 g42 g43 g44 ω45 │
└ ω51 ω52 ω53 ω54 ω55 ┘

Note the "new" ωij elements in this 5 dimensional metric.

Now, all elements in this new metric must contain information for the unified Theory, but the last column
and the lowest row, should express specifically the unified ElectroMagnetic components.

Skipping lots of mathematical formulation, it all turned out to be highly remarkable:

Using a "gauge like formulation" as in section 1.5, example 3, it turns out that the new g11 (as in the
matrix above), can be written as the old g11 plus components of a vector- and scalar potential.

What is more, the new ωij elements can be formulated exactly as was done in section 1.5, example 3.
This strongly suggested that these ωij elements are indeed the EM vector- and scalar potentials.
This convinced many physicists that the new metric indeed described Gravity, united with ElectroDynamics.

Later, the additional propositions of Klein, on Kaluza's work, again convinced many physicists that the addition
of vector- and scalar potentials to the original 4D metric is a working example of a Gauge transformation,
especially viewed from the so-called "cylindrical condition". The latter is no more than the idea that this
additional fifth spatial dimension can be viewed as an extremely small "curled up" (or cylindrical) dimension
which repeats itself continuously.
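For illustration only: one commonly quoted form of the Kaluza-Klein ansatz builds the 5x5 metric from the 4D metric guv, the EM potential Au, and a scalar field φ, with guv + φ2 Au Av in the 4x4 block, and φ2 Au in the extra row and column. The Python sketch below constructs such a 5x5 metric; all the numbers are placeholders of mine, not physics:

```python
import numpy as np

# Flat 4D metric, a hypothetical EM 4-potential, and a scalar field phi:
g4  = np.diag([-1.0, 1.0, 1.0, 1.0])   # ordinary 4D metric (illustration)
A   = np.array([0.1, 0.0, 0.0, 0.2])   # placeholder potential (A_u)
phi = 1.0                              # scalar field (often set to 1)

g5 = np.empty((5, 5))
g5[:4, :4] = g4 + phi**2 * np.outer(A, A)   # g_uv + phi2 A_u A_v
g5[:4,  4] = phi**2 * A                     # the "new" omega_u5 column
g5[ 4, :4] = phi**2 * A                     # ... and the omega_5u row
g5[ 4,  4] = phi**2                        # omega_55

print(g5)   # symmetric 5x5 metric: EM potentials sit in the extra row/column
```

The point of the sketch is only the block structure: the ordinary 4D metric survives in the upper-left block, while the "new" ωij elements carry the EM potentials, just as described above.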

Chapter 5. A few words on "The Dirac Sea".

Nobody has the full answer on the structure of SpaceTime. Not yet. And maybe never.
Or maybe in a few years from now? Some physicists say that we are pretty close.
Who knows..., but I am a little sceptical of any statement that the final theory is "just around the corner".

If SpaceTime quanta are real, then somebody may even postulate that it looks like a real Matrix movie,
but this time realistic and not psychic, since the association between quanta and memory elements is quickly made.
However, there are almost no physicists who support such a view.
But it is quite intriguing to pursue theories centered around themes as "The Universe as an Emulation", or
"Physical reality is just Virtual"....

You can only get a better appreciation when studying older and modern ideas from Physics, and other sciences
like psychology, philosophy and others.

One old idea, from the early 1900's (1928), is the Dirac sea. It does not address the physical structure of SpaceTime,
but it might show an important property of SpaceTime.

Dirac managed to combine important principles of quantum mechanics and the theory of special relativity,
to arrive at a relativistic wave equation. One peculiarity of his work is the existence of negative energy states.
If you consider a free electron for example, then it could endlessly emit energy in the form of photons,
falling into ever lower (negative) energy states. This is, however, not observed.

To solve this, Dirac postulated a "sea" of particles in the Vacuum, where all such negative states
are already occupied. Then, using the Pauli Exclusion Principle (PEP), a "normal" electron (silly word indeed)
could not fall into that sea, since that is forbidden by PEP.

There is no way to easily explain PEP, but it is absolutely very profound in Physics.
It holds for what most people see as "real particles", or fermions, like the electron.
PEP, for example, explains the number of electrons in certain energy levels in an atom, and quantum numbers.
Simply stated: in an atom, or in close vicinity, each particle must have a unique set of quantum numbers.

If for some reason, an electron managed to escape that sea, then a "hole" would remain.
This hole would interact with EM fields, exactly as if it is a positively charged electron.
In effect: Dirac predicted the positron, which is the real anti-matter partner of the electron.
Not much later, the positron was indeed discovered.

Fig 4: Just an illustration of Holes in the Vacuum with -E states.



What is the status of such ideas today? An interpretation literally as Dirac proposed,
is not how physicists look at SpaceTime today.

But those early ideas certainly contributed to QFT theory. However, some physicists still place bets
on "hole theories", like for example "causal fermion systems".
Maybe you like to Google on those keywords in combination with "arxiv".

Chapter 6. Black Hole models.

They are among the most fascinating objects in our Universe... But what are they?
Do "reasonable" theories, on Black Holes, indeed exist?

Yes, quite a few of them, actually. I think it is fair to say that a rather large number of
(slightly) "older" formulations exist, and some more recent formulations too, using (among other disciplines) knowledge
from (Quantum) Information sciences, where entities as "Surface Area", "Information", and "Entropy"
are important themes.

The terms "classical" and "older" are of course very relative. Those theories are not really "old", but I
need a way to discriminate between those theories, and ideas which primarily started to emerge after (around) 1985 (or so),
where newer concepts from String-, Loop- and other Quantum Gravity ideas were applied.
And at least equally important: new insights from Information sciences can be evaluated too.

- I think it's best to start with a short review of the more "classical" theories, including the ones
based on General Relativity, and some of which also use the "Schwarzschild" model (section 6.1).

- After that, I like to review some more recent Black Hole models (section 6.2).

6.1 Some "semi-classical" formulations:

6.1.1. Considerations using Classical Mechanics:

Even using the true classical mechanics (that is: Newtonian mechanics), it seems that a "dark star" would be
theoretically possible.

Just like with Earth, the concept of "escape velocity" can be used with other large massive objects as well.

With Earth, to escape from its gravitational pull, a certain minimum velocity (for e.g. a rocket) is needed
to enter interplanetary space.

The more "massive" (the more mass) a Planet or Star is, the higher that required minimum velocity is.

In the extreme, with an extremely massive object, it thus might be so that the required minimum velocity
gets close to the speed of light, or even equates to the speed of light.
So, just using classical Mechanics, a "dark" (and extremely massive) star could be a theoretical reality,
at least according to pure classical mechanics.

Of course, using classical mechanics is not good enough. For example, questions arise as to what would
be the state of matter in such an extremely massive object? This cannot be answered by using classical mechanics alone.
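The classical reasoning above can be put into a few lines of Python. The escape velocity is v = √(2GM/r), and setting v = c and solving for r happens to give r = 2GM/c2, the same value that General Relativity later produced as the Schwarzschild radius (a well-known coincidence). The SI constants below are my insertions:

```python
import math

G = 6.67430e-11       # gravitational constant (SI), my insertion
c = 2.99792458e8      # speed of light (m/s)
M_earth = 5.972e24    # mass of Earth (kg)
R_earth = 6.371e6     # mean radius of Earth (m)

# Classical escape velocity from the surface: v = sqrt(2 G M / r)
v_esc = math.sqrt(2 * G * M_earth / R_earth)
print(f"Escape velocity from Earth: {v_esc/1000:.1f} km/s")   # about 11.2 km/s

# Setting v_esc = c and solving for r gives the "dark star" radius:
r_dark = 2 * G * M_earth / c**2
print(f"Radius where v_esc = c: {r_dark*1000:.1f} mm")        # about 8.9 mm
```

So purely classically, compressing the Earth to roughly marble size would already make it "dark".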

6.1.2. Considerations using General Relativity:

Einstein provided for a general framework on how to interpret situations in accelerating frames of reference.
This is his General Theory of Relativity. It says that acceleration and gravity are indistinguishable, and that
the presence of Mass (or mass-energy), curves SpaceTime, which produces the effect called "gravity".

Einstein's field equations, indeed relate the curvature of SpaceTime to the Mass-Energy distribution.
However, his field equations are indeed very "general" in nature.

For specific situations, "solutions" to the general equations can be found.

Karl Schwarzschild described such a solution, namely the curvature of SpaceTime outside a spherical Mass,
where further several assumptions were made (like that the angular momentum of the mass may be neglected, and several other assumptions).

So, please take good notice of the fact that Schwarzschild's solution describes the curvature of SpaceTime outside a spherical Mass.

Below we further concentrate on the solution where that Mass is concentrated within the critical radius "R".

Taking the metric again as a guide on how to address this situation: in Schwarzschild's solution, things get quite
spectacular when an amount of mass gets concentrated within a certain threshold distance, called the critical radius "R".

The general form of a metric was already described in section 1. Usually, we like to see it in
a matrix notation, which helps to visualize the dimension of all components.
However, ultimately, it should return a "number", which represents "the distance" ds.
If needed, take a look at (for example) section 2.6 again, which shows the metric in STR.

According to GR, if the "mass-density" would be very high, then the surrounding SpaceTime gets strongly curved too.
In the extreme, space would be stretched so much, and gravity would be so high, that even light could not escape anymore.

This sort of "barrier" (where even light would not escape anymore), is called a "horizon" or critical radius.
In effect, at that "critical radius", the escape velocity is equal to the speed of light (c).

Schwarzschild also showed that a critical radius can be associated with any amount of Mass.

For example, if you would compress all of the mass of the Earth into the size of a small marble (9 mm), then the escape velocity
from "Earth" would be the speed of light (c). In that case, the "horizon" would be 9 mm.

If you would do the same for the Sun, you need to compress all mass into a sphere of less than 3 km.

Karl Schwarzschild calculated a "horizon" for any mass ("M") compressed inside a critical radius R.
So, the terms "horizon" and "critical radius" are actually the same.

This critical radius is called the "Schwarzschild radius". The full "exercise" is also called the "Schwarzschild solution".
When you, from the "outside", would move towards that radius, the curvature of spacetime increases, and the gravity strongly increases too.
At the Schwarzschild radius, the gravity is so strong that even light cannot escape anymore.
And SpaceTime is curved asymptotically once you get very near the critical radius.

Here I show the Schwarzschild metric, in spherical coordinates (ct, r, θ, φ).
The reason spherical coordinates are used, is that Schwarzschild considered a spherical (symmetrical) Mass distribution.
In such a situation, spherical coordinates are the preferred choice (easier than using (ct, x, y, z)).

As applied to a spherical mass distribution, the Schwarzschild matrix gμν of the metric will be shown below.

In the equation for ds2 below, it must be understood that the whole Mass now is compressed within the Horizon,
or "critical radius", with "radius" R, with respect to the centre of the distribution.
The Schwarzschild metric, or Schwarzschild solution, is valid for all SpaceTime "outside" the critical radius.

gμν = ┌ -(1-R/r) 0 0 0          ┐
      │ 0 1/(1-R/r) 0 0         │
      │ 0 0 r2 0                │
      └ 0 0 0 r2sin2(θ)         ┘

And the expression for ds2 was found by Schwarzschild to be:

ds2 = -(1 - R/r) c2 dt2 + dr2 /(1 - R/r) + r2(dθ2 + sin2(θ)dφ2)

The equation also expresses the rate of curvature, as the ratio of "ds" to "dr", where "dr" would be a measure
of distance in flat SpaceTime.

Note also that the term:

dr2 /(1 - R/r)

already implies "asymptotic behaviour" of the curvature, as "r" (from the outside) gets closer and closer
to "R" (the critical radius). You can easily see that when r → R, the denominator (1 - R/r) in that term → "0".
So, mathematically, that term will go to infinity as r → R.
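A quick numerical illustration of this blow-up (Python; "r" is expressed in units of the critical radius R, a convenient choice of mine):

```python
# Numerical look at the radial factor 1/(1 - R/r) of the Schwarzschild metric.
# With R = 1 (r measured in units of the critical radius), the factor blows up
# as r approaches R from the outside.
R = 1.0
factors = {}
for r in [10.0, 2.0, 1.1, 1.01, 1.001]:
    factors[r] = 1.0 / (1.0 - R / r)
    print(f"r = {r:6.3f} R  ->  1/(1 - R/r) = {factors[r]:8.1f}")
```

Each step closer to R multiplies the factor roughly tenfold here: the curvature term is indeed asymptotic at the critical radius.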

The derivation of the matrix and ds2 is interesting, but I think it is not very important for our discussion,
so I leave that out. It's much more important to have a sort of "general" appreciation for this specific situation.

Now, please take a look at figure 5 below.

Fig 5. Curved SpaceTime goes asymptotic as "r" get closer to "R".



How to interpret figure 5?

In figure 5, you see some concentric circles, representing flat space; so, in this case, there is NO mass M in place.
You see a piece of length, called "dr". In fact, in this situation, just think of it as if those blue curved lines
are not present in the figure. Here, "dr" represents the spatial component of flat SpaceTime.

Now, imagine we place a large "mass density" in the centre of those concentric circles (a point particle). How space will be "curved",
is now expressed by the "blue" curved lines. The closer you go to the centre, the more asymptotic the blue lines become.
In this situation, where M is present in the centre of the concentric circles,
the piece of length is now represented by "ds" (that is: the spatial component of the metric "ds").

Figure 5 is not so great after all. The blue circle does not represent the critical radius, since if it did,
the asymptotic behaviour would already be maximal near the blue circle. I think I placed it as a sort of reference.
In figure 5, the critical radius would coincide with the point Mass at the centre of the circle.


Again, note that the above tries to illustrate a piece of curved space, that gets more curved the closer you go
to the mass "M".

Note that the "true" full metric "ds" (time and spatial) between "events" should remain the same, to preserve causality.
However, just as with STR, the separate spatial- and time components may "seem" flexible, as in clocks moving slower.

Schwarzschild used the equation above, to finally arrive at a formula for the "Critical Radius":

Rs = 2 GM / c2

Where "G" is the gravitational constant, "c" is the speed of light, and "M" is the mass inside the critical radius "R".

For any mass "M", a critical Schwarzschild Radius Rs (or "R") can be calculated, which defines the Horizon, and effectively
says when the mass becomes a "black hole". For example, if the Sun's mass were to be compressed within (about) 2 miles,
then it would become a black hole.
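As a quick sanity check on Rs = 2 GM / c2, a minimal sketch (the constants below are standard textbook values, not taken from this note):

```python
# Schwarzschild radius Rs = 2GM / c^2, evaluated for the Sun.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg

Rs = 2 * G * M_sun / c**2
print(f"Rs(Sun) = {Rs:.0f} m  (about {Rs / 1609.34:.1f} miles)")
```

This comes out to roughly 3 km, or a bit under 2 miles, which matches the figure mentioned above.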

Actually, the results are quite spectacular. Although the Einstein Field equations are the foundation, here Schwarzschild found
a practical solution for a common situation.

Note that this solution does not necessarily imply a "singularity".
When you have a certain amount of mass, compressed beyond its "critical radius", it becomes a Black Hole.

However, many physicists reasoned that once a large mass starts to collapse, and gravity or the curvature of SpaceTime
goes asymptotic, the "end result" might be a "singularity", which is theoretically a point with 0 distance,
or near zero distance.

Note:

There is a slight problem in using phrases like "length stretches", or "SpaceTime stretches", and that sort of statements.
We know from STR, that the SpaceTime distance between "events" must be constant. That requirement has not been dropped.
The only thing we can rightfully say is that SpaceTime gets very curved as you approach the critical radius,
and it will even be asymptotic at, or very near, the critical radius.
In everyday language, we say things like "lengths stretch" etc.., but in fact it's wrong to do so.
However, as is often said in various articles, a remote observer may see a spacecraft get "spaghettified",
as it gets nearer and nearer to the critical radius. That observation itself, however, is indeed correct.

6.1.3. Some Black Hole models:

We know that (theoretically) an extremely high "mass-density" causes SpaceTime to curve extremely too.
It stretches to infinity, or, you may view the curving as getting asymptotic.

Usually, it is perceived that a Black Hole "begins" at the Schwarzschild Radius (Horizon), since at that metric, light cannot escape anymore.
From a larger distance "outside", SpaceTime is flat. However, as you come nearer and nearer to the Horizon, SpaceTime curves more and more too.
From Einstein's theory, this curvature goes "asymptotic", or, stated in terms of "distance": a spatial distance "ds1" close to
the Horizon is much more stretched compared to a similar spatial "piece" of SpaceTime "ds2" at a large distance from the Horizon.
At a certain point, very close to the Horizon, even light can no longer travel through the asymptotically stretched SpaceTime.
Hence, we have a sort of black object in SpaceTime.

When formulating a theory on Black Holes, scientists differentiate between several initial conditions, like:

- "angular momentum" (rotation), is / or is not taken into consideration,
- charge is / or is not taken into consideration.

The considerations above, where a mass within the critical radius becomes a black hole,
lead to the most "simple" Black Hole model: "the Schwarzschild singularity".

However, it's reasonable to assume that a collapsing mass (which is about to become a black hole) has an angular momentum.
A well-known theory that takes that into account too, is called the "Kerr metric", which leads to the "Kerr Black Hole" model.

Let's take a look at some of the proposed Black Hole models, as they exist today.

1. A true Singularity: the "classical" Black Hole (Schwarzschild Black Hole):

In this case, the Black Hole is visualized as a "singularity", that is, a point-like entity of infinite density.
The Black Hole has a Horizon ("outside", if one may speak in such a manner), also called the "Schwarzschild radius",
at which the escape velocity matches the speed of light, so even light cannot escape anymore.
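That "escape velocity matches the speed of light" can be checked with the classical (Newtonian) escape-velocity formula v = sqrt(2GM/r). This is only a heuristic argument, not full GR, but it lands exactly on c at the Schwarzschild radius:

```python
import math

# Heuristic check: classical escape velocity v = sqrt(2GM/r),
# evaluated at r = Rs = 2GM/c^2, comes out exactly equal to c.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg

Rs = 2 * G * M_sun / c**2
v_esc = math.sqrt(2 * G * M_sun / Rs)
print(f"v_esc at Rs = {v_esc:.4e} m/s  (c = {c:.4e} m/s)")
```

Algebraically this is no surprise: substituting r = 2GM/c^2 into sqrt(2GM/r) gives sqrt(c^2) = c.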

First, Einstein's theory of General Relativity (GR) is a pillar of physics, but it's not a "theory of everything".
For example, Quantum Mechanics (QM) works in the domain of atoms and elementary particles, where General Relativity does not work at all.

For physicists, it is an enormous challenge to, somehow, unite GR and QM. Up to now, that's still not fully solved.
Some say that "Quantum Gravity", once fully matured, or "Superstring Theory", will do the job, one day.

For now, a "singularity" is not nice in any sort of physics, and mathematically, it's a sort of horror too, since equations
all break down at true singularities.

2. White Holes: Black hole as "input", White hole as "output":

Some physicists seriously explore the notion of a "White hole". A Black Hole is an entity that nothing escapes from,
and a White hole is sort of the reverse: it's an entity that nothing enters into.

It's tempting to view the Black Hole and White hole as connected in "some" way, where the Black Hole sucks in stuff,
and the White hole spits out stuff...
But the most viable theory that covers a White hole then demands that the White hole is a small Big Bang in another Universe.
Yes, that's something...
But don't forget: here, we are still only reviewing some popular ideas on our two "objects".

3. Loop Quantum Gravity "Black holes / White holes".

Loop Quantum Gravity sees "loops" as fundamental building blocks. When a Supermassive star explodes
into a Supernova, the corresponding "collapse" of matter will not ultimately end in a singularity,
but at a very small scale, the "loops" will hold a certain metric. Certainly, gravity and matter-density
will be abnormally strong. However, the loop structure would halt the further collapse
of a black hole into a "true" singularity, and that's appealing to physicists.

Due to the high gravity, time "moves" hyper slowly. Ultimately, as some physicists say, a "Quantum Bounce"
would change the Black Hole into a White hole, as some of the variants of the Theory predict.
However, we have (probably) not observed that yet, since time is almost frozen extremely close to the Black Hole.

4. The "Firewall" black hole model:

In this particular model, matter that swoops towards the Black Hole will just "smash" against the Horizon.
Well, that might be a bit of a bad manner of speaking. What we mean is that matter will not enter a singularity,
and the Horizon (Schwarzschild radius) functions as the unbreakable barrier (the "firewall").
As with all Black Hole models, we have unresolved problems. One might be pretty obvious, when we think about it:
according to GTR, time slows down more and more when matter comes nearer and nearer to the Black Hole.
Many physicists relate this problem to who the observer actually is, like an observer at a larger distance,
compared to an observer that rides along with the falling matter.

Anyway, characteristic of this model is that the Horizon (at the Schwarzschild radius) will "accumulate" all
matter (and associated information).
As a corollary, it is generally assumed that the Horizon's 'surface area' "A" increases over time.
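A small sketch of how that surface area behaves: for a Schwarzschild horizon, A = 4π Rs², and since Rs is linear in M, the area grows with the square of the accumulated mass (the sphere-area formula is standard; the constants are textbook values, not from this note):

```python
import math

# Horizon "surface area" of a Schwarzschild black hole: A = 4 * pi * Rs^2,
# with Rs = 2GM/c^2, so A scales as M^2.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg

def horizon_area(M):
    """Horizon area (m^2) of a Schwarzschild black hole of mass M (kg)."""
    Rs = 2 * G * M / c**2
    return 4 * math.pi * Rs**2

# Doubling the mass quadruples the horizon area:
ratio = horizon_area(2 * M_sun) / horizon_area(M_sun)
print(ratio)  # -> 4.0
```

So a Horizon that keeps "accumulating" matter can only grow, which is exactly the monotonic increase of "A" mentioned above.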

This model is very important from a Quantum Information perspective. We touch upon this in section 6.2.

5. Firewalls would be real, but black holes would not:

The Firewall model is not without inconsistencies. However, that statement holds for any model.
One interesting idea is that a true physical Firewall exists, which stores the information of any object
that was victimized. SpaceTime "inside", however, is so abnormally deformed that actually nothing can exist,
so victimized matter has no other degree of freedom than to radiate towards the Firewall again.

Formulated in such a way, one might say that the Black Hole does not actually exist, but that it is the Firewall
which is the physical placeholder for victimized matter and information.

6. Black Rings:

General Relativity, and many other theories, regard SpaceTime as a 4-dimensional Space, using
three spatial dimensions and one time dimension, also often referred to as "(3+1)".

As you probably know, theories exist using additional dimensions, which are often assumed to be compactified.
In some "Brane" theories, physicists have assumed a (4+1) space, where the one additional dimension is not compactified,
but it goes undetected anyway, since only "gravity" is "aware" of it.

Applied to Black Holes, the topology might differ from the usual "spherical model" (characterized by the Schwarzschild Radius).
In particular, donut-shaped "Black Rings" seem to be possible.

7. The Fuzzball Black Hole:

One interesting theory that might resolve the "Information loss" problem, is "Superstring" theory applied to Black Holes,
using Entropy considerations as an important tool to explore further.
This then results in the "Fuzzball" Theory, where the interior has a certain size, and thus avoids the singularity.
Furthermore, strings and Branes preserve the "states" of the matter that was victimized, and thus it avoids the "Information loss" problem.

It seems from the literature, that some view this model as a "reasonable" candidate.

8. The Kerr Black Hole:

If (due to some reason) a mass collapses to a region below the critical Schwarzschild radius, then, using GR only,
a singularity might be expected.
However, that conflicts with other pillars of physics, like Quantum Mechanics.
It's fair to say that current theories do not fully solve the Black Hole problem.

However, when angular momentum (rotation) is taken into account too, different solutions may pop up.
That the rotation of matter is important, you might see from an ice-skater who pulls his/her arms towards the body,
and then starts to spin faster and faster.
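The ice-skater effect is just conservation of angular momentum, L = I·ω: when the moment of inertia I shrinks (arms pulled in), the angular velocity ω must grow to keep L constant. A tiny sketch (the numbers are made up purely for illustration):

```python
# Conservation of angular momentum: L = I * omega stays constant.
I1, omega1 = 4.0, 2.0   # arms out: moment of inertia, angular velocity (arbitrary units)
L = I1 * omega1         # L = 8.0

I2 = 1.0                # arms pulled in: smaller moment of inertia
omega2 = L / I2         # spin rate must rise to keep L constant
print(omega2)           # -> 8.0 (four times faster)
```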

Roy Kerr (1963) found a "solution" to this problem, which uses multiple "horizons". An apparent singularity may form,
which might be ring-shaped too. This is often called the Kerr Black Hole, or the Kerr Ring.

6.2 Some "modern" formulations:


Chapter 7. Quantum Gravity (Loop Quantum, Strings).


Chapter 8. Quantum Information and Physics.


Chapter 9. SpaceTime and Entanglement.


Chapter 10. A few words on AdS/CFT and Dimensional Reduction.