Friday, May 29, 2009

MATHEMATICS........

"Maths" and "Math" redirect here. For other uses of "Mathematics" or "Math", see Mathematics (disambiguation) and Math (disambiguation).

Euclid, Greek mathematician, 3rd century BC, as imagined by Raphael in this detail from The School of Athens.

Mathematics is the study of quantity, structure, space, relation, change, and various topics of pattern, form, and entity. Mathematicians seek out patterns and other quantitative dimensions, whether dealing with numbers, spaces, natural science, computers, imaginary abstractions, or other entities. Mathematicians formulate new conjectures and establish truth by rigorous deduction from appropriately chosen axioms and definitions.

There is debate over whether mathematical objects exist objectively by nature of their logical purity, or whether they are man-made and detached from reality. The mathematician Benjamin Peirce called mathematics "the science that draws necessary conclusions". Albert Einstein, on the other hand, stated that "as far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality."

Through the use of abstraction and logical reasoning, mathematics evolved from counting, calculation, measurement, and the systematic study of the shapes and motions of physical objects. Knowledge and use of basic mathematics have always been an inherent and integral part of individual and group life. Refinements of the basic ideas are visible in mathematical texts originating in the ancient Egyptian, Mesopotamian, Indian, Chinese, Greek and Islamic worlds. Rigorous arguments first appeared in Greek mathematics, most notably in Euclid's Elements. The development continued in fitful bursts until the Renaissance period of the 16th century, when mathematical innovations interacted with new scientific discoveries, leading to an acceleration in research that continues to the present day.

Today, mathematics is used throughout the world as an essential tool in many fields, including natural science, engineering, medicine, and the social sciences such as economics and psychology. Applied mathematics, the branch of mathematics concerned with application of mathematical knowledge to other fields, inspires and makes use of new mathematical discoveries and sometimes leads to the development of entirely new disciplines. Mathematicians also engage in pure mathematics, or mathematics for its own sake, without having any application in mind, although practical applications for what began as pure mathematics are often discovered later...


Friday, May 22, 2009

Antiparticle

Corresponding to most kinds of particles, there is an associated antiparticle with the same mass and opposite electric charge. For example, the antiparticle of the electron is the positively charged antielectron, or positron, which is produced naturally in certain types of radioactive decay.

The laws of nature are very nearly symmetrical with respect to particles and antiparticles. For example, an antiproton and a positron can form an antihydrogen atom, which has almost exactly the same properties as a hydrogen atom. A physicist whose body was made of antimatter, doing experiments in a laboratory also made of antimatter, using chemicals and substances comprised of antiparticles, would find almost exactly the same results in all experiments. This leads to the question of why the formation of matter after the Big Bang resulted in a universe consisting almost entirely of matter, rather than being a half-and-half mixture of matter and antimatter. The discovery of CP violation helped to shed light on this problem by showing that this symmetry, originally thought to be perfect, was only approximate.

Particle-antiparticle pairs can annihilate each other, producing photons; since the charges of the particle and antiparticle are opposite, charge is conserved. For example, the antielectrons produced in natural radioactive decay quickly annihilate with electrons, producing pairs of gamma rays.
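
As a rough numerical illustration (added here; the constants are standard values, not taken from the original text), the energy of each gamma ray from electron-positron annihilation at rest is simply the electron rest-mass energy, which a few lines of Python confirm:

    # Energy of each photon when an electron and positron annihilate at rest.
    m_e = 9.109e-31          # electron mass, kg
    c = 2.998e8              # speed of light, m/s
    eV = 1.602e-19           # joules per electron-volt

    E_photon_keV = m_e * c**2 / eV / 1e3
    print(f"Energy per photon: {E_photon_keV:.0f} keV")   # ~511 keV
    print(f"Total released:    {2 * E_photon_keV:.0f} keV")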

Antiparticles are produced naturally in beta decay, and in the interaction of cosmic rays in the Earth's atmosphere. Because charge is conserved, it is not possible to create an antiparticle without either destroying a particle of the same charge (as in beta decay), or creating a particle of the opposite charge. The latter is seen in many processes in which both a particle and its antiparticle are created simultaneously, as in particle accelerators. This is the inverse of the particle-antiparticle annihilation process.

Although particles and their antiparticles have opposite charges, electrically neutral particles need not be identical to their antiparticles. The neutron, for example, is made of quarks, the antineutron of antiquarks, and the two are distinguishable because neutrons and antineutrons annihilate each other on contact. However, other neutral particles are their own antiparticles, such as photons, the hypothetical gravitons, and some WIMPs; such particles can annihilate with themselves (self-conjugate fermions of this kind are called Majorana particles).

History

Experiment

In 1932, soon after the prediction of positrons by Paul Dirac, Carl D. Anderson found that cosmic-ray collisions produced these particles in a cloud chamber, a particle detector in which moving electrons (or positrons) leave behind trails as they move through the gas. The electric charge-to-mass ratio of a particle can be measured by observing how its cloud-chamber track curls in a magnetic field. Originally, positrons, because of the direction in which their paths curled, were mistaken for electrons travelling in the opposite direction.
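
A minimal sketch of the underlying relation (my own illustration; the 1.5 T field and 63 MeV energy are assumed values, not taken from the article): for motion perpendicular to a magnetic field the track radius is r = p/(|q|B), so the tightness and direction of the curl reveal the momentum and the sign of the charge.

    # Radius of curvature of a charged particle moving perpendicular to a magnetic field.
    import math

    q = 1.602e-19                 # elementary charge, C
    m = 9.109e-31                 # electron (or positron) mass, kg
    c = 2.998e8                   # speed of light, m/s
    B = 1.5                       # magnetic field, tesla (assumed value)
    KE = 63e6 * 1.602e-19         # 63 MeV kinetic energy in joules (assumed value)

    # Relativistic momentum from kinetic energy: (pc)^2 = (KE + mc^2)^2 - (mc^2)^2
    mc2 = m * c**2
    p = math.sqrt((KE + mc2)**2 - mc2**2) / c

    r = p / (q * B)               # r = p / (|q| B); opposite charges curl in opposite directions
    print(f"Radius of curvature: {r*100:.1f} cm")   # ~14 cm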

The antiproton was found by Emilio Segrè and Owen Chamberlain in 1955 at the University of California, Berkeley, and the antineutron was found there the following year. Since then the antiparticles of many other subatomic particles have been created in particle accelerator experiments. In recent years, complete atoms of antimatter have been assembled out of antiprotons and positrons, collected in electromagnetic traps.

Hole theory

... the development of quantum field theory made the interpretation of antiparticles as holes unnecessary, even though it lingers on in many textbooks.  —  Steven Weinberg in The quantum theory of fields, Vol I, p 14, ISBN 0-521-55001-7

Solutions of the Dirac equation contain negative-energy quantum states. As a result, an electron could always radiate energy and fall into a negative-energy state. Even worse, it could keep radiating an infinite amount of energy, because infinitely many negative-energy states were available. To prevent this unphysical situation, Dirac proposed that a "sea" of negative-energy electrons fills the universe, already occupying all of the lower energy states so that, due to the Pauli exclusion principle, no other electron could fall into them. Sometimes, however, one of these negative-energy particles could be lifted out of this Dirac sea to become a positive-energy particle, leaving behind a hole in the sea that would act exactly like a positive-energy electron with a reversed charge. Dirac interpreted these holes as protons, and called his 1930 paper A theory of electrons and protons.

Dirac was aware of the problem that his picture implied an infinite negative charge for the universe, and tried to argue that we would perceive this as the normal state of zero charge. Another difficulty was the difference in masses of the electron and the proton. Dirac tried to argue that this was due to electromagnetic interactions with the sea, until Hermann Weyl proved that hole theory was completely symmetric between negative and positive charges. Dirac also predicted a reaction e− + p+ → γ + γ, in which an electron and a proton annihilate to give two photons. Robert Oppenheimer and Igor Tamm showed that this would cause ordinary matter to disappear too fast. A year later, in 1931, Dirac modified his theory and postulated the positron, a new particle of the same mass as the electron. The discovery of this particle the next year removed the last two objections to his theory.

However, the problem of infinite charge of the universe remains. Also, as we now know, bosons also have antiparticles, but since they do not obey the Pauli exclusion principle, hole theory doesn't work for them. A unified interpretation of antiparticles is now available in quantum field theory, which solves both these problems.

Particle-antiparticle annihilation

Main article: Annihilation


An example of a virtual pion pair that influences the propagation of a kaon, causing a neutral kaon to mix with the antikaon. This is an example of renormalization in quantum field theory, the field theory being necessary because the number of particles changes from one to two and back again.

If a particle and antiparticle are in the appropriate quantum states, they can annihilate each other and produce other particles. Reactions such as e− + e+ → γ + γ (the two-photon annihilation of an electron-positron pair) are an example. The single-photon annihilation of an electron-positron pair, e− + e+ → γ, cannot occur because it is impossible to conserve energy and momentum together in this process. The reverse reaction is also impossible for this reason. However, in quantum field theory this process is allowed as an intermediate quantum state for times short enough that the violation of energy conservation can be accommodated by the uncertainty principle. This opens the way for virtual pair production or annihilation, in which a one-particle quantum state may fluctuate into a two-particle state and back. These processes are important in the vacuum state and in the renormalization of a quantum field theory. It also opens the way for neutral-particle mixing through processes such as the one described above, which is a complicated example of mass renormalization.
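
The kinematic argument against single-photon annihilation can be written out in one line (a standard argument, added here in LaTeX for clarity): in the centre-of-mass frame of the pair the total momentum vanishes, but any single photon carries momentum E/c.

    \[
    \vec p_{e^-} + \vec p_{e^+} = 0, \qquad E_{\text{tot}} = 2\gamma m_e c^2 > 0, \qquad
    \text{while a lone photon has } |\vec p_\gamma| = \frac{E_\gamma}{c} > 0,
    \]

so energy and momentum cannot both be conserved; a second photon (or some other recoiling particle, such as a nearby nucleus) is required.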

Properties of antiparticles

Quantum states of a particle and an antiparticle can be interchanged by applying the charge conjugation (C), parity (P), and time reversal (T) operators. If |p,σ,n> denotes the quantum state of a particle (n) with momentum p, spin J whose component in the z-direction is σ, then one has

CPT |p,σ,n> = (−1)^(J−σ) |p,−σ,n^c>
where n^c denotes the charge-conjugate state, i.e., the antiparticle. This behaviour under CPT is the same as the statement that the particle and its antiparticle lie in the same irreducible representation of the Poincaré group. Properties of antiparticles can be related to those of particles through this. If T is a good symmetry of the dynamics, then

T |p,σ,n> ∝ |−p,−σ,n>

CP |p,σ,n> ∝ |−p,σ,n^c>

C |p,σ,n> ∝ |p,σ,n^c>
where the proportionality sign indicates that there might be a phase on the right hand side. In other words, particle and antiparticle must have

the same mass m

the same spin state J

opposite electric charges q and −q.

Quantum field theory

This section draws upon the ideas, language and notation of canonical quantization of a quantum field theory.

One may try to quantize an electron field without mixing the annihilation and creation operators by writing

ψ(x) = Σ_k u_k(x) a_k
where we use the symbol k to denote the quantum numbers p and σ of the previous section together with the sign of the energy, E(k), and a_k denotes the corresponding annihilation operator. Of course, since we are dealing with fermions, the operators have to satisfy canonical anti-commutation relations. However, if one now writes down the Hamiltonian

H = Σ_k E(k) a_k† a_k
then one sees immediately that the expectation value of H need not be positive. This is because E(k) can have any sign whatsoever, and the combination of creation and annihilation operators has expectation value 1 or 0.

So one has to introduce the charge conjugate antiparticle field, with its own creation and annihilation operators satisfying the relations

b_k′† = a_k,   b_k′ = a_k†
where k′ denotes a state with the same p as k, but opposite σ and opposite sign of the energy. Then one can rewrite the field in the form

ψ(x) = Σ_(k: E>0) u_k(x) a_k + Σ_(k: E<0) u_k(x) b_k†
where the first sum is over positive energy states and the second over those of negative energy. The energy becomes

H = Σ_(k: E>0) E(k) a_k† a_k + Σ_(k: E<0) |E(k)| b_k† b_k + E_0
where E_0 is an infinite negative constant. The vacuum state is defined as the state with no particle or antiparticle, i.e., a_k|0> = 0 and b_k|0> = 0 for all k. Then the energy of the vacuum is exactly E_0. Since all energies are measured relative to the vacuum, H is positive definite. Analysis of the properties of a_k and b_k shows that one is the annihilation operator for particles and the other for antiparticles. This is the case of a fermion.

This approach is due to Vladimir Fock, Wendell Furry and Robert Oppenheimer. If one quantizes a real scalar field, then one finds that there is only one kind of annihilation operator; therefore real scalar fields describe neutral bosons. Since complex scalar fields admit two different kinds of annihilation operators, which are related by conjugation, such fields describe charged bosons.

The Feynman-Stueckelberg interpretation

By considering the propagation of the negative energy modes of the electron field backward in time, Ernst Stueckelberg reached a pictorial understanding of the fact that the particle and antiparticle have equal mass m and spin J but opposite charges q. This allowed him to rewrite perturbation theory precisely in the form of diagrams. Richard Feynman later gave an independent systematic derivation of these diagrams from a particle formalism, and they are now called Feynman diagrams. Each line of a diagram represents a particle propagating either backward or forward in time. This technique is the most widespread method of computing amplitudes in quantum field theory today.

Since this picture was first developed by Ernst Stueckelberg, and acquired its modern form in Feynman's work, it is called the Feynman-Stueckelberg interpretation of antiparticles to honor both scientists.

See also

Gravitational interaction of antimatter

Parity, charge conjugation and time reversal symmetry.

CP violations and the baryon asymmetry of the universe.

Quantum field theory and the list of particles

Baryogenesis

References

Feynman, Richard P. "The reason for antiparticles", in The 1986 Dirac memorial lectures, R.P. Feynman and S. Weinberg. Cambridge University Press, 1987. ISBN 0-521-34000-4.

Weinberg, Steven. The quantum theory of fields, Volume 1: Foundations. Cambridge University Press, 1995. ISBN 0-521-55001-7.

Antimatter





In particle physics, antimatter is the extension of the concept of the antiparticle to matter: antimatter is composed of antiparticles in the same way that normal matter is composed of particles. For example, an antielectron (a positron, an electron with a positive charge) and an antiproton (a proton with a negative charge) could form an antihydrogen atom in the same way that an electron and a proton form a normal hydrogen atom. Furthermore, mixing matter and antimatter would lead to the annihilation of both, in the same way that mixing antiparticles and particles does, giving rise to high-energy photons (gamma rays) or other particle-antiparticle pairs.

There is considerable speculation as to why the observable universe is apparently almost entirely matter, whether there exist other places that are almost entirely antimatter instead, and what might be possible if antimatter could be harnessed, but at this time the apparent asymmetry of matter and antimatter in the visible universe is one of the greatest unsolved problems in physics. The process by which this asymmetry between particles and antiparticles developed is called baryogenesis.

Notation

One way to denote an antiparticle is by adding a bar (or macron) over the particle's symbol. For example, the proton and antiproton are denoted p and p̄, respectively. The same rule applies when a particle is addressed by its constituent components: a proton is made up of uud quarks, so an antiproton is formed from the corresponding ūūd̄ antiquarks. Another convention is to distinguish particles by their electric charge; thus, the electron and positron are denoted simply as e− and e+, respectively.

Origin (naturally occurring production)

Asymmetry

Almost every object observable from the Earth seems to be made of matter rather than antimatter. Many scientists believe that this preponderance of matter over antimatter (known as baryon asymmetry) is the result of an imbalance in the production of matter and antimatter particles in the early universe, in a process called baryogenesis. The amount of matter presently observable in the universe only requires an imbalance in the early universe on the order of one extra matter particle per billion matter-antimatter particle pairs.[1]

Antiparticles are created everywhere in the universe where high-energy particle collisions take place. High-energy cosmic rays impacting Earth's atmosphere (or any other matter in the solar system) produce minute quantities of antimatter in the resulting particle jets, which are immediately annihilated by contact with nearby matter. It may similarly be produced in regions like the center of the Milky Way Galaxy and other galaxies, where very energetic celestial events occur (principally the interaction of relativistic jets with the interstellar medium). The presence of the resulting antimatter is detectable by the gamma rays produced when positrons annihilate with nearby matter. The gamma rays' frequency and wavelength indicate that each carries 511 keV of energy (i.e., the rest mass of an electron or positron multiplied by c²).

Recent observations by the European Space Agency's INTEGRAL (International Gamma-Ray Astrophysics Laboratory) satellite may explain the origin of a giant cloud of antimatter surrounding the galactic center. The observations show that the cloud is asymmetrical and matches the pattern of X-ray binaries, binary star systems containing black holes or neutron stars, mostly on one side of the galactic center. While the mechanism is not fully understood, it is likely to involve the production of electron-positron pairs, as ordinary matter gains tremendous energy while falling into a stellar remnant.[2][3]

Antimatter may exist in relatively large amounts in far away galaxies due to cosmic inflation in the primordial time of the universe. NASA is trying to determine if this is true by looking for X-ray and gamma ray signatures of annihilation events in colliding superclusters.[4]

Artificial production

Antiparticles are also produced in any environment with a sufficiently high temperature (mean particle energy greater than the pair production threshold). During the period of baryogenesis, when the universe was extremely hot and dense, matter and antimatter were continually produced and annihilated. The presence of remaining matter, and absence of detectable remaining antimatter,[5] also called baryon asymmetry, is attributed to violation of the CP-symmetry relating matter and antimatter. The exact mechanism of this violation during baryogenesis remains a mystery.

Positrons are also produced in radioactive β+ decay, but this mechanism can be considered "natural" as well as "artificial".

Antihydrogen

Main article: Antihydrogen

In 1995 CERN announced that it had successfully brought into existence nine antihydrogen atoms by implementing the SLAC/Fermilab concept during the PS210 experiment. The experiment was performed using the Low Energy Antiproton Ring (LEAR), and was led by Walter Oelert and Mario Macri. Fermilab soon confirmed the CERN findings by producing approximately 100 antihydrogen atoms at their facilities.

The antihydrogen atoms created during PS210, and in subsequent experiments (at both CERN and Fermilab), were extremely energetic ("hot") and were not well suited to study. To overcome this hurdle, and to gain a better understanding of antihydrogen, two collaborations were formed in the late 1990s: ATHENA and ATRAP. In 2005, ATHENA disbanded and some of the former members (along with others) formed the ALPHA Collaboration, which is also situated at CERN. The primary goal of these collaborations is the creation of less energetic ("cold") antihydrogen, better suited to study.

In 1999 CERN activated the Antiproton Decelerator, a device capable of decelerating antiprotons from 3.5 GeV to 5.3 MeV — still too "hot" to produce study-effective antihydrogen, but a huge leap forward.

In late 2002 the ATHENA project announced that they had created the world's first "cold" antihydrogen. The antiprotons used in the experiment were cooled sufficiently by decelerating them (using the Antiproton Decelerator), passing them through a thin sheet of foil, and finally capturing them in a Penning trap. The antiprotons also underwent stochastic cooling at several stages during the process.

The ATHENA team's antiproton cooling process is effective, but highly inefficient. Approximately 25 million antiprotons leave the Antiproton Decelerator; roughly 10 thousand make it to the Penning trap.

In early 2004 ATHENA researchers released data on a new method of creating low-energy antihydrogen. The technique involves slowing antiprotons using the Antiproton Decelerator and injecting them into a Penning trap (specifically a Penning-Malmberg trap). Once trapped, the antiprotons are mixed with electrons that have been cooled to an energy significantly lower than that of the antiprotons; the resulting Coulomb collisions cool the antiprotons while warming the electrons until the particles reach an equilibrium temperature of approximately 4 K.

While the antiprotons are being cooled in the first trap, a small cloud of positron plasma is injected into a second trap (the mixing trap). The temperature of the positron plasma can be controlled by exciting the resonance of the mixing trap's confinement fields, but the procedure is more effective when the plasma is in thermal equilibrium with the trap's environment. The positron plasma cloud is generated in a positron accumulator prior to injection; the source of the positrons is usually radioactive sodium.

Once the antiprotons are sufficiently cooled, the antiproton-electron mixture is transferred into the mixing trap (containing the positrons). The electrons are subsequently removed by a series of fast pulses in the mixing trap's electrical field. When the antiprotons reach the positron plasma, further Coulomb collisions occur, cooling the antiprotons still more. When the positrons and antiprotons approach thermal equilibrium, antihydrogen atoms begin to form. Being electrically neutral, the antihydrogen atoms are not affected by the trap and can leave the confinement fields.

Utilizing this method, ATHENA researchers predict they will be able to create up to 100 antihydrogen atoms per operational second.

ATHENA and ATRAP are now seeking to further cool the antihydrogen atoms by subjecting them to an inhomogeneous field. While antihydrogen atoms are electrically neutral, their spin produces magnetic moments. These magnetic moments vary depending on the spin direction of the atom, and can be deflected by inhomogeneous fields regardless of electrical charge.

The biggest limiting factor in the production of antimatter is the availability of antiprotons. Recent data released by CERN state that, when fully operational, its facilities are capable of producing about 10^7 antiprotons per second. Assuming an optimal conversion of antiprotons to antihydrogen, it would take roughly two billion years to produce 1 gram, or 1 mole, of antihydrogen (approximately 6.02×10^23 atoms of antihydrogen). Another limiting factor in antimatter production is storage. As stated above, there is no known way to store antihydrogen effectively. The ATHENA project has managed to keep antihydrogen atoms from annihilating for tens of seconds, just enough time to briefly study their behaviour.
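
As a quick back-of-the-envelope check of the two-billion-year figure above (my own arithmetic, using the rate quoted in the text):

    # Time to accumulate one mole (~1 gram) of antihydrogen at 10^7 antiprotons per second,
    # assuming every antiproton ends up in an antihydrogen atom.
    atoms_needed = 6.02e23        # Avogadro's number
    rate_per_sec = 1e7            # antiprotons per second (CERN figure quoted above)
    seconds_per_year = 3.156e7

    years = atoms_needed / rate_per_sec / seconds_per_year
    print(f"{years:.1e} years")   # ~1.9e9 years, i.e. roughly two billion years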

Hydrogen atoms are the simplest objects that can be considered as "matter" rather than as just particles.

Simultaneous trapping of antiprotons and positrons has been reported,[6] and cooling has been achieved;[7] there are also patents on methods of producing antihydrogen.[8]

Antihelium

A small number of antihelium nuclei have been created in collision experiments.[9]

Positrons

Main article: Positron

In November 2008 it was reported[10] that Lawrence Livermore National Laboratory had generated positrons in larger numbers than any previous synthetic process. A laser drove ionized electrons through the nuclei of a millimeter-radius gold target, causing the incoming electrons to emit energy quanta that decayed into both matter and antimatter. Positrons were detected at a higher rate and in greater density than ever previously observed in a laboratory.

Previous experiments made smaller quantities of positrons using lasers and paper-thin targets; however, new simulations showed that short, ultra-intense lasers and millimeter-thick gold are a far more effective source.[11]

Preservation

Antimatter cannot be stored in a container made of ordinary matter because antimatter reacts with any matter it touches, annihilating itself and the container. Antimatter that is composed of charged particles can be contained by a combination of an electric field and a magnetic field in a device known as a Penning trap. This device cannot, however, contain antimatter that consists of uncharged particles, for which atomic traps are used. In particular, such a trap may use the dipole moment (electric or magnetic) of the trapped particles; at high vacuum, the matter or antimatter particles can be trapped (suspended) and cooled with slightly off-resonant laser radiation (see, for example, magneto-optical trap and magnetic trap). Small particles can also be suspended by an intense optical beam alone, as in optical tweezers.

Cost

Antimatter is said to be the most costly substance in existence, with an estimated cost of $62.5 trillion per gram.[12] This is because production is difficult (only a few atoms are produced in reactions in particle accelerators), and because there is higher demand for the other uses of particle accelerators. According to CERN, it has cost a few hundred million Swiss Francs to produce about 1 billionth of a gram.[13]

Several NASA Institute for Advanced Concepts-funded studies are exploring whether it might be possible to use magnetic scoops to collect the antimatter that occurs naturally in the Van Allen belts of Earth, and ultimately, the belts of gas giants like Jupiter, hopefully at a lower cost per gram.[14]

Uses

Medical

Antimatter-matter reactions have practical applications in medical imaging, such as positron emission tomography (PET). In positive beta decay, a nuclide loses surplus positive charge by emitting a positron (in the same event, a proton becomes a neutron, and neutrinos are also given off). Nuclides with surplus positive charge are easily made in a cyclotron and are widely generated for medical use.

Fuel

In antimatter-matter collisions resulting in photon emission, the entire rest mass of the particles is converted to kinetic energy. The energy per unit mass (9×10^16 J/kg) is about 10 orders of magnitude greater than chemical energy (compared to TNT at 4.2×10^6 J/kg, and formation of water at 1.56×10^7 J/kg), about 4 orders of magnitude greater than the nuclear energy that can be liberated today using nuclear fission (about 40 MeV per uranium-238 nucleus transmuted to lead, or 1.5×10^13 J/kg), and about 2 orders of magnitude greater than the best possible yield from fusion (about 6.3×10^14 J/kg for the proton-proton chain). The reaction of 1 kg of antimatter with 1 kg of matter would produce 1.8×10^17 J (180 petajoules) of energy (by the mass-energy equivalence formula E = mc²), the rough equivalent of 43 megatons of TNT. For comparison, Tsar Bomba, the largest nuclear weapon ever detonated, had an estimated yield of 50 megatons, which required the use of hundreds of kilograms of fissile material (uranium/plutonium).
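
A short numerical check of the annihilation figures above (my own arithmetic, not part of the original article):

    # Energy released when 1 kg of antimatter annihilates with 1 kg of matter, E = m c^2,
    # and its rough TNT equivalent.
    c = 2.998e8                   # speed of light, m/s
    mass_converted = 2.0          # kg of rest mass converted (1 kg matter + 1 kg antimatter)
    J_per_megaton_TNT = 4.184e15  # joules per megaton of TNT

    energy_J = mass_converted * c**2
    print(f"Energy released: {energy_J:.2e} J")                        # ~1.8e17 J
    print(f"TNT equivalent:  {energy_J / J_per_megaton_TNT:.0f} Mt")   # ~43 megatons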

Not all of that energy can be utilized by any realistic technology, because as much as 50% of energy produced in reactions between nucleons and antinucleons is carried away by neutrinos, so, for all intents and purposes, it can be considered lost.[15]

Antimatter rocketry ideas, such as the redshift rocket, propose the use of antimatter as fuel for interplanetary travel or possibly interstellar travel. A patent has been issued for an antimatter engine claiming speeds up to one third the speed of light.[16] Since the energy density of antimatter is vastly higher than conventional fuels, the thrust to weight equation for such craft would be very different from conventional spacecraft.

The scarcity of antimatter means that it is not readily available to be used as fuel, although it could be used in antimatter catalyzed nuclear pulse propulsion. Generating a single antiproton is immensely difficult and requires particle accelerators and vast amounts of energy—millions of times more than is released after it is annihilated with ordinary matter due to inefficiencies in the process. Known methods of producing antimatter from energy also produce an equal amount of normal matter, so the theoretical limit is that half of the input energy is converted to antimatter. Counterbalancing this, when antimatter annihilates with ordinary matter, energy equal to twice the mass of the antimatter is liberated—so energy storage in the form of antimatter could (in theory) be 100% efficient.

Antimatter production is currently very limited, but has been growing at a nearly geometric rate since the discovery of the first antiproton in 1955 by Segrè and Chamberlain. The current antimatter production rate is between 1 and 10 nanograms per year, and this is expected to increase to between 3 and 30 nanograms per year by 2015 or 2020 with new superconducting linear accelerator facilities at CERN and Fermilab.

Some researchers claim that, with current technology, it is possible to obtain antimatter for US$25 million per gram by optimizing the collision and collection parameters (given current electricity generation costs). Antimatter production costs, in mass production, are almost linearly tied to electricity costs, so economical pure-antimatter thrust applications are unlikely to come online without the advent of such technologies as deuterium-tritium fusion power (assuming that such a power source actually would prove to be cheap). Many experts, however, dispute these claims as being far too optimistic by many orders of magnitude. They point out that in 2004 the annual production of antiprotons at CERN was several picograms at a cost of $20 million; at that rate, producing 1 gram of antimatter would require CERN to spend 100 quadrillion dollars and run its antimatter factory for 100 billion years.

Storage is another problem: antiprotons are negatively charged and repel one another, so they cannot be concentrated in a small volume. Plasma oscillations in the charged cloud of antiprotons can cause instabilities that drive antiprotons out of the storage trap. For these reasons, to date only a few million antiprotons have been stored simultaneously in a magnetic trap, which corresponds to much less than a femtogram. Antihydrogen atoms or molecules are neutral, so in principle they do not suffer the plasma problems of antiprotons described above. But cold antihydrogen is far more difficult to produce than antiprotons, and so far not a single antihydrogen atom has been trapped in a magnetic field.

One researcher at CERN, which produces antimatter regularly, put it this way:

"If we could assemble all of the antimatter we've ever made at CERN and annihilate it with matter, we would have enough energy to light a single electric light bulb for a few minutes."[17]


DARK ENERGY

In physical cosmology and astronomy, dark energy is a hypothetical form of energy that permeates all of space and tends to increase the rate of expansion of the universe.[1] Dark energy is the most popular way to explain recent observations that the universe appears to be expanding at an accelerating rate. In the standard model of cosmology, dark energy currently accounts for 74% of the total mass-energy of the universe.

Two proposed forms for dark energy are the cosmological constant, a constant energy density filling space homogeneously,[2] and scalar fields such as quintessence or moduli, dynamic quantities whose energy density can vary in time and space. Contributions from scalar fields that are constant in space are usually also included in the cosmological constant. The cosmological constant is physically equivalent to vacuum energy.[3][4] Scalar fields which do change in space can be difficult to distinguish from a cosmological constant because the change may be extremely slow.

High-precision measurements of the expansion of the universe are required to understand how the expansion rate changes over time. In general relativity, the evolution of the expansion rate is parameterized by the cosmological equation of state. Measuring the equation of state of dark energy is one of the biggest efforts in observational cosmology today.

Adding the cosmological constant to cosmology's standard FLRW metric leads to the Lambda-CDM model, which has been referred to as the "standard model" of cosmology because of its precise agreement with observations. Dark energy has been used as a crucial ingredient in a recent attempt[5] to formulate a cyclic model for the universe.

Evidence for dark energy

Supernovae

In 1998, published observations of Type Ia ("one-A") supernovae by the High-z Supernova Search Team,[6] followed in 1999 by the Supernova Cosmology Project,[7] suggested that the expansion of the universe is accelerating. Since then, these observations have been corroborated by several independent sources. Measurements of the cosmic microwave background, gravitational lensing, and the large-scale structure of the cosmos, as well as improved measurements of supernovae, have been consistent with the Lambda-CDM model.[8]

Supernovae are useful for cosmology because they are excellent standard candles across cosmological distances. They allow the expansion history of the Universe to be measured by looking at the relationship between the distance to an object and its redshift, which gives how fast it is receding from us. The relationship is roughly linear, according to Hubble's law. It is relatively easy to measure redshift, but finding the distance to an object is more difficult. Usually, astronomers use standard candles: objects for which the intrinsic brightness, the absolute magnitude, is known. This allows the object's distance to be measured from its actually observed brightness, or apparent magnitude. Type Ia supernovae are the best-known standard candles across cosmological distances because of their extreme, and extremely consistent, brightness.
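
A minimal sketch of the standard-candle calculation (the magnitudes below are assumed, illustrative values, not from the article): once the absolute magnitude M of a Type Ia supernova is known, its distance follows from the measured apparent magnitude m via the distance modulus m − M = 5 log10(d / 10 pc).

    # Distance to a standard candle from the distance modulus.
    import math

    M = -19.3     # peak absolute magnitude typical of a Type Ia supernova (assumed)
    m = 24.0      # measured apparent magnitude (assumed)

    d_pc = 10 ** ((m - M + 5) / 5)          # distance in parsecs
    d_Gly = d_pc * 3.262 / 1e9              # convert to billions of light-years
    print(f"Distance: {d_pc:.2e} pc (~{d_Gly:.1f} billion light-years)")

At cosmological distances this number is a luminosity distance and redshift corrections matter, but the principle is the same.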

Cosmic Microwave Background



Estimated distribution of dark matter and dark energy in the universe

The existence of dark energy, in whatever form, is needed to reconcile the measured geometry of space with the total amount of matter in the universe. Measurements of cosmic microwave background (CMB) anisotropies, most recently by the WMAP satellite, indicate that the universe is very close to flat. For the shape of the universe to be flat, the mass/energy density of the universe must be equal to a certain critical density. The total amount of matter in the universe (including baryons and dark matter), as measured by the CMB, accounts for only about 30% of the critical density. This implies the existence of an additional form of energy to account for the remaining 70%.[8] The most recent WMAP observations are consistent with a universe made up of 74% dark energy, 22% dark matter, and 4% ordinary matter.
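
For concreteness, the critical density mentioned above follows from the Hubble constant through a standard formula; a small sketch (with an assumed Hubble constant of 70 km/s/Mpc):

    # Critical density of the universe: rho_c = 3 H0^2 / (8 pi G).
    import math

    G = 6.674e-11                    # gravitational constant, m^3 kg^-1 s^-2
    H0 = 70.0 * 1000 / 3.086e22      # 70 km/s/Mpc expressed in 1/s (assumed value)

    rho_c = 3 * H0**2 / (8 * math.pi * G)
    print(f"Critical density:  {rho_c:.1e} kg/m^3")     # ~9e-27 kg/m^3, a few protons per cubic metre
    print(f"Dark energy (74%): {0.74 * rho_c:.1e} kg/m^3")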

Large-Scale Structure

The theory of large-scale structure, which governs the formation of structure in the universe (stars, quasars, galaxies and galaxy clusters), also suggests that the density of matter (baryonic plus dark) in the universe is only about 30% of the critical density.

Late-time Integrated Sachs-Wolfe Effect

Accelerated cosmic expansion causes gravitational potential wells and hills to flatten as photons pass through them, producing cold spots and hot spots on the CMB aligned with vast supervoids and superclusters. This so-called late-time Integrated Sachs-Wolfe effect (ISW) is a direct signal of dark energy in a flat universe,[9] and has recently been detected at high significance by Ho et al.[10] and Giannantonio et al.[11] In May 2008, Granett, Neyrinck & Szapudi found arguably the clearest evidence yet for the ISW effect,[12] imaging the average imprint of superclusters and supervoids on the CMB.

Nature of dark energy

The exact nature of this dark energy is a matter of speculation. It is known to be very homogeneous, not very dense, and not known to interact through any of the fundamental forces other than gravity. Since it is not very dense (roughly 10^−29 grams per cubic centimeter), it is hard to imagine experiments to detect it in the laboratory. Dark energy can only have such a profound impact on the universe, making up 74% of all energy, because it uniformly fills otherwise empty space. The two leading models are quintessence and the cosmological constant. Both models share the characteristic that dark energy must have negative pressure.

Negative pressure

Independently from its actual nature, dark energy would need to have a strong negative pressure in order to explain the observed acceleration in the expansion rate of the universe.

According to General Relativity, the pressure within a substance contributes to its gravitational attraction for other things just as its mass density does. This happens because the physical quantity that causes matter to generate gravitational effects is the Stress-energy tensor, which contains both the energy (or matter) density of a substance and its pressure and viscosity.

In the Friedmann-Lemaître-Robertson-Walker metric, it can be shown that a strong constant negative pressure throughout the universe causes an acceleration of the expansion if the universe is already expanding, or a deceleration of the contraction if the universe is already contracting. More exactly, the second derivative of the universe's scale factor, ä, is positive if the equation of state of the universe is such that w < −1/3.
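
The w < −1/3 condition follows from the Friedmann acceleration equation; the standard one-line argument, written in LaTeX here for completeness:

    \[
    \frac{\ddot a}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^2}\right),
    \qquad p = w\,\rho c^2
    \;\;\Longrightarrow\;\;
    \ddot a > 0 \iff 1 + 3w < 0 \iff w < -\tfrac{1}{3} \quad (\text{for } \rho > 0).
    \]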

This accelerating expansion effect is sometimes labeled "gravitational repulsion", which is a colorful but possibly confusing expression. In fact, a negative pressure does not influence the gravitational interaction between masses (which remains attractive) but rather alters the overall evolution of the universe at the cosmological scale, typically resulting in the accelerating expansion of the universe despite the attraction among the masses present in it.

Cosmological constant

Main article: Cosmological constant

For more details on this topic, see Equation of state (cosmology).

The simplest explanation for dark energy is that it is simply the "cost of having space": that is, a volume of space has some intrinsic, fundamental energy. This is the cosmological constant, sometimes called Lambda (hence Lambda-CDM model) after the Greek letter Λ, the symbol used to represent this quantity mathematically. Since energy and mass are related by E = mc², Einstein's theory of general relativity predicts that this energy will have a gravitational effect. It is sometimes called a vacuum energy because it is the energy density of empty vacuum. In fact, most theories of particle physics predict vacuum fluctuations that would give the vacuum this sort of energy. This is related to the Casimir effect, in which there is a small suction into regions where virtual particles are geometrically inhibited from forming (e.g. between plates with tiny separation). The cosmological constant is estimated by cosmologists to be on the order of 10^−29 g/cm³, or about 10^−120 in reduced Planck units. However, particle physics predicts a natural value of 1 in reduced Planck units, a large discrepancy that still lacks an explanation.

The cosmological constant has negative pressure equal to its energy density and so causes the expansion of the universe to accelerate. The reason a cosmological constant has negative pressure can be seen from classical thermodynamics: energy must be lost from inside a container to do work on the container. A change in volume dV requires work done equal to a change of energy −p dV, where p is the pressure. But the amount of energy in a box of vacuum energy actually increases when the volume increases (dV is positive), because the energy is equal to ρV, where ρ (rho) is the energy density of the cosmological constant. Therefore, p is negative and, in fact, p = −ρ.
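
The thermodynamic argument can be written out in a single line (a standard derivation, added here for clarity):

    \[
    dE = -p\,dV, \qquad E = \rho V \ \text{with}\ \rho\ \text{constant}
    \;\Rightarrow\; \rho\,dV = -p\,dV
    \;\Rightarrow\; p = -\rho .
    \]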

A major outstanding problem is that most quantum field theories predict a huge cosmological constant from the energy of the quantum vacuum, more than 100 orders of magnitude too large.[13] This would need to be cancelled almost, but not exactly, by an equally large term of the opposite sign. Some supersymmetric theories require a cosmological constant that is exactly zero, which does not help. The present scientific consensus amounts to extrapolating the empirical evidence where it is relevant to predictions, and fine-tuning theories until a more elegant solution is found. Technically, this amounts to checking theories against macroscopic observations. Unfortunately, as the known error-margin in the constant predicts the fate of the universe more than its present state, many such "deeper" questions remain unknown.

Another problem arises with the inclusion of the cosmological constant in the standard model: the appearance of solutions with regions of discontinuities (see classification of discontinuities for three examples) at low matter density.[14] Discontinuity also affects the past sign of the pressure assigned to the cosmological constant, which changes from the current negative pressure to attractive as one looks back towards the early Universe. A systematic, model-independent evaluation of the supernovae data supporting inclusion of the cosmological constant in the standard model indicates that these data suffer from systematic error. The supernovae data are not overwhelming evidence for an accelerating expansion of the Universe, which may simply be gliding.[15] A numerical evaluation of WMAP and supernovae data for evidence that our local group exists in a local void with poor matter density compared to other locations uncovered a possible conflict in the analysis used to support the cosmological constant.[16] These findings should be considered shortcomings of the standard model, but only when a term for vacuum energy is included.

In spite of its problems, the cosmological constant is in many respects the most economical solution to the problem of cosmic acceleration. One number successfully explains a multitude of observations. Thus, the current standard model of cosmology, the Lambda-CDM model, includes the cosmological constant as an essential feature.

Quintessence

Main article: Quintessence (physics)

In quintessence models of dark energy, the observed acceleration of the scale factor is caused by the potential energy of a dynamical field, referred to as the quintessence field. Quintessence differs from the cosmological constant in that it can vary in space and time. In order for it not to clump and form structure like matter, the field must be very light so that it has a large Compton wavelength.

No evidence of quintessence is yet available, but it has not been ruled out either. It generally predicts a slightly slower acceleration of the expansion of the universe than the cosmological constant. Some scientists think that the best evidence for quintessence would come from violations of Einstein's equivalence principle and variation of the fundamental constants in space or time. Scalar fields are predicted by the standard model and string theory, but an analogous problem to the cosmological constant problem (or the problem of constructing models of cosmic inflation) occurs: renormalization theory predicts that scalar fields should acquire large masses.

The cosmic coincidence problem asks why the cosmic acceleration began when it did. If cosmic acceleration began earlier in the universe, structures such as galaxies would never have had time to form and life, at least as we know it, would never have had a chance to exist. Proponents of the anthropic principle view this as support for their arguments. However, many models of quintessence have a so-called tracker behavior, which solves this problem. In these models, the quintessence field has a density which closely tracks (but is less than) the radiation density until matter-radiation equality, which triggers quintessence to start behaving as dark energy, eventually dominating the universe. This naturally sets the low energy scale of the dark energy.

In 2004, when scientists fit the evolution of dark energy to the cosmological data, they found that the equation of state had possibly crossed the cosmological constant boundary (w = −1) from above to below. A no-go theorem has been proved showing that this scenario requires dark energy models with at least two degrees of freedom; it is the so-called Quintom scenario.

Some special cases of quintessence are phantom energy, in which the energy density of quintessence actually increases with time, and k-essence (short for kinetic quintessence) which has a non-standard form of kinetic energy. They can have unusual properties: phantom energy, for example, can cause a Big Rip.

Alternative ideas

Some theorists think that dark energy and cosmic acceleration are a failure of general relativity on very large scales, larger than superclusters. It is a tremendous extrapolation to think that our law of gravity, which works so well in the solar system, should work without correction on the scale of the universe. Most attempts at modifying general relativity, however, have turned out to be either equivalent to theories of quintessence or inconsistent with observations. It is of interest to note that if the equation for gravity were to approach r instead of r² at large, intergalactic distances, then the acceleration of the expansion of the universe becomes a mathematical artifact, negating the need for the existence of dark energy.

Alternative ideas for dark energy have come from string theory, brane cosmology and the holographic principle, but they have not yet proved as compelling as quintessence and the cosmological constant. Regarding string theory, an article in the journal Nature reported:

String theories, popular with many particle physicists, make it possible, even desirable, to think that the observable universe is just one of 10^500 universes in a grander multiverse, says [Leonard Susskind, a cosmologist at Stanford University in California]. The vacuum energy will have different values in different universes, and in many or most it might indeed be vast. But it must be small in ours because it is only in such a universe that observers such as ourselves can evolve.[13]

Paul Steinhardt in the same article criticizes string theory's explanation of dark energy stating "...Anthropics and randomness don't explain anything... I am disappointed with what most theorists are willing to accept".[13]

In a rather radical departure, an article by Paul Gough in the open-access journal Entropy put forward the suggestion that information energy must make a significant contribution to dark energy, and that this can be shown by reference to the equation of state of information in the universe.[17]

Yet another, "radically conservative" class of proposals aims to explain the observational data by a more refined use of established theories rather than through the introduction of dark energy, focusing, for example, on the gravitational effects of density inhomogeneities[18][19][20] or on consequences of electroweak symmetry breaking in the early universe.

Implications for the fate of the universe

Cosmologists estimate that the acceleration began roughly 5 billion years ago. Before that, it is thought that the expansion was decelerating, due to the attractive influence of dark matter and baryons. The density of dark matter in an expanding universe decreases more quickly than that of dark energy, and eventually the dark energy dominates. Specifically, when the volume of the universe doubles, the density of dark matter is halved but the density of dark energy is nearly unchanged (it is exactly constant in the case of a cosmological constant).
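
A small sketch of that scaling (round numbers of my own; only the a^−3 dilution of matter and the constancy of the cosmological constant come from the text): working backwards from today's density fractions, one can estimate how large the universe was when the two densities were equal.

    # Matter density dilutes as rho_m ~ a^-3, while a cosmological constant stays fixed.
    # Find the scale factor a (a = 1 today) at which the two densities were equal.
    rho_m0, rho_de = 0.26, 0.74      # today's approximate density fractions (round numbers)

    a_eq = (rho_m0 / rho_de) ** (1.0 / 3.0)   # solve rho_m0 / a^3 = rho_de
    print(f"Densities equal at a ~ {a_eq:.2f}, when the universe was ~{a_eq:.0%} of its present size")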

If the acceleration continues indefinitely, the ultimate result will be that galaxies outside the local supercluster will move beyond the cosmic horizon: they will no longer be visible, because their line-of-sight velocity becomes greater than the speed of light.[21] This is not a violation of special relativity, and the effect cannot be used to send a signal between them. (Actually there is no way to even define "relative speed" in a curved spacetime. Relative speed and velocity can only be meaningfully defined in flat spacetime or in sufficiently small (infinitesimal) regions of curved spacetime). Rather, it prevents any communication between them as the objects pass out of contact. The Earth, the Milky Way and the Virgo supercluster, however, would remain virtually undisturbed while the rest of the universe recedes. In this scenario, the local supercluster would ultimately suffer heat death, just as was thought for the flat, matter-dominated universe, before measurements of cosmic acceleration.

There are some very speculative ideas about the future of the universe. One suggests that phantom energy causes divergent expansion, which would imply that the effective force of dark energy continues growing until it dominates all other forces in the universe. Under this scenario, dark energy would ultimately tear apart all gravitationally bound structures, including galaxies and solar systems, and eventually overcome the electrical and nuclear forces to tear apart atoms themselves, ending the universe in a "Big Rip". On the other hand, dark energy might dissipate with time, or even become attractive. Such uncertainties leave open the possibility that gravity might yet rule the day and lead to a universe that contracts in on itself in a "Big Crunch". Some scenarios, such as the cyclic model, suggest this could be the case. While these ideas are not supported by observations, they are not ruled out. Measurements of acceleration are crucial to determining the ultimate fate of the universe in big bang theory.

History

The cosmological constant was first proposed by Einstein as a mechanism to obtain a stable solution of the gravitational field equation that would lead to a static universe, effectively using dark energy to balance gravity. Not only was the mechanism an inelegant example of fine-tuning, it was soon realized that Einstein's static universe would actually be unstable because local inhomogeneities would ultimately lead to either the runaway expansion or contraction of the universe. The equilibrium is unstable: if the universe expands slightly, then the expansion releases vacuum energy, which causes yet more expansion. Likewise, a universe which contracts slightly will continue contracting. These sorts of disturbances are inevitable, due to the uneven distribution of matter throughout the universe. More importantly, observations made by Edwin Hubble showed that the universe appears to be expanding and not static at all. Einstein famously referred to his failure to predict the idea of a dynamic universe, in contrast to a static universe, as his greatest blunder. Following this realization, the cosmological constant was largely ignored as a historical curiosity.

Alan Guth proposed in the 1970s that a negative pressure field, similar in concept to dark energy, could drive cosmic inflation in the very early universe. Inflation postulates that some repulsive force, qualitatively similar to dark energy, resulted in an enormous and exponential expansion of the universe slightly after the Big Bang. Such expansion is an essential feature of most current models of the Big Bang. However, inflation must have occurred at a much higher energy density than the dark energy we observe today and is thought to have completely ended when the universe was just a fraction of a second old. It is unclear what relation, if any, exists between dark energy and inflation. Even after inflationary models became accepted, the cosmological constant was thought to be irrelevant to the current universe.

The term "dark energy" was coined by Michael Turner in 1998.[22] By that time, the missing mass problem of big bang nucleosynthesis and large scale structure was established, and some cosmologists had started to theorize that there was an additional component to our universe. The first direct evidence for dark energy came from supernova observations of accelerated expansion, in Riess
et al.[6] and later confirmed in Perlmutter
et al...[7] This resulted in the Lambda-CDM model, which as of 2006 is consistent with a series of increasingly rigorous cosmological observations, the latest being the 2005 Supernova Legacy Survey. First results from the SNLS reveal that the average behavior (i.e., equation of state) of dark energy behaves like Einstein's cosmological constant to a precision of 10 per cent.[23] Recent results from the Hubble Space Telescope Higher-Z Team indicate that dark energy has been present for at least 9 billion years and during the period preceding cosmic acceleration.

References

1. P. J. E. Peebles and Bharat Ratra (2003). "The cosmological constant and dark energy". Reviews of Modern Physics 75: 559–606. doi:10.1103/RevModPhys.75.559. http://www.arxiv.org/abs/astro-ph/0207347
2. Sean Carroll (2001). "The cosmological constant". Living Reviews in Relativity 4: 1. http://relativity.livingreviews.org/Articles/lrr-2001-1/index.html. Retrieved 2006-09-28.
3. T. E. Bearden. "Dark Matter or Dark Energy?"
4. Philippe Jetzer and Norbert Straumann (2008). "Josephson Junctions and Dark Energy".
5. L. Baum and P. H. Frampton (2007). "Turnaround in Cyclic Cosmology". Physical Review Letters 98: 071301. doi:10.1103/PhysRevLett.98.071301. http://www.arxiv.org/abs/hep-th/0610213
6. Adam G. Riess et al. (Supernova Search Team) (1998). "Observational evidence from supernovae for an accelerating universe and a cosmological constant". Astronomical Journal 116: 1009–38. doi:10.1086/300499. http://www.arxiv.org/abs/astro-ph/9805201
7. S. Perlmutter et al. (The Supernova Cosmology Project) (1999). "Measurements of Omega and Lambda from 42 high-redshift supernovae". Astrophysical Journal 517: 565–86. doi:10.1086/307221. http://www.arxiv.org/abs/astro-ph/9812133
8. D. N. Spergel et al. (WMAP collaboration) (March 2006). "Wilkinson Microwave Anisotropy Probe (WMAP) three year results: implications for cosmology". http://lambda.gsfc.nasa.gov/product/map/current/map_bibliography.cfm
9. R. G. Crittenden and N. Turok (1996). "Looking for Lambda with the Rees-Sciama Effect". Physical Review Letters 76: 575.
10. Ho et al. (2008). "Correlation of CMB with large-scale structure: I. ISW Tomography and Cosmological Implications". Physical Review D, submitted.
11. Giannantonio et al. (2008). "Combined analysis of the integrated Sachs-Wolfe effect and cosmological implications". Physical Review D, in press.
12. Granett, Neyrinck and Szapudi (2008). "An Imprint of Super-Structures on the Microwave Background due to the Integrated Sachs-Wolfe Effect". Astrophysical Journal Letters, submitted.
13. Hogan, Jenny (2007). "Unseen Universe: Welcome to the dark side". Nature 448 (7151): 240–245. doi:10.1038/448240a
14. A. M. Öztas and M. L. Smith (2006). "Elliptical Solutions to the Standard Cosmology Model with Realistic Values of Matter Density". International Journal of Theoretical Physics 45: 925–936. doi:10.1007/s10773-006-9082-7
15. D. J. Schwarz and B. Weinhorst (2007). "(An)isotropy of the Hubble diagram: comparing hemispheres". Astronomy & Astrophysics 474: 717–729. doi:10.1051/0004-6361:20077998
16. Stephon Alexander, Tirthabir Biswas, Alessio Notari, and Deepak Vaid (2008). "Local Void vs Dark Energy: Confrontation with WMAP and Type Ia Supernovae". arXiv:0712.0370v2 [astro-ph]
17. Gough, Paul (2008). "Information Equation of State". Entropy 10: 150–159. doi:10.3390/entropy-e10030150. http://www.mdpi.com/1099-4300/10/3/150/pdf
18. Wiltshire, David L. (2007). "Exact Solution to the Averaging Problem in Cosmology". Physical Review Letters 99: 251101. doi:10.1103/PhysRevLett.99.251101
19. "Dark energy as a mirage". arXiv:0708.2943v1. HIP-2007-64/TH
20. Clifton, Timothy; Pedro Ferreira (April 2009). "Does Dark Energy Really Exist?". Scientific American 300 (4): 48–55. http://www.sciam.com/article.cfm?id=does-dark-energy-exist. Retrieved April 30, 2009.
21. http://www.sciam.com/article.cfm?id=the-end-of-cosmology
22. The term "dark energy" first appears in an article Turner wrote with his student at the time, Dragan Huterer, "Prospects for Probing the Dark Energy via Supernova Distance Measurements", posted to the arXiv.org e-print archive in August 1998 and published as Huterer and Turner, Physical Review D 60, 081301 (1999), although the manner in which the term is treated there suggests it was already in general use. Cosmologist Saul Perlmutter has credited Turner with coining the term in an article they wrote together with Martin White of the University of Illinois for Physical Review Letters, where it is introduced in quotation marks as if it were a neologism.
23. Pierre Astier et al. (Supernova Legacy Survey) (2006). "The Supernova Legacy Survey: measurement of Omega(M), Omega(Lambda) and w from the first year data set". Astronomy and Astrophysics 447: 31–48. doi:10.1051/0004-6361:20054185. http://www.arxiv.org/abs/astro-ph/0510447
