Physics of the microworld and megaworld. Atomic physics

· The path of microscopy

· The limit of microscopy

· Invisible radiations

· Electrons and electron optics

· Electrons are waves!?

· The structure of the electron microscope

· Objects of electron microscopy

· Types of electron microscopes

· Features of working with an electron microscope

· Ways to overcome the diffraction limit of electron microscopy

· References

· Pictures


Notes:

1. The symbol ^ denotes raising to a power. For example, 2^3 means "2 to the power of 3".

2. The symbol e denotes a number written in exponential form. For example, 2 e3 means "2 times 10 to the 3rd power".

3. All pictures are on the last page.

4. Due to the use of not entirely “recent” literature, the data in this abstract are not particularly “fresh”.

The eye could never see the sun,

were the eye not itself sun-like.

Goethe.

The path of microscopy.

When the first microscope was created at the turn of the 17th century, hardly anyone (even its inventor) could have imagined the future successes and numerous applications of microscopy. Looking back, we see that this invention marked something more than the creation of a new device: for the first time, humans were able to see the previously invisible.

The invention of the telescope, which made it possible to see the invisible in the world of planets and stars, dates from roughly the same time. Together, the microscope and the telescope represented a revolution not only in the means of studying nature but in the method of research itself.

Indeed, the natural philosophers of antiquity observed nature, learning about it only what the eye saw, the skin felt, and the ear heard. One can only marvel at how much correct information about the surrounding world they obtained using "naked" senses, without conducting special experiments as is done now. At the same time, along with accurate facts and brilliant guesses, how many false "observations", statements, and conclusions the scientists of antiquity and the Middle Ages left us!

Only much later was a method of studying nature found, which consists in setting up consciously planned experiments, the purpose of which is to test assumptions and clearly formulated hypotheses. Francis Bacon, one of its creators, expressed the features of this research method in the following, now famous, words: “To conduct an experiment is to interrogate nature.” The very first steps of the experimental method, according to modern ideas, were modest, and in most cases, experimenters of that time made do without any devices that “enhance” the senses. The invention of the microscope and telescope represented a tremendous expansion in the possibilities of observation and experiment.

Already the first observations, carried out with what is by modern standards the simplest and most imperfect technology, discovered "a whole world in a drop of water." It turned out that familiar objects look completely different when examined through a microscope: surfaces that are smooth to the eye and to the touch turn out to be rough, and myriads of tiny organisms move in "clean" water. In the same way, the first astronomical observations with telescopes let people see the familiar world of planets and stars anew: the surface of the Moon, sung by poets of all generations, turned out to be mountainous and dotted with numerous craters, and Venus was found to show a change of phases, just like the Moon.

In time, these simple observations would give birth to independent fields of science: microscopy and observational astronomy. Years would pass, and each of these fields would branch extensively, yielding a variety of applications in biology, medicine, technology, chemistry, physics, and navigation.

Modern microscopes, which in contrast to electron microscopes we will call optical, are refined instruments that provide high magnification with high resolution. Resolution is characterized by the smallest distance at which two adjacent structural elements can still be seen as separate. However, as research has shown, optical microscopy has practically reached the fundamental limit of its capabilities because of diffraction and interference, phenomena caused by the wave nature of light.

The degree of monochromaticity and coherence is an important characteristic of waves of any nature (electromagnetic, acoustic, etc.). Monochromatic oscillations are oscillations consisting of sine waves of one specific frequency. Picturing an oscillation as a simple sinusoid with constant amplitude, frequency, and phase is an idealization: strictly speaking, nature contains no oscillations or waves described absolutely exactly by a sine wave. However, as studies have shown, real oscillations and waves can approach an ideal sinusoid with greater or lesser accuracy (they can have a greater or lesser degree of monochromaticity). Oscillations and waves of complex shape can be represented as a set of sinusoidal oscillations and waves. This mathematical operation is in effect performed by a prism, which decomposes sunlight into a color spectrum.

Monochromatic waves, including light waves, of the same frequency (under certain conditions!) can interact with each other in such a way that, as a result, “light turns into darkness” or, as they say, the waves can interfere. During interference, local “amplification and suppression” of waves by each other occur. In order for the wave interference pattern to remain unchanged over time (for example, when viewing it with the eye or photographing), it is necessary that the waves be coherent with each other (two waves are coherent with each other if they give a stable interference pattern, which corresponds to the equality of their frequencies and constant phase shift).

If obstacles are placed in the path of wave propagation, they will significantly influence the direction of propagation of these waves. Such obstacles can be the edges of holes in screens, opaque objects, as well as any other types of inhomogeneities in the path of wave propagation. In particular, objects that are transparent (for a given radiation), but differ in refractive index, and therefore in the speed of passage of waves inside them, can also be inhomogeneities. The phenomenon of changing the direction of propagation of waves when they pass near obstacles is called diffraction. Diffraction is usually accompanied by interference phenomena.

The limit of microscopy.

The image obtained with any optical system is the result of interference between different parts of the light wave passing through the system. In particular, it is known that the restriction of a light wave by the entrance pupil of the system (the edges of the lenses, mirrors, and diaphragms that make up the optical system) and the associated phenomenon of diffraction cause a luminous point to be imaged as a diffraction disk. This circumstance limits the ability to distinguish fine details in the image formed by the optical system. The image of, for example, an infinitely distant light source (a star) produced by diffraction at a round pupil (the rim of a telescope objective) is a rather complex picture (see Fig. 1). In this picture one sees a set of concentric light and dark rings. The distribution of illumination from the center of the picture to its edges is described by rather complicated formulas given in optics courses. However, the position of the first (from the center) dark ring obeys a simple law. Let us denote by D the diameter of the entrance pupil of the optical system and by λ the wavelength of the light sent by the infinitely distant source.

Fig. 1. Diffraction image of a luminous point (the so-called Airy disk).

If we denote by φ the angle at which the radius of the first dark ring is seen, then, as is proven in optics,

sin φ ≈ 1.22 (λ/D).

Thus, as a result of limiting the wavefront to the edges of the optical system (the entrance pupil), instead of imaging a luminous point corresponding to an object at infinity, we get a set of diffraction rings. Naturally, this phenomenon limits the ability to distinguish between two closely located point light sources. Indeed, in the case of two distant sources, for example two stars located very close to each other in the vault of heaven, two systems of concentric rings are formed in the observation plane. Under certain conditions, they can overlap and distinguishing between sources becomes impossible. It is no coincidence that, in accordance with the “recommendation” of the formula given above, they strive to build astronomical telescopes with large entrance pupil sizes. The resolution limit at which two closely spaced light sources can be observed is determined as follows: for definiteness, the resolution limit is taken to be the position of the diffraction images of two point light sources at which the first dark ring created by one of the sources coincides with the center of the light spot, created by another source.
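To put the formula to work, here is a minimal numeric sketch in Python (the wavelength and pupil diameter below are illustrative values of our choosing, not figures from the text) estimating the diffraction-limited angular resolution:

```python
import math

# Rayleigh criterion: the first dark ring of the Airy pattern lies at
# sin(phi) ~ 1.22 * lambda / D; for small angles, phi ~ 1.22 * lambda / D.

wavelength = 550e-9   # green light, m (illustrative value)
diameter = 0.10       # entrance pupil diameter, m (illustrative value)

phi = 1.22 * wavelength / diameter    # resolution limit, radians
arcsec = math.degrees(phi) * 3600     # the same angle in seconds of arc

print(f"Angular resolution: {phi:.2e} rad = {arcsec:.2f} arcsec")
# Enlarging D shrinks phi, which is why large entrance pupils are sought.
```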


MATTER IN THE MICROWORLD

According to modern scientific views, all natural objects are ordered, structured, hierarchically organized systems. Using a systems approach, natural science does not simply identify types of material systems, but reveals their connections and relationships. There are three levels of the structure of matter.

Macroworld: the world of macro-objects whose dimensions are comparable to the scale of human experience; spatial quantities are expressed in millimeters, centimeters, and kilometers, and time in seconds, minutes, hours, and years.

Microworld: the world of extremely small, not directly observable microobjects, whose spatial dimensions range from 10^-8 to 10^-16 cm and whose lifetimes range from infinity down to 10^-24 s.

Megaworld: the world of enormous cosmic scales and speeds, where distances are measured in light years and the lifetimes of space objects in millions and billions of years.

And although these levels have their own specific laws, the micro-, macro- and mega-worlds are closely interconnected.

Microworld: concepts of modern physics

Quantum mechanical concept of describing the microworld. While studying microparticles, scientists encountered a situation that is paradoxical from the point of view of classical science: the same objects exhibited both wave and corpuscular properties. The first step in this direction was taken by the German physicist M. Planck (1858-1947).

In the process of studying the thermal radiation of an "absolutely black" body, M. Planck came to the stunning conclusion that in radiation processes energy can be given off or absorbed not continuously and not in arbitrary amounts, but only in certain indivisible portions: quanta. The magnitude of these smallest portions of energy is determined by the frequency of the corresponding radiation and a universal natural constant, which M. Planck introduced into science under the symbol h, in the formula that later became famous: E = hν (where E is the quantum of energy and ν is the frequency).
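As a quick numeric illustration of E = hν (a sketch with standard constants; the chosen frequency is ours, not from the text):

```python
h = 6.626e-34   # Planck constant, J*s
nu = 5.0e14     # frequency of green-ish visible light, Hz (illustrative)

E = h * nu      # energy of one quantum, E = h * nu
print(f"E = {E:.3e} J = {E / 1.602e-19:.2f} eV")
# A single visible-light quantum carries roughly 2 eV (~3.3e-19 J).
```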

Planck reported the resulting formula on December 14, 1900, at a meeting of the Berlin Physical Society. In the history of physics, this day is considered the birthday of quantum theory and of all atomic physics; it marks the beginning of a new era of natural science.

The great German theoretical physicist A. Einstein (1879-1955) in 1905 transferred the idea of quantized energy from thermal radiation to radiation in general and thus substantiated the new doctrine of light. The idea of light as a rain of fast-moving quanta was an extremely bold one, and few initially believed it correct. M. Planck himself, who applied his quantum formula only to the laws of black-body thermal radiation that he had considered, did not agree with extending the quantum hypothesis to a quantum theory of light.

A. Einstein suggested that a natural law of universal character was at stake and came to the conclusion that the corpuscular structure of light should be recognized. A. Einstein's quantum theory of light held that light is a wave phenomenon continuously propagating in space, and at the same time that light energy has a discontinuous structure. Light can be considered a stream of light quanta, or photons. Their energy is determined by Planck's elementary quantum of action and the corresponding frequency. Light of different colors consists of light quanta of different energies.

It became possible to visualize the phenomenon of the photoelectric effect, whose essence is the knocking out of electrons from a substance under the action of electromagnetic waves. The photoelectric effect was discovered in the second half of the 19th century, and in 1888-1890 it was systematically studied by the Russian physicist Alexander Grigorievich Stoletov. Externally, the effect manifested itself in the fact that when a light flux falls on a negatively charged metal plate, an electroscope connected to the plate shows the presence of an electric current. However, current flows only in a closed circuit, and the "metal plate - electroscope" circuit is not closed. A. Einstein showed that the circuit is effectively closed by the flow of electrons knocked out of the plate's surface by photons.

Experiments have shown that the presence or absence of the photoelectric effect is determined by the frequency of the incident wave. If we assume that each electron is ejected by one photon, then the following becomes clear: the effect occurs only if the energy of the photon, and therefore its frequency, is high enough to overcome the binding forces between the electron and matter.
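That threshold condition can be written as hν ≥ W, where W is the binding ("work function") energy, and the surplus goes into the electron's kinetic energy (Einstein's photoelectric equation hν = W + E_k). A minimal sketch, with an illustrative work function of our choosing:

```python
h = 6.626e-34    # Planck constant, J*s
eV = 1.602e-19   # joules per electron volt

W = 4.3 * eV     # work function, J (typical of a metal such as zinc; illustrative)
nu = 1.5e15      # ultraviolet light, Hz (illustrative)

E_photon = h * nu
if E_photon >= W:
    E_kinetic = E_photon - W   # Einstein's photoelectric equation: h*nu = W + E_k
    print(f"Electron ejected with kinetic energy {E_kinetic / eV:.2f} eV")
else:
    print("No photoelectric effect: photon frequency is below threshold")

print(f"Threshold frequency: {W / h:.2e} Hz")
# Visible light (~5e14 Hz) would fall below this threshold, however intense.
```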

Fig. Photoelectric effect diagram.

For this work, Einstein received the Nobel Prize in Physics in 1922. His theory was confirmed in the experiments of the American physicist R. A. Millikan (1868-1953). The phenomenon discovered in 1923 by the American physicist A. H. Compton (1892-1962) (the Compton effect), observed when very hard X-rays act on atoms with free electrons, once again and finally confirmed the quantum theory of light.

A paradoxical situation arose: it was discovered that light behaves not only as a wave but also as a stream of corpuscles. In experiments on diffraction and interference its wave properties appear; in the photoelectric effect, its corpuscular properties. The main characteristic of its discreteness, the portion of energy inherent in it, was calculated through a purely wave characteristic, the frequency ν (E = hν). Thus it was discovered that describing fields requires not only a continuum approach but a corpuscular one as well.

The idea of how matter should be approached did not remain unchanged either: in 1924 the French physicist Louis de Broglie (1892-1987) put forward the idea of the wave properties of matter, arguing that wave and corpuscular concepts are needed not only in the theory of light but also in the theory of matter. He claimed that wave properties, along with corpuscular ones, belong to all types of matter: electrons, protons, atoms, molecules, and even macroscopic bodies. According to de Broglie, to any body of mass m moving with speed v there corresponds a wave of length

λ = h / (mv).

In fact, a similar formula was known earlier, but only in relation to light quanta - photons.
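A short sketch of the de Broglie relation λ = h/(mv), comparing an electron with a macroscopic body (masses and speeds are illustrative values of our choosing):

```python
h = 6.626e-34   # Planck constant, J*s

def de_broglie(m, v):
    """Wavelength lambda = h / (m * v) for a body of mass m at speed v."""
    return h / (m * v)

lam_electron = de_broglie(9.11e-31, 2.2e6)   # electron at an atomic-scale speed
lam_ball = de_broglie(0.1, 10.0)             # a 100 g ball thrown at 10 m/s

print(f"electron: {lam_electron:.2e} m")   # ~3e-10 m, comparable to an atom
print(f"ball:     {lam_ball:.2e} m")       # ~7e-34 m, far beyond any observation
```

This is why wave effects dominate for electrons yet remain utterly invisible for macroscopic bodies.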

In 1926, the Austrian physicist E. Schrödinger (1887-1961) found a mathematical equation determining the behavior of matter waves, the so-called Schrödinger equation. The English physicist P. Dirac (1902-1984) generalized it. The bold thought of L. de Broglie about the universal "dualism" of particles and waves made it possible to construct a theory that covers the properties of matter and light in their unity.

The most convincing evidence that de Broglie was right was the discovery of electron diffraction in 1927 by the American physicists C. Davisson and L. Germer. Subsequently, experiments detected the diffraction of neutrons, atoms, and even molecules. More important still was the discovery of new elementary particles predicted on the basis of the formulas of the developed wave mechanics.

Thus, the two different approaches to two different forms of matter, the corpuscular and the wave approach, were replaced by a single approach: wave-particle duality. Recognition of wave-particle duality has become universal in modern physics: any material object is characterized by the presence of both corpuscular and wave properties.

The quantum mechanical description of the microworld rests on the uncertainty relation established by the German physicist W. Heisenberg (1901-1976) and the principle of complementarity of the Danish physicist N. Bohr (1885-1962).

The essence of W. Heisenberg's uncertainty relation is that it is impossible to determine with equal accuracy the complementary characteristics of a microparticle, for example a particle's coordinate and its momentum. If an experiment is performed that shows exactly where the particle is at a given moment, its motion is disturbed to such an extent that the particle cannot be found afterwards. Conversely, an accurate measurement of velocity makes it impossible to determine the particle's location.

From the point of view of classical mechanics the uncertainty relation seems absurd. But we humans live in the macroworld and, in principle, cannot build a visual model that would be adequate to the microworld. The uncertainty relation is an expression of the impossibility of observing the microworld without disturbing it. In the corpuscular description, a measurement is made to obtain an accurate value of the energy and momentum of a microparticle, for example in electron scattering. In experiments aimed at precise determination of location, on the contrary, the wave explanation is used, in particular when electrons pass through thin plates or when the deflection of rays is observed.
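Quantitatively, the relation reads Δx · Δp ≥ ħ/2, with Δp = m·Δv. A minimal sketch (our illustrative numbers) showing why the restriction is crushing for an electron but negligible for a macroscopic body:

```python
hbar = 1.055e-34   # reduced Planck constant, J*s

def min_dv(m, dx):
    """Lower bound on velocity spread from dx * (m * dv) >= hbar / 2."""
    return hbar / (2 * m * dx)

# Electron confined to an atom-sized region (~1e-10 m)
print(f"electron: dv >= {min_dv(9.11e-31, 1e-10):.2e} m/s")   # ~6e5 m/s
# A 1 g bead localized to within a micrometer
print(f"bead:     dv >= {min_dv(1e-3, 1e-6):.2e} m/s")        # ~5e-26 m/s
```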

A fundamental principle of quantum mechanics is also the principle of complementarity, to which N. Bohr gave the following formulation: "The concepts of particles and waves complement each other and at the same time contradict each other; they are complementary pictures of what is happening."

Thus, the corpuscular and wave pictures must complement each other, i.e., be complementary. Only by taking both aspects into account can one obtain an overall picture of the microworld. There are two classes of devices: in some, quantum objects behave like waves; in others, like particles. M. Born (1882-1970) noted that waves and particles are "projections" of physical reality onto the experimental situation.

Atomistic concept of the structure of matter. The atomistic hypothesis of the structure of matter, put forward in antiquity by Democritus, was revived at the beginning of the 19th century by the chemist J. Dalton. In physics, the concept of atoms as the last, indivisible structural elements of matter came from chemistry.

Properly physical research on atoms began at the end of the 19th century, when the French physicist A. H. Becquerel (1852-1908) discovered the phenomenon of radioactivity. The study of radioactivity was continued by the French physicists, the spouses P. Curie (1859-1906) and M. Sklodowska-Curie (1867-1934), who discovered the new radioactive elements polonium and radium.

The history of the study of atomic structure began in 1897 with the discovery of the electron by the English physicist J. J. Thomson (1856-1940). Since electrons have a negative charge and the atom as a whole is electrically neutral, the presence of a positively charged particle was assumed. The mass of the electron was calculated to be 1/1836 of the mass of the positively charged particle.

Proceeding from this mass of the positively charged particle, the English physicist W. Thomson (1824-1907, from 1892 Lord Kelvin) proposed the first model of the atom in 1902: the positive charge is distributed over a fairly large region, with electrons interspersed in it like "raisins in a pudding." However, this model did not withstand experimental testing.

In 1908, E. Marsden and H. Geiger, co-workers of the English physicist E. Rutherford, conducted experiments on the passage of alpha particles through thin metal foils and found that almost all the particles pass through as if there were no obstacle, while only about 1/10,000 of them are strongly deflected. E. Rutherford (1871-1937) concluded that these particles strike an obstacle: the positively charged nucleus of the atom, whose size (about 10^-12 cm) is very small compared with the size of the atom (10^-8 cm) but in which almost the entire mass of the atom is concentrated.

The atomic model proposed by E. Rutherford in 1911 resembled the solar system: at the center is the atomic nucleus, and around it electrons move in their orbits. An irresolvable contradiction of this model was that, to remain stable, the electrons must orbit the nucleus; at the same time, according to the laws of electrodynamics, moving electrons must radiate electromagnetic energy. In that case the electrons would very quickly lose all their energy and fall onto the nucleus.

The next contradiction is related to the fact that the emission spectrum of an electron must be continuous, since the electron, approaching the nucleus, would change its frequency. However, atoms emit light only at certain frequencies. Rutherford's planetary model of the atom turned out to be incompatible with the electrodynamics of J. C. Maxwell.

In 1913, the great Danish physicist N. Bohr put forward a hypothesis of the structure of the atom based on two postulates completely incompatible with classical physics and resting on the principle of quantization:

1) in each atom there exists a set of stationary orbits, moving along which an electron can exist without radiating;

2) when an electron passes from one stationary orbit to another, the atom emits or absorbs a portion of energy.

Bohr's postulates explain the stability of atoms: electrons in stationary states do not emit electromagnetic energy without an external cause. They also explain the line spectra of atoms: each line of a spectrum corresponds to a transition of an electron from one state to another.

N. Bohr's theory of the atom made it possible to give an accurate description of the hydrogen atom, consisting of one proton and one electron, in quite good agreement with experimental data. Further extension of the theory to multielectron atoms met insurmountable difficulties. The wavelength of a moving electron is approximately 10^-8 cm, i.e., of the same order as the size of the atom. But the motion of a particle belonging to a system can be described with sufficient accuracy as the mechanical motion of a material point along an orbit only if the particle's wavelength is negligible compared to the size of the system.

Consequently, it is fundamentally impossible to accurately describe the structure of an atom based on the idea of ​​the orbits of point electrons, since such orbits do not actually exist. Due to their wave nature, electrons and their charges are, as it were, smeared throughout the atom, but not evenly, but in such a way that at some points the time-averaged electron charge density is greater, and at others it is less.

N. Bohr's theory represents, as it were, the boundary of the first stage in the development of modern physics. It was the last attempt to describe the structure of the atom on the basis of classical physics, supplemented by only a small number of new assumptions. Processes in the atom cannot, in principle, be represented visually in the form of mechanical models by analogy with events in the macroworld. Even the concepts of space and time in the form existing in the macroworld turned out to be unsuitable for describing microphysical phenomena.

Elementary particles and the quark model of the atom. Further development of the ideas of atomism was associated with the study of elementary particles. The term "elementary particle" originally meant the simplest, further indecomposable particles underlying all material formations. It has since been established that these particles have structure of one kind or another; nevertheless, the historically established name persists. Currently, more than 350 microparticles have been discovered.

The main characteristics of elementary particles are mass, charge, average lifetime, spin, and quantum numbers.

The rest mass of elementary particles is determined in relation to the rest mass of the electron. There are elementary particles that have no rest mass: photons. By this criterion, the remaining particles are divided into leptons, light particles (the electron and the neutrino); mesons, medium particles with masses from one to a thousand electron masses; and baryons, heavy particles whose mass exceeds a thousand electron masses and which include protons, neutrons, hyperons, and many resonances.

Electric charge. All known particles have a positive, negative, or zero charge. To each particle, except the photon and two mesons, there correspond antiparticles with opposite charge. Quarks are believed to be particles with fractional electric charge.

By lifetime, particles are divided into stable (the photon, two types of neutrino, the electron, and the proton) and unstable. It is the stable particles that play the most important role in the structure of macrobodies. All other particles are unstable: they exist for about 10^-10 to 10^-24 s and then decay. Elementary particles with an average lifetime of 10^-23 to 10^-22 s are called resonances; they decay before they even leave the atom or the atomic nucleus, and therefore cannot be detected in real experiments.

The concept of "spin", which has no analogue in classical physics, denotes the intrinsic angular momentum of a microparticle.

"Quantum numbers" express discrete states of elementary particles, for example, the position of an electron in a specific electron orbit, magnetic moment, etc.

All elementary particles are divided into two classes: fermions (named after E. Fermi) and bosons (named after S. Bose). Fermions make up matter; bosons carry interactions, i.e., they are field quanta. In particular, fermions include quarks and leptons, while bosons include the field quanta (photons, vector bosons, gluons, gravitinos, and gravitons). These particles are considered truly elementary, i.e., further indecomposable. The remaining particles are classified as conditionally elementary, i.e., composite particles formed from quarks and the corresponding field quanta.

Elementary particles participate in all known types of interaction. There are four types of fundamental interactions in nature.

The strong interaction occurs at the level of atomic nuclei and represents the mutual attraction and repulsion of their constituent parts. It acts over a distance of about 10^-13 cm. Under certain conditions, the strong interaction binds particles very tightly, resulting in material systems with high binding energy: atomic nuclei. It is for this reason that the nuclei of atoms are very stable and difficult to destroy.

The electromagnetic interaction is about a thousand times weaker than the strong one but much longer-range. This type of interaction is characteristic of electrically charged particles. Its carrier is the photon, a quantum of the electromagnetic field, which has no charge. In the process of electromagnetic interaction, electrons and atomic nuclei combine into atoms, and atoms into molecules. In a certain sense, this interaction is the main one in chemistry and biology.

The weak interaction is possible between various particles. It extends over distances of the order of 10^-13 to 10^-22 cm and is associated mainly with the decay of particles, for example with the transformation, inside the atomic nucleus, of a neutron into a proton, an electron, and an antineutrino. According to the current state of knowledge, most particles are unstable precisely because of the weak interaction.

The gravitational interaction is the weakest; it is not taken into account in the theory of elementary particles, since at characteristic distances of the order of 10^-13 cm it gives extremely small effects. However, at ultra-small distances (about 10^-33 cm) and ultra-high energies, gravity again becomes significant. Here the unusual properties of the physical vacuum begin to appear: superheavy virtual particles create a noticeable gravitational field around themselves, which begins to distort the geometry of space. On a cosmic scale, the gravitational interaction is decisive, and its range is unlimited.

Table. Fundamental interactions

All four interactions are necessary and sufficient to build a diverse world. Without the strong interactions, atomic nuclei would not exist, and stars and the Sun would be unable to generate heat and light from thermonuclear energy. Without the electromagnetic interactions there would be no atoms, no molecules, no macroscopic objects, and no heat or light. Without the weak interactions, nuclear reactions in the depths of the Sun and stars would be impossible, supernova explosions would not occur, and the heavy elements necessary for life could not spread through the Universe. Without the gravitational interaction, the Universe could not evolve, since gravity is the unifying factor that ensures the unity of the Universe as a whole and its evolution.

Modern physics has come to the conclusion that all four fundamental interactions can be obtained from a single fundamental interaction: the superforce. The most striking achievement was the proof that at very high temperatures (or energies) all four interactions merge into one.

At an energy of 100 GeV (100 billion electron volts), the electromagnetic and weak interactions combine. This energy corresponds to the temperature of the Universe 10^-10 s after the Big Bang. At an energy of 10^15 GeV, the strong interaction joins them, and at an energy of 10^19 GeV all four interactions combine.
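The stated correspondence between energy and temperature follows from E ≈ k_B·T. A small sketch converting the quoted unification energies into temperatures (the constants are standard; the mapping itself is an order-of-magnitude estimate):

```python
k_B = 1.381e-23   # Boltzmann constant, J/K
eV = 1.602e-19    # joules per electron volt

def energy_to_temperature(E_GeV):
    """Order-of-magnitude temperature T ~ E / k_B for an energy given in GeV."""
    return E_GeV * 1e9 * eV / k_B

for E in (1e2, 1e15, 1e19):   # electroweak, grand-unification, Planck scales
    print(f"{E:.0e} GeV -> T ~ {energy_to_temperature(E):.1e} K")
# 100 GeV corresponds to roughly 1e15 K, the Universe at ~1e-10 s.
```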

Advances in particle research have further contributed to the development of the concept of atomism. Currently it is believed that among the many elementary particles, 12 fundamental particles and the same number of antiparticles can be distinguished. Six of the particles are quarks with the exotic names "up", "down", "charmed", "strange", "true" (top), and "beauty" (bottom). The other six are leptons: the electron, the muon, the tau particle, and their corresponding neutrinos (electron, muon, and tau neutrinos).

These 12 particles are grouped into three generations, each of which consists of four members.

The first contains the "up" and "down" quarks, the electron, and the electron neutrino.

The second contains the "charmed" and "strange" quarks, the muon, and the muon neutrino.

The third contains the "true" and "beauty" quarks, the tau particle, and the tau neutrino.

All ordinary matter consists of particles of the first generation. It is assumed that the remaining generations can be created artificially at charged particle accelerators.

Based on the quark model, physicists have developed a modern solution to the problem of the structure of the atom.

Each atom consists of a heavy nucleus (protons and neutrons strongly bound by gluon fields) and an electron shell. The proton has a positive electric charge, the neutron zero charge. A proton consists of two "up" quarks and one "down" quark, a neutron of one "up" and two "down" quarks. They resemble clouds with blurred boundaries, consisting of virtual particles that appear and disappear.

Questions remain about the origin of quarks and leptons, about whether they are the basic "building blocks" of nature, and about how fundamental they are. Answers to these questions are sought in modern cosmology. Of great importance is the study of the birth of elementary particles from the vacuum and the construction of models of the primary nuclear fusion that gave birth to particular particles at the moment of the birth of the Universe.

Questions for self-control

1. What is the essence of a systematic approach to the structure of matter?

2. Reveal the relationship between the micro, macro and mega worlds.

3. What ideas about matter and field as types of matter were developed within the framework of classical physics?

4. What does the concept of “quantum” mean? Tell us about the main stages in the development of ideas about quanta.

5. What does the concept of “wave-particle duality” mean? What is the significance of N. Bohr’s principle of complementarity in describing the physical reality of the microworld?

6. What is the structure of the atom from the point of view of modern physics?

8. Characterize the properties of elementary particles.

9. Highlight the main structural levels of the organization of matter in the microcosm and reveal their relationship.

10. What ideas about space and time existed in the pre-Newtonian period?

11. How have ideas about space and time changed with the creation of a heliocentric picture of the world?

12. How did I. Newton interpret time and space?

13. What ideas about space and time became decisive in A. Einstein’s theory of relativity?

14. What is the space-time continuum?

15. Describe the modern metric and topological properties of space and time.

Mandatory:

4.2.1. Quantum mechanical concept of describing the microworld

When moving to the study of the microworld, it was discovered that physical reality is unified and there is no gap between matter and field.

While studying microparticles, scientists were faced with a paradoxical situation from the point of view of classical science: the same objects exhibited both wave and corpuscular properties.

The first step in this direction was taken by the German physicist M. Planck. As is known, at the end of the 19th century a difficulty arose in physics called the "ultraviolet catastrophe." According to calculations using the formulas of classical electrodynamics, the intensity of the thermal radiation of an absolutely black body should have increased without limit, which clearly contradicted experience. In the process of researching thermal radiation, which M. Planck called the hardest work of his life, he came to the stunning conclusion that in radiation processes energy can be given off or absorbed not continuously and not in arbitrary amounts, but only in certain indivisible portions: quanta. The energy of a quantum is determined by the frequency of the corresponding radiation and a universal natural constant, which M. Planck introduced into science under the symbol h: E = hν.

Even if the introduction of the quantum did not yet create a real quantum theory, as M. Planck repeatedly emphasized, December 14, 1900, the day the formula was made public, laid its foundation. In the history of physics this day is therefore considered the birthday of quantum physics. And since the concept of the elementary quantum of action subsequently served as the basis for understanding all the properties of the atomic shell and the atomic nucleus, December 14, 1900, should be regarded both as the birthday of all atomic physics and as the beginning of a new era of natural science.

The first physicist who enthusiastically accepted the discovery of the elementary quantum of action and creatively developed it was A. Einstein. In 1905, he transferred the brilliant idea of ​​quantized absorption and release of energy during thermal radiation to radiation in general and thus substantiated the new doctrine of light.

The idea of ​​light as a stream of rapidly moving quanta was extremely bold, almost daring, and few initially believed in its correctness. First of all, M. Planck himself did not agree with the expansion of the quantum hypothesis to the quantum theory of light, referring his quantum formula only to the laws of thermal radiation of a black body that he considered.

A. Einstein suggested that a natural law of universal character was at stake. Without deferring to the prevailing views in optics, he applied Planck's hypothesis to light and came to the conclusion that the corpuscular structure of light should be recognized.

The quantum theory of light, or Einstein's photon theory, held that light is a wave phenomenon continuously propagating in space. And at the same time, light energy, in order to be physically effective, is concentrated only in certain places, so light has a discontinuous structure. Light can be considered a stream of indivisible energy grains: light quanta, or photons. Their energy is determined by Planck's elementary quantum of action and the corresponding frequency. Light of different colors consists of light quanta of different energies.

Einstein’s idea of ​​light quanta helped to understand and visualize the phenomenon of the photoelectric effect, the essence of which is the knocking out of electrons from a substance under the influence of electromagnetic waves. Experiments have shown that the presence or absence of a photoelectric effect is determined not by the intensity of the incident wave, but by its frequency. If we assume that each electron is ejected by one photon, then the following becomes clear: the effect occurs only if the energy of the photon, and therefore its frequency, is high enough to overcome the binding forces between the electron and matter.

The correctness of this interpretation of the photoelectric effect (for this work Einstein received the Nobel Prize in Physics in 1922) was confirmed 10 years later in the experiments of the American physicist R. A. Millikan. The phenomenon discovered in 1923 by the American physicist A. H. Compton (the Compton effect), observed when very hard X-rays act on atoms with free electrons, once again and finally confirmed the quantum theory of light. This theory is one of the most thoroughly confirmed physical theories. But the wave nature of light had already been firmly established by experiments on interference and diffraction.

A paradoxical situation arose: it was discovered that light behaves not only as a wave but also as a stream of corpuscles. In experiments on diffraction and interference its wave properties are revealed, and in the photoelectric effect its corpuscular properties. In this case, the photon turned out to be a corpuscle of a very special kind. The main characteristic of its discreteness, the portion of energy inherent in it, was calculated through a purely wave characteristic, the frequency ν (E = hν).

Like all great natural scientific discoveries, the new doctrine of light had fundamental theoretical and epistemological significance. The old position about the continuity of natural processes, which was thoroughly shaken by M. Planck, was excluded by Einstein from the much larger field of physical phenomena.

Developing the ideas of M. Planck and A. Einstein, the French physicist Louis de Broglie put forward the idea of the wave properties of matter in 1924. In his work "Light and Matter" he wrote about the need to use wave and corpuscular concepts not only in the theory of light, in accordance with A. Einstein's teaching, but also in the theory of matter.

L. de Broglie argued that wave properties, along with corpuscular ones, are inherent in all types of matter: electrons, protons, atoms, molecules and even macroscopic bodies.

According to de Broglie, to any body of mass m moving with speed v there corresponds a wave:

λ = h / (mv)

In fact, a similar formula was known earlier, but only in relation to light quanta - photons.

In 1926, the Austrian physicist E. Schrödinger found a mathematical equation determining the behavior of matter waves, the so-called Schrödinger equation. The English physicist P. Dirac generalized it.

The bold thought of L. de Broglie about the universal “dualism” of particles and waves made it possible to construct a theory with the help of which it was possible to embrace the properties of matter and light in their unity. In this case, light quanta became a special moment of the general structure of the microworld.

Waves of matter, initially pictured as visually real wave processes similar to acoustic waves, took on an abstract mathematical form and, thanks to the German physicist M. Born, received a symbolic interpretation as "probability waves".

However, de Broglie's hypothesis needed experimental confirmation. The most convincing evidence of the existence of the wave properties of matter was the discovery of electron diffraction in 1927 by the American physicists C. Davisson and L. Germer. Subsequently, experiments detected the diffraction of neutrons, atoms, and even molecules. In all cases the results fully confirmed de Broglie's hypothesis. More important still was the discovery of new elementary particles predicted on the basis of the formulas of the developed wave mechanics.

Recognition of wave-particle duality in modern physics has become universal. Any material object is characterized by the presence of both corpuscular and wave properties.

The fact that the same object appears as both a particle and a wave destroyed traditional ideas.

The form of a particle implies an entity contained in a small volume or finite region of space, while a wave spreads over vast regions of space. In quantum physics, these two descriptions of reality are mutually exclusive, but equally necessary in order to fully describe the phenomena in question.

The final formation of quantum mechanics as a consistent theory occurred thanks to the work of the German physicist W. Heisenberg, who established the uncertainty principle, and the Danish physicist N. Bohr, who formulated the principle of complementarity, on the basis of which the behavior of microobjects is described.

The essence of W. Heisenberg's uncertainty relation is as follows. Suppose the task is to determine the state of a moving particle. If the laws of classical mechanics could be used, the situation would be simple: one would only have to determine the particle's coordinates and its momentum. But the laws of classical mechanics cannot be applied to microparticles: it is impossible, not merely in practice but in principle, to establish with equal accuracy both the location and the momentum of a microparticle. Only one of these two properties can be determined accurately. In his book "The Physics of the Atomic Nucleus," W. Heisenberg reveals the content of the uncertainty relation. He writes that one can never know both parameters of the pair, coordinate and velocity, exactly at the same time; one can never simultaneously know where a particle is, and how fast and in what direction it is moving. If an experiment is performed that shows exactly where the particle is at a given moment, its motion is disturbed to such an extent that the particle cannot be found afterwards. Conversely, an accurate measurement of velocity makes it impossible to determine the particle's location.

From the point of view of classical mechanics the uncertainty relation seems absurd. To better assess this situation, we must keep in mind that we humans live in the macroworld and, in principle, cannot build a visual model that would be adequate to the microworld. The uncertainty relation is an expression of the impossibility of observing the microworld without disturbing it. Any attempt to give a clear picture of microphysical processes must rely on either a corpuscular or a wave interpretation. In the corpuscular description, a measurement is made to obtain an accurate value of the energy and momentum of a microparticle, for example in electron scattering. In experiments aimed at accurately determining location, on the contrary, the wave explanation is used, in particular when electrons pass through thin plates or when the deflection of rays is observed.

The existence of the elementary quantum of action is an obstacle to establishing simultaneously and with equal accuracy quantities that are "canonically conjugate," i.e., the position and the momentum of a particle.

The fundamental principle of quantum mechanics, along with the uncertainty relation, is the principle of complementarity, to which N. Bohr gave the following formulation: "The concepts of particles and waves complement each other and at the same time contradict each other; they are complementary pictures of what is happening."

The contradictions in the particle-wave properties of microobjects are the result of the uncontrolled interaction of microobjects and macrodevices. There are two classes of devices: in some, quantum objects behave like waves, in others, like particles. In experiments, we do not observe reality as such, but only a quantum phenomenon, including the result of the interaction of a device with a microobject. M. Born figuratively noted that waves and particles are “projections” of physical reality onto an experimental situation.

A scientist studying the microworld thus turns from an observer into an actor, since physical reality depends on the device and thus, ultimately, on the observer's choices. N. Bohr therefore believed that a physicist knows not reality itself but only his own contact with it.

An essential feature of quantum mechanics is the probabilistic nature of predictions about the behavior of microobjects, described by E. Schrödinger's wave function. The wave function determines the parameters of the future state of a microobject with varying degrees of probability. This means that identical experiments carried out on identical objects will yield different results each time; however, some values will be more likely than others, i.e., only the probability distribution of the values will be known.
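What "only the probability distribution is known" means in practice can be sketched by simulating repeated identical measurements with the Born rule, P(outcome) = |amplitude|², for a two-state superposition (the amplitudes are illustrative values of our choosing):

```python
import random
from collections import Counter

# A superposition a|0> + b|1> with a^2 + b^2 = 1 (real amplitudes for simplicity)
a, b = 0.6, 0.8
probs = {"0": a**2, "1": b**2}   # Born rule: P = |amplitude|^2

# "Running the same experiment" 10,000 times: each run gives a single random
# outcome, but the frequencies settle near a^2 = 0.36 and b^2 = 0.64.
outcomes = random.choices(list(probs), weights=list(probs.values()), k=10_000)
print(Counter(outcomes))
```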

Taking the factors of uncertainty, complementarity, and probability into account, N. Bohr gave the so-called "Copenhagen" interpretation of the essence of quantum theory: "Previously it was generally accepted that physics describes the Universe. We now know that physics describes only what we can say about the Universe."

N. Bohr's position was shared by W. Heisenberg, M. Born, W. Pauli, and a number of other physicists. Proponents of the Copenhagen interpretation of quantum mechanics did not recognize causality or determinism in the microworld and believed that the basis of physical reality is fundamental uncertainty: indeterminism.

Representatives of the Copenhagen school were sharply opposed by H. A. Lorentz, M. Planck, M. Laue, A. Einstein, P. Langevin, and others. A. Einstein wrote about this to M. Born: "In our scientific views we have developed into antipodes. You believe in a God who plays dice, and I believe in the complete lawfulness of objective existence... What I am firmly convinced of is that in the end they will settle on a theory in which it is not probabilities but facts that are naturally connected." He opposed the uncertainty principle, stood for determinism, and objected to the role assigned to the act of observation in quantum mechanics. The further development of physics showed that Einstein was right in believing that quantum theory in its existing form is simply incomplete: the fact that physicists cannot yet get rid of uncertainty testifies not to the limitations of the scientific method, as N. Bohr argued, but only to the incompleteness of quantum mechanics. Einstein offered ever new arguments in support of his point of view.

Best known is the so-called Einstein-Podolsky-Rosen paradox, or EPR paradox, with whose help they sought to prove the incompleteness of quantum mechanics. The paradox is a thought experiment: what would happen if a particle consisting of two protons decayed so that the protons flew apart in opposite directions? Because of their common origin, their properties are related or, as physicists say, correlated with each other. According to the law of conservation of momentum, if one proton flies upward, the second must fly downward. Having measured the momentum of one proton, we would know for certain the momentum of the other, even if it had flown to the other end of the Universe. There is a nonlocal connection between the particles, which Einstein called "spooky action at a distance," in which each particle at any moment knows where the other is and what is happening to it.

The EPR paradox is incompatible with the uncertainty postulated in quantum mechanics. Einstein believed that there were some hidden parameters that were not being taken into account. The questions of whether determinism and causality exist in the microworld, whether quantum mechanics is complete, and whether there are hidden parameters it does not take into account were the subject of debate among physicists for more than half a century and found theoretical resolution only at the end of the 20th century.

In 1964, J. S. Bell substantiated the position according to which quantum mechanics predicts a stronger correlation between mutually connected particles than the one Einstein spoke of.

Bell's theorem states that if some objective Universe exists, and if the equations of quantum mechanics are structurally similar to this Universe, then some kind of nonlocal connection exists between two particles that ever come into contact. The essence of Bell's theorem is that there are no isolated systems: every particle of the Universe is in “instantaneous” communication with all other particles. The entire system, even if its parts are separated by huge distances and there are no signals, fields, mechanical forces, energy, etc. between them, functions as a single system.
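The quantitative content of Bell's theorem is easiest to see in its CHSH form: any local hidden-variable model obeys |S| ≤ 2, while quantum mechanics for two spin-1/2 particles in the singlet state, whose correlation is E(a, b) = -cos(a - b), reaches |S| = 2√2. A sketch with the standard optimal angle settings (this CHSH formulation is a later refinement, not spelled out in the text above):

```python
import math

def E(a, b):
    """Quantum correlation of spin measurements at angles a and b (singlet state)."""
    return -math.cos(a - b)

# Standard CHSH measurement angles, in radians
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(f"S = {S:.3f}, |S| = {abs(S):.3f}  (local realism requires |S| <= 2)")
# |S| = 2*sqrt(2) ~ 2.828 > 2: the quantum prediction violates Bell's bound,
# which is what the Aspect-type experiments described below tested.
```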

In the early 1980s, A. Aspect (University of Paris) tested this connection experimentally by studying the polarization of pairs of photons emitted by a single source toward isolated detectors. Comparing the results of the two series of measurements revealed consistency between them. In the view of the well-known physicist D. Bohm, A. Aspect's experiments confirmed Bell's theorem and supported the position of nonlocal hidden variables, whose existence A. Einstein had assumed. In D. Bohm's interpretation of quantum mechanics, there is no uncertainty in the coordinate and momentum of a particle.

Scientists have suggested that communication is carried out through the transfer of information, the carriers of which are special fields.

4.2.2. Wave genetics

The discoveries made in quantum mechanics had a fruitful impact not only on the development of physics, but also on other areas of natural science, primarily biology, within which the concept of wave, or quantum, genetics was developed.

When in 1962 J. Watson, M. Wilkins, and F. Crick received the Nobel Prize for the discovery of the double helix of DNA, the carrier of hereditary information, it seemed to geneticists that the main problems of the transmission of genetic information were close to resolution. All information is recorded in the genes, whose combination in the cell's chromosomes determines the organism's development program. The task was to decipher the genetic code, by which was meant the entire sequence of nucleotides in DNA.

However, reality did not live up to scientists' expectations. After the discovery of the structure of DNA and detailed examination of this molecule's participation in genetic processes, the main problem of the phenomenon of life, the mechanism of its reproduction, remained essentially unsolved. Deciphering the genetic code made it possible to explain protein synthesis. Classical geneticists proceeded from the premise that genetic molecules, DNA, are material in nature and work like a substance, representing a material matrix on which the material genetic code is written, according to which a fleshly, material organism develops. But the question of how the spatiotemporal structure of an organism is encoded in the chromosomes cannot be resolved on the basis of the nucleotide sequence alone. The Soviet scientists A. A. Lyubishchev and A. G. Gurvich expressed the idea back in the 1920s-30s that treating genes as purely material structures is clearly insufficient for a theoretical description of the phenomenon of life.

A. A. Lyubishchev, in his work "On the Nature of Hereditary Factors," published in 1925, wrote that genes are neither pieces of a chromosome, nor molecules of autocatalytic enzymes, nor radicals, nor a physical structure. He believed that the gene should be recognized as a potential substance. The ideas of A. A. Lyubishchev are easier to grasp through the analogy between a genetic molecule and musical notation. The notation itself is material, icons on paper, but these icons are realized not in material form but in sounds, which are acoustic waves.

Developing these ideas, A. G. Gurvich argued that in genetics "it is necessary to introduce the concept of a biological field, the properties of which are formally borrowed from physical concepts." The main idea of A. G. Gurvich was that the development of the embryo occurs according to a predetermined program and takes on the forms that already exist in its field. He was the first to explain the behavior of the components of a developing organism as a whole on the basis of field concepts. It is in the field that the forms assumed by the embryo during development are contained. Gurvich called the virtual form that determines the result of the development process at any moment a dynamically preformed form, thereby introducing an element of teleology into the original formulation of the field. Having developed the theory of the cell field, he extended the idea of the field as a principle regulating and coordinating the embryonic process to the functioning of organisms as well. Having substantiated the general idea of the field, Gurvich formulated it as a universal principle of biology. He also discovered biophoton radiation from cells.

The ideas of the Russian biologists A. A. Lyubishchev and A. G. Gurvich are a gigantic intellectual achievement ahead of its time. The essence of their thinking is contained in the triad:

    Genes are dualistic - they are substance and field at the same time.

    The field elements of chromosomes mark out the space-time of the organism and thereby control the development of biosystems.

    Genes have aesthetic-imaginative and speech regulatory functions.

These ideas remained underappreciated until the appearance of the works of V. P. Kaznacheev in the 1960s, in which the scientists' predictions about the existence of field forms of information transfer in living organisms were experimentally confirmed. The scientific direction in biology represented by the school of V. P. Kaznacheev took shape as a result of numerous fundamental studies of the so-called mirror cytopathic effect, which consists in the fact that living cells separated by quartz glass, which does not let a single molecule of substance pass, nevertheless exchange information. After Kaznacheev's work, the existence of a sign-wave channel between the cells of biosystems was no longer in doubt.

Simultaneously with the experiments of V. P. Kaznacheev, the Chinese researcher Jiang Kanzhen conducted a series of supergenetic experiments that echoed the foresight of A. A. Lyubishchev and A. G. Gurvich. The difference of Jiang Kanzhen's work is that he conducted experiments not at the cellular level but at the level of the organism. He proceeded from the premise that DNA, the genetic material, exists in two forms: passive (in the form of DNA) and active (in the form of an electromagnetic field). The first form preserves the genetic code and ensures the stability of the organism, while the second is able to change it through the influence of bioelectric signals. The Chinese scientist designed equipment capable of reading, transmitting over a distance, and introducing wave supergenetic signals from a donor biosystem into an acceptor organism. As a result, he developed hybrids unimaginable and "forbidden" for official genetics, which operates in terms of real genes only. This is how animal and plant chimeras were born: chicken-ducks; corn from whose cobs wheat ears grew; and so on.

The outstanding experimenter Jiang Kanzhen intuitively understood some aspects of the experimental wave genetics he actually created and believed that the carriers of field genetic information were the ultrahigh frequency electromagnetic radiation used in his equipment, but he could not give a theoretical justification.

After the experimental work of V.P. Kaznacheev and Jiang Kanzhen, which could not be explained in terms of traditional genetics, an urgent need arose for theoretical development of the wave-genome model: for a physical, mathematical and theoretical-biological understanding of how chromosomal DNA works in the field and material dimensions.

The first attempts to solve this problem were made by the Russian scientists P.P. Garyaev, A.A. Berezin and A.A. Vasiliev, who set themselves the following tasks:

    show the possibility of a dualistic interpretation of the work of the cell genome at the levels of matter and field within the framework of physical and mathematical models;

    show the possibility of normal and “anomalous” modes of operation of the cell genome using phantom wave figurative-sign matrices;

    find experimental evidence of the correctness of the proposed theory.

Within the framework of the theory they developed, called wave genetics, several basic principles were put forward, substantiated and experimentally confirmed, which significantly expanded the understanding of the phenomenon of life and the processes occurring in living matter.

Genes are not only material structures, but also wave matrices, according to which, as if according to templates, the organism is built.

The mutual transfer of information between cells, which helps to form the organism as an integral system and to coordinate the functioning of all its subsystems, occurs not only chemically, through the synthesis of various enzymes and other “signal” substances. P.P. Garyaev suggested and then experimentally proved that cells, their chromosomes, DNA and proteins transmit information using physical fields: electromagnetic and acoustic waves and three-dimensional holograms, read by laser chromosomal light and emitting this light, which is transformed into radio waves and transmits hereditary information within the space of the organism. The genome of higher organisms is considered a bioholographic computer that forms the spatiotemporal structure of biosystems. The carriers of the field matrices on which the organism is built are wave fronts set by genoholograms and so-called solitons on DNA - a special type of acoustic and electromagnetic fields produced by the genetic apparatus of the organism itself and capable of mediating the exchange of strategic regulatory information between cells, tissues and organs of the biosystem.

In wave genetics, the ideas of Gurvich, Lyubishchev, Kaznacheev and Jiang Kanzhen about the field level of gene information were confirmed. In other words, the dualism of the unity “wave - particle” or “matter - field”, accepted in quantum electrodynamics, turned out to be applicable in biology, as A.G. Gurvich and A.A. Lyubishchev had predicted in their time. Gene-substance and gene-field do not exclude each other, but complement each other.

Living matter consists of nonliving atoms and elementary particles that combine the fundamental properties of waves and particles, but these same properties are used by biosystems as the basis for wave energy-information exchange. In other words, genetic molecules emit an information-energy field in which the entire organism, its physical body and soul are encoded.

Genes are not only what constitutes the so-called genetic code, but also all the rest of the DNA, the greater part of which used to be considered meaningless.

But it is precisely this large part of the chromosomes that is analyzed within the framework of wave genetics as the main “intelligent” structure of all cells of the body: “Non-coding regions of DNA are not just junk, but structures whose purpose is as yet unclear... non-coding DNA sequences (95-99% of the genome) are the strategic information content of chromosomes... The evolution of biosystems has created genetic texts and the genome as a biocomputer, a quasi-intelligent ‘subject’ that, at its own level, ‘reads and understands’ these texts”1. This component of the genome, called the supergene continuum, i.e. the supergene, ensures the development and life of humans, animals and plants, and also programs natural dying. There is no sharp and insurmountable boundary between genes and supergenes; they act as a single whole. Genes provide material “replicas” in the form of RNA and proteins, while supergenes transform internal and external fields, forming from them wave structures in which information is encoded. The genetic commonality of people, animals, plants and protozoa is that at the protein level these variants are practically the same or differ only slightly in all organisms and are encoded by genes that make up only a few percent of the total length of the chromosome. But organisms differ at the level of the “junk part” of the chromosomes, which makes up almost their entire length.

The chromosomes' own information is not sufficient for the development of the organism. Along some dimension, chromosomes are physically conjugated with the physical vacuum, which supplies the main part of the information for the development of the embryo. The genetic apparatus is capable, by itself and with the help of the vacuum, of generating command wave structures such as holograms that guide the development of the organism.

Significant for a deeper understanding of life as a cosmo-planetary phenomenon were the experimental data obtained by P.P. Garyaev, who proved the insufficiency of the cell genome for fully reproducing the organism's developmental program under conditions of biofield information isolation. The experiment consisted of building two chambers, in each of which all natural conditions were created for the development of tadpoles from frog eggs: the necessary composition of air and water, temperature, lighting conditions, pond silt, etc. The only difference was that one chamber was made of permalloy, a material that does not transmit electromagnetic waves, while the second was made of ordinary metal, which does not interfere with them. An equal number of fertilized frog eggs was placed in each chamber. As a result of the experiment, in the first chamber only deformed embryos appeared, and they died within a few days; in the second chamber the tadpoles hatched in due time and developed normally, later turning into frogs.

It is clear that for normal development the tadpoles in the first chamber lacked some factor carrying the missing part of the hereditary information, without which the organism could not be “assembled” in its entirety. And since the walls of the first chamber cut the tadpoles off only from the radiation that freely penetrated the second chamber, it is natural to assume that filtering or distortion of the natural information background causes deformity and death of the embryos. This means that communication of the genetic structures with an external information field is certainly necessary for the harmonious development of the organism. External (exobiological) field signals carry additional, and perhaps the main, information into the Earth's gene continuum.

DNA texts and the holograms of the chromosomal continuum can be read in a multitude of space-time and semantic variants. There are wave languages of the cell genome, similar to human language.

In wave genetics, the substantiation of the unity of the fractal (self-repeating on different scales) structure of DNA sequences and of human speech deserves special attention. The fact that the four letters of the genetic alphabet (adenine, guanine, cytosine, thymine) form fractal structures in DNA texts was discovered back in 1990 and caused no particular reaction. However, the discovery of gene-like fractal structures in human speech came as a surprise to both geneticists and linguists. It became obvious that the accepted and already familiar comparison of DNA with texts, which previously had a metaphorical character, is, after the discovery of the unity of the fractal structure of DNA and human speech, completely justified.

Together with the staff of the Mathematical Institute of the Russian Academy of Sciences, P.P. Garyaev's group developed a theory of the fractal representation of natural (human) and genetic languages. Practical testing of this theory on the “speech” characteristics of DNA showed that the research was strategically correctly oriented.

Just as in Jiang Kanzhen's experiments, P.P. Garyaev's group obtained the effect of translating and introducing wave supergenetic information from donor to acceptor. Devices were created - generators of soliton fields - into which speech algorithms could be entered, for example in Russian or English. Such speech structures were turned into soliton-modulated fields, analogues of those that cells operate with in the process of wave communication. The organism and its genetic apparatus “recognize” such “wave phrases” as their own and act in accordance with the speech recommendations introduced from outside. It was possible, for example, by creating certain speech and verbal algorithms, to restore radiation-damaged wheat and barley seeds. Moreover, the plant seeds “understood” this speech regardless of the language in which it was spoken - Russian, German or English. Experiments were carried out on tens of thousands of cells.

To test the effectiveness of growth-stimulating wave programs, in control experiments meaningless speech pseudocodes were introduced into the plant genome through the generators; these had no effect on plant metabolism, whereas semantic entry into the biofield semantic layers of the plant genome produced a dramatic, though short-term, acceleration of growth.

The recognition of human speech by plant genomes (regardless of the language) is fully consistent with the position of linguistic genetics about the existence of a protolanguage in the genome of biosystems at the early stages of their evolution, common to all organisms and preserved in the general structure of the Earth's gene pool. Here one can see a correspondence with the ideas of the classic of structural linguistics N. Chomsky, who believed that all natural languages have a deep innate universal grammar, invariant for all people and, probably, for their supergenetic structures as well.

4.2.3. Atomistic concept of the structure of matter

The atomistic hypothesis of the structure of matter, put forward in antiquity by Democritus, was revived in the 18th century by the chemist J. Dalton, who took the atomic weight of hydrogen as unity and compared the atomic weights of other gases with it. Thanks to the works of J. Dalton, the physical and chemical properties of the atom began to be studied. In the 19th century, D.I. Mendeleev constructed a system of chemical elements based on their atomic weights.

In physics, the concept of atoms as the last indivisible structural elements of matter came from chemistry. Properly physical research of the atom began at the end of the 19th century, when the French physicist A.H. Becquerel discovered the phenomenon of radioactivity, which consists in the spontaneous transformation of atoms of some elements into atoms of other elements. The study of radioactivity was continued by the French physicists Pierre and Marie Curie, who discovered the new radioactive elements polonium and radium.

The history of research into the structure of the atom began in 1897 with the discovery by J. Thomson of the electron, a negatively charged particle that is part of all atoms. Since electrons have a negative charge and the atom as a whole is electrically neutral, it was assumed that in addition to the electron there is a positively charged particle. According to calculations, the mass of the electron was 1/1836 of the mass of the positively charged particle, the proton.

Based on the huge mass of the positively charged particle compared to the electron, the English physicist W. Thomson (Lord Kelvin) proposed in 1902 the first model of the atom: the positive charge is distributed over a fairly large region, and electrons are interspersed in it like “raisins in a pudding”. This idea was developed by J. Thomson. J. Thomson's model of the atom, on which he worked for almost 15 years, did not withstand experimental verification.

In 1908, E. Marsden and H. Geiger, E. Rutherford's collaborators, conducted experiments on the passage of alpha particles through thin plates of gold and other metals and found that almost all of them passed through the plate as if there were no obstacle, and only about 1/10,000 of them experienced a strong deflection. J. Thomson's model could not explain this, but E. Rutherford found a way out. He drew attention to the fact that most of the particles were deflected by a small angle, while a small fraction were deflected by up to 150°. E. Rutherford concluded that the latter hit some kind of obstacle: the nucleus of the atom, a positively charged microparticle whose size (about 10^-12 cm) is very small compared to the size of the atom (10^-8 cm), but in which almost the entire mass of the atom is concentrated.

The model of the atom, proposed by E. Rutherford in 1911, resembled the solar system: in the center there is an atomic nucleus, and electrons move around it in their orbits.

The nucleus has a positive charge and the electrons have a negative charge. Instead of the gravitational forces acting in the solar system, electrical forces act in the atom. The electric charge of the nucleus of an atom, numerically equal to the serial number in the periodic system of Mendeleev, is balanced by the sum of the charges of the electrons - the atom is electrically neutral.

The insoluble contradiction of this model was that the electrons, in order for the atom to be stable, must move around the nucleus. At the same time, according to the laws of electrodynamics, charges moving with acceleration must radiate electromagnetic energy. In that case the electrons would very quickly lose all their energy and fall onto the nucleus.

The next contradiction is related to the fact that the emission spectrum of such an electron would have to be continuous, since the electron, approaching the nucleus, would continuously change its frequency. Experience shows, however, that atoms emit light only at certain frequencies; this is why atomic spectra are called line spectra. In other words, Rutherford's planetary model of the atom turned out to be incompatible with the electrodynamics of J.C. Maxwell.

In 1913, the great Danish physicist N. Bohr applied the principle of quantization to the problem of the structure of the atom and of the characteristics of atomic spectra.

N. Bohr's atomic model was based on the planetary model of E. Rutherford and on the quantum theory of atomic structure developed by him. N. Bohr put forward a hypothesis about the structure of the atom, based on two postulates that are completely incompatible with classical physics:

1) in each atom there exists a set of stationary states (in the language of the planetary model, stationary orbits) of the electron; moving in a stationary state, the electron does not radiate;

2) when an electron passes from one stationary state to another, the atom emits or absorbs a portion of energy.

Bohr's postulates explain the stability of atoms: electrons in stationary states do not emit electromagnetic energy without an external reason. It becomes clear why atoms of chemical elements do not emit radiation if their state does not change. The line spectra of atoms are also explained: each line of the spectrum corresponds to the transition of an electron from one state to another.
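
In modern notation these postulates are expressed by standard textbook relations (a supplement for reference, not part of the original abstract):

    \[ h\nu = E_n - E_m, \qquad E_n = -\frac{13.6\ \text{eV}}{n^2}, \quad n = 1, 2, 3, \dots \]

The first formula is the frequency condition of the second postulate; the second gives the stationary energy levels of the hydrogen atom. Each spectral line corresponds to one pair of levels (n, m), which is exactly why the spectrum consists of separate lines.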

N. Bohr's theory of the atom made it possible to give an accurate description of the hydrogen atom, consisting of one proton and one electron, which agreed quite well with experimental data. Further extension of the theory to multielectron atoms and molecules encountered insurmountable difficulties. The more theorists tried to describe the motion of electrons in the atom and determine their orbits, the greater the discrepancy between theoretical results and experimental data became. As became clear during the development of quantum theory, these discrepancies were associated mainly with the wave properties of the electron. The wavelength of an electron moving in an atom is approximately 10^-8 cm, i.e. of the same order as the size of the atom. The motion of a particle belonging to any system can be described with sufficient accuracy as the mechanical motion of a material point along a certain orbit (trajectory) only if the wavelength of the particle is negligible compared to the size of the system. In other words, it should be borne in mind that the electron is not a point or a solid ball; it has an internal structure that may vary depending on its state. However, the details of the internal structure of the electron are unknown.
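
The estimate of 10^-8 cm follows from de Broglie's relation (a standard calculation, added here for illustration):

    \[ \lambda = \frac{h}{m_e v} = \frac{6.6\times10^{-27}\ \text{erg·s}}{(9.1\times10^{-28}\ \text{g})\,(2.2\times10^{8}\ \text{cm/s})} \approx 3\times10^{-8}\ \text{cm}, \]

where v ≈ 2.2×10^8 cm/s is the speed of the electron in the first Bohr orbit of hydrogen. The wavelength is of the same order as the atom itself, which is why the orbital picture breaks down.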

Consequently, it is fundamentally impossible to accurately describe the structure of an atom based on the idea of ​​the orbits of point electrons, since such orbits do not actually exist. Due to their wave nature, electrons and their charges are, as it were, smeared throughout the atom, but not evenly, but in such a way that at some points the time-averaged electron charge density is greater, and at others it is less.

A description of the distribution of the electron charge density was given by quantum mechanics: at certain points the electron charge density reaches a maximum. The curve connecting the points of maximum density is formally called the electron orbit. The trajectories calculated in N. Bohr's theory for the one-electron hydrogen atom coincided with the curves of maximum average charge density, which explains the agreement with experimental data.

N. Bohr's theory represents, as it were, the boundary of the first stage in the development of modern physics. It was the latest effort to describe the structure of the atom on the basis of classical physics, supplemented by only a small number of new assumptions. The postulates introduced by Bohr showed clearly that classical physics was unable to explain even the simplest experiments related to the structure of the atom. Postulates alien to classical physics violated its integrity, yet made it possible to explain only a small range of experimental data.

It seemed that N. Bohr's postulates reflected some new, unknown properties of matter, but only partially. Answers to these questions were obtained as a result of the development of quantum mechanics. It revealed that N. Bohr's atomic model should not be taken literally, as it was at first. Processes in the atom cannot, in principle, be represented visually in the form of mechanical models by analogy with events in the macroworld. Even the concepts of space and time in the form existing in the macroworld turned out to be unsuitable for describing microphysical phenomena. The atom of the theoretical physicists became more and more an abstract, unobservable sum of equations.

4.2.4. Elementary particles and the quark model of the atom

Further development of the ideas of atomism was associated with the study of elementary particles. The particles that make up the previously “indivisible” atom are called elementary. These also include particles produced under experimental conditions in powerful accelerators. Currently, more than 350 microparticles have been discovered.

Term "elementary particle" originally meant the simplest particles, which are not further decomposable into anything, underlying any material formations. Later, physicists realized the entire convention of the term “elementary” in relation to micro-objects. Now there is no doubt that particles have one structure or another, but nevertheless the historically established name continues to exist.

The main characteristics of elementary particles are mass, charge, average lifetime, spin and quantum numbers.

The rest mass of elementary particles is determined in relation to the rest mass of the electron. There are elementary particles that have no rest mass - photons. By this criterion the remaining particles are divided into leptons - light particles (the electron and the neutrino); mesons - medium particles with masses from one to a thousand electron masses; and baryons - heavy particles whose masses exceed a thousand electron masses and which include protons, neutrons, hyperons and many resonances.

Electric charge is another important characteristic of elementary particles. All known particles have a positive, negative or zero charge. To each particle, except the photon and two mesons, there corresponds an antiparticle with the opposite charge. In 1964, the American physicist M. Gell-Mann put forward the hypothesis of the existence of quarks, particles with fractional electric charge.

Based on their lifetime, particles are divided into stable and unstable. There are five stable particles: the photon, two types of neutrinos, the electron and the proton. It is the stable particles that play the most important role in the structure of macrobodies. All other particles are unstable: they exist for about 10^-10 - 10^-24 s, after which they decay.

In addition to charge, mass and lifetime, elementary particles are also described by concepts that have no analogues in classical physics: the concept of “spin”, the intrinsic angular momentum of a microparticle, and the concept of “quantum numbers”, expressing the state of elementary particles.

According to modern concepts, all elementary particles are divided into two classes: fermions (named after E. Fermi) and bosons (named after S. Bose).

Fermions include quarks and leptons; bosons include the field quanta (photons, vector bosons, gluons, gravitinos and gravitons). These particles are considered truly elementary, i.e. not further decomposable. The remaining particles are classified as conditionally elementary, i.e. composite particles formed from quarks and the corresponding field quanta. Fermions make up matter; bosons carry the interactions.

Elementary particles participate in all types of known interactions. There are four types of fundamental interactions in nature: strong, electromagnetic, weak and gravitational.

Strong interaction occurs at the level of atomic nuclei and represents the mutual attraction of their constituent parts. It acts at a distance of about 10^-13 cm. Under certain conditions the strong interaction binds particles very tightly, resulting in the formation of material systems with high binding energy - atomic nuclei. It is for this reason that the nuclei of atoms are very stable and difficult to destroy.

Electromagnetic interaction about a thousand times weaker than a strong one, but much longer-range. This type of interaction is characteristic of electrically charged particles. The carrier of electromagnetic interaction is a photon that has no charge - a quantum of the electromagnetic field. In the process of electromagnetic interaction, electrons and atomic nuclei combine into atoms, and atoms into molecules. In a certain sense, this interaction is fundamental in chemistry and biology.

Weak interaction is possible between various particles. It extends over distances of the order of 10^-15 - 10^-22 cm and is associated mainly with the decay of particles, for example with the transformation of a neutron into a proton, an electron and an antineutrino, which occurs in the atomic nucleus. According to the current state of knowledge, most particles are unstable precisely because of the weak interaction.

Gravitational interaction is the weakest; it is not taken into account in the theory of elementary particles, since at characteristic distances of about 10^-13 cm it gives extremely small effects. However, at ultra-short distances (of the order of 10^-33 cm) and at ultra-high energies, gravity again becomes significant. Here the unusual properties of the physical vacuum begin to appear: superheavy virtual particles create a noticeable gravitational field around themselves, which begins to distort the geometry of space. On a cosmic scale, gravitational interaction is decisive. Its range of action is not limited.

The time during which the transformation of elementary particles occurs depends on the strength of the interaction. Nuclear reactions associated with strong interactions occur within 10^-24 - 10^-23 s. This is approximately the shortest time interval during which a particle accelerated to high energies, to a speed close to the speed of light, passes through an elementary particle with a size of about 10^-13 cm. Changes caused by electromagnetic interactions take place within 10^-19 - 10^-21 s, and weak ones (for example, the decay of elementary particles) mainly within 10^-10 s.
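
The strong-interaction timescale quoted above is simply the light-crossing time of a particle (an elementary check, added for illustration):

    \[ t \approx \frac{r}{c} = \frac{10^{-13}\ \text{cm}}{3\times10^{10}\ \text{cm/s}} \approx 3\times10^{-24}\ \text{s}. \]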

By the time of various transformations one can judge the strength of the interactions associated with them.

All four interactions are necessary and sufficient to build a diverse world.

Without strong interactions, atomic nuclei would not exist, and stars and the Sun would not be able to generate heat and light using nuclear energy.

Without electromagnetic interactions there would be no atoms, no molecules, no macroscopic objects, as well as heat and light.

Without weak interactions, nuclear reactions in the depths of the Sun and stars would not be possible, supernova explosions would not occur, and the heavy elements necessary for life could not spread throughout the Universe.

Without gravitational interaction, not only would there be no galaxies, stars, planets, but the entire Universe could not evolve, since gravity is a unifying factor that ensures the unity of the Universe as a whole and its evolution.

Modern physics has come to the conclusion that all four fundamental interactions necessary to create a complex and diverse material world from elementary particles can be obtained from one fundamental interaction - the superforce. The most striking achievement was the proof that at very high temperatures (or energies) all four interactions combine into one.

At an energy of 100 GeV (100 billion electron volts), the electromagnetic and weak interactions combine. This energy corresponds to the temperature of the Universe 10^-10 s after the Big Bang. At an energy of 10^15 GeV the strong interaction joins them, and at an energy of 10^19 GeV all four interactions combine.
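
The correspondence between energy and temperature used here is the standard relation E = k_B T (added for orientation):

    \[ T = \frac{E}{k_B} = \frac{100\ \text{GeV}}{8.6\times10^{-14}\ \text{GeV/K}} \approx 10^{15}\ \text{K}. \]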

This assumption is purely theoretical, since it cannot be verified experimentally. These ideas are indirectly confirmed by astrophysical data, which can be considered as experimental material accumulated by the Universe.

Advances in elementary particle research have contributed to the further development of the concept of atomism. It is currently believed that among the many elementary particles one can distinguish 12 fundamental particles and the same number of antiparticles1. The six particles are quarks with exotic names: “up”, “down”, “charmed”, “strange”, “true” (top) and “beautiful” (bottom). The remaining six are leptons: the electron, the muon, the tau particle and their corresponding neutrinos (the electron, muon and tau neutrinos).

These 12 particles are grouped into three generations, each of which consists of four members.

In the first generation there are the “up” and “down” quarks, the electron and the electron neutrino.

In the second generation there are the “charmed” and “strange” quarks, the muon and the muon neutrino.

In the third generation there are the “true” and “beautiful” quarks and the tau particle with its neutrino.

Ordinary matter consists of particles of the first generation.

It is assumed that the remaining generations can be created artificially at charged particle accelerators.

Using the quark model, physicists have developed a simple and elegant solution to the problem of atomic structure.

Each atom consists of a heavy nucleus (protons and neutrons strongly bound by gluon fields) and an electron shell. The number of protons in the nucleus is equal to the atomic number of the element in D.I. Mendeleev's periodic table of chemical elements. The proton has a positive electric charge, a mass 1836 times greater than the mass of the electron, and dimensions of the order of 10^-13 cm. The electric charge of the neutron is zero. According to the quark hypothesis, a proton consists of two “up” quarks and one “down” quark, and a neutron of one “up” and two “down” quarks. Quarks cannot be imagined as solid balls; rather, they resemble clouds with blurred boundaries, consisting of virtual particles that are born and disappear.
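
With the standard quark charges (+2/3 for “up” and -1/3 for “down”, in units of the elementary charge; added here as a check), the charges of the nucleons work out correctly:

    \[ q_p = \tfrac{2}{3} + \tfrac{2}{3} - \tfrac{1}{3} = +1, \qquad q_n = \tfrac{2}{3} - \tfrac{1}{3} - \tfrac{1}{3} = 0. \]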

There are still questions about the origin of quarks and leptons, whether they are the main “building blocks” of nature and how fundamental they are. Answers to these questions are sought in modern cosmology. Of great importance is the study of the birth of elementary particles from vacuum, the construction of models of primary nuclear fusion that gave rise to certain particles at the moment of the birth of the Universe.

4.2.5. Physical vacuum

The word “vacuum”, translated from Latin, means emptiness.

Even in antiquity, the question was raised about whether cosmic space is empty or filled with some kind of material environment, something different from emptiness.

According to the philosophical concept of the great ancient Greek philosopher Democritus, all substances consist of particles, between which there is emptiness. According to the philosophical concept of another equally famous ancient Greek philosopher, Aristotle, however, there is not the slightest place in the world where there is “nothing”. This medium, permeating all the space of the Universe, was called the ether.

The concept of “ether” entered European science. The great Newton understood that the law of universal gravitation makes sense only if space has physical reality, i.e. is a medium with physical properties. He wrote: “The idea that... one body could influence another through emptiness at a distance, without the participation of something that would transfer action and force from one body to another, seems absurd to me.”1

In classical physics there were no experimental data confirming the existence of the ether. But neither were there data refuting it. Newton's authority contributed to the ether coming to be regarded as one of the most important concepts of physics. Everything caused by gravitational and electromagnetic forces came to be subsumed under the concept of “ether”. And since other fundamental interactions were practically unstudied before the advent of atomic physics, any phenomenon and any process began to be explained with the help of the ether.

The ether was supposed to ensure the operation of the law of universal gravitation; the ether turned out to be the medium through which light waves travel; the ether was responsible for all manifestations of electromagnetic forces. The development of physics forced us to endow the ether with more and more contradictory properties.

Michelson's experiment, the greatest of all “negative” experiments in the history of science, led to the conclusion that the hypothesis of a stationary world ether, on which classical physics had placed great hopes, was incorrect. Having considered all the assumptions regarding the ether from the time of Newton to the beginning of the 20th century, A. Einstein summed them up in “The Evolution of Physics”: “All our attempts to make the ether real have failed. It revealed neither its mechanical structure nor absolute motion. Nothing remained of all the properties of the ether... All attempts to discover the properties of the ether led to difficulties and contradictions. After so many failures, there comes a moment when one should completely forget about the ether and try never to mention it again.”

In the special theory of relativity, the concept of “ether” was abandoned.

In the general theory of relativity, space was considered as a material medium interacting with bodies with gravitational masses. The creator of the general theory of relativity himself believed that some omnipresent material environment must still exist and have certain properties. After the publication of works on the general theory of relativity, Einstein repeatedly returned to the concept of “ether” and believed that “in theoretical physics we cannot do without ether, that is, a continuum endowed with physical properties.”

However, the concept of “ether” already belonged to the history of science, there was no return to it, and “a continuum endowed with physical properties” was called a physical vacuum.

In modern physics, it is believed that the role of the fundamental material basis of the world is played by the physical vacuum, which is a universal medium that permeates all space. A physical vacuum is a continuous medium in which there are neither particles of matter nor a field, and at the same time it is a physical object, and not “nothing” devoid of any properties. The physical vacuum is not directly observed; in experiments only the manifestation of its properties is observed.

The work of P. Dirac is of fundamental importance for solving the problems of the vacuum. Before it appeared, it was believed that the vacuum is pure “nothing”, which, whatever transformations it undergoes, is incapable of change. Dirac's theory opened the way to transformations of the vacuum in which the former “nothing” turns into a multitude of “particle-antiparticle” pairs.

Dirac's vacuum is a sea of ​​electrons with negative energy as a homogeneous background that does not affect the occurrence of electromagnetic processes in it. We do not observe electrons with negative energy precisely because they form a continuous invisible background against which all world events take place. Only changes in the state of the vacuum, its “disturbances,” can be observable.

When an energy-rich light quantum, a photon, enters the sea of electrons, it causes a disturbance, and an electron with negative energy can jump to a state with positive energy, i.e. it will be observed as a free electron. A “hole” is then formed in the sea of negative electrons, and a pair is born: an electron plus a hole.

It was initially assumed that the holes in the Dirac vacuum were protons, the only elementary particles then known with a charge opposite to the electron's. However, this hypothesis was not destined to survive: in experiments no one has ever observed the annihilation of an electron with a proton.

The question of the real existence and physical meaning of holes was resolved in 1932 by the American physicist C.D. Anderson, who photographed the tracks of particles arriving from space in a magnetic field. He discovered in cosmic rays the track of a previously unknown particle, identical in all respects to the electron but carrying a charge of the opposite sign. This particle was called the positron. On approaching an electron, a positron annihilates with it into two high-energy photons (gamma quanta), whose necessity is dictated by the laws of conservation of energy and momentum:

    \[ e^- + e^+ \to \gamma + \gamma. \]
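
By conservation of energy, each of the two photons carries away at least the rest energy of the electron (a standard figure, added for reference):

    \[ E_\gamma \ge m_e c^2 \approx 0.511\ \text{MeV}. \]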

Subsequently, it turned out that almost all elementary particles (even those without electrical charges) have their “mirror” counterparts - antiparticles that can annihilate with them. The only exceptions are a few truly neutral particles, such as photons, which are identical to their antiparticles.

The great merit of P. Dirac was that he developed a relativistic theory of electron motion, which predicted the positron, annihilation and the birth of electron-positron pairs from the vacuum. It became clear that the vacuum has a complex structure, from which pairs can be born: particle + antiparticle. Experiments at accelerators confirmed this assumption.

One of the features of the vacuum is the presence in it of fields with energy equal to zero and without real particles. The question arises: how can an electromagnetic field exist without photons, an electron-positron field without electrons and positrons, and so on?

To explain the zero-point oscillations of fields in the vacuum, the concept of a virtual (possible) particle was introduced: a particle with a very short lifetime, of the order of 10^-21 - 10^-24 s. This explains why particles - the quanta of the corresponding fields - are constantly born and disappear in the vacuum. Individual virtual particles cannot be detected in principle, but their overall effect on ordinary microparticles is detected experimentally. Physicists believe that absolutely all reactions, all interactions between real elementary particles, occur with the indispensable participation of the virtual vacuum background, which the elementary particles in turn also influence. Ordinary particles give rise to virtual particles. Electrons, for example, constantly emit and immediately absorb virtual photons.
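
The lifetimes quoted follow from the energy-time uncertainty relation (a standard estimate, added for illustration): an energy fluctuation ΔE can exist no longer than

    \[ \Delta t \sim \frac{\hbar}{\Delta E}; \]

for ΔE = m_e c² ≈ 0.5 MeV this gives Δt ≈ 10^-21 s, and for energies of the order of a nucleon mass, about 10^-24 s.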

Further research in quantum physics was devoted to studying the possibility of the emergence of real particles from the vacuum; a theoretical justification for this was given by E. Schrödinger in 1939.

Currently, the concept of the physical vacuum most fully developed in the works of G.I. Shipov1, Academician of the Russian Academy of Natural Sciences, remains debatable: his theory has both supporters and opponents.

In 1998, G.I. Shipov developed new fundamental equations describing the structure of the physical vacuum. These equations form a system of first-order nonlinear differential equations that includes the geometrized Heisenberg equations, the geometrized Einstein equations and the geometrized Yang-Mills equations. Space-time in G.I. Shipov's theory is not only curved, as in Einstein's theory, but also twisted, as in Riemann-Cartan geometry. The French mathematician Élie Cartan was the first to express the idea that fields generated by rotation should exist in nature; these fields are called torsion fields. To take into account the torsion of space, G.I. Shipov introduced a set of angular coordinates into the geometrized equations, which made it possible to use in the theory of the physical vacuum an angular metric that determines the square of an infinitesimal rotation of a four-dimensional reference frame.

The addition of rotational coordinates, with the help of which the torsion field is described, led to the extension of the principle of relativity to physical fields: all physical fields included in the vacuum equations are relative in nature.

After appropriate simplifications, the vacuum equations lead to the equations and principles of quantum theory. The quantum theory thus obtained proves to be deterministic, although a probabilistic interpretation of the behavior of quantum objects remains inevitable. Particles represent the limiting case of a purely field formation when the mass (or charge) of this formation tends to a constant value; in this limiting case wave-particle dualism arises. Since the relative nature of physical fields associated with rotation is not taken into account in conventional quantum theory, that theory is incomplete, which confirms A. Einstein's conjecture that “a more perfect quantum theory can be found by expanding the principle of relativity”2.

Shipov's vacuum equations describe curved and twisted space-time, interpreted as vacuum excitations in a virtual state.

In the ground state, the absolute vacuum has zero average values of angular momentum and other physical characteristics and is unobservable in an unperturbed state. Different states of the vacuum arise during its fluctuations.

If the source of the disturbance is a charge q, the vacuum state manifests itself as an electromagnetic field.

If the source of the disturbance is a mass m, the vacuum state manifests itself as a gravitational field, an idea first expressed by A.D. Sakharov.

If the source of the disturbance is spin, the vacuum state is interpreted as a spin field, or torsion field.

Based on the fact that the physical vacuum is a dynamic system with intense fluctuations, physicists believe that the vacuum is a source of matter and energy, both of that already realized in the Universe and of that in a latent state. In the words of Academician G.I. Naan, “vacuum is everything, and everything is vacuum.”

4.3. Megaworld: modern astrophysical and cosmological concepts

Modern science views the megaworld, or space, as an interacting and developing system of all celestial bodies. The megaworld has a systemic organization in the form of planets and planetary systems arising around stars, and of stellar systems - galaxies.

All existing galaxies are included in the system of the highest order - the Metagalaxy. The dimensions of the Metagalaxy are very large: the radius of the cosmological horizon is 15-20 billion light years.

The concepts “Universe” and “Metagalaxy” are very close: they characterize the same object in different aspects. The concept “Universe” denotes the entire existing material world; the concept “Metagalaxy” denotes the same world, but from the point of view of its structure, as an ordered system of galaxies.

The structure and evolution of the Universe are studied by cosmology. Cosmology as a branch of natural science lies at a unique intersection of science, religion and philosophy. Cosmological models of the Universe are based on certain ideological premises, and these models themselves have great ideological significance.

4.3.1. Modern cosmological models of the Universe

As indicated in the previous chapter, classical science held the so-called steady-state theory of the Universe, according to which the Universe has always been almost the same as it is now. The science of the 19th century considered atoms the eternal simplest elements of matter. The energy source of the stars was unknown, so it was impossible to judge their lifetimes. When they burned out, the Universe would become dark but would still be stationary. Cold stars would continue their chaotic and eternal wandering in space, and the planets their ceaseless flight along their orbits. Astronomy was static: the motions of planets and comets were studied, stars were described and classifications of them were created, which was, of course, very important. But the question of the evolution of the Universe was not raised.

Classical Newtonian cosmology explicitly or implicitly accepted the following postulates1:

    The universe is everything that exists, the “world as a whole.” Cosmology cognizes the world as it exists in itself, regardless of the conditions of knowledge.

    The space and time of the Universe are absolute; they do not depend on material objects and processes.

    Space and time are metrically infinite.

    Space and time are homogeneous and isotropic.

    The Universe is stationary and does not undergo evolution. Specific space systems can change, but not the world as a whole.

In Newtonian cosmology, two paradoxes arose related to the postulate of the infinity of the Universe.

The first paradox is called gravitational. Its essence is that if the Universe is infinite and contains an infinite number of celestial bodies, then the gravitational force will be infinitely large, and the Universe should collapse rather than exist forever.

The second paradox is called photometric: if there is an infinite number of celestial bodies, then the sky must have infinite luminosity, which is not observed.
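
The photometric paradox can be seen from a simple estimate (a standard argument, added for illustration). In an infinite uniform Universe, a spherical shell of radius r and thickness dr contains a number of stars proportional to r², while the flux from each star falls off as 1/r², so every shell contributes equally and the total brightness of the sky diverges:

    \[ \int_0^\infty n L\,\frac{4\pi r^2}{4\pi r^2}\,dr = nL\int_0^\infty dr \to \infty, \]

where n is the number density of stars and L the luminosity of a single star.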

These paradoxes, which cannot be resolved within the framework of Newtonian cosmology, are resolved by modern cosmology, within the boundaries of which the idea of an evolving Universe was introduced.

Modern relativistic cosmology builds models of the Universe, starting from the basic equation of gravity introduced by A. Einstein in the general theory of relativity (GTR).

The basic equation of general relativity connects the geometry of space (more precisely, the metric tensor) with the density and distribution of matter in space.

For the first time in science, the Universe appeared as a physical object. The theory includes its parameters: mass, density, size, temperature.

Einstein's gravitational equation has not one but many solutions, which explains the existence of many cosmological models of the Universe. The first model was developed by A. Einstein himself in 1917. He rejected the postulates of Newtonian cosmology about the absoluteness and infinity of space. In accordance with A. Einstein's cosmological model of the Universe, world space is homogeneous and isotropic, matter is on average uniformly distributed, and the gravitational attraction of masses is compensated by a universal cosmological repulsion. A. Einstein's model is stationary, since the metric of space is considered independent of time. The existence of the Universe is infinite, i.e. it has no beginning or end, and space is boundless but finite.

The universe in A. Einstein’s cosmological model is stationary, infinite in time and limitless in space.

This model seemed quite satisfactory at the time, since it was consistent with all known facts. But new ideas put forward by A. Einstein stimulated further research, and soon the approach to the problem changed decisively.

Also in 1917, the Dutch astronomer W. de Sitter proposed another model, likewise a solution of the gravitational equations. This solution had the property of existing even for an “empty” Universe free of matter. If masses appeared in such a Universe, the solution ceased to be stationary: a kind of cosmic repulsion arose between the masses, tending to move them away from each other. According to W. de Sitter, this tendency to expansion became noticeable only at very large distances.

In 1922, Russian mathematician and geophysicist A.A. Friedman discarded the postulate of classical cosmology about the stationarity of the Universe and obtained a solution to Einstein’s equations, which describes the Universe with “expanding” space.

The solution of A.A. Friedman's equations allows three possibilities. If the average density of matter and radiation in the Universe is equal to a certain critical value, world space turns out to be Euclidean and the Universe expands without limit from its initial point state. If the density is less than the critical value, space has Lobachevsky geometry and likewise expands without limit. Finally, if the density is greater than the critical value, the space of the Universe turns out to be Riemannian; at some stage expansion is replaced by contraction, which continues all the way back to the initial point state.
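
The critical value referred to here is given by the standard formula (added for reference)

    \[ \rho_c = \frac{3H^2}{8\pi G} \sim 10^{-29}\ \text{g/cm}^3, \]

where H is the Hubble constant and G the gravitational constant; densities above, equal to and below ρ_c correspond to the closed (Riemannian), flat (Euclidean) and open (Lobachevsky) variants respectively.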

Since the average density of matter in the Universe is unknown, today we do not know in which of these spaces of the Universe we live.

In 1927, the Belgian abbot and scientist G. Lemaître connected the “expansion” of space with data from astronomical observations. Lemaître introduced the concept of the “beginning of the Universe” as a singularity (i.e. a superdense state) and of the birth of the Universe as the Big Bang.

In 1929, the American astronomer E.P. Hubble discovered a remarkable relationship between the distance and the speed of galaxies: all galaxies are moving away from us at a speed that increases in proportion to the distance - the system of galaxies is expanding.
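
This proportionality is now written as the Hubble law (standard notation, added for reference):

    \[ v = H_0 r, \]

where v is the recession velocity of a galaxy, r its distance and H_0 the Hubble constant.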

The expansion of the Universe has long been considered a scientifically established fact, but at present it does not seem possible to unambiguously resolve the issue in favor of one model or another.

4.3.2. The problem of the origin and evolution of the Universe

No matter how the question of the diversity of cosmological models is resolved, it is obvious that our Universe is evolving. According to the theoretical calculations of G. Lemaître, the radius of the Universe in its original state was 10^-12 cm, which is close to the radius of the electron, and its density was 10^96 g/cm³. In the singular state the Universe was a micro-object of negligible size.

From the initial singular state the Universe passed to expansion as a result of the Big Bang. Since the late 1940s, the physics of processes at different stages of the cosmological expansion has attracted increasing attention in cosmology. G.A. Gamow, a student of A.A. Friedman, developed the model of the hot Universe, considering the nuclear reactions that occurred at the very beginning of the expansion, and called it “Big Bang cosmology”.

Retrospective calculations place the age of the Universe at 13-15 billion years. G.A. Gamow suggested that the temperature of the Universe was initially enormous and fell as the Universe expanded. His calculations showed that in its evolution the Universe passes through certain stages during which chemical elements and structures are formed. In modern cosmology, for clarity, the initial stage of the evolution of the Universe is divided into eras1.

The hadron era (heavy particles entering into strong interactions). The duration of the era is 0.0001 s, the temperature 10^12 kelvin, the density 10^14 g/cm³. At the end of the era, particles and antiparticles annihilate, but a certain number of protons, hyperons and mesons remain.

The lepton era (light particles entering into electromagnetic interaction). The duration of the era is 10 s, the temperature 10^10 kelvin, the density 10^4 g/cm³. The main role is played by light particles taking part in reactions between protons and neutrons.

The photon era. Duration 1 million years. The bulk of the mass-energy of the Universe is carried by photons. By the end of the era, the temperature drops from 10^10 to 3000 kelvin, the density from 10^4 g/cm³ to 10^-21 g/cm³. The main role is played by radiation, which at the end of the era separates from matter.

The stellar era begins 1 million years after the birth of the Universe. In the stellar era the formation of protostars and protogalaxies begins.

Then a grandiose picture of the formation of the structure of the Metagalaxy unfolds.

In modern cosmology, along with the Big Bang hypothesis, use is also made of the so-called inflationary model of the Universe, which considers the idea of the creation of the Universe. This idea has a very complex justification and is associated with quantum cosmology. The model describes the evolution of the Universe starting from the moment 10^-45 s after the start of expansion.

In accordance with the inflation hypothesis, cosmic evolution in the early Universe goes through a number of stages.

The beginning of the Universe is defined by theoretical physicists as a state of quantum supergravity with a radius of the Universe of 10^-50 cm (for comparison: the size of an atom is about 10^-8 cm, and the size of an atomic nucleus 10^-13 cm). The main events in the early Universe took place in the negligibly small interval from 10^-45 s to 10^-30 s.

The inflation stage. As a result of a quantum leap, the Universe passed into a state of excited vacuum and, in the absence of matter and radiation in it, expanded intensively according to an exponential law. In this period the very space and time of the Universe were created. During the inflationary stage, lasting 10^-34 s, the Universe inflated from an unimaginably small quantum size of 10^-33 cm to an unimaginably large 10^1,000,000 cm, which is many orders of magnitude greater than the size of the observable Universe, 10^28 cm. Throughout this initial period there was neither matter nor radiation in the Universe.
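
The exponential law mentioned here has the standard form (added for reference)

    \[ a(t) \propto e^{Ht}, \]

where a(t) is the scale factor of the Universe and H is the nearly constant expansion rate during inflation; over about 10^-34 s the exponential factor grows enormously, which is what inflates the scale by so many orders of magnitude.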

Transition from the inflationary stage to the photon stage. The state of false vacuum decayed; the released energy went into the birth of heavy particles and antiparticles, which, after annihilating, gave a powerful flash of radiation (light) that illuminated space.

The stage of separation of matter from radiation: the matter remaining after annihilation became transparent to radiation, and the contact between matter and radiation disappeared. The radiation separated from matter constitutes the modern relict background, theoretically predicted by G.A. Gamow and experimentally discovered in 1965.

Subsequently, the development of the Universe went in the direction from the maximally simple homogeneous state to the creation of ever more complex structures: atoms (initially hydrogen atoms), galaxies, stars, planets, the synthesis of heavy elements in the interiors of stars, including those necessary for the creation of life, the emergence of life and, as the crown of creation, man.

The difference between the stages of the evolution of the Universe in the inflationary model and in the Big Bang model concerns only the initial stage, of the order of 10^-30 s; beyond that there are no fundamental differences between these models in the understanding of the stages of cosmic evolution. The differences in the explanation of the mechanisms of cosmic evolution are associated with divergent worldviews. From the very appearance of the idea of an expanding and evolving Universe, a struggle began around it.

The first was the problem of the beginning and end of the time of the existence of the Universe, the recognition of which contradicted the materialistic statements about the eternity of time and the infinity of space, the uncreatability and indestructibility of matter.

What are the natural scientific justifications for the beginning and end of the existence of the Universe?

This justification is the theorem proved in 1965 by the British theoretical physicists R. Penrose and S. Hawking, according to which in any model of an expanding Universe there must necessarily be a singularity - a break in the time lines in the past, which can be understood as the beginning of time. The same holds for the situation when expansion is replaced by contraction: then there will be a break in the time lines in the future - the end of time. Moreover, the point at which contraction begins is interpreted by the physicist F. Tipler as the end of time - the Great Drain, into which flow not only galaxies but also the very “events” of the entire past of the Universe.

The second problem is related to the creation of the world out of nothing. Materialists rejected the possibility of creation, since the vacuum is not nothing but a kind of matter. That is true: the vacuum is indeed a special kind of matter. But the fact is that in A.A. Friedman's mathematics the moment at which the expansion of space begins is derived not from an ultrasmall but from a zero volume. In his popular book “The World as Space and Time”, published in 1923, he speaks of the possibility of “the creation of the world out of nothing”.

In G.I. Shipov's theory of the physical vacuum, the highest level of reality is geometric space - Absolute Nothing. This position of his theory echoes the statement of the English mathematician W. Clifford that there is nothing in the world except space with its torsion and curvature, and that matter consists of clumps of space, peculiar hills of curvature against the background of flat space. W. Clifford's ideas were also used by A. Einstein, who in the general theory of relativity first showed the deep connection between the abstract geometric concept of the curvature of space and the physical problems of gravitation.

From absolute Nothing, empty geometric space, as a result of its torsion, space-time vortices of right and left rotation are formed, carrying information. These vortices can be interpreted as an information field permeating space. The equations describing the information field are nonlinear, so information fields can have a complex internal structure, which allows them to be carriers of significant amounts of information.

Primary torsion fields (information fields) generate a physical vacuum, which is the carrier of all other physical fields - electromagnetic, gravitational, torsion. Under conditions of information-energy excitation, vacuum generates material microparticles.

An attempt to solve one of the main problems of the universe - the emergence of everything from nothing - was made in the 1980s by the American physicist A. Guth and the Soviet physicist A. Linde. The energy of the Universe, which is conserved, was divided into a gravitational and a non-gravitational part with opposite signs, so that the total energy of the Universe is equal to zero. Physicists believe that if the predicted non-conservation of baryon number is confirmed, then none of the conservation laws will prevent the birth of the Universe from nothing. For now, this model can only be calculated theoretically, and the question remains open.

The greatest difficulty for scientists arises in explaining the causes of cosmic evolution. If we put aside particulars, we can distinguish two main concepts that explain the evolution of the Universe: the concept of self-organization and the concept of creationism.

For the concept of self-organization, the material Universe is the only reality, and no other reality exists besides it. The evolution of the Universe is described in terms of self-organization: there is a spontaneous ordering of systems in the direction of the formation of increasingly complex structures. Dynamic chaos creates order. The question of the goals of cosmic evolution cannot be posed within the framework of the concept of self-organization.

Within the concept of creationism, i.e. creation, the evolution of the Universe is associated with the realization of a program determined by a reality of a higher order than the material world. Proponents of creationism point to the existence in the Universe of directed nomogenesis (from the Greek nomos - law and genesis - origin): development from simple systems towards ever more complex and information-intensive ones, in the course of which the conditions for the emergence of life and of humans were created. As an additional argument they invoke the anthropic principle, formulated by the English astrophysicists B. Carr and M. Rees.

The essence of the anthropic principle is that the existence of the Universe in which we live depends on the numerical values of the fundamental physical constants - Planck's constant, the gravitational constant, the constants of the interactions, etc.

The numerical values of these constants determine the main features of the Universe: the sizes of atoms, atomic nuclei, planets and stars, the density of matter and the lifetime of the Universe. If these values differed from the existing ones by even an insignificant amount, not only would life be impossible - the Universe itself as a complex ordered structure would be impossible. Hence the conclusion is drawn that the physical structure of the Universe is programmed and directed towards the emergence of life, and that the ultimate goal of cosmic evolution is the appearance of man in the Universe in accordance with the plans of the Creator.

Among modern theoretical physicists there are supporters of both the concept of self-organization and the concept of creationism. The latter recognize that the development of fundamental theoretical physics makes the construction of a unified scientific-theistic picture of the world, synthesizing all achievements in the fields of knowledge and faith, an urgent need; the former adhere to strictly scientific views.

4.3.3. Structure of the Universe

The Universe at various levels, from conventionally elementary particles to giant superclusters of galaxies, is characterized by structure. The modern structure of the Universe is the result of cosmic evolution, during which galaxies were formed from protogalaxies, stars from protostars, and planets from protoplanetary clouds.

The Metagalaxy is the totality of star systems - galaxies; its structure is determined by their distribution in space, which is filled with extremely rarefied intergalactic gas and penetrated by intergalactic rays.

According to modern concepts, the Metagalaxy is characterized by a cellular (mesh, porous) structure. These ideas are based on data from astronomical observations, which have shown that galaxies are not uniformly distributed, but are concentrated near the boundaries of cells, within which there are almost no galaxies. In addition, huge volumes of space have been found (on the order of a million cubic megaparsecs) in which galaxies have not yet been discovered. A spatial model of such a structure can be a piece of pumice, which is heterogeneous in small isolated volumes, but homogeneous in large volumes.

If we take not individual sections of the Metagalaxy, but its large-scale structure as a whole, then it is obvious that in this structure there are no special, distinct places or directions and the matter is distributed relatively evenly.

The age of the Metagalaxy is close to the age of the Universe, since the formation of its structure occurs in the period following the separation of matter and radiation. According to modern data, the age of the Metagalaxy is estimated at 15 billion years. Scientists believe that, apparently, the age of galaxies that formed at one of the initial stages of the expansion of the Metagalaxy is also close to this.

A galaxy is a giant system consisting of clusters of stars and nebulae that form a rather complex configuration in space.

Based on their shape, galaxies are conventionally divided into three types: elliptical, spiral and irregular.

Elliptical galaxies have the spatial shape of an ellipsoid with varying degrees of flattening. They are the simplest in structure: the density of stars decreases uniformly from the center.

Spiral galaxies have the shape of a spiral with spiral arms. This is the most numerous type of galaxy, to which our Galaxy, the Milky Way, belongs.

Irregular galaxies do not have a distinct shape; they lack a central core.

Some galaxies are characterized by exceptionally powerful radio emission, exceeding visible radiation. These are radio galaxies.

Fig. 4.2. Spiral galaxy NGC 224 (Andromeda Nebula)

In the structure of "regular" galaxies one can, to a first approximation, distinguish a central core and a spherical periphery, presented either in the form of huge spiral branches or in the form of an elliptical disk that includes the hottest and brightest stars and massive gas clouds.

Galactic nuclei display their activity in various forms: in a continuous outflow of streams of matter; in ejections of gas clumps and gas clouds with masses of millions of solar masses; and in non-thermal radio emission from the perinuclear region.

The oldest stars, whose age is close to the age of the galaxy, are concentrated in the core of the galaxy. Middle-aged and young stars are located in the galactic disk.

Stars and nebulae within a galaxy move in a rather complex way: together with the galaxy, they take part in the expansion of the Universe; in addition, they participate in the rotation of the galaxy around its axis.

Stars. At the present stage of the evolution of the Universe, its matter is predominantly in the stellar state: 97% of the matter in our Galaxy is concentrated in stars - giant plasma formations of various sizes and temperatures and with different characteristics of motion. In many, if not most, other galaxies "stellar matter" makes up more than 99.9% of their mass.

The ages of stars span a fairly wide range of values: from about 15 billion years, comparable to the age of the Universe, down to hundreds of thousands of years for the youngest. There are stars that are forming at present and are in the protostellar stage, i.e. they have not yet become real stars.

Of great importance is the study of the relationship between stars and the interstellar medium, including the problem of the continuous formation of stars from condensing diffuse (scattered) matter.

The birth of stars occurs in gas-dust nebulae under the influence of gravitational, magnetic and other forces, due to which unstable inhomogeneities form and diffuse matter breaks up into a series of condensations. If such condensations persist long enough, over time they turn into stars. It is important to note that what is born is not an individual isolated star but stellar associations. The resulting gas bodies attract one another, but do not necessarily combine into one huge body: usually they begin to rotate relative to one another, and the centrifugal force of this motion counteracts the gravitational attraction that would lead to further concentration. Stars evolve from protostars - giant gas balls of faint glow and low temperature - into stars proper, dense plasma bodies with internal temperatures of millions of degrees. Then the process of nuclear transformations described by nuclear physics begins. The main evolution of matter in the Universe occurred and still occurs in the depths of stars: it is there that the "melting crucible" that determined the chemical evolution of matter in the Universe is located.

In the depths of stars, at temperatures of the order of 10 million degrees and at very high density, atoms are ionized: the electrons are almost completely, or completely, stripped from their atoms. The remaining nuclei interact with each other, as a result of which hydrogen, abundant in most stars, is converted into helium with the participation of carbon. These and similar nuclear transformations are the source of the colossal energy carried away by stellar radiation.

The enormous energy emitted by stars is generated as a result of nuclear processes occurring inside them. The same forces that are released during the explosion of a hydrogen bomb create energy within the star that allows it to emit light and heat for millions and billions of years by converting hydrogen into heavier elements, primarily helium. As a result, at the final stage of evolution, stars turn into inert (“dead”) stars.
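
As a rough numerical illustration of the mass-defect bookkeeping behind this energy release (the isotope masses are standard reference values, not taken from the text), a minimal Python sketch:

# Illustrative estimate: energy released when four hydrogen nuclei
# fuse into one helium-4 nucleus, computed from the mass defect.
M_H = 1.007825      # mass of a hydrogen-1 atom, atomic mass units (u)
M_HE = 4.002603     # mass of a helium-4 atom, u
U_TO_MEV = 931.494  # energy equivalent of 1 u, MeV

delta_m = 4 * M_H - M_HE  # mass defect, u
print(f"mass defect: {delta_m:.6f} u")
print(f"energy released: {delta_m * U_TO_MEV:.1f} MeV")  # about 26.7 MeV

About 0.7% of the fused mass is carried away as radiation, which is what lets a star shine for billions of years.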

Stars do not exist in isolation, but form systems. The simplest stellar systems - the so-called multiple systems - consist of two, three, four, five or more stars orbiting around a common center of gravity. The components of some multiple systems are surrounded by a common shell of diffuse matter, the source of which, apparently, is the stars themselves, which eject it into space in the form of a powerful gas flow.

Stars also unite into even larger groups - star clusters, which can have an "open" (scattered) or "globular" structure. Open star clusters number several hundred individual stars; globular clusters, hundreds of thousands. Associations and clusters of stars are not immutable or eternally existing either: after a certain time, estimated in millions of years, they are dispersed by the forces of galactic rotation.

The Solar System is a group of celestial bodies, very different in size and physical structure. It includes the Sun, nine large planets, dozens of planetary satellites, thousands of small planets (asteroids), hundreds of comets, and countless meteoritic bodies moving both in swarms and as individual particles. By 1979, 34 satellites and about 2000 asteroids were known. All these bodies are united into one system by the gravitational pull of the central body, the Sun. The Solar System is an ordered system with its own structural laws. Its unified nature is manifested in the fact that all the planets revolve around the Sun in the same direction and almost in the same plane. Most planetary satellites (moons) revolve in the same direction and, in most cases, in the equatorial plane of their planet. The Sun, the planets and the planetary satellites rotate around their axes in the same direction in which they move along their trajectories. The structure of the Solar System is also regular: each subsequent planet is approximately twice as far from the Sun as the previous one. Given these regularities, an accidental formation of the Solar System seems impossible.
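
The regularity just mentioned - each planet roughly twice as far out as the previous one - is usually written as the Titius-Bode rule. A minimal Python sketch (the rule's coefficients and the planetary distances are standard reference values, not taken from the text):

# Titius-Bode rule: a(n) = 0.4 + 0.3 * 2**n astronomical units,
# with no n term for Mercury, n = 0 for Venus, 1 for Earth, ...
actual = {"Mercury": 0.39, "Venus": 0.72, "Earth": 1.00, "Mars": 1.52,
          "asteroids": 2.77, "Jupiter": 5.20, "Saturn": 9.54, "Uranus": 19.2}
ns = [None, 0, 1, 2, 3, 4, 5, 6]

for (name, a), n in zip(actual.items(), ns):
    rule = 0.4 if n is None else 0.4 + 0.3 * 2 ** n
    print(f"{name:9s} rule = {rule:5.2f} AU, actual = {a:5.2f} AU")

The rule is empirical rather than exact - it fails noticeably for Neptune - but it illustrates the kind of structural regularity the text appeals to.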

There are also no generally accepted conclusions about the mechanism of planet formation in the Solar System. The solar system, according to scientists, was formed approximately 5 billion years ago, and the Sun is a star of the second (or even later) generation. Thus, the Solar System arose from the products of the life activity of stars of previous generations, which accumulated in gas and dust clouds. This circumstance gives reason to call the solar system a small part of stardust. Science knows less about the origin of the Solar System and its historical evolution than is necessary to build a theory of planet formation. From the first scientific hypotheses put forward approximately 250 years ago to the present day, a large number of different models of the origin and development of the Solar system have been proposed, but none of them has been promoted to the rank of a generally accepted theory. Most of the previously put forward hypotheses are today of only historical interest.

The first theories of the origin of the Solar System were put forward by the German philosopher I. Kant and the French mathematician P.S. Laplace. Their theories entered science as the collective Kant-Laplace cosmogonic hypothesis, although they were developed independently of each other.

According to this hypothesis, the system of planets around the Sun was formed as a result of the forces of attraction and repulsion between particles of scattered matter (nebulae) in rotational motion around the Sun.

The next stage in the development of views on the formation of the Solar System began with the hypothesis of the English physicist and astrophysicist J.H. Jeans. He suggested that the Sun once collided with another star, as a result of which a stream of gas was torn out of it, which, condensing, transformed into planets. However, given the enormous distances between stars, such a collision seems completely incredible. A more detailed analysis revealed other shortcomings of this theory as well.

Modern concepts of the origin of the planets of the Solar System proceed from the fact that not only mechanical forces must be taken into account, but others as well, in particular electromagnetic ones. This idea was put forward by the Swedish physicist and astrophysicist H. Alfvén and the English astrophysicist F. Hoyle. It is considered probable that electromagnetic forces played the decisive role in the birth of the Solar System.

According to modern ideas, the original gas cloud, from which both the Sun and the planets were formed, consisted of ionized gas subject to the influence of electromagnetic forces. After the Sun was formed from a huge gas cloud through concentration, small parts of this cloud remained at a very large distance from it. The gravitational force began to attract the remaining gas to the resulting star - the Sun, but its magnetic field stopped the falling gas at various distances - exactly where the planets are located. Gravitational and magnetic forces influenced the concentration and condensation of the falling gas, and as a result, planets were formed.

When the largest planets arose, the same process was repeated on a smaller scale, thus creating systems of satellites. Theories of the origin of the Solar system are hypothetical in nature, and it is impossible to unambiguously resolve the issue of their reliability at the present stage of scientific development. All existing theories have contradictions and unclear areas.

Questions for self-control

1. What is the essence of the systemic approach to the structure of matter?

2. Reveal the relationship between the micro-, macro- and megaworlds.

3. What ideas about matter and field as forms of matter were developed within the framework of classical physics?

4. What does the concept of a quantum mean? Describe the main stages in the development of ideas about quanta.

5. What does the concept of "wave-particle duality" mean? What significance does N. Bohr's principle of complementarity have in describing the physical reality of the microworld?

6. What influence did quantum mechanics have on modern genetics? What are the main principles of wave genetics?

7. What does the concept of "physical vacuum" mean? What is its role in the evolution of matter?

8. Identify the main structural levels of the organization of matter in the microworld and characterize them.

9. Identify the main structural levels of the organization of matter in the megaworld and characterize them.

10. What models of the Universe have been developed in modern cosmology?

11. Describe the main stages of the evolution of the Universe from the point of view of modern science.

Bibliography

1. Weinberg S. The First Three Minutes: A Modern View of the Origin of the Universe. - M.: Nauka, 1981.

2. Vladimirov Yu.S. Fundamental Physics, Philosophy and Religion. - Kostroma: MITSAOST Publishing House, 1996.

3. Gernek F. Pioneers of the Atomic Age. - M.: Progress, 1974.

4. Dorfman Ya.G. World History of Physics from the Beginning of the 19th Century to the Mid-20th Century. - M.: Nauka, 1979.

5. Idlis G.M. Revolution in Astronomy, Physics and Cosmology. - M.: Nauka, 1985.

6. Capra F. The Tao of Physics. - St. Petersburg, 1994.

7. Kirillin V.A. Pages of the History of Science and Technology. - M.: Nauka, 1986.

8. Kudryavtsev P.S. A Course in the History of Physics. - M.: Mir, 1974.

9. Liozzi M. History of Physics. - M.: Mir, 1972.

10. Marion J.B. Physics and the Physical World. - M.: Mir, 1975.

11. Nalimov V.V. On the Verge of the Third Millennium. - M.: Nauka, 1994.

12. Shklovsky I.S. Stars: Their Birth, Life and Death. - M.: Nauka, 1977.

13. Garyaev P.P. The Wave Genome. - M.: Public Benefit, 1994.

14. Shipov G.I. The Theory of Physical Vacuum. A New Paradigm. - M.: NT-Center, 1993.

Physics of the microworld

Structural levels of matter in physics

(insert picture)

Structural levels of substances in the microcosm

· Molecular level - the level of the molecular structure of substances. A molecule is a single quantum-mechanical system uniting atoms.

· Atomic level - the level of the atomic structure of substances. An atom is a structural element of the microworld, consisting of a nucleus and an electron shell.

· Nucleon level - the level of the nucleus and its constituent particles. A nucleon is the general name for the proton and the neutron, the constituents of atomic nuclei.

· Quark level - the level of elementary particles - quarks and leptons.

Atomic structure

The sizes of atoms are of the order of 10^-10 m.

The sizes of the atomic nuclei of all elements are about 10^-15 m, roughly a hundred thousand times smaller than the atoms themselves.

The nucleus of an atom is positively charged, and the electrons moving around the nucleus carry a negative electric charge. The positive charge of the nucleus is equal to the sum of the negative charges of the electrons, so the atom as a whole is electrically neutral.

Rutherford's planetary model of the atom. (insert picture)

The circular orbits of four electrons are shown.

Electrons in orbits are held by forces of electrical attraction between them and the nucleus of the atom

No two electrons in an atom can occupy the same energy state (the Pauli exclusion principle). In the electron shell, electrons are arranged in layers, each holding a definite number of electrons: the first layer, closest to the nucleus, holds 2; the second, 8; the third, 18; the fourth, 32, and so on. Beyond the second layer, the electron orbits are further subdivided into sublayers.
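
The quoted capacities follow the well-known rule 2n^2 for the n-th layer; a minimal Python check:

# The maximum number of electrons in shell n is 2 * n**2,
# reproducing the sequence 2, 8, 18, 32 quoted above.
for n in range(1, 5):
    print(f"shell {n}: up to {2 * n ** 2} electrons")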

Energy levels of the atom and a conventional representation of the processes of absorption and emission of photons (see picture)

When passing from a lower energy level to a higher one, an atom absorbs a quantum of energy equal to the difference between the energies of the two levels. An atom emits a quantum of energy when one of its electrons passes - abruptly, by a jump - from a higher energy level to a lower one.
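
As an illustration (the hydrogen atom and its Bohr level formula E_n = -13.6/n^2 eV are an assumed example, not given in the text), a Python sketch of the photon emitted in a 3 -> 2 transition:

# Energy and wavelength of the photon emitted when an electron in
# hydrogen drops from level 3 to level 2 (the red Balmer line).
H = 6.626e-34    # Planck constant, J*s
C = 2.998e8      # speed of light, m/s
EV = 1.602e-19   # joules per electronvolt

def level(n):
    # Bohr energy of hydrogen level n, in eV (assumed model)
    return -13.6 / n ** 2

delta_e = level(3) - level(2)  # ~1.89 eV, the difference of the two levels
freq = delta_e * EV / H        # E = h*nu  =>  nu = E / h
print(f"dE = {delta_e:.2f} eV, lambda = {C / freq * 1e9:.0f} nm")  # ~656 nm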

General classification of elementary particles

Elementary particles are indecomposable particles whose internal structure is not a combination of other free particles; they are not atoms or atomic nuclei (with the exception of the proton).

Classification

· Photons

· Electrons

· Baryons (for example, the neutron)

Basic characteristics of elementary particles

Mass:

· Leptons (light)

· Mesons (medium)

· Baryons (heavy)

Lifetime:

· Stable

· Quasi-stable (decaying via the weak and electromagnetic interactions)

· Resonances (unstable, short-lived particles that decay via the strong interaction)

Interactions in a microcosm

· The strong interaction binds protons and neutrons in atomic nuclei, and quarks in nucleons.

· The electromagnetic interaction provides the binding of electrons to nuclei and of atoms within molecules.

· The weak interaction provides transitions between different types of quarks; in particular, it determines the decay of the neutron and causes mutual transitions between different types of leptons.

· The gravitational interaction in the microworld at distances of about 10^-13 cm can be neglected; however, at distances of the order of 10^-33 cm the special properties of the physical vacuum begin to show themselves - virtual superheavy particles surround themselves with a gravitational field that distorts the geometry of space.

Characteristics of the interactions of elementary particles:

· Strong: relative intensity 1 (taken as the standard); range ~10^-13 cm; acts between hadrons (neutrons, protons, mesons); carriers are gluons (massless).

· Electromagnetic: relative intensity ~10^-2; unlimited range; acts between all electrically charged bodies and particles; carrier is the photon (massless).

· Weak: relative intensity ~10^-14; range ~10^-16 cm; acts between all elementary particles except photons; carriers are the vector bosons W+, W- (mass ~80 GeV) and Z0 (mass ~91 GeV).

· Gravitational: relative intensity ~10^-39; unlimited range; acts between all particles; carrier is the graviton (a hypothetical massless particle).

Structural levels of organization of matter (field)

Field

· Gravitational (quanta - gravitons)

· Electromagnetic (quanta - photons)

· Nuclear (quanta - mesons)

· Electron-positron (quanta - electrons, positrons)

Structural levels of matter organization (matter and field)

Matter and field differ:

· in rest mass;

· in the patterns of their motion;

· in their degree of permeability;

· in the degree of concentration of mass and energy;

· as particle and wave entities.

General conclusion : the difference between substances and fields correctly characterizes the real world in a macroscopic approximation. This difference is not absolute, and when moving to micro-objects its relativity is clearly revealed. In the microcosm, the concepts of “particles” (matter) and “waves” (fields) act as additional characteristics that express the internal inconsistency of the essence of microobjects.

Quarks are components of elementary particles

All quarks have a fractional electric charge. Quarks are characterized by the quantum numbers of strangeness, charm and beauty.

The baryon charge of every quark is 1/3, and that of the corresponding antiquarks is -1/3. Each quark can exist in three states, called color states: R - red, G - green and B - blue.
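
A small Python sketch assembling the familiar nucleons from these quantum numbers (the uud and udd compositions of the proton and neutron are standard quark-model facts, not stated above):

# Electric and baryon charge of the proton (uud) and neutron (udd),
# built from the fractional quark charges quoted above.
from fractions import Fraction

CHARGE = {"u": Fraction(2, 3), "d": Fraction(-1, 3)}  # in units of e
BARYON = Fraction(1, 3)                               # per quark

for name, quarks in [("proton", "uud"), ("neutron", "udd")]:
    q = sum(CHARGE[x] for x in quarks)
    b = BARYON * len(quarks)
    print(f"{name}: electric charge {q}, baryon charge {b}")

Three quarks always give an integer electric charge and a baryon charge of 1, which is why free particles never exhibit fractional charge.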


Test

Microworld: concepts of modern physics

Introduction

The microworld is the world of extremely small, not directly observable micro-objects (spatial dimensions from 10^-8 down to 10^-16 cm, lifetimes from infinity down to 10^-24 s).

Quantum mechanics (wave mechanics) is a theory that establishes a method of description and laws of motion at the micro level.

The study of microworld phenomena led to results that sharply diverged from those generally accepted in classical physics and even in the theory of relativity. Classical physics saw its goal in describing objects that exist in space and in formulating the laws governing their change in time. But for such phenomena as radioactive decay, diffraction and the emission of spectral lines, one can only assert that there is a certain probability that an individual object is such-and-such and has such-and-such a property. Quantum mechanics has no place for laws governing the change of a single object in time.

Classical mechanics is characterized by the description of particles by specifying their position and velocities and the dependence of these quantities on time. In quantum mechanics, identical particles under identical conditions can behave differently.

1. Microworld: concepts of modern physics describing the microworld

When moving to the study of the microworld, it was discovered that physical reality is unified and there is no gap between matter and field.

While studying microparticles, scientists were faced with a paradoxical situation from the point of view of classical science: the same objects exhibited both wave and corpuscular properties.

The first step in this direction was taken by the German physicist M. Planck. As is known, at the end of the 19th century a difficulty arose in physics that was called the "ultraviolet catastrophe": according to calculations using the formulas of classical electrodynamics, the intensity of the thermal radiation of a completely black body should have increased without limit, which clearly contradicted experience. In the course of his research on thermal radiation, which M. Planck called the hardest of his life, he came to the stunning conclusion that in radiation processes energy can be emitted or absorbed not continuously and not in arbitrary amounts, but only in certain indivisible portions - quanta. The energy of a quantum is determined by the frequency ν of the corresponding radiation and a universal natural constant, which M. Planck introduced into science under the symbol h: E = hν.
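
A minimal Python sketch of the "ultraviolet catastrophe" itself (the temperature and frequencies are arbitrary illustrative choices): the classical Rayleigh-Jeans spectral density grows without bound with frequency, while Planck's formula, built on discrete quanta E = hν, stays finite.

# Spectral energy density of black-body radiation: classical vs quantum.
import math

H = 6.626e-34   # Planck constant, J*s
K = 1.381e-23   # Boltzmann constant, J/K
C = 2.998e8     # speed of light, m/s
T = 5000.0      # temperature, K (arbitrary)

def rayleigh_jeans(nu):
    return 8 * math.pi * nu ** 2 * K * T / C ** 3   # diverges as nu**2

def planck(nu):
    return (8 * math.pi * H * nu ** 3 / C ** 3) / math.expm1(H * nu / (K * T))

for nu in (1e13, 1e14, 1e15, 1e16):
    print(f"nu = {nu:.0e} Hz: RJ = {rayleigh_jeans(nu):.2e}, Planck = {planck(nu):.2e}")

At low frequencies the two formulas agree; at high frequencies the classical value keeps growing while the quantum value falls off exponentially.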

If the introduction of the quantum had not yet created a real quantum theory, as M. Planck repeatedly emphasized, then on December 14, 1900, the day the formula was published, its foundation was laid. Therefore, in the history of physics, this day is considered the birthday of quantum physics. And since the concept of an elementary quantum of action subsequently served as the basis for understanding all the properties of the atomic shell and atomic nucleus, December 14, 1900 should be considered both as the birthday of all atomic physics and the beginning of a new era of natural science.

The first physicist who enthusiastically accepted the discovery of the elementary quantum of action and creatively developed it was A. Einstein. In 1905, he transferred the brilliant idea of ​​quantized absorption and release of energy during thermal radiation to radiation in general and thus substantiated the new doctrine of light.

The idea of ​​light as a stream of rapidly moving quanta was extremely bold, almost daring, and few initially believed in its correctness. First of all, M. Planck himself did not agree with the expansion of the quantum hypothesis to the quantum theory of light, referring his quantum formula only to the laws of thermal radiation of a black body that he considered.

A. Einstein suggested that we are talking about a natural law of a universal nature. Without looking back at the prevailing views in optics, he applied Planck's hypothesis to light and came to the conclusion that the corpuscular structure of light should be recognized.

The quantum theory of light, or Einstein's photon theory, asserted that light is a wave phenomenon constantly propagating in space, and at the same time that light energy, in order to be physically effective, is concentrated only in certain places, so that light has a discontinuous structure. Light can be considered a stream of indivisible energy grains - light quanta, or photons. Their energy is determined by the elementary quantum of action (Planck's constant) and the corresponding frequency. Light of different colors consists of light quanta of different energies.

Einstein's idea of light quanta helped to understand and visualize the phenomenon of the photoelectric effect, the essence of which is the ejection of electrons from a substance by electromagnetic waves. Experiments showed that the presence or absence of the photoelectric effect is determined not by the intensity of the incident wave but by its frequency. If we assume that each electron is ejected by one photon, the following becomes clear: the effect occurs only if the energy of the photon, and hence its frequency, is high enough to overcome the binding forces between the electron and the substance.
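
A Python sketch of this energy balance, h·ν = A + E_k (the work function A here is an assumption, typical of a cesium-like metal; it is not given in the text):

# Einstein's photoelectric equation: a photon ejects an electron only
# if its energy exceeds the work function A of the metal.
H = 6.626e-34    # Planck constant, J*s
EV = 1.602e-19   # joules per electronvolt
A = 2.1          # work function, eV (assumed, roughly cesium)

print(f"threshold frequency: {A * EV / H:.2e} Hz")

for nu in (3e14, 5e14, 8e14):
    e_photon = H * nu / EV
    if e_photon > A:
        print(f"nu = {nu:.0e} Hz: electron ejected, E_k = {e_photon - A:.2f} eV")
    else:
        print(f"nu = {nu:.0e} Hz: no effect (photon carries only {e_photon:.2f} eV)")

Raising the intensity at a sub-threshold frequency only increases the number of photons, not their individual energy - which is exactly what the experiments showed.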

The correctness of this interpretation of the photoelectric effect (for this work Einstein received the Nobel Prize in Physics in 1922) was confirmed 10 years later in the experiments of the American physicist R.A. Millikan. The phenomenon discovered in 1923 by the American physicist A.H. Compton (the Compton effect), observed when very hard X-rays act on atoms with free electrons, once again and finally confirmed the quantum theory of light. This theory is one of the most experimentally confirmed physical theories. But the wave nature of light had already been firmly established by experiments on interference and diffraction.

A paradoxical situation arose: it was discovered that light behaves not only as a wave but also as a stream of corpuscles. In diffraction and interference experiments its wave properties are revealed; in the photoelectric effect, its corpuscular properties. The photon turned out to be a corpuscle of a very special kind: the main characteristic of its discreteness - its inherent portion of energy - is calculated through a purely wave characteristic, the frequency ν (E = hν).

Like all great natural scientific discoveries, the new doctrine of light had fundamental theoretical and epistemological significance. The old position about the continuity of natural processes, which was thoroughly shaken by M. Planck, was excluded by Einstein from the much larger field of physical phenomena.

Developing the ideas of M. Planck and A. Einstein, the French physicist Louis de Broglie in 1924 put forward the idea of the wave properties of matter. In his work "Light and Matter" he wrote about the need to use wave and corpuscular concepts not only, in accordance with the teachings of A. Einstein, in the theory of light, but also in the theory of matter.

L. de Broglie argued that wave properties, along with corpuscular ones, are inherent in all types of matter: electrons, protons, atoms, molecules and even macroscopic bodies.

According to de Broglie, any body of mass m moving with speed V corresponds to a wave of length λ = h / (mV).

In fact, a similar formula was known earlier, but only in relation to light quanta - photons.
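
A rough numerical comparison (the particle parameters are arbitrary assumptions) of why the de Broglie wave matters for an electron but is unobservable for a macroscopic body:

# de Broglie wavelength: lambda = h / (m * v)
H = 6.626e-34   # Planck constant, J*s

def de_broglie(m, v):
    return H / (m * v)

m_e = 9.109e-31                 # electron mass, kg
print(de_broglie(m_e, 1e6))     # ~7e-10 m: atomic scale, measurable
print(de_broglie(0.145, 40.0))  # 145-g ball at 40 m/s: ~1e-34 m, unobservable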


2. Views of M. Planck, Louis De Broglie, E. Schrödinger, W. Heisenberg, N. Bohr and others on the nature of the microworld

In 1926, the Austrian physicist E. Schrödinger found a mathematical equation that determines the behavior of matter waves, the so-called Schrödinger equation. The English physicist P. Dirac generalized it.

The bold thought of L. de Broglie about the universal “dualism” of particles and waves made it possible to construct a theory with the help of which it was possible to embrace the properties of matter and light in their unity. In this case, light quanta became a special moment of the general structure of the microcosm.

Waves of matter, which were initially presented as visually real wave processes similar to acoustic waves, took on an abstract mathematical appearance and, thanks to the German physicist M. Born, received a symbolic meaning as “waves of probability.”

However, de Broglie's hypothesis needed experimental confirmation. The most convincing evidence of the existence of the wave properties of matter was the discovery of electron diffraction in 1927 by the American physicists C. Davisson and L. Germer. Subsequently, experiments were carried out to detect the diffraction of neutrons, atoms and even molecules. In all cases the results fully confirmed de Broglie's hypothesis. Even more important was the discovery of new elementary particles predicted on the basis of the system of formulas of the developed wave mechanics.

Recognition of wave-particle duality in modern physics has become universal. Any material object is characterized by the presence of both corpuscular and wave properties.

The fact that the same object appears as both a particle and a wave destroyed traditional ideas.

The form of a particle implies an entity contained in a small volume or finite region of space, while a wave propagates over vast regions of space. In quantum physics, these two descriptions of reality are mutually exclusive, but equally necessary in order to fully describe the phenomena in question.

The final formation of quantum mechanics as a consistent theory occurred thanks to the work of the German physicist W. Heisenberg, who established the uncertainty principle, and the Danish physicist N. Bohr, who formulated the principle of complementarity, on the basis of which the behavior of micro-objects is described.

The essence of W. Heisenberg's uncertainty relation is as follows. Suppose the task is to determine the state of a moving particle. If the laws of classical mechanics could be used, the situation would be simple: one would only have to determine the particle's coordinates and its momentum (quantity of motion). But the laws of classical mechanics cannot be applied to microparticles: it is impossible, not merely in practice but in principle, to establish with equal accuracy both the position and the momentum of a microparticle. Only one of these two properties can be determined exactly. In his book "Physics of the Atomic Nucleus" W. Heisenberg reveals the content of the uncertainty relation. He writes that one can never know both parameters - position and velocity - exactly at the same time: one can never know simultaneously where a particle is, how fast it is moving and in what direction. If an experiment is performed that shows exactly where the particle is at a given moment, its motion is disturbed to such an extent that the particle cannot be found afterwards. Conversely, with an accurate measurement of velocity it is impossible to determine the particle's location.

From the point of view of classical mechanics, the uncertainty relation seems absurd. To better assess the current situation, we must keep in mind that we humans live in the macroworld and, in principle, cannot build a visual model that would be adequate to the microworld. The uncertainty relation is an expression of the impossibility of observing the microworld without disturbing it. Any attempt to provide a clear picture of microphysical processes must rely on either a corpuscular or wave interpretation. In the corpuscular description, the measurement is carried out in order to obtain an accurate value of the energy and magnitude of the movement of a microparticle, for example, during electron scattering. In experiments aimed at precise location determination, on the contrary, the wave explanation is used, in particular when electrons pass through thin plates or when observing the deflection of rays.

The existence of the elementary quantum of action is an obstacle to establishing, simultaneously and with equal accuracy, quantities that are "canonically conjugate," i.e. the position and the momentum of a particle.
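
A minimal numerical sketch of the relation Δx·Δp ≥ ħ/2 (localizing an electron to a typical atomic size is an assumed example, not taken from the text):

# Minimal momentum and velocity uncertainty of an electron confined
# to a region the size of an atom.
HBAR = 1.055e-34   # reduced Planck constant, J*s
M_E = 9.109e-31    # electron mass, kg

dx = 1e-10             # position uncertainty ~ atomic size, m
dp = HBAR / (2 * dx)   # lower bound on momentum uncertainty
print(f"dp >= {dp:.2e} kg*m/s, dv >= {dp / M_E:.2e} m/s")  # dv ~ 6e5 m/s

The resulting velocity uncertainty is comparable to orbital speeds in the atom itself, so the classical notion of a trajectory loses meaning.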

The fundamental principle of quantum mechanics, along with the uncertainty relation, is the principle of complementarity, to which N. Bohr gave the following formulation: "The concepts of particle and wave complement each other and at the same time contradict each other; they are complementary pictures of what is happening."

The contradictions in the particle-wave properties of micro-objects are the result of the uncontrolled interaction of micro-objects and macro-devices. There are two classes of devices: in some, quantum objects behave like waves, in others - like particles. In experiments, we do not observe reality as such, but only a quantum phenomenon, including the result of the interaction of a device with a microobject. M. Born figuratively noted that waves and particles are “projections” of physical reality onto the experimental situation.

A scientist studying the microworld thus turns from an observer into an actor, since physical reality depends on the device, i.e. ultimately from the arbitrariness of the observer. Therefore, N. Bohr believed that a physicist does not know reality itself, but only his own contact with it.

An essential feature of quantum mechanics is the probabilistic nature of its predictions of the behavior of micro-objects, described by means of the E. Schrödinger wave function. The wave function determines the parameters of the future state of a micro-object with varying degrees of probability. This means that identical experiments performed on identical objects will give different results each time; however, some values will be more probable than others, i.e. only the probability distribution of the values will be known.
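
A toy Python sketch of this probabilistic character (the two-level system and its amplitudes are an arbitrary assumption): repeated identical "measurements" give scattered individual outcomes but a stable distribution.

# Sampling measurement outcomes for a two-level system with
# amplitudes a0, a1; outcome probabilities are |a0|^2 and |a1|^2.
import random
from collections import Counter

a0, a1 = 0.6, 0.8   # normalized: 0.36 + 0.64 = 1
p0 = a0 ** 2        # probability of finding the lower state

counts = Counter("lower" if random.random() < p0 else "upper"
                 for _ in range(10_000))
print(counts)  # roughly 3600 "lower" and 6400 "upper", varying run to run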

Taking into account the factors of uncertainty, complementarity and probability, N. Bohr gave the so-called "Copenhagen" interpretation of the essence of quantum theory: "Previously it was generally accepted that physics describes the Universe. We now know that physics describes only what we can say about the Universe."

N. Bohr's position was shared by W. Heisenberg, M. Born, W. Pauli and a number of other lesser-known physicists. Proponents of the Copenhagen interpretation of quantum mechanics did not recognize causality or determinism in the microworld and believed that the basis of physical reality is fundamental uncertainty - indeterminism.

Representatives of the Copenhagen school were sharply opposed by G.A. Lorentz, M. Planck, M. Laue, A. Einstein, P. Langevin and others. A. Einstein wrote about this to M. Born: "In our scientific views we have developed into antipodes. You believe in a God who plays dice, and I believe in the complete lawfulness of objective existence... What I am firmly convinced of is that in the end they will settle on a theory in which not probabilities but facts will be naturally connected." He objected to the uncertainty principle, stood for determinism, and rejected the role assigned to the act of observation in quantum mechanics. The further development of physics showed that Einstein was right in believing that quantum theory in its existing form is simply incomplete: the fact that physicists cannot yet rid themselves of uncertainty does not indicate the limitations of the scientific method, as N. Bohr argued, but only the incompleteness of quantum mechanics. Einstein offered more and more new arguments in support of his point of view.

The most famous is the so-called Einstein-Podolsky-Rosen paradox, or EPR paradox, with the help of which they wanted to prove the incompleteness of quantum mechanics. The paradox is a thought experiment: what would happen if a particle consisting of two protons decayed so that the protons flew apart in opposite directions? Due to their common origin, their properties are related or, as physicists say, correlated with each other. According to the law of conservation of momentum, if one proton flies upward, the second must fly downward. Having measured the momentum of one proton, we will definitely know the momentum of the other, even if it has flown to the other end of the Universe. There is a non-local connection between the particles, which Einstein called "spooky action at a distance," in which each particle at any given moment knows where the other is and what is happening to it.

The EPR paradox is incompatible with the uncertainty postulated in quantum mechanics. Einstein believed that there were some hidden parameters that had not been taken into account. The questions of whether determinism and causality exist in the microworld, whether quantum mechanics is complete, and whether there are hidden parameters it does not take into account were the subject of debate among physicists for more than half a century and found their resolution at the theoretical level only at the end of the 20th century.

In 1964, J.S. Bell showed that quantum mechanics predicts a stronger correlation between interconnected particles than Einstein had assumed.

Bell's theorem states that if some objective Universe exists, and if the equations of quantum mechanics are structurally similar to that Universe, then some kind of nonlocal connection exists between two particles that ever come into contact. The essence of Bell's theorem is that there are no isolated systems: every particle of the Universe is in “instantaneous” communication with all other particles. The entire system, even if its parts are separated by huge distances and there are no signals, fields, mechanical forces, energy, etc. between them, functions as a single system.
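
A short Python sketch of the standard CHSH form of Bell's argument (the singlet correlation E(a, b) = -cos(a - b) and the angle choices are the conventional textbook ones, not taken from the text):

# For any local hidden-variable theory |S| <= 2; the quantum
# prediction for a spin-1/2 singlet pair reaches 2*sqrt(2).
import math

def E(a, b):
    return -math.cos(a - b)   # quantum correlation of the two outcomes

a1, a2 = 0.0, math.pi / 2              # first observer's two angles
b1, b2 = math.pi / 4, 3 * math.pi / 4  # second observer's two angles

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(f"|S| = {abs(S):.3f} (local-realist bound: 2)")  # ~2.828

It is this excess over 2 that the experiments described below were designed to detect.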

In the early 1980s A. Aspect (University of Paris) tested this connection experimentally by studying the polarization of pairs of photons emitted by a single source towards isolated detectors. Comparison of the two series of measurements showed consistency between them. From the point of view of the well-known physicist D. Bohm, A. Aspect's experiments confirmed Bell's theorem and supported the position of nonlocal hidden variables, whose existence A. Einstein had assumed. In D. Bohm's interpretation of quantum mechanics there is no uncertainty in the coordinates of a particle and its momentum.

Scientists have suggested that communication is carried out through the transfer of information, the carriers of which are special fields.

3. Wave genetics

The discoveries made in quantum mechanics had a fruitful impact not only on the development of physics, but also on other areas of natural science, primarily biology, within which the concept of wave, or quantum, genetics was developed.

When J. Watson, M. Wilkins and F. Crick received the Nobel Prize in 1962 for the discovery of the DNA double helix carrying hereditary information, it seemed to geneticists that the main problems of the transmission of genetic information were close to being resolved. All information is recorded in the genes, whose combination in the cell's chromosomes determines the development program of the organism. The task was to decipher the genetic code, by which was meant the entire sequence of nucleotides in DNA.

However, reality did not live up to scientists' expectations. After the discovery of the structure of DNA and a detailed examination of this molecule's participation in genetic processes, the main problem of the phenomenon of life - the mechanisms of its reproduction - remained essentially unsolved. Deciphering the genetic code made it possible to explain the synthesis of proteins. Classical geneticists proceeded from the premise that genetic molecules, DNA, are of a material nature and work like a substance, representing a material matrix on which the material genetic code is written and in accordance with which the fleshly, material organism is built. But the question of how the spatiotemporal structure of an organism is encoded in the chromosomes cannot be resolved on the basis of knowledge of the nucleotide sequence alone. The Soviet scientists A.A. Lyubishchev and A.G. Gurvich, back in the 1920s and 1930s, expressed the idea that treating genes as purely material structures is clearly insufficient for a theoretical description of the phenomenon of life.

A.A. Lyubishchev, in his work “On the Nature of Hereditary Factors,” published in 1925, wrote that genes are neither pieces of a chromosome, nor molecules of autocatalytic enzymes, nor radicals, nor a physical structure. He believed that the gene should be recognized as a potential substance. A better understanding of the ideas of A.A. Lyubishchev is promoted by the analogy of a genetic molecule with musical notation. Music notation itself is material and represents icons on paper, but these icons are realized not in material form, but in sounds, which are acoustic waves.

Developing these ideas, A.G. Gurvich argued that in genetics "it is necessary to introduce the concept of a biological field, the properties of which are formally borrowed from physical concepts." His main idea was that the development of the embryo proceeds according to a pre-established program and takes on the forms that already exist in its field. He was the first to explain the behavior of the components of a developing organism as a whole on the basis of field concepts: it is the field that contains the forms taken by the embryo in the course of development. Gurvich called the virtual form that determines the result of the development process at any given moment a dynamically preformed form, thereby introducing an element of teleology into the original formulation of the field. Having developed the theory of the cellular field, he extended the idea of the field as a principle regulating and coordinating the embryonic process to the functioning of organisms as well. Having substantiated the general idea of the field, Gurvich formulated it as a universal principle of biology. He also discovered biophotonic radiation from cells.

Ideas of Russian biologists A.A. Lyubishchev and A.G. Gurvich are a gigantic intellectual achievement, ahead of its time. The essence of their thoughts is contained in the triad:

Genes are dualistic - they are substance and field at the same time.

The field elements of chromosomes mark out the space-time of the organism and thereby control the development of biosystems.

Genes have aesthetic-imaginative and speech regulatory functions.

These ideas remained underestimated until the appearance of the works of V.P. Kaznacheev in the 60s of the 20th century, in which the predictions of scientists about the presence of field forms of information transfer in living organisms were experimentally confirmed. The scientific direction in biology, represented by the school of V.P. Kaznacheev, was formed as a result of numerous fundamental studies on the so-called mirror cytopathic effect, expressed in the fact that living cells separated by quartz glass, which does not allow a single molecule of substance to pass through, nevertheless exchange information. After the work of V.P. Kaznacheev, the existence of a sign wave channel between the cells of biosystems was no longer in doubt.

Simultaneously with the experiments of V.P. Kaznacheev, the Chinese researcher Jiang Kanzhen conducted a series of supergenetic experiments that echoed the foresight of A.A. Lyubishchev and A.G. Gurvich. The difference in Jiang Kanzhen's work is that he conducted experiments not at the cellular level but at the level of the organism. He proceeded from the premise that DNA - the genetic material - exists in two forms: passive (in the form of DNA) and active (in the form of an electromagnetic field). The first form preserves the genetic code and ensures the stability of the organism, while the second is able to change it by acting on it with bioelectric signals. The Chinese scientist designed equipment that was capable of reading, transmitting over a distance and introducing wave supergenetic signals from a donor biosystem into an acceptor organism. As a result, he produced unimaginable hybrids "forbidden" by official genetics, which operates in terms of real genes only. This is how animal and plant chimeras were born: chicken-ducks; corn from whose cobs wheat ears grew; and so on.

The outstanding experimenter Jiang Kanzhen intuitively grasped certain aspects of the experimental wave genetics he had in fact created; he believed that the carriers of field genetic information were the ultra-high-frequency electromagnetic radiations used in his equipment, but he could not give a theoretical justification.

After the experimental work of V.P. Kaznacheev and Jiang Kanzhen, which could not be explained in terms of traditional genetics, there was an urgent need for the theoretical development of the wave genome model, in the physical, mathematical and theoretical biological understanding of the work of the DNA chromosome in the field and material dimensions.

The first attempts to solve this problem were made by Russian scientists P.P. Garyaev, A.A. Berezin and A.A. Vasiliev, who set the following tasks:

· to show the possibility of a dualistic interpretation of the work of the cell genome at the levels of matter and field within the framework of physical and mathematical models;

· to show the possibility of normal and "anomalous" modes of operation of the cell genome using phantom wave image-sign matrices;

· to find experimental evidence of the correctness of the proposed theory.

Within the framework of the theory they developed, called wave genetics, several basic principles were put forward, substantiated and experimentally confirmed, which significantly expanded the understanding of the phenomenon of life and the processes occurring in living matter.

· Genes are not only material structures, but also wave matrices, according to which, as if from templates, the organism is built.

The mutual transfer of information between cells, which helps form the body as an integral system and coordinate the functioning of all its systems, occurs not only chemically, through the synthesis of various enzymes and other "signal" substances. P.P. Garyaev suggested, and then experimentally proved, that cells, their chromosomes, DNA and proteins transmit information by means of physical fields - electromagnetic and acoustic waves and three-dimensional holograms read by laser chromosomal light and emitting this light, which is transformed into radio waves and carries hereditary information through the space of the organism. The genome of higher organisms is regarded as a bioholographic computer forming the spatiotemporal structure of biosystems. The carriers of the field matrices on which the organism is built are wave fronts set by genoholograms and the so-called DNA solitons - a special type of acoustic and electromagnetic fields produced by the genetic apparatus of the organism itself and capable of mediating the exchange of strategic regulatory information between the cells, tissues and organs of the biosystem.

In wave genetics, the ideas of Gurvich, Lyubishchev, Kaznacheev and Jiang Kanzhen about the field level of gene information were confirmed. In other words, the dualism of the unity "wave - particle" or "matter - field," accepted in quantum electrodynamics, turned out to be applicable in biology, as A.G. Gurvich and A.A. Lyubishchev had predicted at one time. Gene-substance and gene-field do not exclude each other but complement each other.

Living matter consists of nonliving atoms and elementary particles that combine the fundamental properties of waves and particles, but these same properties are used by biosystems as the basis for wave energy-information exchange. In other words, genetic molecules emit an information-energy field in which the entire organism, its physical body and soul are encoded.

· Genes are not only what constitutes the so-called genetic code, but also all the rest of the DNA, most of which used to be considered meaningless.

But it is precisely this larger part of the chromosomes that is analyzed within the framework of wave genetics as the main "intellectual" structure of all the cells of the body: "Non-coding regions of DNA are not just junk, but structures intended for some purpose that is as yet unclear... non-coding DNA sequences (and this is 95-99% of the genome) are the strategic information content of chromosomes... The evolution of biosystems has created genetic texts and the genome - a biocomputer - as a quasi-intelligent 'subject' that, at its own level, 'reads and understands' these 'texts'." This component of the genome, called the supergene continuum, i.e. the supergene, ensures the development and life of humans, animals and plants, and also programs natural dying. There is no sharp and insurmountable boundary between genes and supergenes; they act as a single whole. Genes provide material "replicas" in the form of RNA and proteins, while supergenes transform internal and external fields, forming from them wave structures in which information is encoded. The genetic commonality of people, animals, plants and protozoa is that at the protein level these variants are practically the same, or differ only slightly, in all organisms and are encoded by genes that make up only a few percent of the total length of the chromosome. But organisms differ at the level of the "junk part" of the chromosomes, which makes up almost their entire length.

· The chromosomes' own information is not enough for the development of the organism. The chromosomes are physically unfolded in some dimension of the physical vacuum, which supplies the main part of the information for the development of the embryo. The genetic apparatus is capable, by itself and with the help of the vacuum, of generating command wave structures such as holograms that guide the development of the organism.

Significant for a deeper understanding of life as a cosmo-planetary phenomenon were the experimental data obtained by P.P. Garyaev, who proved the insufficiency of the cell genome for fully reproducing the organism's development program under conditions of biofield information isolation. The experiment consisted of building two chambers, in each of which all the natural conditions for the development of tadpoles from frog eggs were created: the necessary composition of air and water, temperature, lighting conditions, pond silt, etc. The only difference was that one chamber was made of permalloy, a material that does not transmit electromagnetic waves, while the second was made of ordinary metal, which does not interfere with them. An equal number of fertilized frog eggs was placed in each chamber. In the first chamber only deformed embryos appeared, which died within a few days; in the second, tadpoles hatched in due time and developed normally, later turning into frogs.

It is clear that for the normal development of tadpoles in the first chamber, they lacked some factor that carried the missing part of the hereditary information, without which the organism could not be “assembled” in its entirety. And since the walls of the first chamber cut off the tadpoles only from the radiation that freely penetrated the second chamber, it is natural to assume that filtering or distortion of the natural information background causes deformity and death of the embryos. This means that communication of genetic structures with the external information field is certainly necessary for the harmonious development of the organism. External (exobiological) field signals carry additional, and perhaps the main information into the Earth's gene continuum.

· DNA texts and holograms of the chromosomal continuum can be read in multidimensional space-time and in semantic versions. The cell genome has wave languages similar to human ones.

In wave genetics, the substantiation of the unity of the fractal (repeating itself on different scales) structure of DNA sequences and human speech deserves special attention. The fact that the four letters of the genetic alphabet (adenine, guanine, cytosine, thymine) in DNA texts form fractal structures was discovered back in 1990 and did not cause any particular reaction. However, the discovery of gene-like fractal structures in human speech came as a surprise to both geneticists and linguists. It became obvious that the accepted and already familiar comparison of DNA with texts, which was of a metaphorical nature after the discovery of the unity of the fractal structure and human speech, is completely justified.

Together with the staff of the Mathematical Institute of the Russian Academy of Sciences, P.P. Garyaev's group developed a theory of the fractal representation of natural (human) and genetic languages. Practical testing of this theory in the field of the "speech" characteristics of DNA showed that the strategic orientation of the research was correct.

Just as in Jiang Kanzhen's experiments, P.P. Garyaev's group obtained the effect of transmitting and introducing wave supergenetic information from donor to acceptor. Devices were created - generators of soliton fields - into which speech algorithms could be entered, for example in Russian or English. Such speech structures were converted into soliton-modulated fields, analogues of those that cells operate with in the process of wave communication. The organism and its genetic apparatus "recognize" such "wave phrases" as their own and act according to the speech recommendations introduced from outside by a person. It was possible, for example, by creating certain speech and verbal algorithms, to restore wheat and barley seeds damaged by radiation. Moreover, plant seeds "understood" this speech regardless of the language in which it was spoken - Russian, German or English. The experiments were carried out on tens of thousands of cells.

To test the effectiveness of growth-stimulating wave programs in control experiments, meaningless speech pseudocodes were introduced into the plant genome through generators, which had no effect on plant metabolism, while meaningful entry into the biofield semantic layers of the plant genome gave the effect of a sharp but short-term acceleration of growth.

Recognition of human speech by plant genomes (regardless of language) is fully consistent with the position of linguistic genetics about the existence of a proto-language in the genome of biosystems at the early stages of their evolution, common to all organisms and preserved in the general structure of the Earth's gene pool. Here we can see the correspondence with the ideas of the classic of structural linguistics N. Chomsky, who believed that all natural languages ​​have a deep innate universal grammar, invariant for all people and, probably, for their own supergenetic structures.

Conclusion

The fundamentally new points revealed by the study of the microworld were these:

· Each elementary particle has both corpuscular and wave properties.

· Matter can turn into radiation (the annihilation of a particle and an antiparticle produces photons, i.e. quanta of light).

· The position and momentum of an elementary particle can be predicted only with a certain probability (see the worked estimates after this list).

· The measuring device influences the reality it studies.

· Accurate measurement is possible only for a stream of particles, not for a single particle.
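
Two of these points admit simple back-of-the-envelope illustrations. The following standard textbook estimates are added here for clarity and are not drawn from the referenced literature:

    % Annihilation of an electron-positron pair at rest: the rest energy
    % of the pair is carried away by (at least) two photons.
    E = 2 m_e c^2 = 2 \times 0.511\ \text{MeV} \approx 1.022\ \text{MeV},
    \qquad \lambda = \frac{hc}{m_e c^2} \approx 2.4\ \text{pm (gamma range)}

    % Uncertainty relation: an electron localized to atomic size
    % (\Delta x = 1 e-10 m, in the notation of this abstract) has
    \Delta p \ge \frac{\hbar}{2\,\Delta x}
             = \frac{1.05 \times 10^{-34}\ \text{J·s}}{2 \times 10^{-10}\ \text{m}}
             \approx 5 \times 10^{-25}\ \text{kg·m/s}
    % i.e. a velocity uncertainty \Delta v = \Delta p / m_e of about 6 e5 m/s.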

Bibliography

1. P.P. Garyaev, "Wave Genetic Code", Moscow, 1997.

2. G. Idlis, "Revolution in Astronomy, Physics and Cosmology", Moscow, 1985.

3. A.A. Gorelov, "Concepts of Modern Natural Science": a course of lectures. Moscow: "Center", 2001.

4. V.I. Lavrinenko, V.P. Ratnikov, "Concepts of Modern Natural Science", Moscow, 2000.

5. Concepts of Modern Natural Science: Textbook for universities / Ed. prof. V.N. Lavrinenko, prof. V.P. Ratnikov. 3rd ed., revised and expanded. Moscow: UNITY-DANA, 2006.
