The History, Nature, and Practice of Atomic Physics



The purpose of this paper is to explain how the concept of the atom and the field of atomic physics arose, and to account for the basis of their ongoing utility, from the early Greek philosophers to the scientists of the 20th century and the atomic bomb that ended World War II. The model of science applied to depicting this development is heuristics, chosen for its practical utility. The history proceeds chronologically, showing how the science developed and was later modified to incorporate new knowledge into its theories.

The History, Nature, Practice of Atomic Physics

Imre Lakatos (1976) posited that no theorem of mathematics is final or perfect. Once an exception is found, the theory is adjusted to accommodate the new information. He proposed explaining mathematical knowledge on the basis of heuristics; that is, setting aside whether a solution can be proven correct and instead adopting a "good solution," albeit sometimes at the cost of accuracy or precision. He is essentially describing the heuristic model, which will be applied in this chronology from atom to atomic theory: algorithms are developed to describe a process and are subsequently modified to incorporate new technological knowledge.

The idea behind the atom goes back to the Ancient Greeks, who believed that all matter was made of smaller, more fundamental things. Around 460 BC, the Greek philosopher Democritus developed the idea of atoms. He asked what would happen if you broke a piece of something in half, and in half again, and so on: how many times would you have to break it before it could no longer be broken into a smaller piece? He called this smallest, indivisible piece the atom (άτομο) (Freeman, 1948). Unfortunately, the philosophers of that period, particularly Aristotle, dismissed his ideas as worthless (Freeman, 1948). Subsequently, there was no further interest in the atom until 1803, when John Dalton proposed what he called his atomic theory.

Dalton concurred with Democritus' hypothesis of the immutability of the atom and added two further hypotheses: that atoms of different elements have different weights, which rejected Newton's theory of chemical affinities, and that three different types of atoms exist, which he labeled "simple," "compound," and "complex" (Greenaway, 1966). In his further work, he posited that atoms can be neither created nor destroyed, and that atoms combine only in small, whole-number ratios such as 1:1, 1:2, 2:3, and so on (Greenaway, 1966).

In 1897 Thomson discovered the electron and proposed a model for the structure of the atom. He posited that electrons are kept in position by electrostatic forces (Thomson, 1904). He suggested that these electrons were arranged as in a "plum pudding": each atom was a sphere filled with a positively charged fluid, the "pudding," and scattered through this fluid were the electrons, the "plums." The radius of the model was 10^-10 meters (Hentschel, 2009).

In 1900, Planck demonstrated that when atoms are made to vibrate strongly enough, their energy can be measured only in discrete units. He called these energy packets quanta (Mehra & Rechenberg, 1982). He derived his formula by a statistical analysis of these quanta of energy. Each quantum carries an energy directly proportional to a constant, h, multiplied by the frequency of oscillation of the particular blackbody oscillator associated with that quantum. He expressed this in a formula, written as

I(ν, T) = (2hν³/c²) · 1/(e^(hν/kT) − 1), where

I = energy per unit time per unit surface area per unit solid angle per unit frequency;

ν = frequency;

T = temperature of the black body;

h = Planck's constant (6.62606896(33) × 10^-34 J·s = 4.13566733(10) × 10^-15 eV·s);

c = speed of light;

k = Boltzmann constant (each quantum carrying energy E = hν) (Mohr, et al., 2006),

calculating a value for the charge of the electron as well as the constant h. Subsequently, he discovered that because of the finite, non-zero value of h, the world at atomic dimensions could not be explained with classical mechanics (Mehra & Rechenberg, 1982). In 1905 Einstein applied this formula to light and was able to explain the photoelectric effect; that is, light absorption could release electrons from atoms. He argued that under certain circumstances light behaves not as continuous waves but as discontinuous, individual particles: quanta (Cassidy, 1998).
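The two relations above can be made concrete with a short numerical sketch. This is an illustration, not part of the historical record: it evaluates Planck's blackbody law and the quantum hypothesis E = hν using the modern standard (CODATA) values of the constants, which differ slightly from the values cited in the text.

```python
import math

# Physical constants (modern SI values; an assumption, not the 1900 figures)
h = 6.62607015e-34   # Planck constant, J*s
c = 2.99792458e8     # speed of light, m/s
k = 1.380649e-23     # Boltzmann constant, J/K

def planck_intensity(nu, T):
    """Planck's law: spectral radiance of a black body at frequency nu (Hz)
    and temperature T (K)."""
    return (2.0 * h * nu**3 / c**2) / (math.exp(h * nu / (k * T)) - 1.0)

def photon_energy(nu):
    """Planck's quantum hypothesis: each quantum carries energy E = h * nu."""
    return h * nu

# Energy of a single quantum of green light (~5.5e14 Hz): roughly 3.6e-19 J
E_green = photon_energy(5.5e14)
```

Because h is so small, a single visible-light quantum carries only about 2 eV, which is why the granularity of light is invisible at everyday scales yet decisive at atomic ones.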

In 1905, Einstein published his paper on special relativity, which generalized Galileo's principle of relativity. He termed it "special" because the theory applied only to frames of reference in unvarying relative motion with respect to each other (Einstein, 2008). In this theory, he expressed two postulates:

The Principle of Relativity - The laws by which the states of physical systems undergo change are not affected, whether these changes of state be referred to the one or the other of two systems in uniform motion relative to each other (Einstein, 2008).

The Principle of Invariant Light Speed - "light is always propagated in empty space with a definite velocity [speed] c which is independent of the state of motion of the emitting body" (Einstein, 2008). That is, light in a vacuum moves with the speed c (a fixed constant, independent of direction) in at least one system of inertial coordinates (the "stationary system"), regardless of the state of motion of the light source (Einstein, 2008).

The outcomes of this theory are that the time lapse between two events is not invariant from one observer to another but depends on the relative speeds of the observers' frames of reference (Einstein, 2008). Two events happening in two different locations that occur simultaneously in the reference frame of one observer may occur non-simultaneously in the reference frame of another observer (Einstein, 2008). The length of an object as measured by one observer may be smaller than that measured by another observer (Einstein, 2008). He posited that as an object's speed approaches the speed of light, an observer would see its mass appear to increase, making it appear more difficult to accelerate (Einstein, 2008). The energy content of an object at rest with mass m equals mc^2. Conservation of energy implies that in any reaction a decrease of the sum of the masses of the particles must be accompanied by an increase in the kinetic energies of the particles after the reaction; that is, E = mc^2 (Einstein, 2008).
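The mass-energy relation in the paragraph above can be sketched numerically. The masses used here are purely illustrative, not measured values from any particular reaction:

```python
C = 2.99792458e8  # speed of light, m/s

def rest_energy(m):
    """E = m * c**2: energy equivalent (J) of a rest mass m (kg)."""
    return m * C**2

def energy_released(mass_before, mass_after):
    """If the products of a reaction have less total mass than the
    reactants, the missing mass appears as kinetic energy of the products."""
    return rest_energy(mass_before - mass_after)

# One gram of mass converted entirely to energy yields ~9e13 J,
# on the order of a large fission weapon's release
E_one_gram = rest_energy(1e-3)
```

The enormous factor c^2 is why tiny mass defects in nuclear reactions, discussed later in this essay, translate into such large energy releases.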

In 1909, Rutherford conducted an experiment in which he fired helium nuclei (alpha, or α, particles) at gold foil. Most of the α-particles went straight through, but a few were deflected or bounced back. This led Rutherford to hypothesize that atoms are mostly empty space. He posited that the negative electrons orbited the nucleus of the atom much as planets orbit the sun (Goldstein, et al., 2000). In 1919, Rutherford succeeded in demonstrating the artificial disintegration of a nucleus by firing α-particles into nitrogen gas, which resulted in the production of hydrogen (Reeves, 2008).

In 1913, Bohr postulated that electrons can be bumped up to a higher shell if hit by an electron or a photon of light. Classical physics held that electrons orbiting the nucleus should lose energy until they spiral down into the center, collapsing the atom. Bohr proposed adding to the model the new idea of quanta put forth by Planck. In this way, electrons existed at set levels of energy; that is, at fixed distances from the nucleus. If the atom absorbed energy, the electron jumped to a level farther from the nucleus; if it radiated energy, it fell to a level closer to the nucleus (Smirnov, 2003). Sommerfeld hypothesized that the orbits of electrons do not have to be spherical but can also be elliptical. He further posited that the orbits do not have to lie in the same plane: they can be oriented in space along certain defined directions (Eisberg & Resnick, 1985).
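Bohr's quantized levels can be illustrated with the standard textbook result for hydrogen, E_n = -13.6 eV / n^2 (a formula not stated in this essay but a well-known consequence of Bohr's model):

```python
RYDBERG_EV = 13.605693  # hydrogen ground-state binding energy, eV

def level_energy(n):
    """Energy of the n-th Bohr orbit of hydrogen, in eV (n = 1, 2, 3, ...)."""
    return -RYDBERG_EV / n**2

def photon_from_jump(n_from, n_to):
    """Energy radiated (positive) or absorbed (negative) when the electron
    jumps between fixed levels, per Bohr's postulate."""
    return level_energy(n_from) - level_energy(n_to)

# Falling from the second orbit to the first emits a ~10.2 eV photon
E_photon = photon_from_jump(2, 1)
```

The fixed spacing of these levels is what produces the discrete spectral lines of hydrogen that Bohr's model was built to explain.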

In 1915, Einstein developed his general theory of relativity, which addressed the issue of gravity and described the relationship between space-time and energy-momentum. Space-time may be defined as three-dimensional space with the addition of time as a fourth dimension, combined in a single continuum. In this theory, Einstein assumed that space-time is curved by the presence of energy (Einstein, 2008).

Pauli, in 1925, developed his exclusion principle (Griffiths, 2004), which states that "no two electrons in the same atom can be in the same quantum state" (Schäfer, 1997). The importance of this principle is that it accounts for how electrons fill successive shells, and thus for the distinctions among the elements of the Periodic Table.
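The exclusion principle's effect on shell structure can be sketched by counting quantum states. For a shell with principal quantum number n, the standard quantum numbers run l = 0..n-1, m_l = -l..+l, with two spin orientations each, giving the familiar capacities 2, 8, 18, 32 (a standard result, used here only as an illustration):

```python
def shell_capacity(n):
    """Count the distinct quantum states (l, m_l, m_s) in shell n.
    Pauli's exclusion principle permits at most one electron per state."""
    count = 0
    for l in range(n):                # orbital quantum number l = 0 .. n-1
        for m_l in range(-l, l + 1):  # magnetic quantum number m_l = -l .. +l
            count += 2                # two spin states, m_s = +1/2 or -1/2
    return count

capacities = [shell_capacity(n) for n in (1, 2, 3, 4)]  # [2, 8, 18, 32]
```

Because each shell fills and then closes, elements with different electron counts acquire different outer-shell configurations, which is what differentiates their chemistry.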

In 1926, Schrödinger theorized the concept of wave dynamics, developing a particle-wave theory (Frederic & Levi, 2006). The wave function is not physical because it cannot be measured; in this theory, what is measured is the expected value of the quantum operator, which is based upon a probabilistic function (Frederic & Levi, 2006). His equation is used to describe an electron's movement through space.
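The distinction between the unobservable wave function and the measurable expectation value can be sketched numerically. This uses the textbook ground state of a particle in a one-dimensional box (a standard solution of the Schrödinger equation; the box length L = 1.0 is an arbitrary choice for illustration):

```python
import math

L = 1.0          # box length (arbitrary units, an assumption for the sketch)
N = 100_000      # integration steps
dx = L / N

def psi(x):
    """Ground-state wave function of a particle in a box: sqrt(2/L) sin(pi x / L)."""
    return math.sqrt(2.0 / L) * math.sin(math.pi * x / L)

# The wave function itself is never measured; what theory predicts is the
# expectation value of an operator. For position: <x> = integral |psi|^2 x dx
norm   = sum(psi(i * dx) ** 2 * dx for i in range(N))            # should be ~1
x_mean = sum(psi(i * dx) ** 2 * (i * dx) * dx for i in range(N)) # should be ~L/2
```

The probability density |psi|^2 is symmetric about the center of the box, so repeated position measurements average to L/2 even though no single measurement is predictable.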

In that same year, Born and Heisenberg developed a theory they called "matrix mechanics" to explain the nature of atoms. Up to this time, quantum theory had described the motion of a particle by a classical orbit with a well-defined position and momentum, with the restriction that the time integral over one period of the momentum times the velocity must be a positive integer multiple of Planck's constant (Born, et al., 1989). By applying matrix mathematics, the position, the momentum, the energy, and all the other observable quantities are interpreted as matrices. This was developed on the premise that all observable physical quantities may be represented by matrices whose elements are indexed by pairs of energy levels. If one of these physical quantities is measured, the result is a value, with the corresponding vector being the state of the system immediately after the measurement (Born, et al., 1989).

In 1927, Heisenberg went further, positing that no experiment could measure the position and momentum of a quantum particle simultaneously: the more precisely one of the quantities is measured, the less precisely the other can be. This became known as the "Heisenberg uncertainty principle" (Born, et al., 1989).

In 1932, Heisenberg posited that charged particles bounce photons of light back and forth between them, thus providing a way for the electromagnetic forces to act between the particles (Smirnov, 2003). In 1935, Yukawa used Heisenberg's uncertainty principle to explain how a virtual particle could exist for an extremely small fraction of a second (Brown & Jackson, 1976).
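Yukawa's reasoning can be sketched as an order-of-magnitude estimate (this is a simplified illustration, not his actual derivation): the energy-time uncertainty relation lets a virtual particle "borrow" its rest energy ΔE for a time Δt ≈ ħ/ΔE, and traveling at nearly c in that time bounds the range of the force it carries.

```python
HBAR = 1.054571817e-34  # reduced Planck constant, J*s
C = 2.99792458e8        # speed of light, m/s
EV = 1.602176634e-19    # joules per electron-volt

def virtual_lifetime(rest_energy_ev):
    """Rough time a virtual particle can exist while 'borrowing' its
    rest energy, from delta_E * delta_t ~ hbar."""
    return HBAR / (rest_energy_ev * EV)

def force_range(rest_energy_ev):
    """Farthest the virtual particle can travel in that time (at ~c)."""
    return C * virtual_lifetime(rest_energy_ev)

# A meson of ~140 MeV rest energy (the later-discovered pion) gives a
# range of roughly 1.4e-15 m, comparable to the size of a nucleus
r = force_range(140e6)
```

A heavier exchange particle means a shorter-lived, shorter-range force; the massless photon, by the same logic, gives electromagnetism its unlimited range.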

In 1938, Hahn and Strassmann first observed nuclear fission, a reaction in which the nucleus of an atom splits into smaller parts. In their classic experiment, they discovered barium upon bombarding uranium with neutrons (Hahn & Strassman, 1939). This was seen as a momentous discovery with practical implications, and news of it spread quickly through the scientific community. Szilárd had already foreseen the potential of a nuclear chain reaction in 1933 (Esterer & Esterer, 1972). Chain reactions were an understood concept in chemistry, and Szilárd envisioned a similar process in physics using neutrons, which he chose because they lacked an electrostatic charge. He attempted to create a chain reaction using beryllium and indium but was unsuccessful (Esterer & Esterer, 1972). Szilárd later collaborated with Fermi to develop the concept of the nuclear reactor, with uranium as fuel. Earlier, Fermi had demonstrated that neutrons were captured effectively by atoms if they were of low energy because, applying quantum theory, low energy made the atoms appear to be larger targets (Segrè, 1970). To slow down the secondary neutrons released by fissioning uranium nuclei, they proposed a graphite moderator, against which the fast, high-energy neutrons would collide and thereby slow down. With enough raw materials, their reactor could theoretically sustain a slow-neutron chain reaction, producing heat and radioactive byproducts.

Hahn, Strassman, Meitner and Frisch completed the first successful nuclear chain reaction experiment in 1939 (Smirnov, 2003). It was not until 1942 that the first nuclear reactor was built, named Chicago Pile-1, and subsequently the first chain reaction entirely controlled by man was accomplished (Fermi, 1946). Withdrawing the cadmium-coated rods that absorbed neutrons would increase neutron activity, thus leading to a self-sustaining chain reaction; re-inserting the rods would dampen the reaction.
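The effect of the control rods can be sketched with a toy model of neutron multiplication (an illustration only; real reactor kinetics are far more involved). Each fission generation multiplies the neutron population by a factor k; absorbing rods lower k below 1 and the reaction dies out, while k above 1 gives runaway growth:

```python
def neutron_population(k, generations, n0=1.0):
    """Neutron count per generation, given the multiplication factor k
    (average neutrons per fission that go on to cause another fission).
    k > 1: supercritical growth; k = 1: steady state; k < 1: dies out."""
    n = n0
    history = [n]
    for _ in range(generations):
        n *= k
        history.append(n)
    return history

supercritical = neutron_population(1.05, 100)  # rods withdrawn: grows
controlled    = neutron_population(0.95, 100)  # rods inserted: damped
```

Holding k at exactly 1 is the steady, self-sustaining condition a power reactor aims for; a bomb, by contrast, is engineered to push k as far above 1 as possible for as long as the assembly holds together.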

Szilárd was instrumental in the establishment of the Manhattan Project in 1939. Its purpose was to develop the first atomic weapon (Groves, 1962). It was not until 1945 that the first weapons were successfully developed. There were two types: the bomb used in the Hiroshima bombing was made of uranium-235 (U-235); the other, used in the Nagasaki bombing, was a plutonium bomb. The uranium bomb was a gun-type fission weapon: a mass of U-235 was fired down a gun barrel into another U-235 mass, rapidly creating a critical mass and resulting in an explosion (Groves, 1962). The plutonium bomb operated on the basis of implosion: a sub-critical sphere of fissile material was compressed into a smaller, denser form, and when fissile atoms are packed together, the rate of neutron capture increases until the assembly reaches critical mass (Groves, 1962).

In 1948, the first transistor was developed (Bodanis, 2005). Its significance to science, and particularly to the electronic engineering community, was the availability of a miniaturized, low-cost device whose output power could be greater than its input power (Bodanis, 2005). The transistor may work as either a switch or an amplifier and is used in many electronic applications applied to atomic physics devices.

In 1952, the first nuclear fusion weapon was developed. Fusion is the process whereby two nuclei are joined together to form a single, heavier nucleus. This is usually accompanied by the absorption or release of energy (Atzeni & Meyer-ter-Vehn, 2004). To detonate the weapon, a small fission device is set off; gamma and X-rays that are emitted first compress the fusion fuel, then heat it to a high temperature. The fusion reaction creates large numbers of high-speed neutrons, which can induce fission in materials not normally susceptible. By grouping together numerous stages with increasing amounts of fusion fuel, weapons may be created with an almost arbitrary yield (Atzeni & Meyer-ter-Vehn, 2004).

The many high-energy accelerators developed after World War II produced numerous sub-atomic particles, which challenged physicists to explain their existence and behavior. This was significant because previous theories either had to be modified or discarded, most notably the concept of parity. Parity conservation in quantum mechanics means that two physical systems, one of which is a mirror image of the other, must behave in identical fashion. In 1956 Lee and Yang challenged this principle, showing that parity conservation was not always the case (Lee & Yang, 1957). The implications for future research were enormous.

Also in 1956, Reines and Cowan confirmed the existence of neutrino interactions, as proposed by Pauli in 1930. Pauli had attempted to explain why electrons in beta decay were not emitted with the full energy of the nuclear transition. The neutrino has no charge and almost no mass, yet can penetrate massively thick materials without any interaction (Franklin, 2003).

In 1960, Maiman developed the first functioning laser (Yariv, 1989). The gain medium of a laser is a material of controlled purity, size, concentration, and shape, which amplifies the beam by the process of stimulated emission. The gain medium absorbs energy, which raises some electrons into higher-energy quantum states, resulting in the output of the laser beam (Yariv, 1989). The utility of this development was far-reaching: today, lasers are used routinely in medicine, manufacturing, research, and the military.

During the next few years, physicists began to realize that existing theories failed to adequately explain the nature and behavior of newly discovered sub-atomic particles. In an attempt to explain these new phenomena, Gell-Mann and Zweig independently developed the quark model.
