Radioecology: Part 1

Introduction

Ecology is the study of the interactions of organisms with one another and with their non-living, physical environment of matter and energy. The biologist Ernst Haeckel founded the science of ecology in 1869. The word ecology is derived from two Greek words: oikos, meaning “house” or “place to live,” and logos, meaning “study.” Two fundamental categories of factors are involved in this study: biotic (living) and abiotic (non-living).

Radioecology is the study of the effects of radiation and radioactive substances on ecological communities. This multi-disciplinary science focuses on the analysis of the behavior and effects of radioactive substances in the biosphere. It encompasses the production, release, and transport of radionuclides through the biotic and abiotic parts of the biosphere; their uptake and distribution in humans; and the effects of radiation on living organisms. One of the primary goals of radioecology is to provide a knowledge base of the radiation doses received by humans and to suggest strategies and methodologies for reducing those doses.

There are three main divisions of radioecology. The first deals with radionuclide movement within ecological systems and accumulation within specific ecosystems such as soil, air, water and biota. The second is concerned with ionizing radiation effects on individual species, populations, communities and ecosystems. The third involves the use of radionuclides and ionizing radiation in studies of the structure and function of ecosystems and their component subsystems. In this article I will focus on the first of these after reviewing some basic ecology terminology.

Review of Ecology Terminology

Populations are groups of individuals belonging to the same species. Communities are composed of all the populations living and interacting together in a given area.

Ecosystems encompass both the biotic and abiotic components of communities and their physical environments within a given space or area. Complex biological, chemical and physical processes link all parts of an ecosystem. Adjacent ecosystems can influence each other when their components cross boundaries via wind, precipitation, water flow, gravity and animal movements.

Biomes are complexes of living communities covering large areas of the Earth that are maintained by the climate of a region and characterized by distinctive types of vegetation. Examples of biomes in North America include the tundra, desert, prairie, and the western coniferous forests. Biomes contain several ecosystems within their territory and are the largest recognizable assemblages of animals and plants on the Earth. The distribution of biomes is controlled mainly by climate. For a review of the world’s biomes, please see this site: http://www.blueplanetbiomes.org/world_biomes.htm.

Food chains show the sequence of chemical energy (sunlight fixed in the form of glucose sugar) flowing from the lowest-level producers (photosynthetic organisms such as plants, algae and phytoplankton) through the primary consumers (herbivores such as cows, deer and sheep) and secondary consumers (carnivores such as wolves, lions, and predatory birds) up to the tertiary consumers (including omnivores such as humans). Producers are generally called autotrophs because they make their own food (glucose sugar) through photosynthesis. Consumers are known as heterotrophs because they must obtain their food by eating other organisms. Two other categories of organisms that live on dead organisms are the detritivores (detritus feeders such as catfish, crabs and vultures) and the decomposers, such as fungi and bacteria. Decomposers play a critical role in the recycling of organic matter and nutrients in ecosystems. Food webs represent the flow of energy through cross-linked food chains resulting from the wide variety of possible sequences of producers and consumers.

Movement of Radioactivity In Ecosystems

One of the most important considerations in understanding the impact of radiation in the environment is the fact that radioisotopes are chemically identical to the stable isotopes of the same element and can combine with other elements to form the mineral nutrients that plants take up from the soil solution. In addition, elements that occur in the same chemical “family” (columns or groups in the periodic table) act as analogs for, and can replace, each other in these various compounds. These “radio-compounds” make their way “up” through food chains, retaining their radioactivity to varying degrees depending on their half-lives. Not only do these compounds “flow” through ecosystems, but their concentration can also increase through a process known as bio-magnification. In food webs, only about 10% of the chemical energy at one trophic level transfers to the next highest level, so the organisms at that level must consume a large amount of biomass (living matter) from the level below in order to obtain enough energy to maintain their life processes. The “radio-compounds” carried in that consumed biomass are incorporated into the tissues at each level, so the concentration of radioactivity can effectively increase. Radioactivity at higher trophic levels decreases only through the combined effects of radioactive decay and biological elimination and decomposition, which return material and energy “down” the food web.
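To make the energy-versus-concentration trade-off concrete, here is a minimal Python sketch of a hypothetical four-level food chain. The 10% energy-transfer figure comes from the discussion above; the per-level concentration factor and the starting activity are invented illustrative values, not measured data.

```python
# Toy model of bio-magnification up a food chain (illustrative numbers only).
# Roughly 10% of chemical energy passes to the next trophic level, so each
# consumer must eat a large amount of biomass from the level below; a
# persistent radionuclide carried in that biomass can become more concentrated.

TROPHIC_LEVELS = ["producers", "primary consumers",
                  "secondary consumers", "tertiary consumers"]
ENERGY_TRANSFER = 0.10        # fraction of chemical energy passed up per level
CONCENTRATION_FACTOR = 4.0    # hypothetical per-level enrichment of the radionuclide

def biomagnify(initial_bq_per_kg: float) -> None:
    """Print the remaining energy fraction and contaminant level at each trophic level."""
    energy_fraction = 1.0
    activity = initial_bq_per_kg
    for level in TROPHIC_LEVELS:
        print(f"{level:20s} energy={energy_fraction:6.1%}  activity={activity:7.1f} Bq/kg")
        energy_fraction *= ENERGY_TRANSFER
        activity *= CONCENTRATION_FACTOR

biomagnify(initial_bq_per_kg=10.0)
```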

One very common example of these processes is the movement of strontium-90 (90Sr) through dairy “ecosystems” (the artificial food chains of the dairy industry). 90Sr is a common fission-product radionuclide in the same chemical group (family) as calcium and has a half-life of about 28 years. Because of this chemical similarity, 90Sr is concentrated by dairy cows in their bones and milk and, as a result, ends up concentrated in the bones and milk of the humans who consume that milk, including nursing mothers and their infants. In areas such as Chernobyl and Fukushima, where 90Sr contamination is significant, this migration of radioactivity through food webs into the human population is very problematic.
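Because that roughly 28-year half-life is what keeps 90Sr circulating through dairy food chains for decades, a quick decay calculation helps put the time scale in perspective. The short sketch below applies the standard exponential-decay relation A(t) = A0 × (1/2)^(t / T½); the starting activity is an arbitrary example value.

```python
# Remaining activity of Sr-90 after t years: A(t) = A0 * (1/2) ** (t / half_life).

HALF_LIFE_SR90_YEARS = 28.0   # approximate half-life quoted above

def remaining_activity(initial_bq: float, years: float,
                       half_life: float = HALF_LIFE_SR90_YEARS) -> float:
    """Activity (Bq) left after `years`, for a given half-life in years."""
    return initial_bq * 0.5 ** (years / half_life)

for t in (0, 28, 56, 100):
    print(f"after {t:3d} years: {remaining_activity(1000.0, t):7.1f} Bq")
```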

Another example, potentially even more insidious, involves tritium (3H, containing two neutrons, one proton and one electron), a radioisotope of hydrogen (H), the major component of dihydrogen monoxide (H2O), more commonly known as water, the “universal solvent.” Tritium has a half-life of about 12 years. Water is the single most important compound to the vast majority of living organisms, not only because it serves as the intra-cellular (within the cell) medium for all biochemical reactions, but also because it is the solvent that makes up the soil solution containing the mineral compounds (solutes) taken up by plants.

In addition to this, and even more importantly, water is used in conjunction with carbon-12 (12C, containing six neutrons, six protons and six electrons, 666, yikes!) in photosynthesis, whereby plants, using solar energy (visible light), “dismantle” carbon dioxide (CO2) and water (H2O) molecules and use the resulting carbon, oxygen and hydrogen atoms to synthesize glucose sugar (C6H12O6), the primary energy molecule for the vast majority of living organisms. If you add to this the increased presence of carbon-14 (14C, containing two extra neutrons) in the atmosphere, you have a potentially significant uptake of radionuclides not only into glucose (the energy molecule) but also into organic compounds such as amino acids (the building blocks of proteins) and the various nitrogen bases, which, in conjunction with deoxyribose sugar and phosphate, make up the DNA (deoxyribonucleic acid) molecule. The DNA molecule is the genetic information storage and “transmitter” molecule for most living organisms. What are the implications for the “genetic future” of life on Earth if the fundamental information-transmitter molecule is so drastically impacted by the presence of radionuclides known for their mutagenic (mutation-inducing) properties? This is one of the most critical (and far-reaching) concerns in understanding the ecological impacts of radiation. “There is only one constant in the Universe: change.” In Part 2, I will explore the implications. Thanks again for reading! Peace!


Understanding and Measuring Radiation, Part 2

Several difficulties present themselves when one wishes to collect, analyze and interpret radiation data.  Radiation itself cannot be collected in a sample container or measured directly.  However, indirect measurements can be made of the electrical effects that radiation produces in the materials through which it passes.

Types of Radiation Detectors

The meaningful analysis and interpretation of radiation data can only be accomplished when that data is collected with appropriate instruments using suitable procedures.  There are different types of meters that are designed to measure the various types of radiation–not every instrument is suitable for every type of radiation present in the field.  Some kinds of radiation are very difficult to detect in typical field situations.  The correct meter needs to be selected for the particular radiation being measured.  Also, the presence of naturally occurring background radiation and environmental conditions (soil, water, weather etc.) need to be accounted for.  For a good overview of background radiation, see this Health Physics Society document: https://hps.org/documents/environmental_radiation_fact_sheet.pdf

Radiation measurement instrumentation is based on two electrical interactions with atoms:  ionization and excitation.  As discussed in earlier posts, ionization is the removal of electrons from neutral atoms, forming electron-ion pairs.  The electrons produced form an electrical current that is measured and converted to an appropriate radiation measurement unit.  Excitation is the process of raising electrons of atoms to higher energy levels above the ground state by the addition of energy derived from the radiation passing through a material.  These “excited” electrons then spontaneously drop back down to the ground state and give off their absorbed energy as photons (visible light, ultraviolet or x-rays, depending on the material), but they are not removed from the atom.

Detection instruments are generally categorized by whether they use a gas or a solid as the detector.  Gas-filled detectors include Geiger counters, proportional counters and ionization chambers.  Solid detectors include scintillation detectors, which use materials such as crystalline sodium iodide (NaI) and zinc sulfide (ZnS) that emit flashes of light when struck by gamma rays or alpha particles, and semiconductor detectors made of silicon (Si), germanium (Ge) or cadmium zinc telluride (CZT), which collect the charge produced by the radiation directly.  Other solid detectors use organic (carbon-containing) plastic polymers for detecting beta particles.  For a more detailed discussion of the types of detectors, please refer to this site (Integrated Environmental Management, Inc.):

http://www.iem-inc.com/information/radioactivity-basics/measuring-radioactivity.

Also see, the Health Physics Society site:

http://hps.org/publicinformation/ate/faqs/radiationdetection.html

Legacy detectors were developed decades ago and have flourished because of their adequate performance and the lack of low-cost alternative materials with equivalent or superior performance (Hammig, 2012).  In order to have detectors present on a widespread scale in the environment, it will be necessary to use detecting materials that are more efficient and easier to handle than conventional semiconductors, or less costly than CZT.  The efficiency of legacy detector materials relates to the information that is lost when ionizing radiation interacts with the crystal structure, producing electron-ion or electron-hole (positively charged regions in semiconductors) pairs.  This information loss can introduce significant error into radiation measurements.  A possible solution currently under investigation involves the use of nano-structured materials in nano-scintillators and nano-semiconductors (Hammig, 2012).  These nano-scale detectors lose less information, and thus give more precise measurements, and are also less costly to manufacture on a large scale.  Nano-structured materials have therefore been projected as the next-generation materials for ionizing radiation detectors (source: www.intechopen.com/download/pdf/32109).

A discussion of current trends in radiation detection can be found here: http://www.kns.org/jknsfile/v38/JK0383111.pdf

Types of Radiological Analyses

After deciding on the correct radiation detector(s), the appropriate set of collection/analytical methods must be chosen.  These protocols should be selected after careful consideration of all factors affecting the interpretation of the data and implementation of corrective/protective actions.  Not all methodologies are suited to all situations.

The amount of radioactive material in a sample of air, water, soil, or other material can be assessed using several analyses, the most common of which are described below.
Gross alpha – Alpha particles are emitted from radioactive material in a range of different energies. An analysis that measures all alpha particles simultaneously, without regard to their particular energy, is known as a gross alpha activity measurement. This type of measurement is valuable as a screening tool to indicate the total amount but not the type of alpha-emitting radionuclides that may be present in a sample.

Gross beta – This is the same concept as that for gross alpha analysis, except that it applies to the measurement of gross beta particle activity.
Tritium – Tritium radiation consists of low-energy beta particles. It is detected and quantified by liquid scintillation counting.

Strontium-90 – Due to the properties of the radiation emitted by strontium-90 (Sr-90), a special analysis is required. Samples are chemically processed to separate and collect any strontium atoms that may be present. The collected atoms are then analyzed separately.

Gamma – This analysis technique identifies specific radionuclides. It measures the particular energy of a radionuclide’s gamma radiation emission. The energy of these emissions is unique for each radionuclide, acting as a “fingerprint” to identify it.
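To make the “fingerprint” idea concrete, the sketch below matches a measured peak energy against a tiny library of well-known gamma lines (Cs-137 at 661.7 keV, Co-60 at 1173.2 and 1332.5 keV, K-40 at 1460.8 keV). A real gamma-spectroscopy workflow also involves energy calibration, peak fitting and branching ratios; this shows only the lookup step.

```python
# Identify a radionuclide from the energy of a gamma peak (a "fingerprint" lookup).
# The library below lists a few well-known gamma lines, in keV.

GAMMA_LINES_KEV = {
    "Cs-137": [661.7],
    "Co-60":  [1173.2, 1332.5],
    "K-40":   [1460.8],
}

def identify(peak_kev: float, tolerance_kev: float = 2.0) -> list[str]:
    """Return nuclides having a tabulated line within `tolerance_kev` of the peak."""
    return [nuclide for nuclide, lines in GAMMA_LINES_KEV.items()
            if any(abs(peak_kev - line) <= tolerance_kev for line in lines)]

print(identify(662.1))   # -> ['Cs-137']
print(identify(1332.0))  # -> ['Co-60']
```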

For a brief description of common atmospheric radiation sampling methods, see this website:

http://www.cleanair.com/EPAMethods/Air-Test-Methods/m-114.html#m114sec3_1_4

Statistical Considerations

Uncertainty – The emission of radiation is inherently a random process.  Because of this, measurement uncertainty is an important analytical consideration.  A sample counted several times usually yields a slightly different result each time; therefore, a single measurement is not definitive. To account for this variability, the concept of uncertainty is applied to radiological data.  The usual standard of reliability is a 95% confidence interval applied to the data.  For each calculated sample average (mean), a standard deviation is used to establish the confidence interval, meaning there is a 95% probability that the true value of the measured quantity lies within this range.

Negative Values – There is always a small amount of natural background radiation. The laboratory instruments used to measure radioactivity in samples are sensitive enough to measure the background radiation along with any contaminant radiation in the sample. To obtain a true measure of the contaminant level in a sample, the background radiation level must be subtracted from the total amount of radioactivity measured. Due to the randomness of radioactive emissions and the very low concentrations of some contaminants, it is possible to obtain a background measurement that is larger than the gross sample measurement. When the larger background measurement is subtracted from the smaller sample measurement, a negative result is generated.  These negative results are reported, even though doing so may seem illogical, because they are essential when conducting statistical evaluations of the data.

Radiation events occur randomly; if a radioactive sample is counted multiple times, a spread, or distribution, of results will be obtained. This spread, known as a Poisson distribution, is centered about the sample mean value. Similarly, if background activity (the number of radiation events observed when no sample is present) is counted multiple times, it also will have a Poisson distribution. The goal of a radiological analysis is to determine whether a sample contains activity greater than the background reading detected by the instrument.  Because the sample activity and the background activity readings are both Poisson distributed, subtraction of background activity from the measured sample activity may produce values that vary slightly from one analysis to the next. Therefore, the concept of a minimum detection limit (MDL) was established to determine the statistical likelihood that a sample’s activity is greater than the background reading recorded by the instrument.
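Here is a minimal sketch of the background-subtraction and detection-limit logic just described. It uses Currie's widely quoted approximation for the detection limit in counts, LD ≈ 2.71 + 4.65 × √B, for 5 percent false-positive and false-negative rates with equal sample and background counting times; the specific counts are invented for illustration.

```python
import math

# Net (background-subtracted) counts and a detection limit. Counts are Poisson
# distributed, so at very low activity the net result can legitimately be negative.

def net_counts(gross: int, background: int) -> int:
    """Gross sample counts minus background counts; may be negative."""
    return gross - background

def detection_limit(background: int) -> float:
    """Currie's approximation for the detection limit, in counts."""
    return 2.71 + 4.65 * math.sqrt(background)

gross, background = 103, 98          # illustrative counts over equal counting times
net = net_counts(gross, background)
limit = detection_limit(background)
print(f"net = {net} counts, detection limit = {limit:.1f} counts")
print("detected" if net >= limit else "below detection limit (still reported)")
```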

Identifying a sample as containing activity greater than background when it actually does not is known as a Type I error. Most laboratories set their acceptance of a Type I error at 5 percent when calculating the MDL for a given analysis. That is, for any value that is greater than or equal to the MDL, there is 95 percent confidence that it represents the detection of true activity. Values that are less than the MDL may be valid, but they have a reduced confidence associated with them. Therefore, all radiological data are reported, regardless of whether they are positive or negative.

At very low sample activity levels that are close to the instrument’s background reading, it is possible to obtain a sample result that is less than zero. This occurs when the background activity is subtracted from the sample activity to obtain a net value and a negative value results. In this situation, a single radiation event observed during a counting period can have a significant effect on the mean (average) result, and a subsequent analysis may produce a sample result that is positive.  Average values are calculated using the actual analytical results, regardless of whether they are above or below the MDL, or even equal to zero. The uncertainty of the mean, or the 95 percent confidence interval, is determined by multiplying the standard deviation of the mean by the t(0.05) statistic (source: NCRP. 1985. Handbook of Radioactivity Measurements Procedures, NCRP Report No. 58. National Council on Radiation Protection and Measurements, Bethesda, MD).
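A short sketch of the averaging step described above, assuming a handful of net results (some negative) from repeated analyses of the same sample: the mean is taken over all values, and the 95 percent confidence interval is the standard error of the mean multiplied by the two-sided t(0.05) statistic. The sample values are invented.

```python
import math
from statistics import mean, stdev

# Mean and 95% confidence interval for repeated net results, keeping negative
# values in the average as described above. Illustrative values, pCi/L.
results = [0.8, -0.3, 1.1, 0.2, -0.1, 0.6]

n = len(results)
average = mean(results)
std_err = stdev(results) / math.sqrt(n)   # standard deviation of the mean
t_05 = 2.571                              # two-sided t(0.05) for n - 1 = 5 degrees of freedom

print(f"mean = {average:.2f} +/- {t_05 * std_err:.2f} pCi/L (95% confidence interval)")
```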

Ecological Considerations

The measurement, analysis and interpretation of radiation data is a very complex science because of the many variables involved.  The inclusion of known ecological factors complicates the matter even more because of the inter-connectedness of these factors in both time and space.  This temporal-spatial “web-like” matrix generates unforeseen interactions which must be treated as unknown variables.

Examples of complex ecological interactions are feedback loops, in which a variable or set of variables affects other variables, which in turn affect the variables that affected them.  These processes can, over a short period of time, become impossibly complex to trace and analyze. Add to this the process of radioactive decay, by which parent radionuclides transmute into daughter radionuclides and radioactivity diminishes over time, and the complexity increases.  To this mixture add the process of bio-magnification, by which contaminants become more concentrated as you move up through food chains and food webs, and the effects of ocean currents, which are influenced by temperature, salinity and pressure fluctuations, and the result is:  chaos!

The concepts of chaos theory play a significant role in these complex ecological interactions.  At best we can only estimate and model possible cause-and-effect relationships in predicting the range of possible outcomes.  The above example is a vastly oversimplified version of the reality of ecological interaction complexity.  This is the very reason why scientists should ALWAYS proceed with caution:  predicting the unknown is impossible, especially since we have no idea how much of the unknown is really unknowable!

In the next lesson, I will begin a series exploring the essential concepts of radioecology that are prerequisite to understanding the complexity that has only been alluded to here!  Thank you for reading!

Peace, joy, love, hope and understanding (PJLHU) to you all!

 

Understanding and Measuring Radiation, Part One


Ionizing radiation has the ability to remove electrons from around the nuclei of atoms.  Electrons are the parts of atoms responsible for bonding between atoms.  Removing electrons can disrupt molecular bonds and cause molecules to break apart and rearrange into different molecules or remain as molecular fragments.  The effects of ionization in living organisms are unpredictable and usually harmful.  Short-term tissue damage and long-term mutations are often the result and the latter can lead to various cancers and genetic defects.

Different types of radiation have different penetrating ability and ionizing potential.  Generally speaking, the more penetrating a radiation is, the more widespread is its ionizing potential (and the more damaging) because it travels further into a substance before losing its energy.  The less penetrating a radiation is, the more localized is its ionizing potential (and the less damaging).  The table below compares the penetrating abilities of three common types of radiation.  (These are estimates for comparison only!)  The values given represent what it takes to block the radiation.

 

Radiation Type | Symbol | Air | Paper | Aluminum | Lead
Alpha particle | α | 10 cm | 1 sheet | n/a | n/a
Beta particle | β | 1 meter | 100 sheets | 3 mm | 0.1 mm
Gamma rays | γ | 1 km | 100,000 sheets (200 reams) | 300 cm | 10 cm

 

Missing from this list are x-rays, which are similar to gamma rays in that they are electromagnetic radiation (like light or radio waves) but have less energy than gamma rays.  Also missing from this list is neutron radiation, which interacts very weakly with electrically charged particles and so tends to pass easily through many materials.  When neutrons collide with atomic nuclei they can knock out other neutrons or cause the nuclei to fragment.  One of the best (and cheapest) absorbing materials for neutron radiation is water, which is why many nuclear reactor cores are submerged in large pools of water.  It should be noted that neutron radiation can also induce other materials to become radioactive (neutron activation).

Measuring radiation and assessing its damaging effects is a very complex process.  There are several units of measurement which apply to different situations.  Also, there are outdated units that are still commonly used in the U. S.  There are many conversions, complex calculations and protocols involved in radiological assessment.  In the remainder of this article I will present a very general overview of the subject.

There are two categories of radiation:  particle emissions and electromagnetic energy. The particles involved are electrons, protons and neutrons as discussed earlier.  Electromagnetic radiation comes from the interaction of electricity and magnetism at the sub-atomic level.

The first consideration in radiation measurement is the amount of radioactivity present in any given substance.  The amount (mass) of the substance has little to do with the amount of radioactivity.  A small mass can have a lot of radioactivity whereas a large mass can have very little.  The original activity unit, the Curie—Ci (named after Marie and Pierre Curie), measures the number of disintegrations (or nuclear changes or breakdowns) occurring per second (dps).  One Curie equals 37 billion dps.  Since this unit is so large, we often have to use prefixes (milli–m– 1/1000th and micro–μ—1/1,000,000th) to represent the smaller amounts radioactivity commonly encountered in practice.  The modern SI unit for activity is the Becquerel (Bq), which equals 1 dps.   Since this is such a small unit we often use the prefixes kilo (k)—1000, mega (M)—1,000,000, giga (G)—1,000,000,000 and tera (T)—1,000,000,000,000 for larger amounts of radioactivity. See table below for some comparisons between the two units.

1 MBq = 27 microcuries (27 μCi)
1 GBq = 27 millicuries (27 mCi)
37 GBq = 1 curie (Ci)
1 TBq = 27 curies (Ci)
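A few lines of Python reproduce the conversions in the table directly from the definition 1 Ci = 3.7 × 10^10 Bq; this is just arithmetic, shown for readers who like to check the numbers.

```python
# Activity unit conversions from the definition 1 Ci = 3.7e10 Bq (37 billion dps).

BQ_PER_CI = 3.7e10

def bq_to_ci(bq: float) -> float:
    return bq / BQ_PER_CI

print(f"1 MBq  = {bq_to_ci(1e6) * 1e6:.0f} microcuries")   # ~27 uCi
print(f"1 GBq  = {bq_to_ci(1e9) * 1e3:.0f} millicuries")   # ~27 mCi
print(f"37 GBq = {bq_to_ci(37e9):.2f} curies")             # 1 Ci
print(f"1 TBq  = {bq_to_ci(1e12):.0f} curies")             # ~27 Ci
```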

 

The second consideration in measuring radiation is determining how much radiation is actually absorbed by a substance.  The standard unit used in the USA is the rad (radiation absorbed dose).  This unit relates to the amount of energy actually absorbed in some material and is used for any type of radiation and any material. One rad is defined as the absorption of 100 ergs (a unit of energy) per gram of material. The rad can be used for any type of radiation, but it does not describe the biological effects of the different radiations.  The equivalent SI unit is the Gray (Gy); one Gray equals 100 rads.  The rad is related to the roentgen (R), which is limited to measuring the amount of ionization caused by x- or gamma rays in air only.  For x- and gamma rays in tissue, these two units are roughly interchangeable.

The third (and probably most important) radiation measurements are the rem (USA) and the Sievert (Sv, SI), which express a quantity called equivalent dose.  Equivalent dose relates the absorbed dose in human tissue to the effective biological damage of the radiation, because not all radiation has the same biological effect, even for the same absorbed dose.  Equivalent dose is often expressed in thousandths of a rem, or mrem. To determine equivalent dose (rem), you multiply absorbed dose (rad) by a quality factor (QF).  In the SI system, equivalent dose is often expressed in millionths of a Sievert, or micro-Sieverts (µSv), or in thousandths of a Sievert, or milli-Sieverts (mSv).  One Sievert is equivalent to 100 rem.  It is important to note that radiation dose is measured in rem or Sieverts (or fractions of those units), while radiation dose rate is measured in dose per unit time, such as R/hr, rem/hr or Sv/hr.  To get a dose from a dose rate, you simply multiply the rate by the length of time over which the exposure occurred.  The rem is related to the roentgen in that it measures the amount of biological tissue damage caused by a roughly equal (1 roentgen = 0.96 rem) exposure to x- or gamma radiation.  The tables below summarize and compare these units and list some common exposure situations; a short worked example follows the tables.

Comparison of US and International Radiation Measurement Units*

Meaning of Unit | USA Unit (abbr.) | SI** Unit (abbr.) | Conversion
Measure of radioactivity (nuclear disintegrations per second, dps) | Curie (Ci) = 37 billion dps | Becquerel (Bq) = 1 dps | 1 Ci = 37 billion Bq
Measure of exposure to x-rays or gamma rays in air only | Roentgen (R) | n/a | n/a
Measure of absorbed dose (radiation absorbed dose) | Rad | Gray (Gy) | 1 Gy = 100 rad
Measure that relates absorbed radiation dose to biological tissue damage | Rem (rem = rad x QF; 1 rem = 1 rad for x-, gamma and beta radiation) | Sievert (Sv) | 1 Sv = 100 rem; 1 mSv = 100 mrem; 1 µSv = 0.1 mrem; 1 rem = 10 mSv

Typical Radiation Dose Comparisons*

Method of Exposure | US Units | SI** Units
Annual background radiation in the US*** | 360 mrem | 3.6 mSv
Flying 3000 miles | 3 mrem | 30 µSv
Chest x-ray | 10 mrem | 0.1 mSv (100 µSv)
CT scan | 500 mrem to 1000 mrem | 5 mSv to 10 mSv
Annual whole-body limit for workers | 5000 mrem (5 rem) | 50 mSv
Annual thyroid limit for workers | 50,000 mrem (50 rem) | 500 mSv (0.5 Sv)
Radiation sickness (Acute Radiation Syndrome) | 100 rem whole body (100,000 mrem) | 1 Sv whole body (1,000 mSv)
Erythema (skin reddening) | 500 rem to the skin | 5 Sv to the skin

* Source:  Radiation Information Network ( http://www.physics.isu.edu/radinf/index.htm)

** International System of Units

*** Recent estimates are upward of 620 mrem, but the increase is due to medical exposure
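The relationships behind these tables (equivalent dose = absorbed dose × quality factor, 1 Sv = 100 rem, and dose = dose rate × time) can be tied together in a few lines of Python. The quality factors below are commonly quoted round values (1 for beta and gamma, 20 for alpha), and the dose-rate example is arbitrary.

```python
# Equivalent-dose arithmetic: rem = rad * QF, 1 Sv = 100 rem, dose = dose rate * time.

QUALITY_FACTOR = {"gamma": 1, "beta": 1, "alpha": 20}   # commonly quoted round values
REM_PER_SV = 100

def equivalent_dose_rem(absorbed_dose_rad: float, radiation: str) -> float:
    """Equivalent dose in rem from absorbed dose in rad."""
    return absorbed_dose_rad * QUALITY_FACTOR[radiation]

def rem_to_microsievert(rem: float) -> float:
    return rem / REM_PER_SV * 1_000_000   # 1 Sv = 100 rem = 1,000,000 microsieverts

# Example: 2 hours in a gamma field with an absorbed dose rate of 0.5 mrad/hr
# (dose = rate x time).
dose_rem = equivalent_dose_rem(0.5e-3 * 2, "gamma")
print(f"{dose_rem * 1000:.1f} mrem = {rem_to_microsievert(dose_rem):.0f} microsieverts")
```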


It is very important to realize that a certain amount of background radiation is naturally present in the environment due to the radioactive components of soil and rock formations and to extraterrestrial sources such as the Sun (UV, x-ray and gamma radiation and charged particles) and interstellar space (cosmic rays).  It is also necessary to understand that the Earth’s magnetic field and ozone layer are absolutely vital in shielding the Earth from these lethal radiation types.  An often-overlooked consideration in radiation exposure is the connection to biological evolution, which is driven by changes in heritable genetic information (genes) caused by either ‘spontaneous’ or induced mutations in the DNA molecules of sex cells (egg and sperm).  How will increased environmental radiation levels affect mutation rates, and how will these changed rates affect the evolution of life on this planet?  Radiation is not the only cause of mutation, however; many chemical substances can cause mutations as well.  How will/can we distinguish between the two effects?  In part two of this lesson, I will discuss the protocols that are used to collect, analyze and interpret radiation measurements.

 

PJH


Nuclear Fundamentals Table 1

Table 1:  A Brief Timeline of Nuclear Science Through World War II

Year – Event or Discovery

1789 – Martin Klaproth, a German chemist, discovered uranium and named it after the planet Uranus.

1895 – Wilhelm Conrad Röntgen discovered x-rays by passing an electric current through an evacuated glass tube.  Doctors began using them to see inside the human body, and scientists began studying the effects of x-rays on various substances.

1896 – In an effort to find x-rays in uranium, Antoine Henri Becquerel accidentally discovered a new type of rays, called ‘Becquerel rays,’ in a uranium-containing ore called pitchblende. The intensity of these rays was proportional to the amount of uranium.  They later became known as beta rays/particles (electron emission) and alpha rays/particles (helium nuclei emission).

1898 – Pierre and Marie Curie gave the name ‘radioactivity’ to the phenomenon of ray/particle emission.

1898 – The Curies isolated polonium and radium from pitchblende.  Radium was later used in medical treatments.

1898 – Samuel Prescott showed that radiation destroyed bacteria in food.

1900 – Paul Villard discovered gamma rays, similar to x-rays but more energetic, while studying the radiation emitted by radium.

1902 – Ernest Rutherford discovered that beta and alpha decay (particle emission) resulted in the formation of different elements.

1911 – Frederick Soddy discovered that naturally radioactive elements existed as different isotopes (same number of protons, different number of neutrons in the nucleus) with the same chemical properties.

1911 – George de Hevesy showed that radionuclides (radioisotopes) were invaluable as tracers, because small amounts could be detected with simple instruments.

1919 – Ernest Rutherford fired alpha particles from radium into nitrogen, causing nuclear rearrangement and the formation of oxygen.

1932 – James Chadwick discovered the neutron.  Cockcroft and Walton caused nuclear transformations by bombarding atoms with high-speed protons.

1934 – Irene Curie and Frederic Joliot found that some nuclear transformations produced artificial radionuclides.

1934 – Enrico Fermi discovered that using neutrons instead of protons could produce a much greater variety of artificial radionuclides, some heavier and some lighter than the original element.

1938 – Otto Hahn and Fritz Strassmann in Berlin showed that the new, lighter elements produced when uranium was bombarded with neutrons included barium and others with about half the mass of uranium; this showed that atomic fission had occurred.  Lise Meitner and Otto Frisch explained ‘neutron capture’ as the cause of nuclear fission and calculated the energy released from fission at 200 million electron volts.

1939 – Otto Frisch experimentally confirmed the calculated energy release from fission.  This was the first experimental confirmation of Albert Einstein’s energy/mass relationship, E = mc2.  Hahn and Strassmann in Berlin, Joliot in Paris, and Leo Szilard and Fermi in New York all confirmed the possibility of a self-sustaining nuclear chain (fission) reaction, which could release tremendous amounts of energy according to E = mc2.  The major obstacle was that natural uranium ore contains only 0.7% U-235 and 99.3% U-238, and U-235 is the much better candidate for producing fission reactions (as proposed by Niels Bohr).  The final piece of the fission puzzle was provided by Francis Perrin and Rudolf Peierls, who introduced the concept of ‘critical mass’: the minimum amount of uranium needed to produce a self-sustaining fission reaction.  Perrin also showed that the fission reaction could be controlled by using a neutron-absorbing material to slow the chain reaction down.  This is what makes nuclear power plants possible.

World War II: developments in the USA, Germany and the USSR

1939
USA: President Roosevelt received a letter from Albert Einstein warning of the possibility of a uranium weapon.
Germany: Atomic fission research began, with work toward a nuclear reactor, ‘heavy water’ production and uranium enrichment.
USSR: Cyclotrons were installed at the Radium Institute and the Leningrad FTI.

1940
USA: Neptunium and plutonium were produced by neutron bombardment of U-238 at the Berkeley cyclotron.
Germany: The world’s only ‘heavy water’ plant was seized in Norway.  The first fission experiment failed.
USSR: Central Asian uranium deposits were studied for use in nuclear energy research.

1941
USA: Plutonium was identified as a new fissionable element with atomic number 94.  The Manhattan Project began.
Germany: Graphite was rejected as a fission reaction moderator.
USSR: The German invasion of Russia shifted the emphasis of nuclear research to weapons development.

1942
USA: Robert Oppenheimer became director of the Manhattan Project.  The first uranium isotope enrichment plant was under construction at Oak Ridge, Tennessee.  Enrico Fermi produced the first controlled and sustained fission reaction at the University of Chicago.
Germany: The emphasis of fission research shifted from military applications to energy production.
USSR: Joseph Stalin officially began a nuclear weapons development program.

1943
USA: Planning began for the construction of breeder reactors for producing plutonium near Hanford, Washington.  Oppenheimer moved bomb development headquarters to Los Alamos, New Mexico.
Germany: Fission research continued its decline due to the politicization of the educational system and anti-Semitism, which was biased against theoretical physicists.
USSR: Soviet nuclear research focused on achieving chain reactions, investigating methods of uranium enrichment, and designing both enriched uranium and plutonium bombs.

1944
USA: The first batch of spent fuel was obtained from Hanford, Washington.  The ALSOS mission acquired secret documents implying slowed Nazi research progress.
USSR: Experiments using graphite and heavy water as fission reaction moderators were conducted.

1945
USA:
January – first plutonium reprocessing began at Hanford.
January 20 – first U-235 separated at Oak Ridge, Tennessee.
July 16 – first atomic explosion at the Trinity site, near Alamogordo, New Mexico.
August 6 – Little Boy, a uranium bomb, was dropped on Hiroshima, Japan.  Between 80,000 and 140,000 people were killed.
August 9 – Fat Man, a plutonium bomb, was dropped on Nagasaki, Japan.  About 74,000 people were killed.
Germany: Nazi Germany was defeated in May.
USSR: Following the Nazi defeat in May, German scientists were ‘recruited’ to aid in Soviet weapons development.  After Hiroshima and Nagasaki were bombed, Soviet weapons development went into high gear, with plutonium breeder reactors becoming operational in the Ural mountains near Chelyabinsk.  The first gaseous diffusion uranium enrichment facility, at Verkh-Neyvinsk, was also under construction.


Nuclear Science Fundamentals

by Patrick Andrew Parris


Nuclear science refers to any process that occurs at the level of the nucleus of the atom.  Chemistry refers to any process that occurs between atoms and involves the interactions of electrons, which are located outside the nucleus.  The nucleus is made of protons and neutrons, which are often referred to as nucleons.  Protons and neutrons are very similar in mass, but their electric charges are different: protons have a positive charge and neutrons are neutral, having no charge.  Like-charged particles (such as protons) always repel, or try to push each other away.  Unlike-charged particles (such as electrons and protons) always attract, or try to pull on each other.  Charged particles, however, don’t always succeed in their repulsion or attraction efforts.  Take the simplest element, hydrogen, for example.  An atom of hydrogen has only one proton (its nucleus) and one electron.  You might think that the electron would spiral down into the proton, but it doesn’t: the electron can only get so close to the nucleus (the proton) and no closer.  This lowest-energy arrangement is referred to as the ‘ground state’ of hydrogen.

Now consider the protons in the nuclei of atoms of elements heavier than hydrogen.  The number of protons in the nucleus (the atomic number) determines the identity of the element.  As the number of protons increases, the electrostatic repulsion between the like-charged protons also increases, but the protons do not fly apart as expected, because a different and much stronger force (the strong nuclear force) ‘binds’ the neutrons and protons together.  The energy associated with this ‘binding’ is called the nuclear binding energy; this is the ‘E’ of E = mc2.  As you can see, when you take the speed of light (c = 300,000,000 m/s) and square it, you get a very large number.  This is why nuclear reactions can release so much energy.  Fission (nuclei breaking apart) reactions use enriched uranium-235 (U-235) or plutonium, one of the most lethal elements known to man.  Fusion reactions fuse hydrogen nuclei together to form helium nuclei.  Per unit mass of fuel, fusion reactions yield much more energy than fission reactions.  This is the process that powers our Sun and all stars.  Fusion also has the potential to provide a virtually unlimited and clean supply of energy.  The big obstacle in the way of realizing this dream is that it currently takes more energy to contain and sustain a fusion reaction than we get out of the reaction in usable form.  Overcoming this tremendous technical problem is the focus of much international fusion research.
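As a concrete check on why the c² term makes nuclear energies so large, the sketch below evaluates E = mc² for one gram of mass and converts the roughly 200 million electron volts released per uranium fission (the figure calculated by Meitner and Frisch, per Table 1) into joules. The constants are standard rounded values.

```python
# E = m * c**2 for a small mass, plus the ~200 MeV per fission quoted in Table 1.

C = 3.0e8                 # speed of light, m/s (rounded, as in the text)
J_PER_MEV = 1.602e-13     # joules per mega-electron-volt

energy_one_gram = 0.001 * C ** 2        # joules released by converting 1 g of mass
energy_per_fission = 200 * J_PER_MEV    # ~200 MeV per U-235 fission, in joules

print(f"1 g of mass -> {energy_one_gram:.2e} J")      # ~9e13 J
print(f"one fission -> {energy_per_fission:.2e} J")   # ~3.2e-11 J
```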

The nuclear reactions just described (fission and fusion) generally instill the most fear in people because they can produce massive destruction and death.  Yet there is another nuclear process, radioactive decay, that is insidious by nature because it can quietly cause long-term health problems and/or death over very long periods of time.  Radioactivity is associated with uranium and other naturally occurring and artificial elements.  These elements have large nuclei (many protons and neutrons, or nucleons) in which the repulsive electrostatic forces begin to overcome the much stronger strong nuclear force.  The reason for this is that the strong nuclear force, while very strong, acts only over very short distances.  When the number of protons reaches 83 (bismuth), the size of the nucleus (which includes protons and neutrons) makes it unstable, and it spontaneously ejects beta particles (electrons that come from the nucleus), alpha particles (helium nuclei: 2 protons + 2 neutrons), and gamma rays, which are a form of electromagnetic radiation.  All this particle/ray emission is the nucleus’s way of stabilizing itself.  The biggest problem with this process is that the ionizing radiation given off can cause short-term and long-term biological tissue damage all the way down to the molecular level, resulting in radiation sickness and permanent, unpredictable genetic damage (mutations, cancers, birth defects and the like).  These health risks can also extend indefinitely into the future, because radioactive substances and their decay products can emit radiation for tens, hundreds and even thousands of years.  Isolating the wastes of nuclear reactions from the environment in general, and people in particular, is a very difficult, costly and questionable process.  There are many storage facilities around the world that are inadequate to the task because contractors have been less than diligent and have succumbed to greed, receiving millions of dollars for inadequate, substandard construction.  This continuing debacle will be the subject of future articles.

Looking at the timeline of nuclear science in Table 1, we can see that the history of nuclear science began with the discovery of radioactivity in 1896, but that the bulk of achievement occurred in the years 1939-1945.  This is undoubtedly due to World War II, at the beginning of which the United States had no interest in nuclear weapons research.  When it became known that fission weapons were possible, and that the Nazis were exploring this possibility, American leaders and scientists could no longer wait.  The Manhattan Project was initiated and the race to develop an atomic bomb was on.  Or was it?  The Nazi scientists encountered significant technical and socio-political problems early in their research, which soon (by 1942) brought them to the conclusion that military application of nuclear energy was impractical and unimportant to a quick and successful conclusion of the war.  Their research emphasis shifted to nuclear energy production.  American leaders soon became aware that the Nazi nuclear program was floundering, yet they continued to push forward with the development of a nuclear weapon, which raises the question: who were they racing against?  The Soviets had been doing much research in nuclear energy, but they did not really focus on military applications until they were invaded by Germany in 1941.  So Germany started the race but then quickly dropped out, leaving only the Soviets and the Americans.  Once the Nazis were defeated, the German scientists (along with Germany and eastern Europe) were divided up between the U.S. and the U.S.S.R.

With the transfer of American nuclear secrets to the Soviets and the subsequent escalation of research and development by both countries, the nuclear arms race was on, and it would come to dominate world politics for the next 44 years, until the end of the Cold War in 1989 and the breakup of the Soviet Union in 1991.  During this ‘Cold War’ period, other players entered the nuclear arms race; a brief overview is shown in Table 2.  Slowly, over the latter half of the 20th century, it became apparent that the proliferation of nuclear materials and weapons was not in the best interest of humanity.  The people who can best attest to this are the Japanese and the people of Kazakhstan, a former Soviet republic.  The Japanese have had to live with the aftermath of the tragedies of Hiroshima and Nagasaki for almost 70 years.  Now they also have to live with the tragedy of Fukushima, which will continue into the foreseeable future.  The people of Kazakhstan, now an independent country, were the victims of forty years of Soviet nuclear testing, with an estimated 1.5 million people suffering either death or long-term health problems resulting from radiation exposure.  Since gaining its independence, Kazakhstan has taken the lead in central Asia in promoting peaceful applications of nuclear energy (nuclear-generated electrical power).

We shall see in future articles that even peaceful applications of nuclear energy pose a great risk to the well-being of all life on this planet.  Humanity can and must develop other modes of safe and sustainable energy production.  We have the knowledge, capacity and technology to do so, but, as we shall see, progress is hindered by special interests that strive (at our expense) to maximize their profits and to keep control of their ability to do so.  We, the people, will only be more powerful than the power elite if we wake up, stand up and raise our voices loud enough for the whole world to hear us when we say, “we’re mad as HELL and we’re not going to take it any more!”  When we, as the People of Earth, have the combined will to institute change, then nothing will prevail against us.  We must be willing to “stand at the gates of hell and not back down!”

Peace, joy and health!

