Reading Exercise: The History of the Atomic Model

Democritus’ atoms

Around 440 BC, the Greek philosopher Democritus was the first to suggest the existence of atoms, tiny particles that make up all matter. The word atom comes from the Greek “atomos”, meaning indivisible.

However, most of his contemporaries, especially Aristotle, did not agree with Democritus. Instead, they thought that matter was made up of the four “elements”: fire, water, air and earth.

Dalton’s spheres

It took over 2000 years before another scientist challenged Aristotle’s “element” theory.

In 1803, John Dalton, an English schoolteacher from Manchester, carried out experiments showing that all matter is made up of tiny particles. He adopted Democritus’ name and called them atoms.

In Dalton’s model, atoms were tiny, hard spheres that vary in size and mass but cannot be split into smaller pieces.

Thomson’s plum pudding

It took a much shorter time to reach the next step in the discovery of the atomic model.

J.J. Thomson, another English scientist, discovered the electron in 1897 and developed the plum pudding model of the atom. He received the Nobel Prize in Physics in 1906 for his investigations into the conduction of electricity in gases.

The plum pudding model said that the tiny negative electrons were distributed in a positive mass inside the atom. The electrons were like negative raisins in a positive plum pudding dough.

Rutherford’s gold foil experiment

The next experiments to develop the atomic model further were carried out by one of Thomson’s former students, Ernest Rutherford, who was originally from New Zealand.

In 1909, Rutherford and his team conducted one of the most important experiments in the history of science. They bombarded a thin gold foil with alpha particles (helium nuclei, which consist of 2 protons and 2 neutrons).

If Thomson’s plum pudding model were true, you would expect all alpha particles to punch holes through the positive “dough” and pass straight through the foil.

The results looked somewhat different. Most alpha particles did pass straight through the gold foil, but a small number were deflected or even bounced straight back. This suggested that the foil contained tiny regions where mass was highly concentrated, while the rest seemed to be empty space.

As a consequence, Rutherford introduced the modern planetary model of the atom, in which the electrons orbit a nucleus. The nucleus is small but contains most of the atom’s mass; it is what the alpha particles bounced off in his experiment. The major part of the atom is empty space, through which most alpha particles could pass.

By the way, Ernest Rutherford received a Nobel Prize as well. However, it was awarded for his work on radioactive decay and the concept of half-life rather than for his atomic model.

Tasks:

  1. Draw a timeline including the four stages of the atomic model’s development.
  2. Describe what the word “atom” means.
  3. Describe what atoms were like in Dalton’s atomic model.
  4. State what Thomson discovered.
  5. Describe what atoms were like in Thomson’s atomic model.
  6. Describe the gold foil experiment Rutherford conducted.
  7. Explain how the gold foil experiment showed that Thomson’s theory was wrong.
  8. Describe what atoms are like in Rutherford’s model.
  9. Challenge: Find out how the atomic model developed further. You could look at the work of Niels Bohr, Werner Heisenberg and James Chadwick.

Active Reading Exercise: What is an echo?

The following active reading exercise includes a short test and tasks suitable for students aged 11 to 14 when studying waves and sound.

When waves hit a surface, they are reflected. This means they bounce off the surface and come back. For example, light is reflected by the surface of a mirror.

When you are high up in the mountains and call out in a loud voice, your sound waves will be reflected by a nearby mountain surface and you can hear them coming back after a few seconds. This is called an echo.

You can measure the distance from where you stand to the mountain surface and time how long it takes until you hear the echo. With this information you can calculate the speed of sound. Remember that the sound travels the distance twice: out to the surface and back again.
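
As a small illustration, the calculation could look like this in Python (the distance and time are made-up example values, not measurements from the text):

```python
def speed_of_sound(distance_m, echo_time_s):
    """Speed of sound from an echo. The sound travels to the
    surface and back, so it covers twice the distance."""
    return 2 * distance_m / echo_time_s

# Example: a mountain face 340 m away, echo heard after 2 s
print(speed_of_sound(340, 2))  # 340.0 m/s
```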

This principle is used by animals like dolphins and bats for navigation. They send out sound waves and listen for their echoes. This helps them to work out how far away predators or food are. When used like this, echoes are called sonar.

Sonar is also used by submarines for navigation. Submarines send out sound waves and listen for the echo to determine their own position. They can also detect other submarines and ships.

Tasks:

  1. Box the words in the text you do not know.
  2. Highlight what happens to waves when they hit a surface.
  3. Highlight the word that describes the reflection of sound waves.
  4. Circle the two types of waves that are mentioned in the text.
  5. a) Underline the animals that use sonar.
     b) Underline how submarines use sonar.

*Extension: Explain what a dolphin needs to know to work out the distance to a fish when using sonar.

Can we see atoms?

Image credit: Pixabay 2017.


Background

A large number of physical and chemical techniques are available today for the analysis of substances. They provide the opportunity to gain information about the properties of atoms, molecules and ions. Infrared spectroscopy, for example, measures the vibrations of molecules. Mass spectrometry, which is widely known from TV crime dramas, gives information about the mass of molecules or ions. This information about mass can then be used to identify poisons, drugs and other substances.

However, most techniques cannot give us a direct picture of what atoms, molecules or ions really look like. Normally, conclusions about their appearance are drawn from measured properties. Imagine you are drawing a picture of a house while somebody describes its appearance to you, but you cannot see it yourself. This is how physicists and chemists use the information from their measurements to figure out what atoms, molecules and ions look like. The measurements give them information about substances, which corresponds to the information about the house, such as its colour, its size, or where the doors and windows are.

But are there any methods that can provide us with, let us say, a photo of an atom? The main problem is that atoms are extremely tiny: an atom is as small compared to an apple as the apple is compared to the entire Earth. Nevertheless, there are a few methods that can indeed capture pictures of atoms with the help of quantum mechanics and other cool science. A few of these methods are listed below.

Transmission electron microscopy (TEM)

Atoms cannot be seen under normal light microscopes. The reason is that the wavelength of visible light is much larger than the atoms themselves. To be able to see a sample in a microscope, the wavelength has to be smaller than the sample itself. For example, light waves are smaller than cells, which is why cells can be observed in light microscopes. Transmission electron microscopes use electrons instead of visible light. Thanks to wave-particle duality, electrons can behave as both waves and particles. As waves, they have a much smaller wavelength than light, which makes it possible to see atoms in a transmission electron microscope. Apart from the use of electrons, the working principle of a TEM is the same as that of a light microscope.
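
To get a feel for the difference in wavelength, one can estimate the de Broglie wavelength of the electrons with a short Python sketch. This is a simplified, non-relativistic estimate, and the 100 kV accelerating voltage is a typical TEM value assumed for illustration, not a number from the text:

```python
import math

# Physical constants (SI units, rounded CODATA values)
H = 6.626e-34         # Planck constant, J s
M_E = 9.109e-31       # electron mass, kg
E_CHARGE = 1.602e-19  # elementary charge, C

def electron_wavelength(volts):
    """Non-relativistic de Broglie wavelength of an electron
    accelerated through the given voltage: lambda = h / sqrt(2 m e V)."""
    return H / math.sqrt(2 * M_E * E_CHARGE * volts)

wl = electron_wavelength(100_000)  # assume a typical TEM voltage of 100 kV
print(wl)           # roughly 4e-12 m, i.e. a few picometres
print(500e-9 / wl)  # visible light (~500 nm) is about 100,000 times longer
```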

Scanning tunneling microscopy (STM)

Scanning tunneling microscopy is another microscopic technique. It is based on a quantum mechanical phenomenon called “tunneling”. Electrons can “tunnel”, or in other words transition, from an atom on the tip of an extremely sharp needle to atoms on a sample surface. Needle and surface have to be at a very short distance from each other to enable tunneling. The probability of tunneling increases as the gap between the needle and the surface gets smaller. This means that electron tunneling is more likely to occur when the needle tip is above the center of an atom than when it is above the space between two atoms. By scanning the needle across the surface, a topographic picture of the atoms on the sample surface can be obtained.
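
How sensitive tunneling is to the gap can be sketched with the textbook formula for a rectangular barrier. This is a simplified model, assuming a typical metal work function of about 4.5 eV as the barrier height (a common illustrative choice, not a value from the text):

```python
import math

HBAR = 1.055e-34  # reduced Planck constant, J s
M_E = 9.109e-31   # electron mass, kg
EV = 1.602e-19    # 1 eV in joules

def tunneling_factor(gap_m, barrier_ev=4.5):
    """Relative tunneling probability exp(-2 * kappa * d) across a gap d,
    with kappa = sqrt(2 * m * phi) / hbar for barrier height phi."""
    kappa = math.sqrt(2 * M_E * barrier_ev * EV) / HBAR
    return math.exp(-2 * kappa * gap_m)

# Widening the gap by just 0.1 nm (roughly one atomic radius)
# reduces the tunneling probability by about an order of magnitude,
# which is why STM resolves individual atoms so well:
ratio = tunneling_factor(0.5e-9) / tunneling_factor(0.6e-9)
print(ratio)  # roughly 9
```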

Atom probe tomography (APT)

Atom probe tomography is a very powerful technique that can provide three-dimensional images of a sample’s atomic structure. In APT, the magnification is achieved by a highly curved electric field instead of electron properties such as wavelength or tunneling. For this technique, atoms have to be removed from the sample surface and turned into ions; in other words, they have to be ionized. To obtain three-dimensional information, the atoms are removed from the sample layer by layer, which means that, unlike the previous two methods, this technique is destructive.

Scientific Methods in Archeology

Image credit: Pixabay, 2017.

There are few fields that are as interdisciplinary as archeology. From radiocarbon dating to ground-penetrating radar, archeologists use a wide range of scientific methods to gain insights into historic cultures. But which scientific methods are actually most useful to archeologists? And what kind of information do they provide? I have interviewed Professor Kerstin Lidén, an archeologist at Stockholm University, to find out more about this. According to her, the three most important scientific methods for archeology are ground-penetrating radar (GPR), mass spectrometry and X-ray fluorescence (XRF).

Ground-penetrating radar (GPR) is a physical method also used in the geosciences. It emits radio waves into the ground and detects their reflections from buried structures, for example the ruins of castles, temples or settlements. GPR is popular because it makes it possible to identify the location and form of buildings and monuments without digging them up.

Mass spectrometry is a chemical technique that normally helps to determine the composition of unknown substances, as many of us have seen in TV crime dramas. In archeology, however, it is used differently. Here, its main task is to identify different isotopes (atoms of the same element with different neutron numbers). One example is the identification of carbon-14, a radioactive carbon isotope used for radiocarbon dating. As long as plants, animals or humans live, they incorporate carbon-14 into their systems, either through photosynthesis or by eating plants. No new carbon-14 enters the organism after death, so conclusions about its age can be drawn from the radioactive decay of carbon-14 and its half-life.
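
To illustrate the dating step, here is a minimal Python sketch, assuming we already know the fraction of carbon-14 remaining in a sample (real dating involves calibration curves and corrections that are omitted here):

```python
import math

C14_HALF_LIFE = 5730  # approximate half-life of carbon-14, in years

def radiocarbon_age(fraction_remaining):
    """Age in years from the fraction of carbon-14 still present,
    using N = N0 * (1/2)**(t / half_life) solved for t."""
    return C14_HALF_LIFE * math.log2(1 / fraction_remaining)

print(radiocarbon_age(0.5))   # 5730.0  -> one half-life has passed
print(radiocarbon_age(0.25))  # 11460.0 -> two half-lives have passed
```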

Isotope analysis can teach us even more about long-deceased organisms. Carbon and nitrogen isotopes are used to reconstruct diets, while oxygen isotopes can help to determine geographic origins and environment. Strontium and lead isotopes, on the other hand, can give clues about population mobility which means seasonal and/or permanent migrations.

X-ray fluorescence (XRF) is popular for gaining information about the chemical and elemental composition of historic artefacts made from metal, glass or ceramics. The method works by bombarding the archeological sample with X-rays or gamma rays, which knocks inner-shell electrons out of the sample atoms. In response, electrons from outer shells “fall” into the inner shells and simultaneously emit energy in the form of characteristic X-rays. These can be used to identify the chemical composition of the artefact.
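
The link between a characteristic X-ray energy and the element that emitted it can be sketched with Moseley’s law. This is a simplified screening model for illustration, not what XRF instruments actually use:

```python
def k_alpha_energy_ev(z):
    """Approximate K-alpha X-ray energy in eV from Moseley's law:
    E ~ 13.6 eV * (3/4) * (Z - 1)**2, where Z is the atomic number.
    This simple model lands within a few percent for mid-range elements."""
    return 13.6 * 0.75 * (z - 1) ** 2

# Copper (Z = 29): the model predicts ~8.0 keV;
# the measured Cu K-alpha line is about 8.05 keV.
print(k_alpha_energy_ev(29) / 1000)  # ~8.0 keV
```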

Besides the three techniques described here, archeologists use many other methods, some of them pretty cool. Lidén gives the example of one colleague who used laser scanning to study the writing on rune stones. This analysis showed that different people had carved on the same rune stone. According to Lidén, it is like analysing handwriting, only that the writing is over 1000 years old.

Lidén also says that the goal of archeologists is to get “life history details of individuals that have been dead for thousands of years”. The information from different scientific techniques has to be put together like a puzzle to gain this information. In the end, archeologists will know what an individual ate at different points in life, where he or she lived, whether there was movement between different geographical areas, which diseases the person might have had and when the individual died. This kind of information is important for understanding variations and similarities between different cultures and populations.

What Lidén is most proud of in her own work is showing that two archeological cultures in Sweden really were two different cultures. She and her team were able to show that they not only used different materials and buried their dead differently, but also ate different foods. This had long been disputed in Sweden.

 

The physics behind musical instruments


One of the best museums I visited last year was the Haus der Musik – Sound Museum in Vienna. It is an amazing place where you can, among other things, compose your own music and try to conduct the famous Vienna Philharmonic Orchestra. But there was one part of the exhibition that I enjoyed even more: the one about instruments. By standing in a giant mouthpiece for wind instruments or a giant hollow percussion body, you could experience firsthand how music is created. And yes, it has everything to do with physics and science.

Sound is created when an object vibrates. The vibrations cause the particles in the air around the object to vibrate too. The particles in the air then bump into their neighbours, setting them into vibrating motion as well, which lets the vibration travel further. For this reason, sound is often defined as vibrations that travel through air and are detected by a human’s or animal’s ear. But it should be said here that the medium does not necessarily have to be air. Whale ears, for example, pick up vibrations that are transported through water. In physics, vibrations are commonly described as waves.

What is special about musical instruments is that they create so-called standing waves instead of random vibrations. In a standing wave, some points of the vibration, the nodes, remain fixed while the rest vibrates with maximum amplitude, which refers to the highest and lowest points of the wave. It is these standing waves that we experience as harmonic tones when listening to music. Irregular, random waves that are not standing we hear as noise instead.

The properties of the vibration’s standing wave tell us how we experience the tone. One important property of waves in physics is their frequency, which describes how many waves pass one point during a specific time. The larger the frequency, the more waves pass the point and the more high-pitched the tone sounds. In contrast, the smaller the frequency, the fewer waves pass and the lower the tone sounds. The amplitude (remember, the highest and lowest points of the wave), on the other hand, tells us how loud a tone is. The larger the amplitude, the higher the wave and the louder the tone sounds, whereas a smaller amplitude gives a softer tone.
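
The roles of frequency and amplitude can be illustrated with a small Python sketch that builds pure tones as lists of samples. The sample rate and the particular pitches are arbitrary illustrative choices:

```python
import math

SAMPLE_RATE = 8000  # samples per second (an arbitrary choice)

def tone(freq_hz, amplitude, duration_s=0.01):
    """A short pure tone as a list of samples.
    The frequency sets the pitch, the amplitude sets the loudness."""
    n = int(SAMPLE_RATE * duration_s)
    return [amplitude * math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE)
            for i in range(n)]

quiet_low = tone(220, 0.2)  # low pitch (the note A3), small amplitude: quiet
loud_high = tone(880, 0.9)  # high pitch (the note A5), large amplitude: loud
print(max(quiet_low), max(loud_high))
```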

Musical instruments can create vibrations in three different ways, as explained in the infographic above. There are percussion instruments/drums, wind instruments and string instruments. Percussion instruments, no matter if bell or drum, have a hollow body. When it is hit with sticks, hands or something else, this hollow body starts vibrating and creates the tone we hear.

Wind instruments include brass instruments, such as trumpets, as well as woodwind instruments, such as flutes and saxophones (which count as woodwinds despite being made of brass). They all have a mouthpiece and a hollow tube. When the player blows into the mouthpiece, the air column inside the tube is set into vibrating motion and a tone can be heard.

String instruments work through the vibration of their tensioned strings. The vibrating motion can be started in different ways: the strings can be plucked, as on guitars or harps, bowed, as on violins or cellos, or hit, as in pianos.
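
For string instruments, the pitch follows from the string’s length, tension and mass via the standard formula for a vibrating string. Here is an illustrative Python sketch; the guitar-like numbers are made-up plausible values, not measurements from the text:

```python
import math

def string_frequency(length_m, tension_n, mass_per_length_kg_m):
    """Fundamental frequency of a vibrating string:
    f = (1 / 2L) * sqrt(T / mu). Shorter, tighter and lighter
    strings give higher-pitched tones."""
    return math.sqrt(tension_n / mass_per_length_kg_m) / (2 * length_m)

# Illustrative guitar-like string: 0.65 m long, 73 N tension,
# 0.4 g per metre of string
f = string_frequency(0.65, 73, 0.0004)
print(round(f))  # ~329 Hz, close to the E above middle C
```

This also explains how players change notes: pressing a string against a fret shortens the vibrating length, which raises the frequency.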

 

New magnets for wind power plants

Image credit: Johan Cedervall, 2017. CC BY-SA IGO 3.0. This material is a strong magnet investigated for use in wind power plants and other applications.

Almost everyone has seen magnets in action when sticking a birthday card or a child’s drawing to a refrigerator. Some might even have used one when taking a compass on a hike. There is one more important application for magnets that most of us have seen: wind power plants. The generators inside wind power plants use very strong magnets to create electricity. These magnets are normally made from the elements neodymium, iron and boron.

While iron and boron are relatively easy to extract and separate from their minerals, neodymium is a rare-earth element. There are in total 17 rare-earth elements and, despite their name, they are actually quite plentiful in the Earth’s crust. The problem is that these elements tend to occur together in the same minerals, which makes their extraction and separation extremely difficult and expensive. In addition, China accounts for about 85 % of all rare-earth element production, a monopoly position that enables it to dictate prices and make other countries dependent. Besides, there are strong environmental as well as health and safety concerns about the mining procedures used in China.

For these reasons, research is being carried out to find new neodymium-free or rare-earth-free strong magnets. I have spoken to Johan Cedervall, a PhD student in the Inorganic Chemistry group at Uppsala University, who is trying to make new rare-earth-free magnets for wind power plants and other applications. Cedervall is working on magnetic materials composed of iron, boron, phosphorus and silicon, all abundant elements which are relatively easy to extract from their minerals.

To produce his magnets, Cedervall melts the elements together in an electric arc (basically a permanent, artificial lightning bolt) at 1500 to 2000 °C (2732 to 3632 °F) under an argon atmosphere. This procedure results in a highly magnetic, grey compound, as seen in the image above. According to Cedervall, his materials are generally a bit less magnetic and easier to demagnetize than conventional neodymium-based magnets, which is a disadvantage. But they are also much cheaper thanks to the absence of neodymium.

Nevertheless, Cedervall says that the goal is to find even stronger rare-earth-free magnets than his for use in wind power plants. The problem is that weaker magnetic materials have to be used in larger amounts to reach the effect of a neodymium-based compound. For this reason, the search for strong, rare-earth-free magnets is ongoing.

Cedervall believes that we will see a shift to rare-earth-free or rare-earth-lean (containing smaller amounts of neodymium) magnets in the near future. Wind power plants are currently being built at a rapid pace all over the world. To maintain this development, industry will sooner or later be forced to turn to alternative magnets. The German company Enercon has, in fact, already implemented neodymium-free technology in its wind turbines. In addition, magnetic materials called ferrites, for example strontium iron oxide, have already started to replace neodymium in other applications.

When asked which results of his research he is most proud of, Cedervall answers that they are actually not related to wind power plants. All magnets lose their magnetic properties when heated above a certain temperature. This point is called the Curie temperature, which lies between 500 and 600 °C (932 and 1112 °F) for Cedervall’s materials. He has found that this point can be tuned by gradually substituting iron with cobalt: a higher cobalt content decreases the Curie temperature, while a lower one increases it. This result could be interesting for building magnetic refrigerators that are safer and more environmentally friendly. The details of that application are a story for another time.

A short history of solar cells

Image credit: U.S. Department of Agriculture, 2011, CC BY 2.0.

The photovoltaic effect

In 1839, the photovoltaic effect was first observed by the French physicist Alexandre Edmond Becquerel (the father of Henri Becquerel; yes, they are related). This phenomenon occurs when two different materials are in close contact with each other. When light hits one of the materials, its energy is absorbed and electrons are lifted to an excited state in which they have a higher energy than in the ground state. As a result, an electric field forms along the contact with the second material. This field exerts a force on the excited electrons and can push them into an external electrical load, where their energy can be used to power an electronic device.

The photoconductivity of selenium

The English engineer Willoughby Smith experimented with selenium and found in 1873 that the normally insulating material becomes electrically conductive when exposed to light. This phenomenon is called photoconductivity. Three years later, William Grylls Adams and Richard Evans Day discovered that selenium can also produce electricity when light is shone on it. The first solar cell was finally created in 1883 by the American engineer Charles Fritts, who coated selenium with a thin layer of gold. His results were later reproduced and confirmed by the German engineer Werner von Siemens. Nevertheless, these prototype cells could only convert about 1 % of sunlight into electricity (in other words, they had an efficiency of 1 %), and the phenomenon was not well understood at the time. For these reasons, solar cells were not developed further back then.

The photoelectric effect

The photoelectric effect was first observed by the German physicist Heinrich Hertz in 1887. This effect occurs when solid materials emit free electrons under exposure to light (or other electromagnetic radiation such as X-rays). Modern silicon solar cells rely on this phenomenon to create electricity from sunlight. Albert Einstein later received the Nobel Prize in Physics for explaining the photoelectric effect in detail. (No, he did not get it for the theory of relativity.)

The silicon solar cell

Daryl Chapin, Calvin Fuller and Gerald Pearson from the Bell Laboratories (New Jersey, US) discovered in the early 1950s that silicon is much better at converting sunlight into power than selenium. This led to the first practical silicon solar cell being demonstrated in 1954, with an efficiency of 6 %. The first commercial silicon solar cells entered the market in 1956, but they were still very expensive and not very successful at first. The situation changed with the dawn of spaceflight, when solar cells were used by NASA to power satellites like Vanguard 1 from 1958 onwards. This application enabled further research, which resulted in lower prices for solar power. Nevertheless, it was not before 1982 that the first solar park was installed in California (US). Today, silicon solar cells reach an efficiency of 15 to 20 %. Due to climate change, smog and pollution, solar cell technology has received a lot of interest in recent years as a clean alternative to the burning of fossil fuels like coal. Now we are at a point where the cost of silicon solar cells is falling rapidly, which could make them even more popular in the future.

The future of solar cell technology

Despite the historically low prices of silicon solar cells, they might soon face serious competition as new materials and concepts are being studied. One contender could be perovskite solar cells (discovered in 2009 in Japan), which promise a cheaper and simpler manufacturing process than silicon solar cells. In addition, dye-sensitized solar cells, which were already explored in the 1960s and rely on the photovoltaic effect, could eventually prove another low-cost competitor. So far these new technologies are less efficient than silicon solar cells, but perovskite solar cells in particular are well on their way to reaching an efficiency of 15 to 20 % within just a few years. Another strategy could be combining perovskite and silicon solar cells in hybrid modules. It remains exciting to see which road solar cell technology will take next.