How Are Earthquakes Measured?


An earthquake is an intense vibration of the earth’s surface. There are two primary ways to measure earthquakes: magnitude and intensity. These two measures also help determine the extent of the damage an earthquake causes.


Magnitude is the most common way to measure the size of an earthquake. It is calculated on the Richter scale, which describes how powerful the quake was. Magnitude is measured using a machine called a seismometer. Seismometers allow us to detect and record earthquakes by converting the vibrations caused by seismic waves into electrical signals.

There are two types of seismic waves that pass through the earth’s body:


P-waves (primary waves): These are longitudinal waves that shake the ground back and forth along the direction of travel of the wave. They travel the fastest.


S-waves (secondary waves): These are transverse waves; their motion is perpendicular to the direction of travel of the wave. They are slower than P-waves.
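Because the two wave types travel at different speeds, the gap between a station’s P-wave and S-wave arrival times reveals roughly how far away the quake was. A minimal sketch, assuming illustrative average crustal speeds (not site-specific values):

```python
# Estimate distance to an earthquake from the S-P arrival-time gap.
# Assumed average crustal wave speeds (illustrative only):
VP = 6.0   # P-wave speed, km/s
VS = 3.5   # S-wave speed, km/s

def epicentral_distance(sp_gap_seconds):
    """Distance (km) implied by the delay between P- and S-wave arrivals.

    The gap is d/VS - d/VP, so d = gap / (1/VS - 1/VP).
    """
    return sp_gap_seconds / (1.0 / VS - 1.0 / VP)

print(round(epicentral_distance(10.0), 1))  # a 10 s gap -> ~84 km
```

In practice, seismologists combine such distance estimates from three or more stations to triangulate the epicenter.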

The Richter Scale

Charles F. Richter devised the Richter scale in 1935 as a quantitative measure of an earthquake’s size. For decades, earthquakes were measured using the Richter scale. However, newer, improved scales have now largely superseded it.

When an earthquake occurs, its magnitude is assigned a specific numerical value on the Richter scale. The magnitude is calculated as the base-10 logarithm of the amplitude of the largest seismic wave recorded by a seismograph, calibrated to the scale. The Richter scale is typically numbered 1 to 10, although it has no upper limit.
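The logarithmic definition means each whole-number step corresponds to a tenfold jump in measured amplitude. A minimal sketch, using an assumed illustrative reference amplitude (real magnitude formulas also correct for distance to the quake):

```python
import math

def richter_magnitude(amplitude_mm, reference_mm=0.001):
    """Magnitude as the base-10 log of trace amplitude relative to a
    reference level (0.001 mm here, an illustrative zero-point)."""
    return math.log10(amplitude_mm / reference_mm)

# Each whole-number step is a tenfold jump in amplitude:
print(richter_magnitude(1.0))    # 3.0
print(richter_magnitude(10.0))   # 4.0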

Earthquakes between 1 and 2 on the scale are small and unnoticeable, while earthquakes measuring 7 or higher can wreak significant havoc.


An earthquake’s intensity measures the strength of the shaking at a given location. Intensity is usually highest at the epicenter and diminishes with distance from it. Different scales are used to measure intensity, including the Modified Mercalli Intensity scale and the European Macroseismic Scale (EMS).


Because of the Richter scale’s inherent flaws, improved measures have been developed to record earthquakes more accurately, taking both magnitude and intensity into account.


Author: Amita Vadlamudi

The Smithsonian Institution


Considered a world-class research and educational complex, the Smithsonian Institution comprises 19 museums, 21 libraries, a large zoo, and research centers located across the US, with a focus on innovation and conservation. The Smithsonian group of institutions is one of the oldest and most renowned government-backed endeavors in the US.

Established in 1846, the Smithsonian is named after its founding donor, the British scientist James Smithson. The institution’s vision is to increase and diffuse knowledge. The original Smithsonian historical landmarks and architectural developments are all situated in Washington, D.C.

Currently, the Smithsonian museums and libraries house over 150 million artifacts and rare items. The National Zoological Park holds and protects around 3,000 animals of 390 species. Together they attract an estimated 30 million visitors annually, who are admitted free of charge.

The annual funding for the maintenance and research of this complex is approximately 1.2 billion dollars, with the majority of the funds allotted through the federal budget.

Moreover, the National Air and Space Museum, the National Museum of Natural History, and the Zoological Park are some of the most popular tourist destinations in the USA under the Smithsonian banner. The Smithsonian’s affiliations extend to over 160 museums across the States, with close ties to research centers in Puerto Rico and Panama as well.

Along with its numerous subsidiary organizations, the Smithsonian has also established a scholarly press that publishes two magazines: Smithsonian (monthly) and Air & Space (bimonthly).

Most recently, the Smithsonian adopted an Open Access policy that permits online circulation of, and access to, collections of over 2.8 million images and artifacts. The policy was implemented in February 2020 with the hope of broadening the institution’s educational reach.

The Hubble Space Telescope – Our Eyes Into the Unknown


Thirty years ago, NASA launched one of the greatest pieces of discovery equipment ever built into space, to gain a deeper understanding of the unknown universe that exists beyond our own world.

On April 24, 1990, the Hubble Space Telescope was carried aboard the Space Shuttle Discovery and launched into low Earth orbit about 340 miles above the surface, where it functions to this day. The telescope was a joint venture between NASA and the European Space Agency. Named after the astronomer Edwin Hubble, it was not the first of its kind, but it has proven itself with some of the greatest discoveries ever made about space and time.

The Hubble Space Telescope was built to see cosmic creations that could previously only be imagined. The powerful, clear images it produces have helped researchers study black holes, discover new planets, and witness the birth of stars. The telescope achieves this with a 7.9-foot (2.4-meter) primary mirror and instruments that observe in the ultraviolet, visible, and near-infrared regions of the electromagnetic spectrum.
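The mirror’s size is what buys that clarity: a standard optics rule of thumb (the Rayleigh criterion, not mentioned in the article) ties the smallest resolvable angle to mirror diameter and wavelength. A rough sketch, assuming a mid-visible wavelength of 550 nm:

```python
import math

D = 2.4              # Hubble's primary mirror diameter in meters (7.9 ft)
wavelength = 550e-9  # assumed: green visible light, in meters

# Rayleigh criterion: smallest resolvable angle, in radians
theta = 1.22 * wavelength / D

# Convert radians to arcseconds for an astronomer-friendly number
arcsec = math.degrees(theta) * 3600
print(round(arcsec, 3))  # ~0.058 arcseconds
```

Ground telescopes with bigger mirrors are limited by atmospheric blur to roughly an arcsecond; orbiting above the atmosphere lets Hubble approach this theoretical limit.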

The Hubble Space Telescope is unique in one sense: it is the only telescope built to be repaired, serviced, and upgraded by astronauts in space. It is because of this that the telescope has been able to stay in service for so long. The fifth and final servicing mission took place in 2009, when the crew attached a Soft Capture Mechanism to the telescope so that a robotic spacecraft can dock with it in the future; no further crewed servicing missions are currently planned.

One of Hubble’s main missions was to help determine the size and age of the universe. While the universe’s full extent remains unknown, data gathered from the Hubble Space Telescope allowed scientists to estimate that the universe is roughly 13.7 billion years old.
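That age estimate is closely tied to the expansion rate Hubble helped pin down: the reciprocal of the Hubble constant (the "Hubble time") gives a rough age scale for the universe. A back-of-the-envelope sketch, assuming an illustrative H0 of 70 km/s per megaparsec:

```python
# The "Hubble time", 1/H0, sets a rough age scale for the universe.
H0 = 70.0                   # Hubble constant, km/s per megaparsec (assumed)
KM_PER_MPC = 3.086e19       # kilometers in one megaparsec
SECONDS_PER_GYR = 3.156e16  # seconds in one billion years

hubble_time_gyr = (KM_PER_MPC / H0) / SECONDS_PER_GYR
print(round(hubble_time_gyr, 1))  # ~14.0 billion years
```

The real figure quoted in the article (13.7 billion years) comes from detailed cosmological models, but this simple ratio lands remarkably close.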

NASA credits the telescope with countless discoveries of moons, stars, and black holes. The agency says the telescope is working better today than it ever has, and it intends to keep it on that path for many years to come.


Look for articles by Amita Vadlamudi on some of her other sites, including:


Understanding SONAR



In simple terms, sonar refers to a system that transmits sound waves underwater, receives their reflections, and uses that information to measure underwater depths or to detect and locate submerged objects. Sonar is an acronym for SOund Navigation And Ranging.

A sonar device sends out sound waves at a steady frequency and then listens for the waves that return to the source. The data from the reflected sound waves is then relayed to operators through a display monitor or a loudspeaker.

Invented by Lewis Nixon in 1906, the technology grew in importance when it proved useful for detecting submarines during the First World War. Nixon is credited with inventing the first device to detect sound waves underwater, but the first device capable of detecting submarines was invented by Paul Langevin in 1915.

Initially, sonar systems relied on passively listening to underwater sounds without emitting any. Active sonar systems that both send and receive sound emerged by 1918 in the U.S. and Britain. The need to detect submarines drove further development of the technology, but active sonar never saw use during the First World War.

Active sonar systems came into use during the Second World War, which is also when the term “sonar” was coined.

Primarily, there are two types of active sonar systems. Short-range active sonar systems emit pings or pulses of constantly changing frequency. The receiver differentiates between the sound emitted and the sound received to derive information about a target’s distance.

Long-range active sonar systems rely on low-frequency pulses instead. They measure the time elapsed between the transmission of a pulse and the detection of its echo underwater.
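That time-of-flight measurement converts directly into range: sound covers the distance to the target and back, so the one-way distance is half the round trip. A minimal sketch, assuming a typical speed of sound in seawater of about 1,500 m/s:

```python
SOUND_SPEED_SEAWATER = 1500.0  # m/s, a typical assumed value

def echo_range(round_trip_seconds):
    """One-way distance (meters) to a target from a ping's round-trip time.

    The pulse travels out and back, so divide the total path by two.
    """
    return SOUND_SPEED_SEAWATER * round_trip_seconds / 2.0

print(echo_range(2.0))  # a 2-second echo -> 1500.0 m
```

The same halving logic underlies depth sounders: a ship’s echo off the seabed returning in 0.4 s implies roughly 300 m of water beneath the keel.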

Passive sonar systems do not send out sounds; they only listen. Typically used for military applications, these systems rely on a massive sonar database to accurately identify different classes of ships and maneuvers based on the sounds their movement makes.

While sonar systems are mainly used for military purposes, they have several other uses as well. Depth detection, diving safety, communications at sea and even commercial fishing nowadays rely on the technology due to its effectiveness in underwater detection.


Look for other technology related articles on this site by Amita Vadlamudi.

The Doppler Radar


A radar sends out radio waves that reflect off targeted objects. The reflected radio waves return to the radar, where they can be analyzed for the presence, direction, distance, and speed of those objects. Building on these principles, the Doppler radar specifically detects the speed of an object.

Doppler radar detects the speed of an object by exploiting the Doppler effect. As an object moves closer to the radar, the waves reflecting off it bunch together, raising their frequency; as the object moves farther away, the waves spread apart, lowering their frequency.

A Doppler radar system uses pulse-timing techniques to measure the range to a target. Initially used to detect fighter aircraft in the 1960s, Doppler radar is now widespread in meteorology, where it is used to observe and predict the weather.

Doppler radars in weather detection systems can detect both precipitation and wind. The radar emits a short pulse of radio waves. If the pulse strikes an object (raindrops, snowflakes, birds), the radar waves scatter in all directions, and a small portion is reflected back toward the radar.

Computers analyze the strength of the returning signal, the time it took to travel to the object and back, and the frequency shift of the pulse. From the change in the reflected pulse, they determine the velocity of the object toward or away from the radar, which in turn provides a measure of the wind speed.
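The frequency shift converts to speed through the standard radar Doppler formula: because the wave is shifted once on the way out and once on reflection, the shift is twice what a one-way trip would give. A minimal sketch with assumed illustrative numbers:

```python
C = 3.0e8  # speed of light, m/s

def radial_velocity(freq_shift_hz, transmit_freq_hz):
    """Speed of a target toward the radar implied by a two-way Doppler shift.

    v = c * df / (2 * f0); the factor of 2 accounts for the shift
    occurring both on the outbound wave and on its reflection.
    """
    return C * freq_shift_hz / (2.0 * transmit_freq_hz)

# A 1,000 Hz shift measured by an assumed 3 GHz weather radar:
print(radial_velocity(1000.0, 3.0e9))  # 50.0 m/s toward the radar
```

Note this gives only the radial component of motion; wind blowing across the beam produces no shift, which is why weather services combine scans from multiple angles.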

Essentially, a Doppler radar system allows us to “see” the wind, which enables the National Weather Service to observe many facets of weather conditions. This comes in particularly handy for detecting the formation of a tornado, which is why the National Weather Service can issue tornado warnings in advance.

Doppler radars can typically detect most precipitation within about a 90-mile radius, and intense rain or snow out to a wider radius of about 155 miles. Light rainfall or snowfall at long distances is less likely to be detected accurately.


A former Information Technology professional, Amita Vadlamudi currently spends time studying and researching into science and technology topics. This is one of Amita Vadlamudi’s many articles on tools and technologies used in the exploration of science.


Facts about Microscopes


A microscope is an optical instrument used to produce enlarged images of very small objects. The most common kind is the optical microscope, which forms images from light passing through lenses. An acoustic microscope employs high-frequency ultrasound to form images, while an electron microscope forms images from beams of electrons. The simplest type is the single-lens “simple microscope”; magnifying glasses and jeweler’s loupes are everyday examples.

Unlike a simple microscope with its single lens, a compound microscope has two lens systems. Its primary features are the objective, which holds the lens near the specimen, and the eyepiece, which holds the lens near the observer. A modern compound microscope also comprises a mirror or lamp that acts as a light source, a focusing mechanism, and a stage on which the object to be examined is placed. A compound microscope may also include a built-in camera for microphotography.
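Because the eyepiece re-magnifies the image formed by the objective, the two lens powers multiply. A minimal sketch using hypothetical lens powers (the article does not give specific values):

```python
def total_magnification(objective_power, eyepiece_power):
    """A compound microscope multiplies the powers of its two lens systems:
    the objective forms a magnified image, and the eyepiece enlarges it again."""
    return objective_power * eyepiece_power

# A hypothetical 40x objective paired with a 10x eyepiece:
print(total_magnification(40, 10))  # 400x
```

This multiplicative design is why a compound microscope so easily outperforms any single lens: two modest lenses yield magnification neither could achieve alone.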

During the 1st century AD, the Romans were working with glass and observed that holding a lens over an object made it look bigger. These lenses were referred to as “magnifiers” or “burning glasses”. Around the same time, Seneca described the magnification of objects seen through a globe of water. It wasn’t until around 1600 that lenses were commonly produced to be worn as spectacles. In the late 17th century, Antonie van Leeuwenhoek, a Dutch draper and scientist, became the first person to make and use a true microscope. His hand-held instrument consisted of a single convex glass lens mounted in a metal holder.

Leeuwenhoek became increasingly involved in science and tried various methods to improve the microscope. With his new, advanced instruments, he was able to see things no one had observed before. Owing to the invention of the microscope, scientists could see bacteria, yeast, blood cells, and many tiny organisms that were otherwise difficult to detect, enabling far more advanced and extensive research in science and medicine.

Different microscopes are built for different applications, so it is important to invest in a microscope that suits yours. A compound microscope is a high-powered instrument used for viewing very small specimens such as bacteria, germs, and water organisms. A stereo microscope, on the other hand, is a low-powered instrument used to view larger specimens such as insects, bugs, leaves, and rocks.

Eradication of the Crippling Disease Polio


Polio, short for poliomyelitis, is a disease caused by a virus that is transmitted from one person to another mostly through fecal matter. Less commonly, it can also spread through contaminated food or water. It is a crippling disease that mostly affects children.

The disease has existed for millennia, reaching back to ancient Egypt. This is evidenced by ancient Egyptian artwork showing a victim of the disease with a thin, deformed limb, supporting himself with a staff.

The virus first invades the nervous system and then destroys the nerve cells that control muscle function, especially in the leg muscles. Among those paralyzed by the disease, 5 to 10 percent die when their breathing muscles become immobilized.

The disease itself has no cure, but it can be prevented if the children are given polio immunization.

In 1952, the scientist Jonas Salk developed an injectable polio vaccine, and in 1961 Albert Sabin created oral vaccine drops that could confer immunity across whole communities. The vaccines were immensely successful in preventing the development and spread of polio in societies around the world.

The World Health Organization has set out on a mission to eradicate this debilitating disease around the world. There are four key strategies that have been designed to achieve complete eradication of the disease:

Infants should be given the polio vaccine during the first year of life.

National immunization days should be organized to provide children under 5 years of age with supplementary oral doses.

Areas at risk should be kept under surveillance for any circulating wild poliovirus.

“Mop-up” immunization campaigns should be conducted in areas that have been affected by the disease.

Although there are still a handful of countries where polio exists, doctors are hopeful that we will soon see the complete eradication of this crippling disease.


About the Author:

A former Computer Systems Engineer, Amita Vadlamudi is the author of many articles and blog posts. Science, technology, ancient history, astronomy, and nature are some of her favorite subjects.

A Look into Telescopes

The telescope is an optical instrument used to observe objects a significant distance away, especially those that cannot be seen directly by the naked eye. It is a collection of lenses and/or mirrors that allows the user to see faraway objects, by gathering more of their light or by magnifying them. Telescopes can operate at different levels of the electromagnetic spectrum, from radio waves to gamma rays.

The first optical telescope, according to some sources, was made by the Dutch lens-grinder Hans Lippershey in 1608. Around the same time, Galileo developed the first astronomical telescope: a tube containing two lenses of different focal lengths aligned on one axis.

Using this telescope, and the improved versions that followed, Galileo performed the first telescopic observations of the sky. He discovered lunar mountains, Jupiter’s four largest moons, sunspots, and countless stars of the Milky Way.

There are two basic types of telescopes: the refracting telescope and the reflecting telescope.

The refracting telescope uses two convex lenses, which bend light inward to a focus, making the object appear larger than it does to the naked eye. The biggest refracting telescope in the world is at the Yerkes Observatory of the University of Chicago.
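For a two-lens refractor of this kind, the angular magnification is simply the ratio of the two focal lengths, which is why telescope makers quote both numbers. A minimal sketch with hypothetical focal lengths (the article gives none):

```python
def telescope_magnification(objective_focal_mm, eyepiece_focal_mm):
    """Angular magnification of a two-lens refracting telescope:
    the objective's focal length divided by the eyepiece's."""
    return objective_focal_mm / eyepiece_focal_mm

# Illustrative values: a 900 mm objective paired with a 25 mm eyepiece
print(telescope_magnification(900, 25))  # 36.0x
```

Swapping in a shorter eyepiece raises the magnification, which is why a single telescope usually ships with several interchangeable eyepieces.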

Reflecting telescopes do not use lenses. Instead, they use mirrors to gather and focus the light from the object, reflecting the image to the user.


Author Information:

Amita Vadlamudi’s resume can be found at the following site:


Ms. Vadlamudi’s favorite images and favorite places can be found at the following sites:


Aurora Borealis: Nature’s Very Own Light Show


The essayist John Burroughs once famously advised readers to “go to nature to be soothed and healed”. His words hold true even today, as scientific studies suggest numerous health benefits from time in nature, from reducing depression to aiding healing.

While most people prefer a beach trip or a jungle safari, an often overlooked way to commune with nature is viewing a phenomenon like the Aurora Borealis in all its glory.

 What Is Aurora Borealis?

Aurora Borealis, or the Northern Lights, is a beautiful display of vibrant, glowing lights that takes place over the Earth’s Northern Hemisphere. It is caused by charged particles streaming from the sun that are caught by the Earth’s magnetic field and funneled toward the poles. The particles’ interaction with the gases of Earth’s atmosphere results in a spectacular display of light that has been described as “fire in the sky”.

The Northern Lights occur in varying colors ranging from red to violet. The most common colors, however, are green and yellow, caused by oxygen in the atmosphere interacting with the solar particles. Occasionally, shades of red, blue, and violet occur due to nitrogen.

The Lights also don’t appear in any one particular fashion; they may appear as a single strand in the sky or, when the concentration of solar particles is high, as a running stream of light.

 Where to Go to See the Northern Lights?

Considering the beauty of the sight, most tourists visit countries located in the Auroral Ring, like Iceland and Norway, to experience the Lights in their full majestic glory. Tour operators offer special Aurora Borealis packages that include visits to towns with minimal light pollution.

Minimal light pollution ensures that the weather phenomenon is viewed at its peak brightness. Besides Scandinavian countries, one can view the Lights in North America especially in northwest areas of Canada and Alaska where the Lights are quite noticeable.

The phenomenon is not limited to the Northern Hemisphere. When it takes place in the Southern Hemisphere, it is known as the Aurora Australis. Few people see these Lights, however, because the Antarctic’s harsh climate makes it an inhospitable place to visit. Although auroras occur from time to time throughout the year, the Aurora Borealis is best viewed in winter, from September to April, when the nights offer the most darkness.


Following are links to some of the other science and technology articles that Amita Vadlamudi has published:

The Impressive History of Computers

Computers are considered one of the greatest inventions ever conceived and created.

Their history goes back 2,500 years to the abacus, a simple calculator made from beads and wires that is still used in parts of the world today. A modern computer and the ancient abacus are vastly different, but they have one thing in common: both let us do calculations much more swiftly than the unaided human brain.

In 1642, the well-known French mathematician Blaise Pascal invented the adding machine, which used cogs on wheels to represent numbers. This was the first practical mechanical calculator, which Pascal designed to help his tax-collector father do his sums. The German mathematician Gottfried Wilhelm Leibniz improved on the design in 1671, using cylinders with teeth of increasing length around their edges instead of cogs. Leibniz is also credited with devising the binary number system.

In the 1830s, an Englishman named Charles Babbage designed what he called the “Difference Engine”, built from brass and pewter rods and gears. He also designed a device called the “Analytical Engine”. This machine included the four key components of modern computers:

  • Input: the way numbers were fed into the machine.
  • Memory: where numbers were stored while complicated calculations took place.
  • Processor: the number cruncher where calculations were carried out.
  • Output: the printing mechanism that delivered the results of the calculations.

Each of Babbage’s programmable machines was designed to use tens of thousands of precision gears. Augusta Ada Byron, an enthusiastic mathematician, encouraged Babbage’s ideas of making his machine programmable; she is even regarded as the world’s first programmer. In the end, little came of Babbage’s work, since none of his machines could be completed. They likely required far more financing than the British government could provide, so his ideas never took full form.

The American statistician Herman Hollerith built one of the world’s first practical calculating machines, created to compile census data. It came at a critical time for the US government, which, facing a surging population, had realized that completing the census by hand would be nearly impossible.

The real conceptual foundations of computing were contributed by Alan Turing, a Cambridge mathematician of high reputation. He wrote a groundbreaking paper in which he laid out a theoretical model of a computer, now known as the Turing machine: a simple information processor that works through a series of instructions, reading data, writing results, and then moving on to the next instruction.
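The read-write-move cycle described above can be sketched in a few lines. This is a toy illustration under assumed conventions (a blank cell is written "_", and each rule maps a state and read symbol to a write symbol, a head move, and a next state), not Turing’s original formalism:

```python
# A minimal sketch of Turing's model: the head reads a symbol, consults
# a rule table, writes a symbol, moves, and changes state until halting.
def run_turing(tape, rules, state="start"):
    cells, head = dict(enumerate(tape)), 0
    while state != "halt":
        symbol = cells.get(head, "_")                 # read the current cell
        write, move, state = rules[(state, symbol)]   # look up the instruction
        cells[head] = write                           # write the result
        head += 1 if move == "R" else -1              # move the head
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Example rule table: flip every bit, then halt at the first blank cell.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_turing("1011", flip))  # -> 0100
```

Despite its simplicity, this loop captures the essence of Turing’s insight: any computation a modern machine performs can, in principle, be reduced to such read, write, and move steps.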

Future inventors took inspiration from Turing and others and worked toward creating the present-day computer.

About the Author: A long-time computer professional, Amita Vadlamudi writes about computers and other technical subjects. Some of her other articles can be found on her Worky site. Amita Vadlamudi’s professional qualifications can be found on her resume site.