Mapping and predicting the weather

The Storm of the Century, a winter storm which struck the entire eastern half of the US on March 12, 1993.

Mapping and predicting the weather: a history

• Meteorology involves geography and cartography

• Prediction goes hand in hand with mapping: weather is best predicted spatially, on maps.

• Arriving at today's weather forecasting capability required innovations in how atmospheric data were collected, communicated, mapped, and analyzed.

Weather proverbs and folklore were the first means for predicting and understanding the weather.
http://www.americanfolklore.net/folktales/rain-lore.html
http://en.wikipedia.org/wiki/Weather_lore

Visibility of stars can indicate increasing amounts of moisture in the troposphere, the lowermost layer of the atmosphere where weather takes place. This has been known to many cultures and civilizations.

Red sky at morning, sailor take warning. Red sky at night, sailor’s delight.

Andean cultures used the brightness of constellations to predict the weather. Bright stars indicated more moisture in the troposphere (El Niño conditions), which could, several months later, lead to dry conditions (La Niña conditions). This knowledge allowed Andeans to adjust potato planting dates in anticipation of oscillating climatic conditions.

Weather proverbs and folklore also have a long history in the prediction and comprehension of the weather, although their accuracy is debatable.

• The 189-year-old publication “The Farmers' Almanac” claims 80% to 85% accuracy for the forecasts by its reclusive prognosticator, Caleb Weatherbee.

• Weatherbee prepares the forecasts two years in advance using a secret formula based on sunspots, the position of the planets, and tides.

• These forecasts do not meet scientific standards of weather prediction.

Weather logs were the first formalization of what would eventually become weather maps. Prior to the late 1700's, weather data were listed in weather logs rather than spatially analyzed with maps. Integrating weather logs among places and meteorologists was thwarted by the lack of a systematic way to collect and distribute data quickly.

First weather maps published in 1811 by Brandes.

Because it took a lot of time to assemble and plot the data, he could not predict the weather; only hindcast it.

Brandes hindcasted conditions in Europe for each day of the year in 1783. He was the first to plot and contour isobars, lines of equal air pressure. He plotted air pressure values observed across the continent and then drew in the contours by hand.

Sketches from Morse’s notebooks showing early design of the telegraph (1840’s)

The invention of the telegraph aided in the rapid dissemination of weather data. Now weather observations and data could move faster from place to place. Warnings of severe weather could be communicated along telegraph wires. However, knowledge of how large weather systems travelled was poorly developed.

A rapidly growing network of telegraph stations evolved, shortly after 1846, to warn Atlantic cities of storms developing to the west, in the Midwest and South. Weather gauge (instrumental) information could be telegraphed to Washington D.C., where maps were constructed. By 1860 the network had 45 stations, reaching as far west as St. Louis.

Signal Corps telegraph office network, (War Department, post-Civil War)

The network of telegraph stations increased under the US Weather Bureau administration, helped by the completion of the transcontinental railroad.

US Weather Bureau forecast headquarters, Washington DC, early 1900’s

Production and standardization of the first national weather maps in 1880’s

The Weather Bureau was transferred to the Department of Agriculture, and a network of local and regional forecast stations was constructed. In other words, the office in Washington DC was decentralized. Telegraph was still the main means of communicating weather data. However, weather over the ocean remained an unknown…

Institutional evolution of weather prediction

• US Weather Bureau - US War Department (1870)
• US Weather Bureau - US Department of Agriculture (1891)
• US Weather Bureau - US Department of Commerce (1940)
• The US Weather Bureau was renamed the National Weather Service (1970) and placed under the newly created National Oceanic and Atmospheric Administration (NOAA) within the US Department of Commerce

Development of wireless radio technology, early 1900's. Wireless radio communications were pioneered during WWI and used to transmit weather conditions and forecasts to and from ships at sea. Airplanes remain particularly critical today for making precise measurements of hurricane conditions and positions.

View of hurricane eye from airplane.

But what was going on up high in the atmosphere? Initial upper-atmospheric measurements were made with kites. Note the tornado in the background (and the likely presence of lightning). Not a safe work environment.

Bi-plane with weather instruments, 1934. Planes were expensive to operate for weather data collection, and safety concerns certainly remained.

Balloons could be launched with people and weather instruments on board, or better, safer, and less expensive….

..by tracking unmanned balloons, meteorologists could gain an idea of how wind speeds and directions change moving up through the troposphere.

Image at right shows an airplane dropping a hurricane warning notice to a crew of sponge divers off Florida. Today, airplanes drop dropsondes, which record weather conditions during their fall to the surface. These data are transmitted back to the forecast office. Without dropsondes, our knowledge and prediction of hurricane track and intensity would be greatly diminished.

A radiosonde is a balloon released from the surface with an attached weather recording and transmitting instrument. A dropsonde is a weather recording and transmitting instrument dropped from a plane. The instrument shown above is what records and transmits weather info on a radiosonde. It is attached underneath the balloon.

Radiosondes, dropsondes, and later, satellites, led to the mapping of the upper atmosphere. Shown below is the location of the polar jet stream. Colors indicate wind speed. The altitude of the polar jet stream ranges from 10-15 km (about 6-9 miles). This information could not be collected without these technological innovations.

A midlatitude cyclone (left and above) encompasses a low pressure center and attached warm and cold fronts. The cold front is shown in blue, the warm front in red. The entire system rotates counterclockwise (in the Northern Hemisphere).

Development of polar front theory (1922), which described mid-latitude cyclones, a major weather maker. Pioneering work by Bjerknes (left) contributed to our conceptual and mathematical understanding of weather prediction.

Integration of upper-level conditions, surface conditions, and polar front theory are essential to forecasting the weather. The low pressure at the surface is a mid-latitude cyclone. Note how the winds in it are coupled to upper-level flow.

Skew-t plot for Tallahassee

Red line shows air temp and blue line shows dew point temp

x-axis: temperature; y-axis: pressure level

At what pressure level are clouds likely to form?

Invention of the skew-t plot from existing graphical methods (late 1940’s)
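A rough answer to the question above can be sketched numerically. This is a hedged illustration, not an operational method: it uses the rule of thumb that the lifting condensation level (LCL), where rising air cools to its dew point and clouds begin to form, sits roughly 125 m above the surface per degree C of dew point depression, and an assumed 8 km scale height to convert that altitude to a pressure level.

```python
# Hedged sketch of the idea behind the skew-t question: clouds form near the
# lifting condensation level (LCL), where rising air cools to its dew point.
# Rule of thumb: LCL height ~ 125 m per degree C of (T - Td). Illustrative only.
import math

def lcl_pressure_hpa(temp_c, dewpoint_c, surface_p_hpa=1000.0):
    """Approximate pressure level (hPa) where clouds are likely to form."""
    h_lcl_m = 125.0 * (temp_c - dewpoint_c)  # rule-of-thumb LCL height (m)
    scale_height_m = 8000.0                  # assumed atmospheric scale height
    return surface_p_hpa * math.exp(-h_lcl_m / scale_height_m)

# e.g. surface temperature 30 C and dew point 20 C -> LCL near 1250 m, ~855 hPa
print(round(lcl_pressure_hpa(30.0, 20.0)))
```

On a skew-t plot this corresponds to the level where the temperature and dew point traces converge.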

Finite difference technique of forecasting (Richardson, 1922)

Grid cells used by Richardson to map his prediction of the air motions that determine weather patterns. Richardson’s work pioneered a method for mathematically and cartographically predicting the weather.

To understand the finite difference technique, you have to understand some basic principles about weather prediction.

At its simplest, weather prediction is predicated upon anticipating the motion of the atmosphere.

Various air flow geometries of atmospheric motion are associated with certain weather conditions.

Rising air is associated with cloudy conditions and low pressure. Sinking air is associated with clear conditions and high pressure.

If you can predict the relative motion of the atmosphere, moving up or down, a few days in advance, you can predict the likelihood of cloudy or clear conditions.

Based on these descriptions, what is the general motion of air in Montana and Idaho? (Rising). What is the general motion of air over the mid Atlantic states? (Sinking)

3D block of the atmosphere showing complexity of flow

Region of rising air: cloudy conditions and low pressure. Region of sinking air: clear conditions and high pressure.

Steps in the finite difference technique:

1. Collect data from weather stations. In this case we will collect air temperature.

2. Contour the data so that temperature can be estimated for all points on the map.

3. Draw a grid over the entire forecast area and assign a discrete value of temperature to the center of each grid cell. The value of temperature is estimated from the contour map.

4. Repeat this same process of contouring, gridding, and assigning a value to a point at the center of each grid cell for other weather variables, like air pressure.

5. Apply equations of fluid motion and thermodynamics to predict the motion of air to and from grid cells based on their temperature, air pressure, and other weather variables (dew point, wind speed and direction).

These calculations reveal the general movement of air and thus the regions of rising (potentially cloudy) and sinking (potentially clear) atmospheric motion.
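The steps above can be sketched in miniature. The toy model below is a hypothetical 1D reduction of the finite difference technique: temperature values assigned to grid cell centers are marched forward in time under a constant wind using a simple upwind difference. Richardson's actual scheme used the full equations of motion in 3D; this sketch keeps only the grid-and-difference idea, and all the numbers are illustrative.

```python
# Toy 1D sketch of the finite difference technique (illustrative only):
# temperature assigned to grid cell centers is advected by a constant wind.
import numpy as np

nx = 50            # number of grid cells
dx = 100e3         # grid spacing: 100 km per cell
dt = 600.0         # time step: 10 minutes
u = 10.0           # constant westerly wind, m/s

# initial condition: a warm blob centered on cell 10 (values in deg C)
T = 15.0 + 10.0 * np.exp(-((np.arange(nx) - 10) ** 2) / 20.0)

for _ in range(100):  # integrate forward ~17 hours
    # upwind finite difference of dT/dt = -u * dT/dx (stable: u*dt/dx = 0.06)
    T[1:] = T[1:] - u * dt / dx * (T[1:] - T[:-1])

# the warm blob has been carried roughly 600 km (6 cells) downwind
print(int(np.argmax(T)))
```

The same difference-on-a-grid idea, applied to pressure, moisture, and wind in three dimensions, is what modern numerical models automate.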

Richardson’s method is the basis for the 3-dimensional numerical weather prediction models used today.

After WWII, surplus radar was donated for weather forecasting.

Early radar image of thunderstorms in Florida

One of the first radar images of hurricane structure, 1946

This was state of the art radar-based hurricane tracking in 1960

Now the biggest hurdle to making weather maps and forecasts was computational: the calculations were too tedious to do by hand. ENIAC, one of the first general-purpose computers (completed in 1945), was used to make forecasts.

Vacuum-tube technology (eventually replaced by transistors and integrated circuits, the “chip”) was used in ENIAC and early computers. It was unreliable and prone to overheating.

Computer punch card, eventually replaced by magnetic tape, then floppy disks, zip disks, compact discs, and DVDs.

IBM 7090 (1965) used in weather forecasting

Refinement of integrated silicon-chip based computing, last half of 20th century.

The “chip” was developed in the late 1950's and 1960's.

Launch of the first weather satellite

Launch of first weather satellite, TIROS 1 (1960).

Real time satellite tracking

Further development of quantitative models of weather prediction

• New models possible because of faster computers, better data integration and communication

• Predictive model types:
– Global model
– Nested grid model
– Ensemble forecast

A global predictive model applies the finite difference technique over a large area, with very large grid cell sizes. This allows meteorologists to predict the weather by looking “upstream” to see what kinds of conditions are coming their way. Remember, weather systems tend to flow from west to east. One of the weaknesses of this model is that it cannot make local weather predictions: the grid cells are too large. It is also very computationally intensive.

A 24-hour global forecast system (GFS) prediction of potential precipitation. The map shows the predicted weather for 8 pm Thursday night Eastern Standard Time (which is midnight, or 0Z Friday on the Prime Meridian). I downloaded this map Thursday morning, March 13th. As this is a 24-hour prediction, the model was run Wednesday night, 8 pm Eastern Standard Time. (Also see this link: http://www.wunderground.com/modelmaps/maps.asp?model=GFS&domain=TA)

A nested grid predictive model also uses the finite difference technique, but it combines the large scale global grid with a smaller grid with much finer grid cells. This allows forecasters to see and predict the big picture, while also accounting for local conditions. The smaller grid cells in the nested grid allow more detail for local forecasts.

A 24-hour nested grid predictive model (NGM) showing areas of potential precipitation. This is the prediction for 8 am Friday Eastern Standard Time (which is noon, or 12Z Friday on the Prime Meridian). I downloaded this map Thursday morning, March 13th. As this is a 24-hour prediction, the model was run Thursday morning, 8 am Eastern Standard Time.

Hurricane forecasters look at how models agree and disagree with each other. Each line below is a predicted track from a different computer model (the NGM and GFS models are two of them). By looking at how the models agree and disagree, a measure of certainty in the track can be estimated. If the models disagree to a large extent, there is greater sensitivity to local conditions: small differences in the initial conditions input to the models can lead to vastly different forecasts. The result is a much harder-to-predict hurricane.

Which hurricane is more sensitive to the local conditions surrounding it? In other words, which storm, when weather variables are fed into the forecast models, is more dependent upon small differences in them? The white envelope indicates the potential 1-3 day area in which the eye of the hurricane may pass.

The atmospheric and oceanic conditions with Ivan (left) are much more difficult to predict, based on the shape of the white cone. Very localized variations in air temperature, ocean temperature, or winds around the storm could cause Ivan to shift tracks over a very wide area. With Noel (right), any local variability in air and ocean temperatures or winds is unlikely to have any effect on the outcome of the storm track. You could enter a range of initial conditions in the models for Noel and the models would likely agree with each other.
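The model-agreement idea can be reduced to a toy calculation. The tracks below are invented for illustration (not real NGM or GFS output): each "model" gives storm positions at successive forecast times, and the spread of positions across models at each time serves as a crude certainty measure.

```python
# Hypothetical sketch: measuring forecast-track agreement among models.
# Each "model" gives (lat, lon) positions at successive forecast times;
# the spread of positions at each time is a rough measure of certainty.
import numpy as np

tracks = np.array([  # three invented model tracks, four forecast times each
    [(25.0, -80.0), (26.0, -81.0), (27.0, -82.0), (28.0, -83.0)],
    [(25.0, -80.0), (26.2, -80.8), (27.5, -81.5), (29.0, -82.0)],
    [(25.0, -80.0), (25.8, -81.2), (26.5, -82.5), (27.0, -84.0)],
])

mean_track = tracks.mean(axis=0)         # consensus (ensemble-mean) track
spread = tracks.std(axis=0).sum(axis=1)  # lat+lon std dev at each forecast time

# spread grows with lead time: later positions are less certain
print(spread)
```

A wide cone of uncertainty, as with Ivan, corresponds to a rapidly growing spread; a narrow cone, as with Noel, to a spread that stays small.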

Predicting hurricane intensity is more difficult than predicting its track. This is because the conditions that determine intensity exist at much smaller scales and are difficult to measure within and around the hurricane. These parameters are a challenge to integrate into track forecast models.

Hurricane Charley, 2004

The detection of “hot tower” thunderstorms in hurricanes is thought to herald an increase in strengthening (an example of remote sensing).

NEXRAD Doppler radar coverage for the US, another geographic-meteorologic innovation

Outside of one of the first Doppler radar units, late 1970’s

Inside of one of the first Doppler radar units, late 1970’s

Doppler radar can estimate two parameters:

Reflectivity - a measure of the size of the particles in the air, which can not only tell you the type of precipitation (rain, sleet, snow, freezing rain) but also its relative intensity.

Rotational velocity - the speed and direction of the air relative to the radar station

To understand how Doppler radar works….

Interface for selecting Doppler radar reflectivity and velocity from across the US:http://www.rap.ucar.edu/weather/radar/

Tallahassee NationalWeather Service Doppler

…..you have to understand the Doppler effect.

The sound of a horn from a moving train will sound different for an observer in front of the train compared to someone the train has already passed.

This is the Doppler effect applied to sound. The wavelengths of sound are compressed for the person out in front of the moving train.

Doppler radar bounces microwave radiation off the atmosphere and the clouds within it.

By looking at how the return wavelengths have changed, Doppler can detect three aspects of the weather:

1. The type of precipitation, (rain, sleet, snow, freezing rain) because their differences in size, shape, and composition will bounce back a different signal.

2. Potential intensity of precipitation. For example, light and heavy rain will bounce back different signals.

3. The rotational movement of the wind in terms of its direction to (inbound) or away (outbound) from the radar station and at what speed.
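The Doppler relation behind item 3 can be written down directly. A minimal sketch, assuming a 2.85 GHz S-band transmit frequency (typical of US weather radars, but an assumption here): a target moving at radial speed v shifts the returned frequency by df = 2vf/c, the factor of 2 arising because the wave travels out and back.

```python
# Hedged sketch of the Doppler relation a radar uses:
# a target at radial speed v shifts the return frequency by df = 2*v*f/c.
C = 3.0e8   # speed of light, m/s
F = 2.85e9  # assumed S-band weather radar frequency, Hz

def freq_shift(v_mps: float) -> float:
    """Frequency shift (Hz) produced by a target at radial speed v (m/s)."""
    return 2 * v_mps * F / C

def radial_velocity(freq_shift_hz: float) -> float:
    """Recover radial speed (m/s) from a measured frequency shift (Hz)."""
    return freq_shift_hz * C / (2 * F)

# a 50 m/s inbound wind shifts a 2.85 GHz signal by 950 Hz
shift = freq_shift(50.0)
print(round(shift), round(radial_velocity(shift), 1))
```

Inbound motion raises the return frequency (positive shift); outbound motion lowers it, which is how the radar separates winds moving toward and away from the station.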

Doppler reflectivities showing precipitation type (top) and rainfall intensities (bottom).

These dBZ values equate to the approximate rainfall rates indicated in the table at right.

These are hourly rainfall rates only and are not the actual amounts of rain a location receives. The total amount of rain received varies with intensity changes in a storm as well as the storm's motion over the ground.
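The dBZ-to-rainfall conversion in the table can be approximated with the classic Marshall-Palmer Z-R relation, Z = 200 R^1.6. This is a widely used textbook approximation, not the exact table values: operational radars use coefficients tuned to precipitation type.

```python
# Hedged sketch: converting radar reflectivity (dBZ) to an approximate
# hourly rainfall rate with the Marshall-Palmer relation Z = 200 * R**1.6
# (Z in mm^6/m^3, R in mm/h). Operational radars use tuned coefficients.
def dbz_to_rain_rate(dbz: float) -> float:
    """Approximate rainfall rate (mm/h) for a reflectivity in dBZ."""
    z = 10.0 ** (dbz / 10.0)           # dBZ is a log scale: dBZ = 10*log10(Z)
    return (z / 200.0) ** (1.0 / 1.6)  # invert Z = 200 * R**1.6

for dbz in (20, 30, 40, 50):
    print(dbz, round(dbz_to_rain_rate(dbz), 1))  # ~0.6, 2.7, 11.5, 48.6 mm/h
```

Note how each 10 dBZ step multiplies the rate by roughly a factor of four, which is why the color scale jumps so sharply from light to heavy rain.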

Doppler rotational velocities

With Doppler radar, the rotational component of winds can be detected. This is particularly useful for detecting tornadoes. The pattern of a tornado in Doppler is called a hook echo.

Doppler can detect winds that are moving toward (inbound) the radar and away from (outbound) the radar. It can also provide the wind speeds.

Hook echo shown in white box. A hook echo is indicative of tornadic circulation. When a hook echo is detected, a tornado warning is announced to the public.

In a hook echo, adjacent pixels will have sharply different directions, typically with one inbound and one outbound. This permits the detection of rotation in a tornado. Doppler is also useful for detecting the rotational component in the eye of a hurricane.

Doppler is not perfect. Some parts of the US are not covered completely by Doppler radar. However, Doppler radar coverage for most of the lower 48 overlaps.

The area right around the Doppler is the cone of silence. Weather phenomena in this area may be hard to detect.

At greater distances from the radar, the beam may overshoot severe weather and miss potentially dangerous conditions closer to the ground.

Development of Internet 1960’s - today

• Emergence of the Internet in the early 1990’s revolutionized the dissemination and analysis of weather data.

• Weather stations could be connected in real time, allowing instantaneous data feeds, and rapid dissemination of forecasts and warnings.

National Weather Service offices, including Tallahassee NWS data and forecasting network

Map of the Internet, 2007

Surface observation map

Each symbol is called a station model and communicates information about wind direction/speed, temp, air pressure, dew point, and cloud cover

Post-1990 to present

• Faster computer processor speeds, larger hard drive storage capacities, and rapid data integration through the Internet encouraged the development of:
– AWIPS (Advanced Weather Interactive Processing System)
– Cartographic animation
– Global climate models
– Mesoscale forecasting models
– Integration of dynamical and statistical models
– Privatization of weather forecasting

AWIPS

Integrated weather monitoring and forecasting system comprised of:

Network of Doppler radar

Automated weather observation stations

Data stream of satellite imagery and numerical forecast models

Global climate models (GCM's) are being used to predict global and hemispheric changes in temperature and precipitation decades into the future.

Mesoscale forecasting models

Mesoscale numerical models can forecast the weather at smaller spatial and temporal scales.

http://www.mmm.ucar.edu/mm5/

In a statistical model, the past is used to forecast the future.

For example, with hurricanes, the records of when and where hurricanes formed in the past provide the raw data about how strong they became as they tracked across the ocean.

This historic data can be used to assign probabilities that a present-day tropical storm, developing near the same location at the same time of year, would have characteristics (wind speeds, air pressure in the eye) of the older tropical storms.

Statistical models are combined with dynamical (grid-based) models to make a joint forecast of hurricane track and intensity.
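The statistical (analog) idea above can be sketched as a toy lookup. The storm records and the `analog_peak_winds` helper below are invented for illustration: past storms that formed near the same place at the same time of year are collected, and their peak winds become the raw material for probability estimates about the new storm.

```python
# Hypothetical sketch of the statistical (analog) method: use past storms
# that formed near the same place and season to estimate a new storm's
# likely peak wind. The records below are fabricated sample data.
import math

# (lat, lon, month, peak_wind_mph) for past storms -- invented values
past_storms = [
    (12.0, -45.0, 9, 120), (13.0, -44.0, 9, 95), (11.5, -46.0, 8, 140),
    (12.5, -45.5, 9, 105), (25.0, -90.0, 7, 80),  # far-away storm, excluded
]

def analog_peak_winds(lat, lon, month, max_dist_deg=3.0):
    """Peak winds of past storms within max_dist_deg and +/- 1 month."""
    analogs = []
    for plat, plon, pmonth, wind in past_storms:
        dist = math.hypot(lat - plat, lon - plon)  # crude degree distance
        if dist <= max_dist_deg and abs(month - pmonth) <= 1:
            analogs.append(wind)
    return analogs

winds = analog_peak_winds(12.0, -45.0, 9)
print(len(winds), sum(winds) / len(winds))  # analog count and mean peak wind
```

A real statistical model would turn the analog sample into a probability distribution rather than a single mean, but the selection-by-location-and-season step is the core of the approach.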

Private forecasting

Weather “wars”: sometimes called TV radar wars or Doppler wars, these are a kind of sensationalist journalism primarily concerning weather news. The “war” is typified by competing local TV news stations engaging in technological one-upmanship to increase viewership.

http://en.wikipedia.org/wiki/Weather_wars
