Post on 01-Apr-2015


Modelling in the Natural Sciences

I will touch upon the essentials of modelling in those Sciences which rely on the application of the scientific method as the exclusive means to update knowledge. Although I was thinking in terms of modelling in Physics when creating this material, I have kept the content general enough to apply to all Natural Sciences (e.g., Chemistry, Biology, the Earth Sciences, etc.).

Let us start with the terms ‘Theory’ and ‘Model’. The term ‘Theory’ is usually reserved for something ‘grand’ or adequately general; ‘Model’ applies to something more ‘humble’ or restricted. Herein, the word ‘Model’ will represent both notions.

Why do we want to model natural processes or phenomena at all? (To be accurate, we model ‘the mechanisms underlying or creating the phenomena’.)

A few answers:
a) Sheer curiosity
b) A pursuit of order
c) Compression of information
d) Need for predictions

After having created a model and obtained estimates of its parameters, one is able to ‘predict’ future events (e.g., the eclipses of stellar objects, the amount of traffic on a specific road next Monday morning, an intense storm in the Bay of Biscay tonight, the collapse of a bank within the next few weeks, etc.).
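This ‘estimate the parameters, then predict’ workflow can be sketched in a few lines. The following is a minimal sketch only, assuming a straight-line model y = a + b·x; both the model form and the handful of past ‘measurements’ are invented for illustration:

```python
# Least-squares fit of y = a + b*x to invented past 'measurements',
# followed by a 'prediction' at a not-yet-measured point.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 7.0, 9.1]  # roughly y = 1 + 2x, plus noise

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Closed-form least-squares estimates of the two parameters:
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
    / sum((x - mean_x) ** 2 for x in xs)
a = mean_y - b * mean_x

# 'Predict' the outcome of a future measurement at x = 5:
prediction = a + b * 5.0
print(round(a, 2), round(b, 2), round(prediction, 2))  # → 1.04 2.01 11.09
```

In a real analysis one would also propagate the uncertainties of the parameter estimates into an uncertainty of the prediction; that step is omitted here for brevity.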

There are three basic types of modelling:
a) Theoretical modelling, in which the microscopic aspects of the phenomenon are addressed.
b) Phenomenological modelling, in which the macroscopic aspects of the phenomenon are addressed.
c) Empirical modelling, in which no effort is made towards understanding the phenomenon at either (i.e., microscopic or macroscopic) level; the aim is merely to describe patterns in the data relating to the phenomenon.

These three types of modelling are equally appropriate for describing past measurements, as well as for extrapolating into the future. However, it is generally accepted that theoretical modelling provides the deepest insight into the phenomenon under study, whereas empirical modelling provides the shallowest.

The most common way to test models is via experimentation. The experimental results are confronted with the corresponding model predictions. A dedicated statistical analysis determines the probability (p-value) that the observed deviations (between the experimental results and the model predictions) are due to random effects (random fluctuation). If the resulting p-value is ‘large enough’, the model is assumed to have survived the test.
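As a minimal sketch of such a test, consider the simplest case: a single measurement with Gaussian uncertainty sigma compared against a model prediction. The two-sided p-value is then the probability that random fluctuation alone produces a deviation at least as large as the one observed. The helper name and the numbers below are illustrative assumptions, not taken from the text:

```python
import math

def p_value(measured, predicted, sigma):
    """Two-sided Gaussian p-value: the probability that a deviation at
    least this large arises from random fluctuation alone."""
    z = (measured - predicted) / sigma
    return math.erfc(abs(z) / math.sqrt(2.0))

# A 1-sigma deviation: p ~ 0.32, 'large enough' -> the model survives.
print(round(p_value(10.3, 10.0, 0.3), 3))  # → 0.317

# A 5-sigma deviation: p ~ 6e-7 -> hard to blame on random effects.
print(p_value(11.5, 10.0, 0.3) < 1e-5)     # → True
```

With many measurements one would instead form a chi-squared statistic over all data points and obtain the p-value from the chi-squared distribution; the one-point Gaussian case above is just the simplest instance of the same logic.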

The acceptance requirement for a newly introduced model is that it accounts for all past (correct) measurements; no one would ever take the trouble of introducing a model which does not satisfy this criterion, save for didactic or entertaining reasons (or in order to show a model’s inadequacy in describing a phenomenon).

Future measurements (i.e., obtained after the relevant model has been established) may be compatible or incompatible with the corresponding model predictions (obtained from all the correct and relevant past measurements).

As long as new measurements are not in conflict with the corresponding model predictions, the model is usually not questioned.

If an ‘out of place’ measurement appears, its correctness is first investigated. Many things may go wrong in an experiment, e.g., a flaw during the data-acquisition phase (defective equipment, drifting calibration) or an occasional mistake during the processing of the raw experimental results.

Once the correctness of the discordant measurement is established, the existing model (assumptions, calculations, past analyses) is thoroughly re-examined. Finally, the model is either modified (in such a way as to ‘accommodate’ the newly obtained data) or its validity/application is proclaimed restricted. As scientists do not like models which are valid only in small regions of the free variables involved, ‘snipped-off’ models give way (sooner or later) to more general ones, which are built in the light (and exploit the potential) of recent experimental and theoretical developments.

What do I want to say with all that?

To establish a model, evidence must be provided that the model is able to account for all existing (correct) measurements. (Being constructive is difficult.)

To refute a model, evidence must be provided that the model is not able to account for even one correct measurement. (Being destructive is easy.)

At the end of the day, one can only certify that a model is wrong; one can never prove that a model is correct!

Most people have a particularly hard time coming to terms with this very simple fact, which nevertheless lies at the heart of modelling.

Contrary to what most people believe, the Natural Sciences are not ‘static’. Although they use the mathematical framework in their methods (e.g., in processing new information and updating knowledge), none of them should be thought of as possessing the rigidity and rigorousness of Mathematics (which is a formal Science).

A good physical model today may become outdated tomorrow. Nature is impervious to human understanding; ‘She’ follows Her courses irrespective of the way we attempt to figure out how She behaves, by trying to unravel the mechanisms behind the various phenomena around us. With all our Theories and Models, we can only approximate Her functions. Although our approximate solutions are expected to become more accurate with time, it is an illusion to think of them as ‘exact’ reproductions of the corresponding natural phenomena.

Thank you for watching!

All photos have been taken from the ‘Image of the Day’ Gallery on the NASA web page:

http://www.nasa.gov/multimedia/imagegallery/iotd.html
