
Recovering Color from Black and White Photographs

Sven Olsen, University of Victoria
Rachel Gold, University of Victoria
Amy Gooch, University of Victoria
Bruce Gooch, University of Victoria

    Abstract

This paper presents a mathematical framework for recovering color information from multiple photographic sources. Such sources could include either black and white negatives or photographic plates. The paper's main technical contribution is the use of Bayesian analysis to calculate the most likely color at any sample point, along with an expected error value. We explore the limits of our approach using hyperspectral datasets, and show that in some cases, it may be possible to recover the bulk of the color information in an image from as few as two black and white sources.

    1. Introduction

The modern tradition of photography started in the 1820s. Before Kodak's introduction of Kodachrome color reversal film in 1935, color image reproduction was a laboratory curiosity, explored by a few pioneers such as Prokudin-Gorskii [1]. Tens of thousands of black and white plates, prints, and negatives exist from the hundred-year period between the birth of modern photography and the widespread introduction of color film [15]. Currently, it is only possible to create color images from early black and white data sources on those rare occasions when the original photographer contrived to create a color image by taking matched exposures using red, green, and blue filters.

In this paper, we explore how variations in spectral sensitivity between different black and white sources may be used to infer color information to within a known margin of error. We assemble a toolkit that scholars can use to recover color information from black and white photographs in historical archives.

The paper's main technical contribution is the use of Bayesian analysis to calculate the most likely color at any sample point, along with an expected error value. This analysis requires some knowledge of the types of light spectra likely to occur in the scene.

We believe that, for many antique photographs, the introduction of accurate color information will greatly enhance their visual quality. Color recovery may prove invaluable in understanding archived materials of historical, cultural, artistic, or scientific significance.

    2. Related Work

Both the computer graphics and photography communities have long investigated techniques for colorizing black and white photographs. Previous research in the computer graphics community added color to black and white images either by transferring color from a similar color image to the black and white image [13, 17] or by using color-by-example methods that required the user to assign color to regions [7, 14].

Colorizing old black and white photographs can be as simple as painting colors over the original grey values. While difficult to do by hand, with the help of the right software tool a convincingly colorized version of a black and white photograph can be created, provided that a user selects reasonable colors for each object. Software capable of quickly colorizing a greyscale image given minimal user interaction has been a topic of recent research interest, pioneered by Levin et al. [7]. Nie et al. [9] presented a more computationally efficient formulation. Qu et al. [12] presented a colorization system specialized for the case of colorizing manga drawings, while Lischinski et al. [8] generalized the idea to a broader category of interactive image editing tools. Such colorization techniques naturally complement the process of color recovery. Given an appropriately chosen sparse set of recovered colors, colorization can be used to create complete color images from any of the black and white originals (see also Figure 2).

In the field of remote sensing, it is sometimes necessary to construct color images from multispectral image data. For example, color images of the Martian landscape were generated from the 6-channel multispectral image data returned by the Viking lander [10, 11].

[Figure 1: four panels. (a) Effective sensitivity curves for two B&W sources: KODAK Tri-X Pan Film and KODAK Tri-X Pan Film + Wratten 47, plotted as response (ergs/cm²) versus wavelength (nm). (b) Initial colors. (c) Corrected colors. (d) Ground truth.]

Figure 1: Our mathematical framework allows the recovery of color with known error thresholds from as few as two images. This example uses a hyperspectral dataset to create two simulated photographs using KODAK Tri-X Pan film with and without a blue filter (Wratten 47). The first image shows the effective sensitivity curves used for creating the two simulated images used as source data. The remaining three images show the default recovered colors, the recovered colors with statistical error correction, and ground truth color values for the dataset.

The basis projection algorithm used to create those color images is nearly identical to the projection technique that appears in our own methods section (see Equation (4)). However, in our own framework, this initial result can be significantly improved using natural light statistics.

A technique developed by the photographic community combines several registered black and white images taken in rapid succession with red, green, and blue filters to create a color image. This method dates back to at least the early 1900s, when Sergey Prokudin-Gorskii set out to document the Russian Empire using a pioneering camera and red, blue, and green filters to capture three different images of the same scene. The slides were then projected through red, green, and blue filters of a device known as a magic lantern, which superimposed the images onto a screen, producing a color projection [1]. While the projection equipment that Prokudin-Gorskii used to display his photographs has been lost, it is possible to extract color information from the surviving glass plates. Recovering color from Prokudin-Gorskii's photographs is relatively straightforward, because we can assume that the red, blue, and green channels in an output color image are just scaled versions of the red, blue, and green filtered photographs. In order to create color images, scholars working with the Library of Congress adjusted the weightings of each color channel until the resulting image looked sensible. In this work we provide a solution to the general problem of recovering color from an arbitrary collection of black and white photographs.

    3. Methods

    3.1. Terminology and Background

    3.1.1 Terminology

In this paper, we use the convention of referencing per-photograph variables with an i subscript. To distinguish between the different dimensions of a color space, we use a k subscript.

Given a spectral sensitivity curve, $G(\lambda)$, we define its ideal response to incoming light having radiance distribution $V(\lambda)$ to be,

$$\int G(\lambda)\, V(\lambda)\, d\lambda. \tag{1}$$

Under certain assumptions, equation (1) can be used to model photoreceptors with spectral sensitivity $G(\lambda)$. In order for equation (1) to predict the response of a photoreceptor, it must be the case that the incoming light does not change as a function of space or time. The activation level of a photoreceptor is influenced by the amount of time that it is exposed to light; thus, for Equation (1) to predict a photoreceptor's measurements, we must assume that the receptor will always be exposed for a fixed length of time, and that the sensitivity of the photoreceptor does not change over time. These assumptions are reasonable in the case of a CCD camera with fixed exposure time, but they are overly restrictive as a model for film photography.
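To make Equation (1) concrete, the following sketch (in Python, with NumPy) evaluates the ideal response numerically for sensitivity and radiance curves sampled at discrete wavelengths. The 10 nm sample spacing and the Gaussian example curves are illustrative assumptions, not data from the paper.

import numpy as np

def ideal_response(G, V, wavelengths):
    # Approximate Equation (1), the integral of G(lambda) * V(lambda) d(lambda),
    # using the trapezoid rule over the sampled wavelengths (in nm).
    return np.trapz(G * V, wavelengths)

# Illustrative example with made-up curves sampled every 10 nm.
wavelengths = np.arange(400, 701, 10.0)
G = np.exp(-0.5 * ((wavelengths - 550) / 60.0) ** 2)  # hypothetical sensitivity curve
V = np.exp(-0.5 * ((wavelengths - 500) / 80.0) ** 2)  # hypothetical radiance distribution
print(ideal_response(G, V, wavelengths))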

[Figure 2: diagram combining film/filter spectral sensitivity curves, input images, and partial knowledge of the scene to produce recovered colors.]

Figure 2: Application to non-aligned images. Our methods can operate on data samples taken from non-aligned images. Color recovery at point samples is possible, provided that a user selects pixel sets that share common material and lighting properties, and the spectral sensitivity curves of the film and transmission functions of any lens filters are known. Such point color samples are a natural input for standard colorization techniques.

The CIE Colorimetric system defines the RGB and XYZ colorspaces in terms of ideal response equations [18]. Specifically, if we assume that the light $V(\lambda)$ passing through a small patch of our image is constant over time, and $C_k(\lambda)$ is the sensitivity curve associated with the $k$-th dimension of the CIE RGB color space, then the $k$-th color value of that patch should be,

$$\int C_k(\lambda)\, V(\lambda)\, d\lambda. \tag{2}$$

In a black and white photograph, an integral similar to equation (1) plays a role in determining brightness at each point. The film's spectral sensitivity and the transmittance functions of any applied filters will combine to form an effective spectral sensitivity, which we denote as $F_i$. The relation of the ideal response of the film to the density of the developed negative is determined by the parameters of the developing process; see Section 3.3 for further discussion.
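The paper does not spell out how the film and filter curves combine; a standard model, assumed in the short sketch below, is to multiply the film's spectral sensitivity by the filter's transmittance pointwise to obtain the effective curve $F_i$, which can then be used in the response integral above.

def effective_sensitivity(film_sensitivity, filter_transmittance):
    # Assumed model: F_i(lambda) = film sensitivity(lambda) * filter transmittance(lambda),
    # applied elementwise to curves sampled at a common set of wavelengths.
    return film_sensitivity * filter_transmittance

# e.g. F_i = effective_sensitivity(trix_pan_curve, wratten_47_curve), where both inputs
# are hypothetical arrays sampled at the same wavelengths as in the sketch above.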

    3.1.2 The Linear Algebra of Response Functions

Putting aside, for a moment, the question of whether or not it is possible to find our film's ideal response values, we now consider whether, given such responses, it is possible to approximate the color values at any point in the image.

First, note that the set of all response functions forms an inner product space, with the inner product defined as,

$$F \cdot G := \int F(\lambda)\, G(\lambda)\, d\lambda.$$

Define $R(V)$ to be a linear transformation that maps radiance distributions $V$ to the ideal responses of a set of film sensitivity functions $\{F_i\}$,

$$R(V)_i := F_i \cdot V.$$

It follows that any color response function $C_k$ can be uniquely decomposed into two components,

$$C_k = S_k + D_k. \tag{3}$$

Here $S_k$ is an element of the span of $\{F_i\}$, and $D_k$ is an element of the null space of $R$ [2]. As $S_k \in \mathrm{span}(\{F_i\})$, we can find a set of scalar weights $w_i$ such that,

$$S_k = \sum_i w_i F_i.$$

Therefore, for any radiance distribution $V$, the scalar product $S_k \cdot V$ can be recovered from the set of measured film response values $R(V)$ as,

$$S_k = \sum_i w_i F_i \;\Longrightarrow\; S_k \cdot V = \sum_i w_i\, R(V)_i. \tag{4}$$

Equation (4) can be used to infer colors from a set of response measurements $R(V)$ if we assume that $C_k \cdot V \approx S_k \cdot V$, i.e., if we assume that little color error will be introduced if $C_k$ is replaced by its projection into the spanning space of $\{F_i\}$. Color inference using projection into a spanning space is not a novel technique, and non-trivial examples of the method can be found dating back to the Viking lander project of the 1970s [10, 11].
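As a minimal sketch of the projection in Equation (4): the weights $w_i$ can be found by the usual least-squares projection of $C_k$ onto $\mathrm{span}(\{F_i\})$, i.e., by solving the normal equations built from the inner products $F_i \cdot F_j$ and $F_i \cdot C_k$. The paper does not prescribe a particular solver; the Gram-matrix solve below is one standard choice, reusing discretized curves as in the earlier sketch.

import numpy as np

def inner(f, g, wavelengths):
    # Inner product F . G = integral of F(lambda) * G(lambda) d(lambda).
    return np.trapz(f * g, wavelengths)

def projection_weights(F, C_k, wavelengths):
    # Least-squares weights w such that sum_i w_i F_i is the projection S_k of C_k
    # onto span({F_i}); F is a list of effective film sensitivity curves.
    n = len(F)
    gram = np.array([[inner(F[i], F[j], wavelengths) for j in range(n)] for i in range(n)])
    rhs = np.array([inner(F[i], C_k, wavelengths) for i in range(n)])
    return np.linalg.solve(gram, rhs)

def initial_color(weights, responses):
    # Equation (4): S_k . V = sum_i w_i R(V)_i, the initial recovered color value.
    return float(np.dot(weights, responses))

With only two or three film curves the projection leaves a large null-space component $D_k$, which is exactly the error that the Bayesian step in Section 3.2 tries to predict.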

    3.2. Bayesian Error Prediction

Color recovery using a spanning space projection has the potential to be very accurate, provided that a sufficiently wide range of response functions is available [10, 11]. However, given a limited set of effective film responses, the approach will frequently fail. In the following, we introduce a technique for improving on the results of the spanning space projection method. We use a large collection of example light sources to predict the color information likely to have been lost as a result of the spanning space projection. That expected value can then be used to adjust the initial recovered colors. Additionally, we define a variance term that can be used to predict the accuracy of the adjusted color values.

It follows from Equation (3) that the correct color value will be the sum of the responses of the spanning space and null space components of the color sensitivity curve,

$$C_k \cdot V = S_k \cdot V + D_k \cdot V. \tag{5}$$

Given a limited set of film response curves, it is likely that $D_k \cdot V \neq 0$. We suggest that this error can be corrected by finding correspondences between the response of the null space component and the set of known film response values, $R(V)$. Formally, we propose adopting the approximation,

$$C_k \cdot V \approx S_k \cdot V + E(D_k \cdot V \mid R(V)).$$

In order to calculate a posteriori probabilities for $D_k \cdot V$ given the measured film response values $R(V)$, we must assume some knowledge of the light spectra likely to be present in the scene. In our approach, this prior knowledge takes the form of a set of known light sources $\{V_q(\lambda)\}$. We make the assumption that the true radiance distribution, $V(\lambda)$, will be equal to a scaled version of one of the known light sources. Thus, good error predictions require a large database of light sources.

Given a list of film responses $R(V)$, we begin by finding the scale factor $s_q$ that minimizes the apparent difference between any example light $V_q$ and the correct light $V$. We accomplish this by solving the following simple optimization problem,

$$\epsilon^2_q(s) := \sum_i \left( R(sV_q)_i - R(V)_i \right)^2, \qquad s_q := \arg\min_{s \in [s_0, s_1]} \epsilon^2_q(s).$$

We define an error value associated with the $q$-th example light source as $\epsilon^2_q(s_q)$. This error definition is quite similar to the affinity function definition used in many image segmentation algorithms [16], and the pixel weighting function used in Levin et al.'s colorization algorithm [7]. The best approach to modeling error value probabilities would be to develop a mixture model based on known data; however, as is often the case in similar image segmentation tasks, the computational costs associated with such models are substantial. A more practical approach is to assume a Gaussian distribution of error. Thus, we define a probability function, $P(V_q \mid R(V))$, as follows,

$$P(V_q \mid R(V)) \propto u_q := \exp\!\left( -\frac{\epsilon^2_q(s_q)}{2\sigma^2} \right).$$

Here, we define $\sigma^2$ as the $m$-th smallest $\epsilon^2_q(s_q)$ calculated for our set of known lights.

Given the above, we can now form complete definitions for the expected error in our recovered colors, as well as the variance in those error terms. Defining $d_{qk} := s_q V_q \cdot D_k$, we have,

$$E(D_k \cdot V \mid R(V)) = \sum_q P(V_q \mid R(V))\, d_{qk} = \frac{\sum_q u_q d_{qk}}{\sum_q u_q},$$

$$\mathrm{Var}(D_k \cdot V \mid R(V)) = \frac{\sum_q u_q d_{qk}^2}{\sum_q u_q} - \left( \frac{\sum_q u_q d_{qk}}{\sum_q u_q} \right)^2.$$

In Section 4 we use hyperspectral datasets to test the accuracy of these error predictions, given a range of different possible film response curves and example light sets of varying quality. Along with the set of known lights, $V_q$, the accuracy of our predictions is also influenced by the choice of values for the minimum and maximum allowed scale factors, $s_0$ and $s_1$, as well as the index $m$ used to determine $\sigma^2$. In our experiments, we found that $[s_0, s_1] = [0.8, 1.2]$ and $m = 5$ typically produced good results.
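The sketch below implements the correction step as described in this subsection, using discretized curves as before. The closed-form scale factor (the unconstrained least-squares minimizer of $\epsilon^2_q(s)$, clipped to $[s_0, s_1]$) is our own simplification of the stated optimization problem, and the variable names are illustrative.

import numpy as np

def best_scale(R_q, R_V, s0=0.8, s1=1.2):
    # Minimize eps^2_q(s) = sum_i (s * R(V_q)_i - R(V)_i)^2 over s, then clip to [s0, s1].
    s = np.dot(R_q, R_V) / np.dot(R_q, R_q)
    return float(np.clip(s, s0, s1))

def bayesian_correction(example_lights, F, D_k, R_V, wavelengths, m=5):
    # Returns E(D_k . V | R(V)) and Var(D_k . V | R(V)), given example lights {V_q},
    # effective sensitivity curves F = [F_i], the null-space curve D_k, and measured R(V).
    errors, d_qk = [], []
    for V_q in example_lights:
        R_q = np.array([np.trapz(F_i * V_q, wavelengths) for F_i in F])
        s_q = best_scale(R_q, R_V)
        errors.append(np.sum((s_q * R_q - R_V) ** 2))        # eps^2_q(s_q)
        d_qk.append(s_q * np.trapz(V_q * D_k, wavelengths))  # d_qk = s_q V_q . D_k
    errors, d_qk = np.array(errors), np.array(d_qk)
    sigma2 = np.sort(errors)[m - 1]                          # sigma^2 = m-th smallest error
    u = np.exp(-errors / (2.0 * sigma2))                     # weights u_q
    expected = np.sum(u * d_qk) / np.sum(u)
    variance = np.sum(u * d_qk ** 2) / np.sum(u) - expected ** 2
    return expected, variance

The corrected color is then the initial projection result plus the expected term, $S_k \cdot V + E(D_k \cdot V \mid R(V))$, with the variance serving as the per-channel error prediction evaluated in Section 4.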

    3.3. Calculating Ideal Film Response Values

Unfortunately, the realities of film photography deviate sufficiently from equation (1) that we cannot consider the grey values in our black and white photographs to be the ideal response values. However, we can hope to find a function $g_i : \mathbb{R} \to \mathbb{R}$ that will map the grey values in our image, $b$, to $R(V)$.

In order to find such a function we must propose a model of black and white photography that makes it possible to approximate ideal response values given the grey values present in the image, along with some other additional information that we can hope to uncover. One such model is,

$$b = f_i(t_i\, F_i \cdot V). \tag{6}$$

Here $t_i$ is the exposure time of the photograph, and $f_i : \mathbb{R} \to \mathbb{R}$ is a nonlinear scaling function, determined by the saturation function of the film and the details of the development process used to create the photographic print [5].

[Figure 3: three panels showing red, green, and blue recovery for Tri-X Pan and a blue filter; each panel plots the correct, recoverable, and unrecoverable responses (ergs/cm²) versus wavelength (nm).]

Figure 3: Here we show the null space and spanning space projections defined in Equation (3), for the case of the filter set used in Figure 1. The spanning space projection, $S_k$, represents the portion of the response curve that can be easily recovered from the information present in the photographs, while the null space projection, $D_k$, represents parts of the spectrum that are impossible to recover without the use of Bayesian error prediction.

Figure 4: Hyperspectral test results. Here we show the color-recovered images from the two simulated photographs of the Tri-X Pan and Blue Filter test case. Row one: initial recovered colors, recovered colors with Bayesian error correction, correct color values. Row two: initial color error, predicted error, actual error in the final recovered colors. Twenty additional results are shown in the supplemental materials.

To find $f_i$, we can either assume that the film saturation function given by the manufacturer will be a good match for $f_i$, or we can take several photographs of the same scene using known exposure times, develop the film in a way that conforms to our assumptions about the film development process used in our input black and white image, and fit a curve to the resulting data points.

Provided that $f_i$ is known, it is possible to calculate $t_i$, given at least one point in the image with known incoming light $V(\lambda)$. Such an exposure time inference is similar to techniques used by Land [6]. For example, given a number of photographs of a building taken under daylight, if we can identify a point in the image that contains a diffuse object painted with a white paint having known reflectance function $H(\lambda)$, and assume a standard daylight illuminant $L(\lambda)$, then we can predict that $V(\lambda) = H(\lambda)L(\lambda)$.
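A sketch of the inversion implied by Equation (6): tabulate $f_i$ from calibration data (or the manufacturer's saturation curve), invert it by interpolation, estimate $t_i$ from one pixel whose incoming light, and hence $F_i \cdot V$, is known, and then map grey values to ideal responses via $g_i(b) = f_i^{-1}(b) / t_i$. The monotone, interpolation-based inverse and the function names are illustrative assumptions rather than the paper's implementation.

import numpy as np

def invert_film_curve(grey, exposure_samples, grey_samples):
    # f_i^{-1}: interpolate tabulated (exposure, grey value) calibration pairs,
    # assuming the film curve f_i is monotone increasing.
    return float(np.interp(grey, grey_samples, exposure_samples))

def estimate_exposure_time(known_grey, known_response, exposure_samples, grey_samples):
    # From a pixel with known incoming light, Equation (6) gives t_i = f_i^{-1}(b) / (F_i . V).
    return invert_film_curve(known_grey, exposure_samples, grey_samples) / known_response

def ideal_response_from_grey(b, t_i, exposure_samples, grey_samples):
    # g_i(b) = f_i^{-1}(b) / t_i, mapping a grey value back to the ideal response F_i . V.
    return invert_film_curve(b, exposure_samples, grey_samples) / t_i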

  • 4. Evaluation using Hyperspectral Images

The mathematics of color recovery impose few bounds on the amount of error that could occur due to the unrecoverable portion of the color sensitivity curve, $D_k(\lambda)$. While we can attempt to model this error statistically, it is ultimately the light present in the scene, taken together with the span of the films' effective response curves, that determines the extent to which color recovery is possible.

In order to assess the practical limits of our color recovery process, we have investigated the effectiveness of our algorithm in a series of controlled test cases. We use Foster et al.'s hyperspectral datasets [4] to simulate the result of taking several photographs of the same scene using different film/filter combinations. Each hyperspectral dataset is effectively an image that stores thirty or more values per pixel. Each value in the dataset records the narrow-band response from a 10 nm section of the visible spectrum. By applying a discretized spectral sensitivity curve to this data, we are able to calculate both ideal color values and ideal film response values.
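The simulation described above reduces to a per-pixel weighted sum of the hyperspectral bands. The sketch below assumes 10 nm bands and a scene stored as a (height, width, bands) array, which matches the description of the data but not necessarily its exact file layout.

import numpy as np

def simulate_responses(hyperspectral, curve, band_width_nm=10.0):
    # Ideal response image: for each pixel, sum curve(lambda) * radiance(lambda) over the
    # narrow bands (a Riemann sum). hyperspectral has shape (H, W, B); curve has shape (B,).
    return np.tensordot(hyperspectral, curve, axes=([2], [0])) * band_width_nm

# e.g. R_i = simulate_responses(scene, F_i)  -> simulated film response values R(V)_i
#      c_k = simulate_responses(scene, C_k)  -> ground-truth CIE color values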

These test cases allow us to analyze the color error that results from unavoidable sources of color information loss, i.e., the unrecoverable $D_k(\lambda)$ curve, independently of any measurement errors that occur in the course of constructing ideal response values from an input photograph set. Thus, the hyperspectral test results can be considered an upper bound on the quality of the color information that can be recovered using real photographs. However, it must be noted that Foster et al.'s hyperspectral datasets do not record information in the IR and UV portions of the spectrum, though some black and white films are sensitive at those wavelengths. Thus, our simulations miss errors that might be introduced due to electromagnetic radiation in those bands.

For each hyperspectral scene, we construct a set of ideal film response values $R(V)$, assuming a collection of photographs taken with varying combinations of films and filters. The effective response curves used are defined using the published spectral sensitivity curves of Kodak and Fujifilm black and white films, along with the published transmittance values of Kodak Wratten filters. The film/filter combinations that we test range from the virtually impossible case of recovering color from a single photograph, to the relatively easy case of recovering color given 5 different photographs, taken with a range of photographic filters.

Foster et al.'s hyperspectral datasets include a range of scenes featuring natural and man-made objects, viewed under daylight illuminant conditions. In order to perform the statistical error correction of Section 3.2, we use radiance values taken from two hyperspectral datasets, Scenes 2 (green foliage) and 7 (sunlit building) from Foster et al. [4]. As the high-resolution data can greatly increase the computational cost of finding the expected error at each point, we downsample in image space by a factor of 9. To avoid the case of impossibly good example lights, neither of the datasets used to provide example light sets is used to generate test problems. Our tests are thus run using responses derived from Scenes 3 (English garden), 4 (flower), 6 (cityscape) and 8 (barn door). Example recovered images from these trials are shown in Figures 1 and 4, and the complete set of recovered images and associated error values are included in our supplementary materials.

Overall, the results of the hyperspectral tests are quite promising. They suggest that roughly correct color recovery may be possible given as few as two photographs. Predictably, a collection of photographs representing a wider range of spectral sensitivity curves greatly improves the quality of the recovered colors.

    4.1. Error Prediction Accuracy

The error prediction step can only be expected to provide accurate results insomuch as the set of example radiance distributions includes rough matches for the radiance distributions that exist inside a given scene. We would expect few similarities between the reflectance functions of metals and those of plants, and thus, using a set of example light sources drawn entirely from images of plants is likely to lead to incorrect results in the case of photographs containing metal objects. A similar failure case is shown in Figure 5.

Interestingly, even rough matches between the content of the example light set and the source photograph often appear sufficient to allow the Bayesian error correction to significantly improve the accuracy of the recovered colors. For example, the materials present in the barn door dataset are only roughly similar to those in the sunlit building dataset. Yet, as shown in Figure 4, the error correction is accurate enough to frequently introduce the reds and oranges missing from the initial recovered colors. (That said, there are also several visible failure cases, which attest to the limits of the approach.)

For many of the test cases, the variance term, $\mathrm{Var}(D_k \cdot V \mid R(V))$, is demonstrated to be a fairly accurate prediction of the true error in the recovered colors. The accuracy of the error predicted by the variance term appears to be a function of the dataset that the tests are run on, rather than the particular filters used. This suggests that the quality of the error predictions is determined primarily by the degree to which the example dataset can provide good matches for the light in the scene.

Figure 5: Error correction failure case. Here we show another set of hyperspectral test results, once again using the same Tri-X Pan and Blue Filter test case, and the same simulated ideal response functions as Figure 4. However, for this example, we have reduced the set of example light sources, using only hyperspectral data from Foster et al.'s green foliage dataset. From left to right: initial recovered colors, recovered colors with Bayesian error correction, correct color values. As these results demonstrate, given sufficiently poor example light sources, the error correction step can increase color errors.

    5. Discussion and Conclusions

Viewed at a high level, the problem of finding ideal response values provides us with an opportunity to incorporate any information that we have on the content of the image into the color recovery process. This information does not need to be as specific as the known response functions and lighting conditions proposed in the previous subsection. For example, if we know that some of the pixels in the images are skin illuminated by daylight, that restricts the possible observed radiance, $V(\lambda)$, and thus the possible response values $R(V)$ for those points in the image. Developing algorithms that can effectively make use of such partial information has the potential to greatly improve the effectiveness of color recovery algorithms, and such techniques could also be useful in other computational photography applications. There is also considerable potential to incorporate existing computational photography techniques into the color recovery process; for example, it may be possible to combine existing color constancy algorithms with our present work in order to recover colors from photographs of the same scene taken under differing illumination conditions [3].

Prior to the 1920s, most photography made use of orthochromatic films, which had negligible red responses. This would seem to put a hard limit on the time period before which color recovery could be possible, as response information for the red and orange portions of the spectrum will always be missing in photographs taken before the introduction of panchromatic films. One of the most interesting results of our research is that, as the hyperspectral test cases suggest, it may still be possible to recover accurate color information from such photographs by making use of Bayesian error prediction.

For the vast majority of antique black and white photographs, neither exposure times nor filter information has been recorded. Moreover, professional black and white photographers regularly use a range of darkroom techniques that would render the inference of ideal response values for points in their prints exceedingly difficult. Film negatives or photographic plates are more attractive data sources, as all the unknowns associated with printmaking are eliminated. Furthermore, photographers make use of numerous conventions that recommend exposure times, filter use, and development processes in particular situations. Thus, there is potential to develop tools for guessing the likely photographic variables associated with a given collection of antique negatives.

    References

[1] The Empire that Was Russia: The Prokudin-Gorskii Photographic Record Recreated. http://www.loc.gov/exhibits/empire/making.html.

[2] S. J. Axler. Linear Algebra Done Right. Springer-Verlag, second edition, 1997.

[3] D. A. Forsyth. A novel algorithm for color constancy. Int. J. Comput. Vision, 5(1):5-36, 1990.

[4] D. Foster, S. Nascimento, and K. Amano. Information limits on neural identification of coloured surfaces in natural scenes. Visual Neuroscience, 21:331-336, 2004.

[5] J. Geigel and F. Kenton Musgrave. A model for simulating the photographic development process on digital images. Computer Graphics, 31(Annual Conference Series):135-142, 1997.

[6] E. Land. The retinex theory of color constancy. Scientific Am., pages 108-129, 1977.

[7] A. Levin, D. Lischinski, and Y. Weiss. Colorization using optimization. In Proceedings of ACM SIGGRAPH 2004, Computer Graphics Proceedings, Annual Conference Series, Aug. 2004.

[8] D. Lischinski, Z. Farbman, M. Uyttendaele, and R. Szeliski. Interactive local adjustment of tonal values. In SIGGRAPH '06: ACM SIGGRAPH 2006 Papers, pages 646-653, New York, NY, USA, 2006. ACM.

[9] D. Nie, Q. Ma, L. Ma, and S. Xiao. Optimization based grayscale image colorization. Pattern Recogn. Lett., 28(12):1445-1451, 2007.

[10] S. K. Park and F. O. Huck. A spectral reflectance estimation technique using multispectral data from the Viking lander camera. Technical Report D-8292, NASA Tech. Note, 1976.

[11] S. K. Park and F. O. Huck. Estimation of spectral reflectance curves from multispectral image data. Appl. Opt., 16:3107-3114, Dec. 1977.

[12] Y. Qu, T.-T. Wong, and P.-A. Heng. Manga colorization. In SIGGRAPH '06: ACM SIGGRAPH 2006 Papers, pages 1214-1220, New York, NY, USA, 2006. ACM.

[13] E. Reinhard, M. Ashikhmin, B. Gooch, and P. Shirley. Color transfer between images. IEEE Computer Graphics & Applications, 21(5):34-41, Sept. 2001.

[14] D. Sykora, J. Burianek, and J. Zara. Unsupervised colorization of black-and-white cartoons. In NPAR '04: Proceedings of the 3rd International Symposium on Non-Photorealistic Animation and Rendering, pages 121-127, New York, NY, USA, 2004. ACM.

[15] P. Vanderbilt. Guide to the Special Collections of Prints & Photographs in the Library of Congress. U.S. Library of Congress, 1955.

[16] Y. Weiss. Segmentation using eigenvectors: A unifying view. In ICCV, volume 2, page 975, 1999.

[17] T. Welsh, M. Ashikhmin, and K. Mueller. Transferring color to greyscale images. ACM Transactions on Graphics, 21(3):277-280, July 2002.

[18] G. Wyszecki and W. S. Stiles, editors. Color Science: Concepts and Methods, Quantitative Data and Formulae. Wiley-Interscience, 2000.