
Color Luminance File Format Specification

Version 1.5.0, 25 October 2013

Introduction

This document specifies a file format defined specifically to store luminance and color data of views of automotive lighting systems. The image data may have been created by computer simulation or captured with a high-dynamic-range luminance camera. The file format has to fulfill the following requirements:

Due to the high dynamic nature of the luminance of automotive lighting systems, common low dynamic range file formats such as JPEG or BMP are not sufficient to store the data. A high dynamic range data format is needed.

Absolute physical quantities such as the luminance and the chromaticity coordinates have to be stored.

In automotive lighting systems such as rear lamps or front lamps, different light functions exist, each with its own set of light sources. Preferably, images of all these functions, together with the image created under ambient lighting, are stored in a single file.

Views of the lighting system are often created from different viewing angles, at different distances, or at different times. An appropriate file format should store these parameters to enable the viewing program to navigate around the imaged object.

The camera parameters have to be stored in order to reproduce the settings and to permit the superposition of images which have been generated by different persons with different systems.

As a basis for this image file format, which will be called color luminance format in the following, the OpenEXR file format[1] created by Industrial Light & Magic (ILM)[2] is used. It is an open standard with an open-source license. OpenEXR offers the required ability to store high-dynamic-range data in different channels and layers. Furthermore, custom header attributes can be stored in the file. OpenEXR version 1.7.0 (which allows long attribute names) or newer has to be used. The current version 2.0.1 has also been tested and can be used.

In the following, the specific attributes used to store color luminance data are outlined.

[1] http://www.openexr.com
[2] http://www.ilm.com


General Definition

One view is stored in one OpenEXR file, where a view may contain several light functions, each stored in its own layer. In each layer, either a single-colored luminance distribution is stored in one channel, or the three-dimensional color coordinates X, Y, and Z are stored in three channels. Single-colored layers and multi-colored layers can be mixed in a file.
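For illustration, a reading program can discover the layers of such a file through the channel list. The following sketch assumes the IlmImf C++ API and is not part of the format definition:

#include <ImfInputFile.h>
#include <ImfChannelList.h>
#include <iostream>
#include <set>
#include <string>

// List the layer names contained in a color luminance file.
void listLayers (const char *fileName)
{
    Imf::InputFile file (fileName);
    const Imf::ChannelList &channels = file.header().channels();

    std::set<std::string> layers;
    channels.layers (layers);   // collects the layer prefixes, e.g. "turnindicator"

    for (std::set<std::string>::const_iterator it = layers.begin(); it != layers.end(); ++it)
        std::cout << *it << std::endl;
}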

A sequence of images (for example a rotation of the viewing direction around the object) is stored as a sequence of files in one directory, where each file contains the appropriate header attributes to define the sequence parameters.

In the next two sections the header attributes that have to be set are defined. The section Data Structure explains how the data is stored in layers and channels. The last section provides example source code that shows how the ILM library is used to write a color luminance file.

Mandatory OpenEXR Header Attributes

The following attributes are required in all OpenEXR files to ensure that the files can be read by other programs. Most of them are automatically set by the ILM write routines to default values. In Table 1 these attributes have the value “set automatically”.

Table 1: Mandatory OpenEXR header attributes

Attribute Name        Value
channels              set automatically
compression           set automatically
dataWindow            set automatically
displayWindow         set automatically
lineOrder             set automatically
pixelAspectRatio      set automatically (= 1)
screenWindowCenter    x and y coordinates of the window's center
screenWindowWidth     horizontal width of the image

The attribute screenWindowWidth is used to calculate the pixel coordinates in real space. For a perspective view (the additional header attribute cameraType defines whether a perspective or an orthographic view is used, see next section) the width is defined as in the document "Technical Introduction to OpenEXR" by ILM[3]: the screenWindowWidth is the width of the projected image at a distance of 1 unit length in front of the projection point (see Figure 1).

The horizontal and vertical viewing angles of a pixel can be calculated from the displayWindow DW (which contains the horizontal and vertical image resolution Nx=DW.x_max – DW.x_min +1 and Ny=DW.y_max – DW.y_min +1 respectively), the screenWindowCenter C, the pixelAspectRatio PA, defined as width divided by height of a pixel[3] (which should be set to 1) and the screenWindowWidth W.

[3] http://www.openexr.com/TechnicalIntroduction.pdf


The viewing angle to the left border of the image is given by atan[-(W/2 - C.x)/1] and the viewing angle to the right border by atan[(W/2 + C.x)/1], where C.x is the x-coordinate of the screenWindowCenter. The height H of the image can be calculated as H = (Ny - 1)/(Nx - 1) * W/PA. The vertical viewing angles can be calculated from the height in the same way as the horizontal angles outlined above.

For an orthographic view, the width of the image in the units of the header attribute lengthUnit (see below) shall be saved as the screenWindowWidth. In this case the 2D Cartesian coordinates of a pixel in the camera coordinate system can be calculated from the screenWindowWidth, pixelAspectRatio, displayWindow resolution and screenWindowCenter.
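As an illustration, the following sketch computes the horizontal viewing angle of a pixel column for a perspective view and the corresponding x-coordinate for an orthographic view. It is not part of the specification; the specification only fixes the border angles, so the linear pixel-to-position convention used here is an assumption, and the vertical direction is handled analogously using H and the pixelAspectRatio.

#include <cmath>

// Horizontal viewing angle (in radians) of pixel column i for a perspective view.
// Nx is the horizontal resolution from the displayWindow, W the screenWindowWidth
// and Cx the x-coordinate of the screenWindowCenter. Assumes Nx > 1.
double horizontalAngle (int i, int Nx, double W, double Cx)
{
    double x = Cx - W / 2.0 + W * i / (Nx - 1);   // position on the screen window at distance 1
    return std::atan (x);                          // atan[x / 1]
}

// x-coordinate (in units of lengthUnit) of pixel column i for an orthographic view.
double orthographicX (int i, int Nx, double W, double Cx)
{
    return Cx - W / 2.0 + W * i / (Nx - 1);
}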

Figure 1: Illustration of the screenWindowWidth, the screenWindowCenter and the camera coordinate system, taken from the Technical Introduction to OpenEXR[3].


Specific Header Attributes

In addition to the mandatory OpenEXR header attributes, further attributes have to be provided to store the parameters of the image creation in as much detail as possible.

Table 2: Specific Header Attributes

Attribute Name            Description                                                     Type[4]                    Default Value
versionCLF                version of the color luminance format                           v3i (3 integers)           (0,0,0)
comments                  file comments                                                   string                     ""
owner                     name of the owner of the image                                  string                     ""
capDate                   date when the image was created or captured,                    string                     ""
                          formatted as YYYY:MM:DD hh:mm:ss
chromaticities            tells the program to interpret RGB data as CIE XYZ              chromaticities             (1,0, 0,1, 0,0, 1/3,1/3)
valueUnit                 luminance is stored by default                                  string                     "cd/m²"
lengthUnit                lengths are given in meters by default                          string                     "m"
defaultLayerName          name of the default layer                                       string                     ""
cameraOrigin              coordinates of the camera origin                                v3f (3 floats)
cameraTarget              coordinates of the viewing target                               v3f (3 floats)
cameraUpDirection         unit vector pointing in the camera-up direction                 v3f (3 floats)
cameraType                perspective or orthographic                                     string                     "orthographic"
focalLength               focal length of the camera's lens in mm                         float
aperture                  the camera's lens aperture in f-stops (focal length of          float
                          the lens divided by the diameter of the iris opening)
focus                     the camera's focus distance in meters                           float
dynamicView.order         (u,v,w) used for the mouse control, values between 0 and 1      v3f (3 floats)             (0,0,0)
dynamicView.view          (au,av,aw) physical coordinates for the corresponding           v3f (3 floats)             (0,0,0)
                          order parameter
dynamicView.unit          (eu,ev,ew) units of the physical coordinates                    stringvector (3 strings)   ("","","")
dynamicView.description   description of the view parameters                              stringvector (3 strings)   ("","","")
tonemapping.name          name of the tone mapping                                        string
tonemapping.[Name of the tm].[Name of the parameter]
                          value of the tone mapping parameter

[4] Information on the data types can be found in http://www.openexr.com/openexrfilelayout.pdf

versionCLF contains the version of the color luminance format used to store the data. The three integers are interpreted as versionCLF(0).versionCLF(1).versionCLF(2), i.e. the current version would be stored as (1,5,0). The attribute can be used to ensure downward compatibility.
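For example, a reading program might check the stored version as follows. This is a sketch assuming the IlmImf API; only the attribute name is defined by this specification.

#include <ImfInputFile.h>
#include <ImfVecAttribute.h>

// Returns true if the file was written with a color luminance format version
// that this (hypothetical) reader understands.
bool versionSupported (const Imf::InputFile &file)
{
    const Imf::V3iAttribute *v =
        file.header().findTypedAttribute<Imf::V3iAttribute> ("versionCLF");

    if (v == 0)
        return false;                 // attribute missing: not a color luminance file

    return v->value().x <= 1;         // accept major versions up to 1
}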

The attribute valueUnit specifies the unit of the stored data, which is by default cd/m².

lengthUnit is by default set to meters (m). It specifies the unit of the screenWindowWidth when the cameraType is orthographic.

The attributes cameraOrigin, cameraTarget, cameraUpDirection and cameraType together with screenWindowWidth and displayWindow ensure that a recalculation of a simulated image by a different user and with a different simulation program is possible and will give the same display window. cameraOrigin, cameraTarget and cameraUpDirection are stored in the coordinate system and units of the data to be visualized for simulated data or of the laboratory when the data has been taken by a camera.

focalLength, aperture and focus are parameters which can be set if the image was taken by a real or a simulated camera. The sensor width s can be calculated from the focalLength f and the screenWindowWidth w with the relation s = w * f.

dynamicView.order, dynamicView.view, dynamicView.unit and dynamicView.description are used to order and to navigate through an associated set of images. For example, the set might consist of several images taken from different camera positions. The viewing direction is stored in the dynamicView.view attribute (e.g. (horizontal viewing angle, vertical viewing angle, 0)). The corresponding units of the dynamicView.view are stored in dynamicView.unit (in this example (“degree”, “degree”, “”)). The parameter dynamicView.description stores the label of the view direction to be displayed in the viewing program (e.g. “horizontal viewing angle”, “vertical viewing angle”, “”). The dynamicView.order is a mapping of the dynamicView.view to the interval [0, 1]. It can be used in viewing software to relate a mouse drag (or scroll wheel movement) to a change of the displayed image, thereby navigating around the object. It is also possible to store images created at different times in the set; in this case one parameter would be the time.
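A sketch of how a viewer might use these attributes (not part of the specification): a horizontal mouse drag is mapped to the first component u of dynamicView.order, and the image of the sequence whose stored order value is closest to the drag position is displayed. The list of (u, file name) pairs is assumed to have been read from the headers of the sequence.

#include <cmath>
#include <string>
#include <utility>
#include <vector>

// images: (u component of dynamicView.order, file name) for each file of the sequence.
// mouseU: mouse drag position normalized to [0, 1].
std::string selectImage (const std::vector<std::pair<float, std::string> > &images,
                         float mouseU)
{
    std::string best;
    float bestDistance = 2.0f;                        // larger than any distance within [0, 1]

    for (size_t i = 0; i < images.size(); ++i)
    {
        float d = std::fabs (images[i].first - mouseU);
        if (d < bestDistance)
        {
            bestDistance = d;
            best = images[i].second;
        }
    }
    return best;
}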

If a default layer is used (see next section), the name of the default layer is set with the attribute defaultLayerName.

The optional parameters starting with tonemapping. account for the fact that the high dynamic range data stored in the file always has to be mapped to a low dynamic range in order to be displayed. The tone mapping parameters are stored in the file to guarantee the reproducibility of this operation. The name of the tone mapping is saved in the attribute tonemapping.name. The parameters of this tone mapping are stored in the attributes tonemapping.[Name of the tone mapping].[Name of the parameter] (for example “tonemapping.OpenEXRTMO.Exposure”). Since the name and the type of the parameters depend on the selected tone mapping algorithm, these values are not defined in this specification.
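Written with the IlmImf calls used in the source code example below, the tone mapping attributes of the example given above might look as follows; the parameter name and value are purely illustrative and not defined by this specification.

header.insert ("tonemapping.name", StringAttribute ("OpenEXRTMO"));
// "Exposure" is a parameter of this particular tone mapping; its name and type
// depend on the chosen algorithm.
header.insert ("tonemapping.OpenEXRTMO.Exposure", FloatAttribute (0.0f));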


Table 3 shows attributes which can be set for each layer in the OpenEXR file. They start with [Name of the layer], where “[Name of the layer]” is replaced by the name of the corresponding layer.

Table 3: Layer specific attributes

Attribute Name                     Description                                            Type             Default Value
[Name of the layer].comments       comments for the considered layer                      string           ""
[Name of the layer].CIExy          CIE 1931 chromaticity coordinates x and y for          v2f (2 floats)   (0.33, 0.33)
                                   single-colored luminance data
[Name of the layer].luminousFlux   luminous flux of the light source                      float            1
[Name of the layer].scaleFactor    scaling factor for the data of [Name of the layer]     float            1

If a layer contains only one channel with luminance data, the attribute CIExy is used to assign a color to this layer.

The attribute luminousFlux for the [Name of the layer] stores the luminous flux of the light source(s) used to create the image data of this layer.

scaleFactor scales the data in the layer’s channels. This might be useful if the data is stored with the half data type and the range of the half data type does not cover the absolute luminance of the data.
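A minimal sketch of how the factor is meant to be applied, assuming that stored channel values are multiplied by scaleFactor to recover absolute values (the text above does not state the direction explicitly):

#include <half.h>

// Convert a stored half value of a layer back to absolute luminance in cd/m².
float absoluteLuminance (half storedValue, float scaleFactor)
{
    return float (storedValue) * scaleFactor;
}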

Default values are given for the attributes where a viewer requires an input in order to display an image. Where no default value is given, the attribute is ignored by the viewer.

Data Structure

Units
Luminance, i.e. the Y component, has to be stored in cd/m². X and Z have to be stored with corresponding units.

Channels
Single-colored luminance data are stored in a channel with the name “Y”, as specified in the Technical Introduction to OpenEXR[5]. Multi-colored data shall be saved as CIE XYZ tristimulus values, where the pixels’ components X, Y, and Z are stored in the R, G and B channels. In addition, the header has to contain the chromaticities attribute (1,0, 0,1, 0,0, 1/3,1/3) to indicate that RGB is interpreted as XYZ.

Layers
Different light functions or lit and unlit appearance can be stored together in a single file. In this case different layers shall be used for each image. The name of the layer should reflect the name of the light function. There are specific header attributes for each layer ([Name of the layer].comments, [Name of the layer].CIExy, [Name of the layer].luminousFlux, [Name of the layer].scaleFactor). Here the name “[Name of the layer]” shall be replaced by the name of the corresponding layer. As an example, the tristimulus values for a multi-colored luminance image of a turn indicator could be stored with the channel names “turnindicator.R”, “turnindicator.G”, “turnindicator.B” and the chromaticities attribute as defined above. A single-colored luminance image would be stored as “turnindicator.Y” together with the turnindicator.CIExy attribute to define the color.

[5] http://www.openexr.com/TechnicalIntroduction.pdf

If there are several layers in a file, general OpenEXR viewing software which has not been customized to display color luminance data might not display any of the layers. To provide a default layer for these programs it is possible to define the default layer by omitting the layer name in the channel name string, i.e. by just defining “R”, “G” and “B” for multi-colored data or “Y” for single-colored data as the name string. In this case the layer name has to be stored in the header attribute defaultLayerName.
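A reading program could recover the logical name of the unnamed default layer like this (a sketch assuming the IlmImf API; only the attribute name is defined by this specification):

#include <ImfInputFile.h>
#include <ImfStringAttribute.h>
#include <string>

// Returns the name stored in defaultLayerName, or an empty string if the
// attribute is not present.
std::string readDefaultLayerName (const Imf::InputFile &file)
{
    const Imf::StringAttribute *attr =
        file.header().findTypedAttribute<Imf::StringAttribute> ("defaultLayerName");

    return attr ? attr->value() : std::string ("");
}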

MultiView
In each file the header attributes for the camera and the dynamicView attributes are stored only once. This implies that one OpenEXR file holds only one view and the multiView attribute[6] is not set. An exception is stereo views: a stereo view could be interpreted as a single view with a stereo camera, one image for the left eye and one for the right eye. However, the details of the stereo camera remain to be defined, which is a task for the future.

Source Code Example

Detailed documentation on how to define layers and channels in practice can be found on the OpenEXR web page (http://www.openexr.com) in the document “Reading and Writing OpenEXR Image Files with the IlmImf Library”[7]. As an alternative, a source code example of a color luminance file write routine is shown in the following.

// Typical includes for this example (IlmImf library; exact header names may
// vary slightly between OpenEXR versions). The variables rows, columns (the
// image resolution) and fileName are assumed to be defined by the surrounding
// program.
#include <ImfOutputFile.h>
#include <ImfFrameBuffer.h>
#include <ImfChannelList.h>
#include <ImfArray.h>
#include <ImfStringAttribute.h>
#include <ImfStringVectorAttribute.h>
#include <ImfVecAttribute.h>
#include <ImfFloatAttribute.h>
#include <ImfChromaticitiesAttribute.h>
using namespace Imf;
using namespace Imath;

// Array pointer definitions

Array2D<float> exrData_Y (rows, columns);

float *yPixels = &exrData_Y[0][0];

Array2D<float> exrData_unlit_R (rows, columns);

Array2D<float> exrData_unlit_G (rows, columns);

Array2D<float> exrData_unlit_B (rows, columns);

float *rPixels_unlit = &exrData_unlit_R[0][0];

float *gPixels_unlit = &exrData_unlit_G[0][0];

float *bPixels_unlit = &exrData_unlit_B[0][0];

// Header definitions
Header header (columns, rows);

header.insert("comments", StringAttribute ("Comment on the creation of the image"));

header.insert("owner", StringAttribute ("Automotive Lighting Reutlingen GmbH"));

header.insert("capDate", StringAttribute ("2013:01:01 16:01:00 "));

V3i versionCLF(1,5,0);

header.insert("versionCLF", V3iAttribute (versionCLF));

[6] See http://www.openexr.com/MultiViewOpenEXR.pdf
[7] http://www.openexr.com/ReadingAndWritingImageFiles.pdf


V2f red(1.0,0.0);

V2f green(0.0,1.0);

V2f blue(0.0,0.0);

V2f white(1.0/3.0,1.0/3.0);

Chromaticities XYZ_STD_Chroma(red,green,blue,white);

header.insert("chromaticities",ChromaticitiesAttribute (XYZ_STD_Chroma));

header.insert("valueUnit", StringAttribute ("cd/m^2"));

header.insert("lengthUnit", StringAttribute ("m"));

V3f Origin(0.0, 0.0, 3.0);

header.insert("cameraOrigin", V3fAttribute (Origin));

V3f Up(0.0, 1.0, 0.0);

header.insert("cameraUpDirection", V3fAttribute (Up));

V3f Target(0.0, 0.0, 0.0);

header.insert("cameraTarget", V3fAttribute (Target));

header.insert("cameraType", StringAttribute ("orthographic "));

V3f View(45.0, 18.0, 0.0);

header.insert("dynamicView.view", V3fAttribute (View));

V3f OrderParam(0.5, 0.2, 0.0);

header.insert("dynamicView.order", V3fAttribute (OrderParam));

StringVector unitparameterString;

unitparameterString.push_back("deg");

unitparameterString.push_back("deg");

unitparameterString.push_back("");

header.insert("dynamicView.unit", StringVectorAttribute(unitparameterString));

StringVector unitdescriptionStr;

unitdescriptionStr.push_back("horizontal view direction");

unitdescriptionStr.push_back("vertical view direction");

unitdescriptionStr.push_back("");

header.insert("dynamicView.description", StringVectorAttribute(unitdescriptionStr));

//Channel definitions + layer specific header definitions

// single colored data is stored as luminance in the Y channel.

// the color of this layer is stored in the attribute Lit.CIExy

header.channels().insert ("Lit.Y", Channel (FLOAT));

V2f SingleColor(0.312, 0.554);

header.insert("Lit.CIExy", V2fAttribute (SingleColor));

float layerLitFlux = 1100.0;

header.insert("Lit.luminousFlux",FloatAttribute (layerLitFlux));

// multi-colored data is stored as the tristimulus values X, Y, Z

// in the channels R, G and B

// by setting the attribute "chromaticities" to the values given below

// the software knows that R, G, B is interpreted as X, Y, Z

header.insert("defaultLayerName", StringAttribute ("Unlit"));

header.channels().insert ("R", Channel (FLOAT));

header.channels().insert ("G", Channel (FLOAT));

header.channels().insert ("B", Channel (FLOAT));

float layerUnLitFlux = 4.0e28;

header.insert("Unlit.luminousFlux",FloatAttribute (layerUnLitFlux));


FrameBuffer frameBuffer;

frameBuffer.insert ("Lit.Y",Slice (FLOAT, // type

(char *) yPixels, // base

sizeof (*yPixels) * 1, // xStride

sizeof (*yPixels) * columns)); // yStride

frameBuffer.insert ("R",Slice (FLOAT, // type

(char *) rPixels_unlit, // base

sizeof (*rPixels_unlit) * 1, // xStride

sizeof (*rPixels_unlit) * columns)); // yStride

frameBuffer.insert ("G",Slice (FLOAT, // type

(char *) gPixels_unlit, // base

sizeof (*gPixels_unlit) * 1, // xStride

sizeof (*gPixels_unlit) * columns)); // yStride

frameBuffer.insert ("B",Slice (FLOAT, // type

(char *) bPixels_unlit, // base

sizeof (*bPixels_unlit) * 1, // xStride

sizeof (*bPixels_unlit) * columns)); // yStride

OutputFile file (fileName, header);

file.setFrameBuffer (frameBuffer);

file.writePixels (rows);