Honours Project Investigation Report
Open Source Adobe Lightroom-like
Pierre-Loïc Chevillot
2011
A report submitted as part of the requirements for the degree of BSc (Hons) in Computer Science
at The Robert Gordon University, Aberdeen, Scotland
Declaration

I confirm that the work contained in this Honours project report has been composed solely by myself and has not been accepted in any previous application for a degree. All sources of information have been specifically acknowledged and all verbatim extracts are distinguished by quotation marks.

Signed ……………………………………………………… Date………………………………

The declaration must be signed and dated by the student when each volume of the report is submitted. The work will not be accepted for assessment unless the above declaration has been included and signed.
Acknowledgement
I am thankful to my supervisor, Mr. David Davidson, who helped me to understand the subject and supported the project from the preliminary to the concluding stage of the application.
I am thankful to the people who helped me test the application and gave me their feedback about the project.
Lastly, I offer my regards to all of those who supported the project and advised me on my understanding of some parts of it.
Abstract
This project creates an application for image processing. The application has a rich-featured interface and a set of algorithms to adjust different parameters of images in standard formats. The report develops an analysis of image processing and of the programming language used during the project, as well as the algorithms used and how they are applied. The project is inspired by commercial photo-editing software, Adobe Lightroom. The results of the algorithms will be compared to commercial products.
Table of Contents

Acknowledgement
Abstract
1. Introduction
2. Background
   2.1. Image processing
   2.2. Aims and Objectives of the Project
   2.3. Comparison of programming languages
   2.4. C# (C-Sharp) and its environment
      2.4.1. .Net architecture
      2.4.2. C# language and version
      2.4.3. Visual Studio
      2.4.4. GDI+
3. Analysis
   3.1. Image Processing
      3.1.1. Theory
      3.1.2. Color space definition
4. System Design
   4.1. Existing proprietary products
      4.1.1. Adobe Photoshop
      4.1.2. Gimp
      4.1.3. Adobe Lightroom
      4.1.4. Aperture
   4.2. System requirements
   4.3. Design
      4.3.1. Lifetime of the project
      4.3.2. UML
5. Implementation
   5.1. User Interface
      5.1.1. Loading an image into the application
      5.1.2. Saving a modified image
   5.2. EXIF data of the picture
   5.3. Adjustment Library
      5.3.1. Brightness and contrast modification
      5.3.2. Saturation
      5.3.3. Specific color saturation
   5.4. Histogram
   5.5. Threading
6. Evaluation
   6.1. System testing
   6.2. Usability evaluation
7. Conclusion
8. Future Development
Appendix A – References
Appendix B – C# namespaces
Appendix C – Presentation slides
Appendix D – Project Log
1. Introduction
During the honours year at RGU, a year-long project has to be completed. The project can be proposed by a student, must fall within computer science, and must be demanding enough to be suitable as a final-year project. This project is about image processing and computer science. The idea of the project was suggested to Mr. Davidson, who chose to supervise it.
The project is based on image processing and built on a relatively new programming language. Image processing was chosen because of the place that imaging and photography occupy in today's world. Photography is everywhere now, even on small devices such as mobile phones and compact digital cameras, which give anyone the possibility of trying to take good pictures. And that is without counting professional photographers, many of whom now use digital cameras. Imaging is used more and more by everyone, and technology has put it into their hands. Image processing is simply the continuation of this imaging technology, with the post-treatment of the pictures taken by these devices. A lot of software has appeared to give users the possibility to adjust and modify pictures taken by their digital devices. Some applications are developed more for professionals, to manipulate images and create new images from multiple images loaded into the software (such as Adobe Photoshop).
Many existing commercial or freeware applications are created with languages such as C++ or Java, but this project uses C# .NET. C# .NET is a relatively recent language created by Microsoft. It is also object-oriented, but it is based on the .NET architecture and uses the .NET Framework integrated into Microsoft Windows. C++ programs need to use libraries outside of the language, where the main functionality required is already developed, and in Java many applications are based on the ImageJ project, with add-ons integrated into the initial project. During the last year of studies, C++ and Java were the two main languages covered, and many object-oriented concepts can be translated to a new, evolving language such as C#. This project is a good opportunity to discover C# and to apply the concepts studied to a new programming language.
The project requirements were discussed with the supervisor. The requirements need to take into account the professional photographer's point of view, and also the amateur who just wants to apply a small modification to a picture because it is too dark or too bright. The application must provide good tools to the user, whether professional or not. One of the main aims is simplicity of use.
The project timeline is based on the Robert Gordon University timetable. The project started in October and is due at the end of April. This leaves seven months of work on a project which starts from scratch. The project technologies and programming were investigated during the first month, and then the implementation of the requirements started. The implementation starts from scratch and goes through different steps (graphical user interface design, EXIF implementation, adjustment library and histogram creation, and threading the application). The final application is the result of the work done during the project's lifetime, based on the most up-to-date prototype.
2. Background
2.1. Image processing
Digital technologies have developed massively during the last few years, and the fall in hardware costs has allowed digital technologies to be included in many widely used devices, such as mobile phones, laptops and cameras. One technology that has seen massive deployment on mobile platforms is the digital camera. Improved sensors make it possible to take reasonably good, detailed pictures on mobile phones, for example, and the low cost of hardware has pushed manufacturers to integrate cameras into most phones on the market. Imaging technology now reaches a large user population, and this creates a need for low-cost or free applications to manipulate images, enhance image quality, make it easy to share an image on a social network, or integrate images into other digital media.
The primary focus of this project is image processing and the enhancement of images acquired by conventional digital cameras. But image processing can be applied to a wide range of fields:
o Robotics: one of the aims of computer science and engineering is to build a robot which is capable of navigating its environment without any human intervention. The robot and its embedded computer must extract and find a way to process images (video camera sequences, sensor data, etc.) in order to move in any environment. Image processing is mainly used to extract information, through edge detection, contrast enhancement and noise removal, to give the computer vision system a 3D understanding of the scene. One up-to-date example of this technology is the Mars rovers, which operate autonomously on Mars and perform many tasks without any human help.
o Medical diagnosis: the biomedical sector now uses many imaging devices, such as X-ray, MRI and CT scanners, to detect disease without being intrusive for the patient. These devices give the doctor an image which has to be analyzed in order to give the right treatment to the patient. Image processing provides tools to obtain a better definition of the images produced by medical devices, and helps the doctor in the diagnosis. The more powerful the image processing algorithms are at improving image definition, the more accurate the diagnosis can be. It is now a major research area for image processing.
2.2. Aims and Objectives of the Project
The main aim of this project is to provide a rich-featured, open-source application which can be executed standalone on any user's computer to process images captured by a digital camera. The project focuses on commonly used features of image processing software. The application implements efficient algorithms for adjusting:
Brightness and Contrast,
Color balancing,
Color saturation
The application will provide the user with rich information about the image being processed, such as the intensity of the colors contained in the image and the different information stored by the digital camera in the image file.
Images will be retrieved from disk in industry-standard formats, processed, and the results saved to disk in industry-standard formats.
The project also addresses usability by implementing rich user-interface features and allowing the user to see the immediate effect of each adjustment operation on the image.
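The three adjustments listed above are classic per-pixel operations. As an illustration only (the project itself is implemented in C#, and this sketch is not taken from it), a minimal Python version of a brightness and contrast adjustment on a single RGB pixel could look like this:

```python
def adjust_pixel(r, g, b, brightness=0, contrast=1.0):
    """Apply a brightness offset and a contrast factor to one RGB pixel.

    brightness: value added to each channel (-255..255)
    contrast:   factor applied around the mid-grey point 128
    """
    def clamp(v):
        # keep every channel inside the valid 8-bit range
        return max(0, min(255, int(round(v))))

    def apply(c):
        # contrast scales the distance from mid-grey, brightness shifts it
        return clamp((c - 128) * contrast + 128 + brightness)

    return apply(r), apply(g), apply(b)

# Brightening a mid-grey pixel
print(adjust_pixel(128, 128, 128, brightness=30))   # (158, 158, 158)
# Increasing contrast pushes dark pixels darker, bright pixels brighter
print(adjust_pixel(64, 192, 128, contrast=1.5))     # (32, 224, 128)
```

In the real application the same operation is applied to every pixel of the bitmap; the clamping step is what prevents over-brightened channels from wrapping around.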
2.3. Comparison of programming languages
The application must run on a local machine, for any user. The best way to create such an application is to use an object-oriented programming (OOP) language. There are three well-known object-oriented programming languages: C#, C++ and Java. Each of these languages has strong points and weak points.
C++: C++ was created in the 1980s to improve the C language (a procedural programming language). C++ is an evolved version of C which introduces the concepts of object programming (programming with classes), inheritance, and templates. It offers the possibility to work with C libraries and with the normalized C++ library, such as "std", the standard library normalized in 1998. It is very powerful because you do not need to go through an interface layer to put something in memory or to use the processor. The language is very powerful for image processing because it does not require any extra layers to deliver the data to the processor, and some powerful libraries exist for image processing. There is a large graphical user interface library, originally developed by Trolltech and now by Nokia, named Qt. This library offers the possibility to create rich user interfaces, with many options, free for developers to use. C++ is hard to use because of its age: even though the language is regularly upgraded, the last official ISO normalization was in 2003. C++ uses a lot of pointers, which give direct access to an object at its memory address, and the development of image processing can be much harder because it uses many pointers to library functions and objects, and an easily made error can corrupt the whole application.
Java: Java is the OOP language most used for developing applications in companies and educational systems around the world. The language was created in 1995 by Sun Microsystems (Java now belongs to Oracle, which bought Sun Microsystems in 2009). Java removed some of the subtler programming features of C++, such as multiple inheritance and pointers. Java was created to be entirely object-oriented, and uses a virtual machine to run its code. One of the big advantages of Java is that it is multi-platform: an application can be launched from any operating system (OS), such as Linux, Windows or Mac OS, independently of the computer and the OS, thanks to the JVM (Java Virtual Machine), a layer added on top of the operating system to run Java applications. Many projects use Java for image processing; the best known is probably ImageJ. The problem is that the Java Virtual Machine consumes a lot of the computer's resources, and the application on top of it consumes even more. The application must do the required job as quickly as possible, and the virtual machine is a brake on that.
C#: C# is the language created by Microsoft in 2001. The language was created to be integrated into the .NET (dot net) platform, while at the same time providing a fully independent object-oriented programming language. The language is a mix between Java and C/C++: it provides the easy writing style of Java and re-implements some useful features of C++. The language is based on the .NET Framework (which gives the possibility to create an application with several Microsoft languages, such as Visual Basic and Visual C++, and merge them into a bigger one without any deployment problem). The Framework works with the CLR (Common Language Runtime), which is roughly the equivalent of the Java JVM, but is integrated into recent versions of Microsoft Windows. C# has some portability to platforms other than Microsoft Windows through the Mono project, which tries to provide the .NET Framework for Linux/Unix platforms under a GNU license.
The goal is to choose a powerful language, with easy writing, to create the project. From the comparison, C++ and C# seem the most powerful and the quickest languages to work with, because they do not require launching an additional virtual machine that takes up the computer's resources. Easy writing is better provided by Java, but C# is a good compromise, in part thanks to its IDE (Integrated Development Environment), Microsoft Visual Studio.
C# seems the better language: a compromise between working directly with the resources of the computer and an easy development environment.
2.4. C# (C-Sharp) and its environment:
C# was created in 2001 by Microsoft to integrate a powerful object-oriented programming language into their .NET platform. C#, for Microsoft, had to be a simple and modern object-oriented programming language.
2.4.1. .Net architecture
.NET (dot net) is based on an architecture layered on top of Windows. This layer consists of a collection of DLLs (Dynamic Link Libraries), which can be incorporated into a project, or which are directly included in recent Windows kernels and only need to be called.
This Microsoft Windows layer is composed of thousands of classes because, as explained in the previous section, C# is entirely object-oriented. The classes (included in DLLs) can be used in the same way by any Microsoft language to create an application. The classes provided run under the .NET execution environment, called the runtime. The .NET runtime is in effect the equivalent of the Java virtual machine, but it is integrated into the Microsoft environment. The .NET runtime provides services such as:
o Loading and managed execution of applications.
o Isolation of applications from one another.
o Translation of the intermediate bytecode into native machine code while the application is running, by the Just-In-Time compiler.
o Checking of memory accesses: no access is possible outside the area allocated to the application, and the same applies to the arrays allocated in the application.
o Memory management with a garbage collector.
o Automatic adaptation to national characteristics (languages, numeric and symbol representation, keyboard transcription, etc.).
o Compatibility with COM (Component Object Model) modules, which are not managed by the .NET framework. COM is Microsoft's earlier binary standard for reusable software components, and .NET provides interoperability with existing COM components.
In the .NET architecture, classes are sorted into "namespaces". A namespace groups classes which have the same purpose, or which work in the same domain of knowledge, such as security, input/output, etc. (Appendix B)
As mentioned previously, the .NET Framework provides full interoperability between all the languages of the .NET Framework. Any class can be used in the same way in all managed .NET code (apart from the syntax of each language). .NET provides a kind of polymorphism across .NET-compatible languages: you could create a class in one language (C#), inherit from this class in another language (Visual C++), and finally use this class in a third language (VB, Visual Basic).
To be part of the .NET Framework, languages need to share the same characteristics in order to be compatible. The languages need to have the same representation of data types (size and representation in memory), defined by the CTS (Common Type System), which groups all these characteristics. The CTS is the normalization across all the .NET languages that specifies how type definitions are represented in computer memory. The CTS helps to enable cross-language integration, type safety, and high-performance code execution.
The .NET compiler generates an intermediate code which sits between machine code (low level) and source code (high level). This code is independent of the low-level code and is able to represent high-level object constructions. When the application runs, this code is not interpreted but compiled by the JIT compiler. At the first call of the execution, and only the first, the Just-In-Time compiler discovers the execution environment. This knowledge allows the JIT compiler to compile the code into the best native code for the processor model of the current user, resulting in native code optimized for that microprocessor.
Microsoft's .NET languages are:
o Visual C++;
o Visual Basic, which was refactored to be fully object-oriented;
o C#, which became the reference language of Microsoft.
Many other languages can be integrated into the .NET architecture, such as Python and Ruby, which can officially be used to develop Silverlight applications. Silverlight makes it possible to create web applications which execute on the client computer.
But Microsoft does not limit the usage of its .NET architecture to these languages only. Microsoft provides full documentation for compiler vendors who want to create a .NET-compatible version of their product. The adaptation is made in the code generated by the compiler, which allows the same classes and development tools to be used, and the language can be integrated into Visual Studio (Section 2.4.3).
Languages that want to be integrated into the .NET architecture have to follow a set of published rules. These rules make up the CLS (Common Language Specification). In addition, for each piece of code which can be reused by any part of the whole application, the compiler needs to generate a "self-describing component" for this piece of code. The information contained in the self-describing component covers the classes, properties, public methods, and libraries used in the assembly (executable, .exe, or library, .dll, files).
2.4.2. C# language and version
C# is now the main language of Microsoft. It was derived from C++, but it also takes many characteristics from recent languages such as Java. C# has evolved and now introduces some new concepts.
C# implements some good new functionality compared with C++, such as:
o Better object-oriented programming integration: everything must be incorporated into classes.
o Types have to conform to the .NET architecture, with safe type casting.
o Pointers can still be used by the developer, but should only be used for optimization.
o Automatic freeing of unused objects by the garbage collector.
o Replacement of array pointers by references to arrays. This offers the same possibilities, but more safely, and without any loss of performance.
o Better management of array manipulation, with more security.
o Integration of the "foreach" construct for writing loops.
C# can also be used to create web applications with ASP.NET or Silverlight. These are two different models for creating web applications.
o ASP.NET creates the application on the server side. It allows C# (or VB.NET) source code to be inserted into HTML web pages. The website can be used by any client without anything extra being required: the web browser interprets the page as a classical website. C# is only used to modify the information that the web browser will display to the client.
o Silverlight is a client-side web application framework. Silverlight deploys the application on the client computer, and C# is used to interact with the client. The application communicates with the server, but without using its resources, only requesting the information the client needs while using the application. Silverlight also uses XAML and WPF to provide events, buttons, etc.
C# is a very young language, but it already has several versions. Each version adds a valuable evolution to the language and to the .NET architecture. Versions 1 and 2 mostly cover the integration of new functionality and the development of C# within the .NET architecture. Since version 3, in 2007, Silverlight and rich Internet applications have come into the language. And now with version 4, since 2010, the most important innovation is parallel processing on multi-core computers.
2.4.3. Visual Studio
Visual Studio is an IDE (Integrated Development Environment) created by Microsoft for its .NET Framework.
Visual Studio gives the developer the possibility to create console applications, rich graphical user interface applications, web sites, services and web applications. The IDE gives the developer the choice of language among the .NET-compatible languages.
Like any classical IDE, Visual Studio includes a debugger. This debugger is not focused on one language, but was developed to debug cross-language applications built on the .NET architecture. For this project, the debugger will only be needed in the C# environment.
One of the good tools of Visual Studio is "IntelliSense". This is Microsoft's auto-completion tool, and it shows a short description of types in a pop-up window. While the developer is writing, IntelliSense can suggest statements to help.
Another tool, useful for a better organization of the source code, is named regions. You can enclose parts of your code in #region name … #endregion blocks, which group the functions inside into a sub-list that can be expanded or collapsed as needed.
2.4.4. GDI+
GDI (Graphics Device Interface) is the graphics library of Microsoft Windows. GDI+ offers objects to manipulate colors and font styles, draw graphics, use pens in a drawing area, etc. It offers an easy way to create and manipulate images and graphics objects without going through the DirectX library. DirectX is a much more powerful library, but it is also a lot more complex to use. GDI+ is the right tool for this project: it provides objects which can easily manipulate graphics, and is powerful enough without being too complex.
3. Analysis
3.1. Image Processing
3.1.1. Theory:
Image processing is the technology that applies algorithms to images for different purposes. It builds on imaging technology, which translates the optical view of a scene into a digital image. This digital image can then be treated by image processing algorithms.
For a human, an image seen by the eyes is a multitude of points of light. In the retina, rods and cones are the two major light-sensitive photoreceptor cells. These two cell types give humans their sensitivity to light and colors. Rods are used more by the body for low-light sensitivity; they are responsible for the monochrome vision we experience in a room with almost no light, where we can still navigate because we sense the obstacles. Cones are color sensitive: each type of cone is sensitive to a range of wavelengths of the spectral representation of light. (Figure 1)
In Figure 1, the white curves are the sensitivities of the cones, and the black curve is the sensitivity of the rods.
The computer representation is made of pixels. A pixel is a representation of a point of light. It is encoded on 32 bits, but only 24 bits are used for the color representation: 8 bits for each of the red, green and blue colors. The last 8 bits can either be left unused, or used to represent the transparency of the image; this is called the alpha channel.
Figure 1
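The 32-bit pixel layout described above (8 bits per channel, with the optional alpha channel in the top byte) can be made concrete with a few bit operations. This is a hypothetical illustration in Python, not code from the project:

```python
def pack_argb(a, r, g, b):
    """Pack four 8-bit channels into one 32-bit ARGB pixel value."""
    return (a << 24) | (r << 16) | (g << 8) | b

def unpack_argb(pixel):
    """Extract the (a, r, g, b) channels from a 32-bit pixel value."""
    return ((pixel >> 24) & 0xFF, (pixel >> 16) & 0xFF,
            (pixel >> 8) & 0xFF, pixel & 0xFF)

# A fully opaque pure red pixel
pixel = pack_argb(255, 255, 0, 0)
print(hex(pixel))            # 0xffff0000
print(unpack_argb(pixel))    # (255, 255, 0, 0)
```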
3.1.2. Color space definition:
RGB:
The RGB (Red, Green, and Blue) color space is easy to understand because all possible colors can be made from the three primary colors. This color space model has become the main model for computer graphics. Two color spaces which inherit from the basic RGB model are now mostly used in displays (LCD, phones, etc.):
o Adobe RGB: designed in 1998. This color space is designed to reproduce on a computer display the colors which can be achieved on printers.
o sRGB: developed in 1996 by a collaboration between Microsoft and HP, to be used for monitors, printers and the Internet. This design is now the default color space for many devices such as printers, monitors, phones, video cameras, etc.
The colors we perceive are not just a mix of the three components, but rather a combination of light intensity and coloration. The coloration gives the shade and the saturation. The shade is the color perceived, like blue, cyan or yellow, and the saturation is the purity of this shade, which is gray at very low saturation and maximal for a pure color. This is why, if we increase the brightness of a scene, it produces a proportional increase of the light reflected by the objects of the scene at each wavelength: the RGB components are multiplied by this constant, but the saturation and the shade do not change.
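This claim, that a uniform brightness change leaves hue and saturation untouched, can be checked numerically with Python's standard colorsys module (an illustration only; the project itself does not use Python):

```python
import colorsys

r, g, b = 0.8, 0.4, 0.2           # an orange, channels in [0, 1]
h1, s1, v1 = colorsys.rgb_to_hsv(r, g, b)

k = 0.5                            # halve the brightness of the scene
h2, s2, v2 = colorsys.rgb_to_hsv(k * r, k * g, k * b)

# hue and saturation are unchanged; only the value (brightness) scales
print(round(h1, 6) == round(h2, 6))   # True
print(round(s1, 6) == round(s2, 6))   # True
print(round(v2 / v1, 6))              # 0.5
```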
Luminance is the intensity of light received. It is normalized by the CIE (International Commission on Illumination) as a codification of the red, green and blue lights, as a combination of each of them:
Y = 0.2125 R + 0.7154 G + 0.0721 B
The three coefficients add up to 1, but each coefficient is different. This explains the difference in perceived luminance for each color: a green light will seem brighter than a red one, and a red one brighter than a blue one.
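As an illustration of this formula (a hypothetical helper, not code from the project), the weighting can be applied directly to 8-bit channel values:

```python
def luminance(r, g, b):
    """Perceived luminance of an RGB pixel, using the CIE coefficients."""
    return 0.2125 * r + 0.7154 * g + 0.0721 * b

# A pure green light appears much brighter than pure red or pure blue
print(luminance(255, 0, 0))                # about 54.2
print(luminance(0, 255, 0))                # about 182.4
print(luminance(0, 0, 255))                # about 18.4
# White keeps its full intensity, because the coefficients sum to 1
print(round(luminance(255, 255, 255), 6))  # 255.0
```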
But in the encoding of a color, a complementary piece of data needs to be added to the luminance. This complementary data is the chrominance. The chrominance is a linear combination of two numbers representing the intensities of red, green and blue. The CIE has codified this information into a chromaticity diagram which takes the form of a half ellipse. (Figure 2)
The RGB color space representation can be transformed into the HSV (Hue, Saturation, Value, or B for Brightness) representation. The HSV representation is a cylindrical representation of the RGB color space. It was developed in the 1970s to be used as a color modification tool for image editing software, and for image analysis in computer vision. The translation from RGB to HSV can be done by a simple transformation.
Figure 2
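As an illustration of that transformation, Python's standard colorsys module implements the RGB to HSV conversion (the project itself stays in RGB and works through a C# ColorMatrix):

```python
import colorsys

# colorsys works on components normalized to the [0, 1] range.
r, g, b = 0 / 255, 0 / 255, 255 / 255          # pure blue
h, s, v = colorsys.rgb_to_hsv(r, g, b)
print(h * 360, s, v)                            # hue in degrees, then saturation and value
```

A pure blue pixel comes out with a hue of 240 degrees and full saturation and value, which matches the cylindrical HSV model described above.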
4. System Design
4.1. Existing proprietary products
Nowadays, many applications have been created to give any user of a digital device some tools
to edit images. These applications provide many different tools, and some of them are aimed at
professional photographers or at graphic editors.
4.1.1. Adobe Photoshop:
Photoshop is a huge image processing application, and maybe one of the best known. The software
includes a large library of image algorithms. It is aimed at professionals, but most of its
users are non-professionals.
Photoshop has basic image processing tools such as an easy cropping tool, color modification,
and filters to apply. These basic tools are really easy to use thanks to the good graphical
interface provided by Adobe. But most of the power of Photoshop can be found in the menu tool
bar, which regroups a wide variety of algorithms used to create, modify or add effects to images.
Images can be created from scratch. Photoshop works with a masking tool: a mask is a layer where
graphical objects are drawn, and layers overlay one another. Effects can be applied to one mask,
or to a set of masks, and different light effects can be applied to the overlaid masks.
Adobe Photoshop is one of the most powerful applications, but it is mostly used by professionals
who work on graphics or photography montage.
4.1.2. Gimp:
Gimp is the open source counterpart of Photoshop. The development team tries to provide the same
functionality as the Adobe product. Gimp is a good solution for graphics and image processing on
operating systems such as Linux, where Adobe products cannot be used.
Gimp provides a wide range of good image processing tools, like Adobe Photoshop, but it is quite
difficult to use. On the other hand the software is free, and constantly updated by the
development team.
4.1.3. Adobe Lightroom:
Adobe Lightroom is image processing software created for professional photographers. It was
designed to simplify the post-processing of their work.
Lightroom is built to manage a library of images, adjust images, and export them to a printer or
publish them in a website gallery. The import tool of the software gives the possibility to
select images before importing them onto the computer.
The principal part of Lightroom is the development section. The section is split into a
histogram and some functions that act directly on the image. All the modifications are
non-destructive in this software: they are stored in a separate file from the original picture
file. The user needs to print or export his picture to apply the modifications. At that moment
a new file is created, either in the classical compressed format (JPG) or in a new digital
format created by Adobe, the Digital Negative format (DNG).
Among the basic picture adjustments, we can find algorithms to adjust the temperature of an
image, brightness and contrast, exposure level, saturation level, and conversion into a black &
white picture. But Adobe also created some new algorithms, such as clarity and vibrance. Clarity
changes the contrast in the middle range of the histogram values, and vibrance increases the
intensity of the least represented colors in the picture without burning the other colors.
Another novelty is that Lightroom can correct lens deformation. The software extracts the EXIF
information to know which lens was used, and applies the right correction to the picture.
This application is created for a professional use of photography, to give the photographer more
time to practise photography rather than post-processing his images. It is a solution for
professionals who do not need all the functionality of Photoshop, and a quick answer to their
main post-processing problem.
4.1.4. Aperture:
Aperture is a piece of software similar to Lightroom, but created by Apple. It works only on the
Mac operating system.
4.2. System requirements
The project is designed to be like commercial image processing software. It provides some quick
access tools to adjust an image.
The system needs to load an image in a standard compression format, JPEG (Joint Photographic
Experts Group, an algorithm to encode and decode photographic images used by all digital devices
that capture pictures). The system has to save the modified picture following the same scheme.
The project will display the information stored in the image (EXIF), and calculate a histogram
of it.
Some modification tools will be provided, such as:
Brightness modification
Contrast adjustment
Saturation level adjustment
Color filter adjustment
4.3. Design
4.3.1. Lifetime of the project
The project follows a plan of prototyping and evaluation. This plan consists of creating a first
quick prototype with some basic functionality, then evaluating this prototype and debugging it.
When the evaluation is reasonably good, a new prototype is created with more functionality, and
the evaluation process restarts. This cycle of prototyping and evaluation can be repeated until
a prototype has evolved enough to be released as the final project.
During the lifetime of this project over the honours year, several prototypes were created and
evaluated before moving on to more features. On the day of the presentation the project will be
in the evaluation phase of the threading implementation.
4.3.2. UML
The diagrams were created in Visual Studio 2010. This can be done by adding a modelling project,
inside which Visual Studio can create UML diagrams. The diagrams are stored as part of the
project, and can help anyone who joins the project to understand its basic architecture without
needing an external source.
4.3.2.1. Use case diagram:
The user will use this application to perform some simple actions. All the actions a user can
perform on the application can be drawn in a use case diagram. The use case diagram represents
all the actions that can be done by a user, without considering the software engineering or the
technologies used in the project.
Figure 3
The user can do two main tasks: get information, or make adjustments. In the information
section, the user can get information about the EXIF of the picture, or read a histogram. In the
adjustment section, the user can do three actions: adjust the brightness, the contrast or the
saturation. For the saturation, the global and the specific color adjustments are two options
inheriting from the main action.
4.3.2.2. Class diagram:
The project will be split into the application and the adjustment library. The adjustments will
be stored in an additional library, compiled as a DLL (dynamic link library).
Figure 4
5. Implementation
5.1. User Interface
The project is designed to be usable by everyone. The user could be a professional photographer,
or a standard user who wants to adjust one of his pictures.
The main requirement for a good graphical interface is a clear view of the possibilities
provided by the application. On the first launch of the application, the user has to understand
quickly where he needs to act on the interface to do what he wants.
The interface is designed to be easily understandable at first sight for any user. To that end,
some paper drafts were drawn. The drafts were evaluated, and one interface was selected to be
tried in the Graphical User Interface designer of Visual Studio 2010. (Figure 5)
Figure 5
The design is composed, in the main area, of a PictureBox component. The PictureBox element is a
graphical element in the C# designer that loads a picture and displays it to the user. It can
load different types of images, such as .bmp (bitmap), .jpg and .png.
On the left of the interface, a tab panel is implemented. This panel displays information about
the picture, or modification tools, depending on the selected tab. There are currently 4 tabs.
The first one is used to display information about the image (EXIF) and the histogram
representing the intensity of each color component in the image. The second tab provides some
tools to adjust the brightness, contrast and saturation of the image. The level of adjustment
can be selected with a track bar, moved right or left to increase or decrease the intensity of
the effect. The TrackBar element allows a range of values between -100 and 100, and defaults
to 0.
The third panel is a Hue/Saturation adjustment panel. In this panel the user can adjust the
level of intensity of a particular color. The modification of this level is controlled by a
TrackBar with the same range of values as in the adjustment panel.
The last panel is a preparation for the next evolution of the prototype. It contains nothing for
the moment.
On the left side of the interface there is a column. This column is empty, but like the "Noise
Reduction" tab it will be used in a future prototype. In the original design, this column
displays thumbnails of the loaded pictures. The user can navigate to the picture he wants to
modify by clicking on its thumbnail to see it in the main zone, and then apply modifications
to it.
In the menu bar, the "File" MenuStrip gives two other menu items, to load and save an image.
5.1.1. Loading an image into the application
When the user decides to load an image, he uses the File / Load... menu. This opens a load
dialog box, a class from the C# library. In this box, the user can navigate the directory tree
of his computer to search for his image.
In this load dialog box, the application applies a filter to restrict the display to JPG images
only.
if (open.ShowDialog() == DialogResult.OK)
{
    Bitmap temp = new Bitmap(open.FileName);
    img.loadImage(open.FileName);
    MainBox.Image = img.getImage();
    //TODO modify GUI to be automatic stretch & resize
    MainBox.SizeMode = PictureBoxSizeMode.Zoom;
}
The OpenFileDialog object is initialised with a filter to show only .jpg images. This piece of
code opens the image selected in the dialog box, sets it into a Picture object, and loads the
image into the main picture display component (MainBox). The image is then stretched to fit in
the picture box zone.
The Picture object contains a bitmap, and the various pieces of information the image may carry,
such as EXIF.
5.1.2. Saving a modified image
The application saves the image in JPG format. The simplest way to save the last modified image
is to take it from the main picture display and give it to the SaveFileDialog tool to save it as
a JPG image.
5.2. EXIF of the picture
The EXIF of a picture is information stored inside the picture file. This information can be
very diverse: how the picture was taken, the different parameters of the camera that took it,
etc.
All the information is stored as metadata. A metadatum is a simple piece of information stored
at a specific place in the file; each one is encoded at exactly the same place in every JPG
file. Each metadata entry in the JPG file is exposed as a property of the Image class in C#. To
sort out the properties we are interested in, we need to know exactly which address corresponds
to which property, and how it is encoded. The project focuses only on the specific properties
grouped in the table below. These elements are the main ones a photographer, or anyone
interested in photography, wants to look at.
Property name          Address in JPG file    Encoding
Focal Length           37386                  Int16
ISO Speed              34855                  Int16
Aperture               33437                  Int16
Exposure Time          33434                  Int16
Creation Date          306                    Ascii
Make of the Camera     271                    Ascii
Model of the Camera    272                    Ascii
Metering Mode          37383                  Int16
Orientation            274                    Int16
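The identifiers above are the standard EXIF tag IDs in decimal; a small lookup table (sketched here in Python, while the project reads the same IDs through C#'s Image properties) makes the mapping explicit:

```python
# Standard EXIF tag IDs (decimal), matching the table above.
EXIF_TAGS = {
    37386: "Focal Length",
    34855: "ISO Speed",
    33437: "Aperture",
    33434: "Exposure Time",
    306:   "Creation Date",
    271:   "Make of the Camera",
    272:   "Model of the Camera",
    37383: "Metering Mode",
    274:   "Orientation",
}

def tag_name(tag_id):
    # Devices that store no EXIF simply yield no property for the id,
    # so an unknown id falls back to a placeholder.
    return EXIF_TAGS.get(tag_id, "Unknown")

print(tag_name(33434))   # Exposure Time
```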
A picture taken by any camera is a combination of four parameters (focal length, aperture,
exposure time, ISO speed). These parameters involve optical techniques, and electronic devices
in the case of a digital camera. Most cameras do not give the possibility to play with these
parameters independently, but digital SLR cameras do. That is why this information is important
for a photographer.
Focal Length: the focal length is better known as the zoom. In fact it is a measure of the
convergence of the light in a lens.
ISO Speed: the ISO speed defines the sensitivity of the sensor. If the value is high, the sensor
is very sensitive to light, but it will also produce noise in the image, because the sensor
starts to heat due to the amount of light coming in. The default ISO speed for a picture without
noise is 100; beyond that it depends on the camera.
Aperture: the aperture is a number representing the maximum opening of a lens. The opening is
large when the number is low, and the hole gets smaller as the number increases. For example an
aperture of f/1.8 gives a very bright lens, while f/4 is darker because the stop the light goes
through is smaller. The aperture also determines the depth of field in a picture: the lower the
aperture number, the shallower the depth of field. This can be seen as a very blurred background
at a large aperture (low number), and a detailed background at a small aperture.
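The amount of light admitted scales with the inverse square of the f-number, which is why f/1.8 and f/5.6 differ so much; a quick sketch (Python, illustrative only):

```python
def light_ratio(f_small, f_large):
    """How many times more light the smaller f-number admits,
    using the inverse-square relation between f-number and light."""
    return (f_large / f_small) ** 2

# f/1.8 admits roughly ten times more light than f/5.6.
print(round(light_ratio(1.8, 5.6), 1))
```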
Figure 6. f/1.8 (left) and f/5.6 (right)
In the first picture, the aperture is open at the maximum of the lens (f/1.8) on a 50 mm, and
the second was taken at f/5.6, which lets in roughly ten times less light. The depth of field in
the first picture, with its very large aperture, is reduced: only the flower in the middle is
sharp, and even the leaves are blurred. In the second example the depth of field is greater: the
flower and the nearby leaves are in the sharp section of the focus, and the background carries
more information than in the first one, but is still blurred.
Exposure Time: this is the time during which the sensor is exposed to the light to get the
information needed to create the image. On a film camera, it was the time during which the film
was exposed and the light printed onto the image. Now the sensor is digital, and converts light
into data to create the image. This time can vary a lot; it depends on the picture you want to
make. You can freeze the image with a very quick exposure time, or make the image dynamic with a
controlled movement of the camera, which is then translated into the image.
The left image has a quick exposure time, freezing the image: the wave is caught in the instant
of its movement. In the second image, on the right, the exposure time is extended: the sea makes
something like a fog on the rocks.
Exposure time, aperture and ISO speed are the keys to the intensity of color in a picture, and
to the quality of the picture.
5.3. Adjustment Library
The purpose of the project is to make some adjustments on images. All the adjustment functions
are stored in a package outside the Graphical User Interface, in a dynamic link library (DLL).
This DLL has only one class, with many functions that can be used by any project that includes
the DLL among its linked libraries. The main point of the DLL is to be reusable in any project,
without the need to rewrite everything.
This DLL gives the adjustment functions for:
Adjust Brightness and Contrast
Saturation level
Saturation of specific color.
The algorithms are based on a color matrix modification applied to the whole image. C# has a
class named "ColorMatrix". This matrix is a representation of a pixel encoding. It is composed
of a 5 x 5 matrix: the first three rows represent the three main components of the color space
(Red, Green, Blue), then there is a row for the alpha channel. The last row is used to add a
value to the color channel component corresponding to each column.
R 0 0 0 0
0 G 0 0 0
0 0 B 0 0
0 0 0 1 0
A A A 0 1
The 4th row, representing the alpha channel, is never used. The alpha channel represents the
transparency of the image, and the project requirements include no function involving
transparency modification, so the value of the alpha channel never changes.
The matrix is used to avoid the other algorithm, which consists of reading each pixel, modifying
the value of the components that have to be changed, writing the pixel back, and moving on to
the next one. This technique of iterating over each pixel manually is time consuming, especially
on images taken with a DSLR camera, which can easily be around 18 Mpx, i.e. 3456 * 5184 =
17,915,904 pixels to process.
The ColorMatrix class in C# is used through the ImageAttributes class, which applies the new
color matrix directly. A new image is then drawn and displayed to the user to show the result of
the current modification.
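The effect of the 5 x 5 matrix on a single pixel can be sketched outside C# (pure Python; the row-vector convention [R, G, B, A, 1] times the matrix is how GDI+ applies a ColorMatrix, with components in the 0..1 range):

```python
def apply_color_matrix(pixel, m):
    """Row vector [r, g, b, a, 1] multiplied by a 5 x 5 matrix, as GDI+ does.
    The fifth row of the matrix is the translation (offset) row."""
    v = list(pixel) + [1.0]
    out = [sum(v[i] * m[i][j] for i in range(5)) for j in range(5)]
    return out[:4]

# Identity matrix with a +0.2 brightness translation on the color columns.
m = [[1, 0, 0, 0, 0],
     [0, 1, 0, 0, 0],
     [0, 0, 1, 0, 0],
     [0, 0, 0, 1, 0],
     [0.2, 0.2, 0.2, 0, 1]]
print(apply_color_matrix((0.5, 0.5, 0.5, 1.0), m))   # each color channel raised by 0.2
```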
5.3.1. Brightness and contrast modification
The contrast modification is the coefficient by which each component is multiplied, and the
brightness is the amount added to each component.
float v = 0;
v = (float)(cValue + 100) / 100f;
float m = 0.5f * (1.0f - v) + (float)bValue / 100f;
/// The first three lines of the matrix scale the Red, Green and Blue components,
/// then comes the line for the Alpha channel,
/// and the last line adds the brightness offset to each color component
cm = new ColorMatrix(new float[][]{
    new float[]{v,0,0,0,0},
    new float[]{0,v,0,0,0},
    new float[]{0,0,v,0,0},
    new float[]{0,0,0,1,0},
    new float[]{m,m,m,0,1}
});
ImageAttributes ia = new ImageAttributes();
ia.SetColorMatrix(cm);
The cValue in the previous code represents the contrast value given by the track bar, and bValue
the value of the brightness track bar. The contrast value is first translated into a coefficient
to multiply the color components by: if the contrast value is 75, the color components are
multiplied by 1.75. Then the brightness is added. The brightness value is computed as a number
between -1.0 and 1.0. If the contrast is changed at the same time, the contrast value is taken
into account in the algorithm: the brightness is shifted by half of the difference between the
contrast coefficient and the neutral coefficient (0.5f * (1.0f - v)), which keeps the contrast
stretch centred on middle grey.
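The same arithmetic can be sketched per channel (Python, channel values in 0..1; cValue and bValue are the -100..100 slider values, as in the C# code):

```python
def brightness_contrast(channel, c_value, b_value):
    """Contrast scales the channel; the 0.5 * (1 - v) term re-centres the
    stretch on middle grey before the brightness offset is added."""
    v = (c_value + 100) / 100.0                 # contrast coefficient, 0..2
    m = 0.5 * (1.0 - v) + b_value / 100.0       # offset, as in the ColorMatrix
    return channel * v + m

# Middle grey is unchanged by pure contrast, while other values are stretched away from it.
print(brightness_contrast(0.5, 75, 0))   # stays at 0.5
print(brightness_contrast(0.8, 75, 0))   # pushed brighter
```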
5.3.2. Saturation
The color saturation describes the intensity of color in an image. Highly saturated images have
overly bright colors, while fully de-saturating a color image translates it into a monochrome
one. The saturation algorithm uses three weight values, given by Adobe, as the luminance values
of the colors. Green is the color the human eye recognises best, so its luminance value is high;
blue is the color the human eye perceives least, and red is in the middle range. The sum of the
three values is 1. The saturation value given by the track bar must be added to the weights: the
saturated color gets its own weight, the complementary colors get theirs, and the whole matrix
is then applied.
The main component of each color is weighted by its luminance with the saturation value added on
top, while the complementary components are weighted by the luminance vector only.
float red = (float) 0.3086;
float green = (float) 0.6094;
float blue = (float) 0.0820;
float sat = (float) (satValue + 100) / 100f;
float redSaturation = (1 - sat) * red + sat;
float redSaturationComp = (1 - sat) * red;
float greenSaturation = (1 - sat) * green + sat;
float greenSaturationComp = (1 - sat) * green;
float blueSaturation = (1 - sat) * blue + sat;
float blueSaturationComp = (1 - sat) * blue;
cm = new ColorMatrix(new float[][]{
    new float[]{redSaturation,redSaturationComp,redSaturationComp,0,0},
    new float[]{greenSaturationComp,greenSaturation,greenSaturationComp,0,0},
    new float[]{blueSaturationComp,blueSaturationComp,blueSaturation,0,0},
    new float[]{0,0,0,1,0},
    new float[]{0,0,0,0,1}
});
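A quick way to check the weights: with the slider at -100, sat becomes 0 and every output channel collapses to the same luminance-weighted grey. A Python sketch of the same matrix arithmetic (illustrative, not the project code):

```python
LUM = {"r": 0.3086, "g": 0.6094, "b": 0.0820}   # weights used in the C# code

def saturate(r, g, b, sat_value):
    """sat_value is the -100..100 slider; s = 0 fully desaturates, s = 1 is identity."""
    s = (sat_value + 100) / 100.0
    def chan(main):
        # own channel gets (1 - s) * weight + s; the others get (1 - s) * weight
        return sum(((1 - s) * LUM[k] + (s if k == main else 0)) * v
                   for k, v in zip("rgb", (r, g, b)))
    return chan("r"), chan("g"), chan("b")

# Fully desaturated: all three channels equal the same weighted grey level.
print(saturate(0.2, 0.8, 0.4, -100))
```

At a slider value of 0, s is 1 and the matrix reduces to the identity, leaving the pixel unchanged.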
5.3.3. Specific color saturation
The color filter did not give the results wanted. Instead, the algorithm gives the possibility
to apply a color filter to the image.
Normally, to saturate a specific color, the channel of this color has to be multiplied by a
vector while the other channels are left unchanged. De-saturation has to apply a filter that the
other colors can go through, but not the color we want to de-saturate. Human vision is a
perception of color: if a filter is placed before the eye analyses the perceived color, one
color can be removed (filtered out).
Figure 7
There are 3 filters to block the primary colors:
Cyan: blocks Red, lets Blue and Green through
Yellow: blocks Blue, lets Red and Green through
Magenta: blocks Green, lets Red and Blue through
float blue = (float)0.0820;
float sat = (float)(satValue + 100) / 100f;
float blueSat = (1 - sat) * blue + sat;
float bComp = (1 - sat) * blue;
cm = new ColorMatrix(new float[][]{
    new float[]{1,0,0,0,0},
    new float[]{0,1,0,0,0},
    new float[]{0,0,1,0,0},
    new float[]{0,0,0,1,0},
    new float[]{0,0,blueSat,0,1}
});
ImageAttributes ia = new ImageAttributes();
ia.SetColorMatrix(cm);
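For reference, a literal channel-blocking filter as described above (a cyan filter removing red) would simply zero one diagonal entry of the matrix; a sketch of that idea in Python (this is the intended behaviour, not the matrix the project ended up with):

```python
# A cyan filter in ColorMatrix form: the red diagonal entry is zeroed,
# so the red channel is blocked while green and blue pass through.
CYAN = [[0, 0, 0, 0, 0],
        [0, 1, 0, 0, 0],
        [0, 0, 1, 0, 0],
        [0, 0, 0, 1, 0],
        [0, 0, 0, 0, 1]]

def apply(pixel, m):
    v = list(pixel) + [1.0]
    return [sum(v[i] * m[i][j] for i in range(5)) for j in range(5)][:4]

print(apply((0.9, 0.4, 0.7, 1.0), CYAN))   # red component blocked
```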
5.4. Histogram
One of the ways to evaluate whether an image is good is to check its histogram. The histogram
shows, graphically, statistics of the intensity of the colors present in the image. A histogram
can also show the brightness of an image, for a monochrome picture.
Drawing a histogram requires calculating some statistics as pre-processing. A pixel stores each
color component as an 8-bit value, so the value for a color varies from 0 to 255 (2^8 = 256). A
value of 0 for a color represents a total absence of that color, and 255 means the color reaches
its maximum intensity. Black is the absence of all three primary components, so it is encoded
Red = 0, Green = 0, Blue = 0, and the inverse for white; pure blue is encoded Red = 0,
Green = 0, Blue = 255, and it is possible to make any color by varying the bits of each
component.
The statistics store the number of pixels having each specific value from 0 to 255, in three
different sets. Each set is the storage for one color component, and is composed of 256 values.
The total number of pixels to process is given by the height and width of the picture, e.g.
3456 * 5184 = 17,915,904 pixels. This number of pixels can be recovered by summing the 256
values of any one color set.
private void calculateColor()
{
    Color pix;
    Bitmap bit = (Bitmap)imgSrc;
    for (int height = 0; height < imgSrc.Height; height++)
    {
        for (int width = 0; width < imgSrc.Width; width++)
        {
            // GetPixel expects (x, y), i.e. the column index first
            pix = bit.GetPixel(width, height);
            red[pix.R]++;
            stat.incRed(pix.R);
            green[pix.G]++;
            stat.incGreen(pix.G);
            blue[pix.B]++;
            stat.incBlue(pix.B);
        }
    }
}
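The same per-channel counting can be sketched compactly (Python; pixels as (R, G, B) tuples):

```python
def channel_histograms(pixels):
    """256 bins per channel; the bins of any one channel sum to the pixel count."""
    red, green, blue = [0] * 256, [0] * 256, [0] * 256
    for r, g, b in pixels:
        red[r] += 1
        green[g] += 1
        blue[b] += 1
    return red, green, blue

pixels = [(0, 0, 255), (0, 0, 255), (255, 255, 255)]
red, green, blue = channel_histograms(pixels)
print(blue[255])   # prints 3: every pixel in this toy image is fully blue
```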
Then the histogram has to be drawn into a graphics image. This graphics image is created from
the height and width of the histogram window in the graphical interface.
Before drawing the lines, the drawing tool needs to know the size of the pen to use. It can be
calculated as the width of the graphic where the histogram will be drawn, divided by the number
of values drawn, 256. The pen has the same size for each color; only the color of the pen
changes. Each line drawn for the histogram then has a height: the maximum possible height for a
value, drawn so that the line touches the ceiling of the window, corresponds to the number of
pixels of the image.
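The pen width and bar height computation can be sketched as follows (Python; the window dimensions are illustrative, and scaling against the largest bin count rather than the total pixel count is an assumption made for the sketch):

```python
def bar_geometry(counts, win_width, win_height):
    """Pen width = window width / number of bins; bar heights scaled so the
    largest count reaches the ceiling of the window (an assumption here)."""
    pen_width = win_width / len(counts)
    peak = max(counts) or 1        # avoid dividing by zero on an empty histogram
    heights = [win_height * c / peak for c in counts]
    return pen_width, heights

pen, heights = bar_geometry([10, 40, 20, 40], 512, 100)
print(pen)              # 128.0 pixels per bar for this 4-bin toy example
print(max(heights))     # 100.0: the tallest bar touches the ceiling
```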
5.5. Threading
Image processing requires a lot of computer resources and calculation time on the processor. A
thread is a unit of execution launched in parallel to carry out tasks. In fact every program is
threaded on a modern operating system: there is one main thread, and the operating system and
other software run as further threads. Threads carry out tasks in parallel without blocking each
other's execution, so two tasks seem to be executed simultaneously.
In this project, the main consumers of resources are the calculation of the statistics and the
drawing of the histogram. To avoid any blocking of the Graphical User Interface due to the time
the computer takes to compute the statistics of the image, the best method is to execute them in
parallel with the application. One thread does the calculation in parallel with the main
executing thread, where the GUI is. The thread calculates and draws the histogram, then signals
the main thread that it has finished its job; the main thread can then get back the image of the
drawn histogram and display it to the user.
In the project, the PictureBox object has a "Refresh" method, but this method is called too
often, and it is tied to the main thread, with no way to detach it in order to include the
synchronisation in it. The solution is to create a second thread, whose role is to be the
handler of the display of the histogram when it is ready.
The project in fact has 3 threads, which must be synchronised with each other to work in
parallel without creating any deadlock and blocking the application. Different techniques were
tried during the project, but the technique finally used is a semaphore.
The other techniques could solve the same problem, but often with more complexity. This
synchronisation problem could also be solved by:
Background worker: the BackgroundWorker is a class which does work in the background, and
can report the progress of its job. It is used for simple problems which quickly require
large computational resources.
Interlocked: the Interlocked class gives some tools to perform arithmetic operations
without being interrupted.
ReaderWriterLock: this class implements the solution to the readers and writers problem.
This problem is one of the classic study examples used to explain threads and deadlock.
Mutex: a Mutex is a class implementing mutual exclusion. It is mainly used to synchronise
threads which do not necessarily belong to the same process. A mutex can be seen as a
flag, and only one thread at a time can possess this flag. Every thread that wants the flag
is put into a wait state, and when the flag becomes free, only one thread can acquire it.
The project implements another type of synchronisation, with a semaphore. The semaphore is a
simple object with only a value in it, initially set to 0. This object is shared between
threads, and a thread can try to acquire a unit from the semaphore, or release a unit into it.
These two methods are protected inside a critical section with the Monitor class.
When a thread wants to acquire the semaphore, the semaphore gives it a unit (in fact reduces its
value by one), but if the value in the semaphore is 0, the semaphore puts the asking thread into
a waiting state until another thread releases a unit. The Release method enters the critical
section to add a unit, and then signals the threads waiting on the semaphore that a new unit has
just been posted and that they should wake up.
class Semaphore
{
    private int val;

    public Semaphore()
    {
        val = 0;
    }

    public void acquire()
    {
        Monitor.Enter(this);
        while (val == 0)
        {
            Monitor.Wait(this);
        }
        val--;
        Monitor.Exit(this);
    }

    public void release()
    {
        Monitor.Enter(this);
        val++;
        Monitor.Pulse(this);
        Monitor.Exit(this);
    }
}
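The same monitor-based semaphore can be reproduced with Python's threading.Condition, which plays the role of Monitor.Wait/Pulse (a sketch for illustration; C#'s Monitor is what the project actually uses):

```python
import threading

class CountingSemaphore:
    """Value starts at 0; acquire blocks while the value is 0,
    release adds a unit and wakes one waiting thread."""
    def __init__(self):
        self._val = 0
        self._cond = threading.Condition()

    def acquire(self):
        with self._cond:                 # enter the critical section
            while self._val == 0:
                self._cond.wait()        # like Monitor.Wait
            self._val -= 1

    def release(self):
        with self._cond:
            self._val += 1
            self._cond.notify()          # like Monitor.Pulse

sem = CountingSemaphore()
result = []
# The worker plays the histogram thread: it finishes its drawing, then releases.
worker = threading.Thread(target=lambda: (result.append("drawn"), sem.release()))
worker.start()
sem.acquire()                            # the handler blocks until the worker releases
worker.join()
print(result)                            # the handler only proceeds once the work is done
```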
The synchronisation is between the painting handler thread and the histogram creator thread.
Since the creation of the histogram must be signalled, the histogram creation thread calls the
release method at the end of the histogram drawing function, when the histogram image is ready
to be picked up by the handler. During this time the handler thread is trying to acquire the
semaphore, but until the histogram is ready the handler is kept in a wait state. When the
wake-up signal is given by the release method, the handler thread acquires the semaphore, picks
up the histogram image from the histogram thread, and then displays it to the user.
Graphical components, such as text boxes and track bars, cannot be accessed from other threads.
The only thread that can access a component is the thread that created it. The [STAThread]
attribute on the main function indicates this ownership (STA: Single Threaded Apartment).
The solution for accessing objects of the graphical interface from a parallel thread is to
resort to delegates. A delegate is an object capable of executing a function; it is a relative
of the function object in C++. A delegate type is declared with the keyword "delegate" and the
signature of the function to point to. An object of that delegate type can then be created
around a named function, and this object can be called through the Invoke() method from a
parallel thread. Invoke creates a safe pipe between the two threads.
6. Evaluation
6.1. System testing
The application was developed on the basis of an evolving prototype. The system was tested all
along the development lifetime. This technique allows bugs or design errors to be discovered and
quickly changed, without crashing the whole application or changing the development plan.
The first set of tests was made on the design of the graphical interface. The tests tried to
evaluate the interface and see whether any elements could be better organised. The tests were
made with various unitary console writes to check the actions performed on the interface.
The second set of tests was made after the implementation of the EXIF print tool. EXIF data is
placed in the same place in every JPG image, so for any photograph taken with a DSLR, a compact
or a phone, the image should include EXIF. The tests worked well, especially on DSLR and compact
devices. The aperture only gave wrong values under f/2; for example an aperture of f/1.8 was
transcribed as f/.9. The other thing that came back during the tests is that not all devices
register EXIF: for example some pictures from phone cameras do not have any EXIF, and a null
value is now set if no EXIF is found.
The third set of tests is on the adjustment library. The library must produce the adjusted image
as quickly as possible, but must also render well on the displayed image. The brightness and
contrast algorithms give back an image comparable to commercial software like Adobe Lightroom,
and the same goes for the saturation algorithm, which to human perception gives exactly the same
rendering.
The last set of tests is on the implementation of the histogram tool, and the threading of the
application. The histogram worked without threading, but stopped the application for around 10
seconds, and the same again every time it needed to be recomputed. That is the main reason for
threading the application: the histogram calculation must not disturb the normal use of the
whole software. Threading the application involved many different attempts to find the best way
to thread, with a lot of time spent getting the threads to interlock without blocking the GUI.
This included trying the Mutex and BackgroundWorker possibilities. The threads now interlock
without causing any deadlock, but the delegate part is not working. The delegate implementation
is very recent in the development lifetime; the problem was found when the threads were working
properly but no histogram was displayed to the user.
The algorithms were tried against the commercial software Adobe Lightroom. The brightness and
contrast algorithms are a little less powerful than the commercial product, but have significant
enough results to be classified as good for the purpose of the application. The saturation
algorithm gives the same results as Lightroom. This algorithm is slower to apply to an image,
but the result is the same as the commercial product. The time is not the main concern for a
general user, but it would count for professional use.
6.2. Usability evaluation
The usability evaluation was made with people more or less related to photography.
The evaluation for non-photographers is based on the usage of the interface and the speed of the
application. Users found the interface easy to use, mainly thanks to the tab classification of
the different options (information display and adjustments). The speed of the application was
good enough for them. Some points of the interface could be modified for better general
usability, such as the possibility of entering the adjustment value directly as a number, and a
cropping tool.
For photographers, the application is too slow in applying the modifications, but the interface
is nice to use. The tabs came back as a good way to classify the different options provided. A
cropping and rotation tool would be a real improvement to the application. But most of the
feedback is that this is a good start for an image processing tool for amateurs.
7. Conclusion
This project was carried out in a fairly new programming language (C#) and uses libraries that were rarely touched during my years of study. The project covers a subject that brings together many technologies and requires a lot of knowledge to learn. Image processing demanded much learning about how images are represented inside a computer, and how that representation maps onto the programming language used. A lot of theory had to be understood before implementing any of the algorithms: the idea of a histogram, for example, is simple to grasp, but building one from scratch, and the mathematics behind it, must be understood and implemented really well to avoid errors in the display. The different experiments with saturation values took a lot of time, but most of the time was spent on the threading methods in C#. C# was not a language taught during my last four years of study, but it is not really hard to pick up, and it is possible to start creating an application in it with only a general knowledge of object-oriented programming. This project draws on most of the knowledge from the last four years of study, in France and at Robert Gordon University, and tries to bring it together in a single project.
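The histogram construction mentioned above reduces, at its core, to counting occurrences of each channel value; a minimal sketch (the bin layout is a standard convention, not taken from the project's code):

```java
public class Histogram {
    // Build a 256-bin histogram from 8-bit grey values: one count per pixel.
    static int[] compute(int[] pixels) {
        int[] bins = new int[256];
        for (int p : pixels) {
            bins[p]++;
        }
        return bins;
    }

    public static void main(String[] args) {
        int[] bins = compute(new int[] {0, 0, 128, 255});
        System.out.println(bins[0]);   // → 2
        System.out.println(bins[128]); // → 1
        System.out.println(bins[255]); // → 1
    }
}
```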
8. Future Development
As the usability test feedback revealed, the application should gain a cropping tool. This cropping tool is currently in development, but is not yet functional enough to be presented as a usable tool: it only allows drawing a selection rectangle from point A to point B on the image, without actually resizing the image.
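Turning the two drag points into a well-formed selection rectangle is a small but easy-to-get-wrong step, since the user may drag in any direction; a minimal sketch (class and method names are illustrative, not the project's actual code):

```java
public class Selection {
    // Normalise two drag points into (x, y, width, height),
    // regardless of the drag direction.
    static int[] rectFrom(int ax, int ay, int bx, int by) {
        int x = Math.min(ax, bx);
        int y = Math.min(ay, by);
        return new int[] {x, y, Math.abs(ax - bx), Math.abs(ay - by)};
    }

    public static void main(String[] args) {
        int[] r = rectFrom(120, 80, 40, 200); // dragged right-to-left
        System.out.println(r[0] + "," + r[1] + "," + r[2] + "," + r[3]); // → 40,80,80,120
    }
}
```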
In future development, the application could gain a noise-reduction tool. This would be useful for reducing the noise produced by highly sensitive (high-ISO) sensor settings; such noise is caused by the heat of the sensor during the exposure.
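The report does not specify a noise-reduction algorithm; a 3x3 median filter is one standard choice for removing this kind of isolated sensor noise, sketched here with illustrative names:

```java
import java.util.Arrays;

public class MedianFilter {
    // Replace each interior pixel by the median of its 3x3 neighbourhood
    // (borders are left unchanged for brevity).
    static int[][] filter(int[][] img) {
        int h = img.length, w = img[0].length;
        int[][] out = new int[h][];
        for (int y = 0; y < h; y++) out[y] = img[y].clone();
        for (int y = 1; y < h - 1; y++) {
            for (int x = 1; x < w - 1; x++) {
                int[] n = new int[9];
                int k = 0;
                for (int dy = -1; dy <= 1; dy++)
                    for (int dx = -1; dx <= 1; dx++)
                        n[k++] = img[y + dy][x + dx];
                Arrays.sort(n);
                out[y][x] = n[4]; // median of the nine neighbours
            }
        }
        return out;
    }

    public static void main(String[] args) {
        int[][] noisy = {
            {10, 10, 10},
            {10, 255, 10}, // a single "hot pixel"
            {10, 10, 10},
        };
        System.out.println(filter(noisy)[1][1]); // → 10
    }
}
```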
The main improvement would be a tool for opening multiple images. This would allow the user to load several images into the interface and then graphically select the one to adjust.
Appendix A – References
[1] Wilhelm Burger & Mark J. Burge, "Digital Image Processing: An Algorithmic Introduction Using Java", 2007
[2] Tobin Titus, Fabio Claudio Ferracchiati, Tejaswi Redkar & Srinivasa Sivakumar, "C# Threading", 2003
[3] Gérard Leblanc, "C# et .NET versions 1 à 4", 2009
[4] Adobe Systems, "Adobe RGB (1998) Color Space Specification", 2005
Appendix B – C# namespaces
Namespace: purpose (example classes)
System: basic type and console access (Int32, Int64, Int16, Byte, Char, String, Float, Double, Console, Type)
System.Collections: object collections (ArrayList, Hashtable, Queue, Stack, SortedList)
System.IO: file access (File, Directory, Stream, FileStream, BinaryReader, BinaryWriter, TextReader, TextWriter)
System.Data.Common: database (ADO.NET) access (DbConnection, DbCommand, DataSet)
System.Net: network access (Sockets, TcpClient, TcpListener, UdpClient)
System.Reflection: metadata access (FieldInfo, MemberInfo, ParameterInfo)
System.Security: security control (Permissions, Policy, Cryptography)
System.Windows.Forms: Windows-oriented components (Form, Button, ListBox, MainMenu, StatusBar, DataGrid, PictureBox, TrackBar)
System.Web.UI.WebControls: web-oriented components (Button, ListBox, HyperLink, DataGrid)
Appendix C – Presentation slides
Appendix D – Project Log
Meetings took place on Mondays during the first semester, and then on Thursdays during the second semester.
From the 4th of October 2010 to the 25th of October 2010, the requirements were discussed and decided, and a first prototype following the system design was implemented.
From the 1st of November 2010 to the 17th of February 2011, meetings covered progress on implementing the different requirements. Each meeting consisted of showing the new implementation and discussing possible improvements to it, or the next requirement to implement in the project.
From the 17th of February to the 3rd of March 2011, meetings were about the implementation of the histogram system. The idea of threading the application came up during discussion with the supervisor, because of the time taken to draw the histogram.
From the 10th of March to the 7th of April 2011, meetings covered the implementation of threads and an analysis of the threading techniques available under the .NET architecture, with discussion and trials of different implementation techniques to solve the mutual-exclusion threading problem.
From the 7th of April until the last meeting on the 21st of April 2011, meetings covered progress on the threading problem and the construction of this report.