
Proceedings of the International Conference on Mathematical and Computer Sciences

Jatinangor, October 23rd-24th, 2013


ISBN: 978-602-19590-4-6

Proceedings of

INTERNATIONAL CONFERENCE ON

MATHEMATICAL AND COMPUTER SCIENCES


PREFACE

This event is a forum for mathematicians and computer scientists to discuss and exchange information and knowledge in their areas of interest. It aims to promote activities in research, development, and application, not only in the areas of mathematics and computer sciences, but also in all areas related to those two fields.

These proceedings contain selected papers from the International Conference on Mathematical and Computer Sciences (ICMCS) 2013. ICMCS 2013 is the inaugural international event organized by the Mathematics Department, Faculty of Mathematics and Natural Sciences, University of Padjadjaran, Indonesia.

In these proceedings, readers can find the accepted papers organized into three track sections based on research interests: (1) Mathematics, (2) Applied Mathematics, and (3) Computer Sciences and Informatics.

We would like to express our gratitude to all of the keynote and invited speakers:

Prof. Dr. M. Ansjar (Indonesia)

Assoc. Prof. Dr. Q. J. Khan (Oman)

Prof. Dr. Ismail Bin Mohd (Malaysia)

Prof. Dr. rer. nat. Dedi Rosadi (Indonesia)

Prof. Dr. T. Basarudin (Indonesia)

Assoc. Prof. Abdul Thalib Bin Bon (Malaysia)

Prof. Dr. Asep K. Supriatna (Indonesia)

We also would like to express our gratitude to all technical committee members who have given their efforts to support this conference.

Finally, we would like to thank all of the authors and participants of ICMCS 2013 for their contributions. We look forward to your participation in the next ICMCS.

Editorial Team



EDITORS

Dr. Setiawan Hadi, M. Sc.CS.

Dr. Sukono, MM., M.Si.

Dr. Diah Chaerani, M.Si.



REVIEWERS

Prof. Dr. Budi Nurani R., M.S.

Prof. Dr. Asep Kuswandi Supriatna, M.S.

Prof. Sudradjat Supian, M.S.

Prof. Dr. Ismail Bin Mohd.

Assoc. Prof. Dr. Abdul Talib Bon

Dr. Stanley Pandu Dewanto, M.Pd.

Dr. Atje Setiawan Abdullah, M.S., M.Kom.

Dr. Sukono, MM., M.Si.

Dr. Diah Chaerani, M.Si.

Dr. Endang Rusyaman, M.S.

Dr. Nursanti Anggriani, M.S.

Dr. Juli Rejito, M.Kom.



SCIENTIFIC COMMITTEE

1. Prof. Dr. A.K. Supriatna (Unpad, Indonesia)

2. Prof. Dr. Budi Nurani (Unpad, Indonesia)

3. Prof. Sudradjat (Unpad, Indonesia)

4. Prof. Dr. Edy Soewono (ITB, Indonesia)

5. Prof. Dr. Ismail Bin Mohd. (UMT, Malaysia)

6. Prof. Pramila Goyal, Ph.D (CAS-IIT, India)

7. Prof. Ronald Wasserstein, Ph.D (ASA, USA)

8. Prof. Sjoerd Verduyn Lunel, Ph.D (Leiden, The Netherlands)

9. Prof. Preda Vasile, Ph.D (Bucharest, Romania)

10. Prof. Marius Lofisescu, Ph.D (Academic, Romania)

11. Assoc. Prof. Dr. M. Suzuri (UMT, Malaysia)

12. Assoc. Prof. Dr. Anton Kamil (USM, Malaysia)

13. Assoc. Prof. Dr. Abdul Talib Bon (UTHM, Malaysia)

14. Assoc. Prof. Anton Prabuwono, Ph.D (UKM, Malaysia)

15. Dr. Atje Setiawan, M.Kom (Unpad, Indonesia)

16. Dr. F. Sukono, MM., M.Si.

17. Dr. Eng. Admi Syarif (Unila, Indonesia)

18. Dr. Sannay Mohamad (UBD, Brunei)

19. Assoc. Prof. Dr. Q.J.A. Khan (SQU, Oman)

20. Prof. Dr. T. Basarudin (UI, Indonesia)



ORGANIZING COMMITTEE

Responsible Person: Prof. Dr. Asep K. Supriatna

Chairman: Dr. Asep Sholahuddin, MT.

Vice Chairman: Edi Kurniadi, M.Si.

Secretary: Dianne Amor, M.Pd.

Treasurer: Betty Subartini, M.Si.

Papers and Proceedings: Dr. Setiawan Hadi, M.Sc.CS.; Dr. Diah Chaerani, M.Si.

Publications and Documentations: Rudi Rosadi, M.Kom.; Deni Setiana, M.Cs.

Transportations: H. Agus Supriatna, M.Si.

Consumptions: Elis Hartini, Dra., M.SE.; Dwi Susanti, M.Si.

Logistics: H. Ino Suryana, M.Kom.

Events: H. Eman Lesmana, M.SIE.; Deni Djohansyah, Drs.

Proceedings: Akmal, MT.; Anita, M.Si.

Sponsors: Erick Paulus, M.Kom.; Firdaniza, M.Si.


TABLE OF CONTENTS

PREFACE ... iii
EDITORS ... iv
REVIEWERS ... v
SCIENTIFIC COMMITTEE ... vi
ORGANIZING COMMITTEE ... vii
TABLE OF CONTENTS ... viii

A Noble Great Hope for Future Indonesian Mathematicians, Muhammad ANSJAR ... 1
Determining Radius of Convergence of Newton's Method Using Curvature Function, Ridwan PANDIYA, Herlina NAPITUPULU, and Ismail BIN MOHD ... 9
Optimization Transportation Model System, Abdul Talib BON, Dayang Siti Atiqah HAMZAH ... 21
Outstanding Claims Reserve Estimates by Using Bornhutter-Ferguson Method, Agus SUPRIATNA, Dwi SUSANTI ... 36
Bankruptcy Prediction of Corporate Coupon Bond with Modified First Passage Time Approach, Di Asih I MARUDDANI, Dedi ROSADI, GUNARDI & ABDURAKHMAN ... 45
Subdirect Sums of Nonsingular M-Matrices and of Their Inverses, Eddy DJAUHARI, Euis HARTINI ... 53
Teaching Quotient Group Using GAP, Ema CARNIA, Isah AISAH & Sisilia SYLVIANI ... 60
Analysis of Factors Affecting the Inflation in Indonesia by Using Error Correction Model, Emah SURYAMAH, Sukono, Satria BUANA ... 67
Network Flows and Integer Programming Models for The Two Commodities Problem, Lesmana E. ... 77
Necessary Conditions for Convergence of Ratio Sequence of Generalized Fibonacci, Endang RUSYAMAN & Kankan PARMIKANTI ... 86
Mean-Variance Portfolio Optimization on Some Islamic Stocks by Using Non Constant Mean and Volatility Models Approaches, Endang SOERYANA, Ismail BIN MOHD, Mustafa MAMAT, Sukono, Endang RUSYAMAN ... 92
Application of Robust Statistics to Portfolio Optimization, Epha Diana SUPANDI, Dedi ROSADI, ABDURAKHMAN ... 100
Multivariate Models for Predicting Efficiency of Financial Performance in The Insurance Company, Iin IRIANINGSIH, Sukono, Deti RAHMAWATI ... 108
A Property of $z^{-1}F_m[[z^{-1}]]$ Subspace, Isah AISAH, Sisilia SYLVIANI ... 119
Fractional Colorings in The Mycielski Graphs, Mochamad SUYUDI, Ismail BIN MOHD, Sudradjat SUPIAN, Asep K. SUPRIATNA, Sukono ... 123
Simulation of Factors Affecting The Optimization of Financing Fund for Property Damage Repair on Building Housing Caused by The Flood Disaster, Pramono SIDI, Ismail BIN MOHD, Wan Muhamad AMIR WAN AHMAD, Sudradjat SUPIAN, Sukono, Lusiana ... 134
Analysis of Variables Affecting the Movement of Indeks Harga Saham Gabungan (IHSG) in Indonesia Stock Exchange by using Stepwise Method, Riaman, Sukono, Mandila Ridha AGUNG ... 142
Learning Geometry Through Proofs, Stanley DEWANTO ... 153
Mathematical Modeling In Inflammatory Dental Pulp On The Periapical Radiographs, Supian S., Nurfuadah P., Sitam S., Oscanda F., Rukmo M., and Sholahuddin A. ... 157
The Selection Study of International Flight Route with Lowest Operational Costs, Warsito, Sudradjat SUPIAN, Sukono ... 164
Scientific Debate Instructional to Enhance Students Mathematical Communication, Reasoning, and Connection Ability in the Concept of Integral, Yani RAMDANI ... 172
Rice Supply Prediction Integrated Part Of Framework For Forecasting Rice Crises Time In Bandung-Indonesia, Yuyun HIDAYAT, Ismail BIN MOHD, Mustafa MAMAT, Sukono ... 183
Global Optimization on Mean-VaR Portfolio Investment Under Asset Liability by Using Genetic Algorithm, Sukono, Sudradjat SUPIAN, Dwi SUSANTI ... 193
Controlling Robotic Arm Using a Face, Asep SHOLAHUDDIN, Setiawan HADI ... 202
Image Guided Biopsies For Prostate Cancer, Bambang Krismono TRIWIJOYO ... 214
Measuring The Value of Information Technology Investment Using The Val It Framework (Case Study: Pt Best Stamp Indonesia Head Office Bandung), Rita KOMALASARI, Zen MUNAWAR ... 231
Assessment Information Technology Governance Using Cobit Framework Domain Plan and Organise and Acquire and Implement (Case Study: Pt. Best Stamp Indonesia Bandung Head Office), Zen MUNAWAR ... 242
Curve Fitting Based on Physichal Model Accelerated Creep Phenomena for Material SA-213 T22, Cukup MULYANA, Agus YODI ... 254
Growing Effects in Metamorphic Animation of Plant-like Fractals based on Transitional IFS Code Approach, Tedjo DARMANTO, Iping S. SUWARDI & Rinaldi MUNIR ... 260
Mining Co-occurance Crime Type Patterns for Spatial Crime Data, Arief F HUDA, Ionia VERITAWATI ... 267
An Improved Accuracy CBIR using Clustering and Color Histogram in Image Database, Juli REJITO ... 276


KEYNOTE SPEAKER


A Noble Great Hope for Future Indonesian

Mathematicians

Muhammad ANSJAR

Institut Teknologi Bandung, Indonesia

[email protected]

Abstract: Mathematics has had strong interactions with human life since ancient times all over the world. Mathematics has also contributed to developing the human mind. Mathematics is never absent from efforts to develop new science, technology, and engineering to improve the quality of human life. Besides the contribution of the body of knowledge of mathematics to the development of science, technology, and engineering, there are values in mathematics which are worth adopting for a pleasant and peaceful life in a community, or in a country.

It is fair to imagine that, at one time, Indonesia, with such a great population, will contribute a significant number of mathematicians who actively participate with other scientists and engineers to develop modern science and technology in this country, in a way that is also acknowledged by other modern countries.

The dream, which is better called A Noble Hope, will come true if Indonesian mathematicians start to work today, perhaps starting with a small-scale, well-calculated action, and slowly, but surely, expanding widely. It may take decades to come, but it might also be sooner. There are two main programs that could be followed.

Program one is about developing mathematics graduate research. Firstly, it is necessary to strengthen and improve mathematics graduate and undergraduate programs in all departments. These activities should be along the line of, or parallel to, the existing government programs, besides proposing new programs.

Program two concerns all levels of pre-university mathematics education. Mathematicians should establish contacts with groups of mathematics teachers. Through these contacts, mathematicians help the teachers to master and understand correctly the concepts they are teaching, make them accustomed to the way of thinking and reasoning in mathematics, and adopt the values in mathematics. This will enable the teachers to make their students also understand correctly the concepts they are learning, and to introduce thinking and reasoning in mathematics slowly. They could also make the students familiar with the values in mathematics. However, this should be done within whatever curriculum is being used and whatever ways of teaching are being applied.

This is purely to improve and correct the mathematical background of the teachers, so that they can provide the students with a proper and correct understanding of the mathematical concepts they are learning, free from interfering with teaching practice. The most expected result is the mass improvement of pre-university mathematics education, however slow it might be. This also guarantees the possible continuation of the first program.

Meanwhile, mathematicians should always give input to the government and the community about the correct mastery of mathematics, and should welcome any invitation to improve mathematics education and participate actively.

A strong message behind these programs is the responsibility of each Indonesian mathematician for improving the quality of pre-university mathematics education.

Keywords: mathematics and human life, mathematics and human mind, mathematics and science and technology, a noble hope, values in mathematics, improving the quality of pre-university mathematics education.

1. Introduction.

The main desire of Indonesians, as of the people of all nations, is to live in prosperity and be respected among the nations. Continuous effort by all citizens, generation after generation, is the main key to achieving this goal. High-quality education at all levels is the only basic and effective prescription to prepare each generation to hold this responsibility.


Being excellent in science and technology is imperative to create prosperity in a nation, as shown in history and anticipated for the future. A nation advanced in science and technology is highly regarded and never neglected among the nations. A nation adopting noble values, shown in the personality and character of its citizens in their way of life, will be respected and well accepted in the world community.

Therefore, our education must be able to provide the students with good conduct in science, and to develop their respectable personality and character. Although not everybody should be at the frontier of the sciences, a good and appropriate understanding of science at any level is necessary for the desired pleasant living.

Science is developing rapidly and becoming more sophisticated. It develops not only vertically within each branch of science, but also horizontally through interaction among branches of science. The linkages grow stronger and the interactions expand. Branches of science appear to converge and, on some occasions, this creates new sciences. All of these situations involve mathematics as a branch of science, as well as the internal development of mathematics itself. The expression "science and mathematics" is developing into the expression "mathematics, science, technology, and engineering". Lately the expression "science, technology, engineering, arts, and mathematics", abbreviated as STEAM, has appeared, which includes the humanities.

Mathematics has shown the ability to provide significant support in the development and application of science and other knowledge as demanded. This is true not only at the advanced levels, but at all levels of human life. Indeed, mathematics has had significant interaction with various aspects of human life since ancient times. Mathematics as human mind is an idealistic way of thinking, and there are values in mathematics which reflect respectable behavior in life. These contribute essentially to the development of the personality and character of one who has learned and understood mathematics well. There is nothing new in these last statements, but they should receive more serious attention.

All of this leads to the conclusion that mathematics as a science, along with a good and correct understanding of it, plays an important and decisive role in human life and in humanity in general. This is not only for the future, but also for the present; indeed, it has been so since ancient times.

All of this raises a question for mathematicians in Indonesia: when will a significant number of Indonesian mathematicians actively play the role of mathematics in developing modern science and engineering, together with Indonesian scientists and engineers, to create the bright life of the nation, namely prosperity and a pleasant life among the citizens and a respectable position in the world community?

This dream drove me to write this short presentation when I was invited to take part in the program of ICMCS 2013. What I want to say might be an old story or a dream. However, I believe that more serious attention should be given, followed by concrete actions, to make the dream come true.

2. Mathematics and human life: a quick glance.

Mathematics has had interaction with human life since the birth of mathematics, a date which remains unknown to this day, and it will continue into the future. Mathematics has contributed to human life, culture, and humanity. On the other hand, the demand to fulfill the needs of human life has triggered various developments in mathematics.

The pyramids of the era of ancient Egypt (around 2500 B.C.) show that the ancient Egyptians had been using some mathematical concepts known today. Restoring land boundaries covered by the mud brought by the annual flood of the Nile might have triggered the formulation of the basic concepts of current geometry, whatever that formulation was. However, this is far earlier than the estimated date of written ancient Egyptian mathematics, known from the Moscow Mathematical Papyrus and the Rhind Papyrus. Both papyri, discovered in 1858, date from about 1700 B.C., and both contain problems and their solutions. It is most likely that the problems related to the work faced by the people as far back as 3500 B.C., so Egyptian mathematics must have existed since that time.

At the same time, mathematics also developed in Babylonia. Babylonian mathematics related not only to astronomy, as in Egypt, but also to various daily activities such as counting, trading, etc. The existence of dams, canals, irrigation, and other engineering works in Babylonia also suggests that they had been using mathematics, although possibly in much simpler versions. Probably it is more precise to say that engineering work created mathematics in Babylonia.

In 332 B.C., Egyptian and Babylonian mathematics merged with Greek mathematics, after Alexander the Great conquered Egypt and Babylonia. From then on, mathematics belonged to the Greeks and developed until the year 600.

Meanwhile, mathematics also grew and developed independently in China from 1300 B.C. The first contact with the rest of the world is estimated at around 400 B.C. Mathematics in China developed in response to the needs of trading, government administration, architecture, the calendar, etc.

However, mathematics as an organized, independent, and reasoned discipline was first introduced in the Classic Greek period, from 600 to 300 B.C. The Greeks made mathematics abstract, where an idea is processed only by the human mind. They insisted on deductive proof. This is one of the great contributions of Greek mathematics. The Greeks of this Classic period, followed by the Alexandrian Greek period, also created many other foundations for current mathematics. Euclid created the geometry known as Euclidean geometry in the Alexandrian period.

The interaction between mathematics and real phenomena, as introduced in Greek mathematics, is another vital contribution of the Greeks. The Greeks identified mathematics as an abstraction of the physical world, and saw in it the ultimate truth about the structure and the design of the universe. This means that they considered mathematical concepts to be the abstract form of nature. The phrase "mathematics is the essence of nature's reality" appeared as a doctrine. Mathematical equations express various real phenomena; these are familiar today as mathematical models of the respective phenomena. The Greeks also identified mathematics with music. They even considered mathematics an art. They felt and saw beauty, harmony, simplicity, clarity, and orderliness in mathematics.

The vitality of Greek mathematical activity started to decline in the Christian era, especially after the Alexandrian Greeks in North Africa were defeated by the Roman Empire. The situation became worse when the Arab Kingdom conquered Alexandria in 640. However, a century later the Arab kingdom opened the door and invited Greek and Persian scientists to work in the kingdom. Greek mathematics started blooming again as Arab mathematics. The mathematics community in the Arab world translated and completed the former work of Greek mathematicians. This later became the source for European mathematics, replacing the original manuscripts missing in that period. Mathematics was practiced in the Arab world through astronomy, to determine the praying and fasting calendar, the direction of the kiblah for praying, etc.

Outside of Greece and Italy, mathematics started to develop in Europe only around the 12th century, after churches were established. In church circles, learning mathematics was relatively important. By emphasizing deductive reasoning, they considered learning mathematics an exercise for debating and argumentation; a priest needed this ability in theology and in spreading the religion. Besides, they considered arithmetic a science of number applied to music, geometry a study of stationary objects, and astronomy a study of moving objects.

In the period 1400-1600, the humanists studied abstract mathematics together with physics, architecture, and other sciences, to support the development of their works of art. They started to use perspective in the fine arts as a direct involvement of mathematics. They wrote various books on mathematics for the arts, and some could be categorized as mathematics books.

At the end of the 16th century, the role of mathematics in the sciences, especially in astronomy, was increasing. Copernicus and Kepler strongly believed in the laws of astronomy and mechanics obtained through mathematics. The heliocentric concept replaced the geocentric concept in astronomy. At the beginning of the 17th century, namely in 1610, Galileo wrote a famous statement:

"Philosophy [nature] is written in the great book whichever lies before our eyes - I mean the universe - but we cannot understand it if we do not first learn the language and grasp the symbols in which it is written. The book is written in the mathematical language, and the symbols are triangles, circles and other geometrical figures, without whose help it is impossible to comprehend a single word of it; without which one wanders in vain through a dark labyrinth."

This statement agrees with current perceptions. A mathematical model is a mathematical representation of nature or of other real phenomena. We can explore nature, as well as other real phenomena, and solve their problems mathematically through the respective mathematical models. Descartes even said that the essence of science is mathematics.

Newton and Leibniz created calculus separately in the 17th century; it is considered the greatest creation in mathematics next to Euclidean geometry. The work in calculus to solve physical problems led to the creation of ordinary differential equations, extended later to partial differential equations. These are among the most powerful mathematical models for solving various real-world problems. It began with Newton's law of motion.

The interaction between mathematics and various activities for the benefit of human life has increased in modern times, up to the present, through its applications in science, technology, and engineering, as well as in art and the social sciences. At one time, physics needed a particular function with properties considered impossible at the time. The function, familiar today as the Dirac delta function, is impossible as a conventional function: it is a discontinuous function which is zero along the real axis except at the origin, but integrable along the whole axis with the value 1. To back up the existence of such a function, the theory of generalized functions was created. This function opened a new era in quantum mechanics in modern physics, which has provided a lot of benefit for human life. The application of mathematics to win World War II was the starting point of operational research. Operational research has many applications supporting many activities related to better living. The genetic algorithm is a useful method in engineering; this method is based on an abstract mathematical expression of the theory of genetics in biology. The finite element method is another powerful method in structural engineering; its formulation is based on physical views of a structure. Through abstract mathematical formulation, this method is also applicable as a powerful method in fluid dynamics.

Mathematics has also played a significant role in creating and developing new sciences and technologies such as nanoscience, nanotechnology, information technology, etc., which also provide valuable contributions to human life. We must not neglect the progress of computer science, side by side with numerical analysis. Mathematical computation provides strong support to science, technology, and engineering leading to prosperous living, as well as to mathematics itself.

3. Mathematics as an expression of human mind.

When the classical Greeks introduced mathematics as an abstract concept, it meant that any idea in mathematics is the result of processes of the human mind. The Greek mathematicians insisted on using deductive proof. In the last century, Freudenthal stated that mathematics is not only a body of knowledge, but also a human activity. Considering human activity also means considering the human mind. Therefore, this statement agrees with the earlier statement of Richard Courant, who called mathematics an expression of the human mind. Courant's full statement is as follows.

"Mathematics as an expression of the human mind reflects the active will, the contemplative reason, and the desire for aesthetic perfection. Its basic elements are logic and intuition, analysis and construction, generality and individuality."

This statement points out that thinking in mathematics is active and dynamic thinking. Strong reasons always support all statements. Thinking in mathematics does not aim at just a perfect outcome, but at an outcome with aesthetic perfection. Strong intuition should accompany logical thinking. Besides paying comprehensive attention to existing situations, mathematical thinking also looks forward to building up something new. Besides aiming at the general situation, mathematics never neglects any particular case. It is common to say that mathematical thinking is logical, critical, and systematic, as well as consistent, creative, and innovative. Clearly, this is also a way of thinking of great worth in real life.

This way of thinking has built up the strong structure of mathematical theories, which keeps mathematics always stable in its development. Each mathematical theory consists of a chain of neatly ordered truths known as axioms, theorems, lemmas, propositions, etc. There are no contradictions among the truths in mathematics, and they are obeyed consistently. A truth in mathematics is a relative truth: a statement becomes a new truth if it is a logical consequence of previous truths, or at least does not contradict them. However, an axiom is an initial truth, one of the bases of other truths, accepted only as a consensus. One agrees on a consensus because it is a self-evident truth, or because it is designed for a certain purpose.

The ten axioms of Euclidean geometry are self-evident truths. Then a doubt arose whether the parallel axiom was really a self-evident truth. To clear the doubt, the parallel axiom was replaced by a contradictory axiom. The replacement axiom was accepted as a consensus with the purpose of clearing the doubt. All results obtained involving the replacement axiom turned out to be contradictory to the results obtained involving the original parallel axiom. This means that the parallel axiom is true as a self-evident truth, a truth not derivable from the other nine axioms. The original procedure was very complicated; what is described above is a simple way we can do it today.

Due to the doubt, long after Euclid, Gauss initiated a new geometry, named later non-Euclidean geometry and developed further by Lobachevski and others. The new geometry adopted the ten axioms of Euclidean geometry, except the parallel axiom, which was replaced by a version of a contradictory axiom formulated by Lobachevski. Both geometries are accepted in mathematics as two different theories, and they developed without interference of one with the other. It is interesting to note that non-Euclidean geometry almost agrees with Euclidean geometry in the very small area at the center of the area of validity of non-Euclidean geometry. This is because of the formulation of the replacement axiom. It was also realized later that Einstein's theory of relativity fits non-Euclidean geometry, while Newtonian mechanics fits Euclidean geometry. And Einstein's mechanics also fits Newtonian mechanics for velocities that are small compared to the velocity of light.

The short note above points out that there are moral values in mathematics, besides its role in science and technology, which could contribute to the benefit of the good life of a community, even of a nation.

A good comprehension of the essence of mathematics will foster an appreciative way of life, and its way of thinking can provide meaningful contributions to society. The human mind as reflected in mathematics, let us call it the mathematical mind, is worth adopting by every mathematician. By thinking mathematically, which can be summarized as logical, critical, and systematic as well as consistent, creative, and innovative thinking, one can produce meaningful strong ideas which are forward-looking, implementable, and perfectly formulated based on contemplative reasons. This mathematical mind will grow if one learns mathematics to a good comprehension, no matter how far one has studied.

The firm structure of theories in mathematics can be considered a reflection of the ideal structure of a nation or a society. Laws and regulations in a country should be based on the constitution as the basic consensus in the country. The constitution, as well as the laws and regulations, should be free of contradictions and obeyed consistently by all citizens. Local societies, political parties, and any other organizations with various bases in a country should also be based mainly on the constitution, and should not contradict any laws and regulations of the country. Besides, each organization may have its own basic consensus to be obeyed by its members. Two political parties might have contradictory ideologies, but not contradictory to the nation's ideology. The two parties should then go on side by side without interfering with each other, for the benefit of the country, as reflected by Euclidean and non-Euclidean geometry in mathematics.

4. A Noble Great Hope.

It has been shown that the general wish of every nation is to live in prosperity and be respected among the world community. To achieve this, mastery of science, technology, and engineering is imperative for every nation. Meanwhile, mathematics has been playing significant roles in the development of science and technology, has interacted with real life since early history, and has contributed to the development of the human mind throughout its long history.

Therefore, it is a great hope, or probably a noble great hope, to see, at one time, Indonesia with its enormous population contribute a significant number of mathematicians directly participating in the modern development of science, technology, and engineering in this country. One may say that this is only a nice dream. However, this is a dream that may come true, although frankly it is not easy and may take a very, very long time. It needs a very strong will, followed by continuous strong efforts and tireless work for a long time. It may take decades to come, but maybe less. This is a challenge for Indonesian mathematicians; a challenge to reach what one may call A Noble Great Hope.

If there is a will, this effort has to start by improving and intensifying research activities, improving undergraduate and graduate mathematics programs, and, the most important thing, improving pre-university mathematics teaching at all levels.

This may sound like an old song, a song so far nobody can sing. However, there are roles for individual mathematicians; good mathematicians with graduate or undergraduate degrees. They could do something along the line of government policy and government programs and provide indirect but valuable support. These roles should be played continuously, gradually more intensively, and this will provide benefits to our education over time.

4.1. A Noble Great Hope program 1: Improving university mathematics programs.

This program should start with standardizing the mathematical knowledge and mathematical abilities among the mathematicians in each mathematics department in every university, as a prerequisite for intensifying research programs.

The existing research activities should be pushed more intensively and, whenever possible, new well-planned proposals should be designed. Whenever possible, try to start collaborative or joint research with colleagues in any branch of science and engineering. Mathematicians could contribute, for example, through modeling, identifying the mathematical aspects of a problem, or introducing mathematical tools and computation to solve the problem.

To make this more feasible over the long range, the undergraduate and graduate mathematics curriculum should allow, even encourage, students to take one or two courses in other branches of science or engineering. Mathematics departments should establish cooperation and good relations with the related departments.

If this program runs as expected, we hope, before very long, a small group of mathematicians, engineers, or other scientists will come out with respectable results. This group, as an embryo, should develop in quality and quantity. Several other embryos should grow in the near future.

In this case, mathematicians work completely within the framework of government programs, with some innovations.

4.2. A Noble Great Hope program 2: Improving mathematics at pre-university education.

The weakness of mathematics in current pre-university education is a reality we have to admit. This situation has weakened all efforts to develop university mathematics education.

The main factor weakening pre-university mathematics education at all levels is the teachers. There is a very limited number of teachers, and most of them are not properly equipped with the mathematical knowledge they have to teach. Many of them do not even understand correctly the concepts they have to teach. This must not happen. They must get help. They must, and need to, understand correctly the concepts they are teaching. With this help, they will be able to make their students also understand correctly, free from misconceptions.

Besides, the teachers must become used to thinking and reasoning mathematically. They should also understand some values in mathematics appropriate for real life, so that they can pass these on to their students.

This could be done if an individual or a small group of mathematicians organized regular meetings with small groups of teachers. The meetings would be devoted entirely to discussing the correct understanding of the concepts to be taught, thinking and reasoning mathematically, and understanding the values in mathematics. So, it has nothing to do with the curriculum and the way of teaching being adopted.

The activities should extend to many more groups; however, the old groups must be closely monitored. Emphasizing small groups is only for reasons of effectiveness.

This is a contribution of Indonesian mathematicians to improve pre-university mathematics education, without interfering with government programs.

Meanwhile, mathematicians should provide regular information on mathematics to the government and society, for instance comments on school mathematics books, popular clarifications about mathematics, etc.


Certainly, mathematicians should participate actively when invited to discuss curriculum, mathematics education, research development, etc.

4.3. An ultimate message.

Improving pre-university mathematics education should not be left to the government only. Mathematicians have an obligation to play a role outside government programs.

5. Concluding remark.

This paper should have contained more analysis, especially on pre-university mathematics education. The same applies to the Noble Great Hope programs 1 and 2; more detail should have been outlined. But my illness restricts me from working much. Therefore, in program 2, everybody should arrange the activity individually.

I thank, and highly appreciate, the understanding and patience of the committee. At last, there are still small things that can be drawn from this short paper. I apologize for the imperfections of this paper due to my illness.



INVITED SPEAKERS


Determining Radius of Convergence of

Newton’s Method Using Curvature Function

Ridwan PANDIYA, Herlina NAPITUPULU, and Ismail BIN MOHD

Department of Mathematics, Universiti Malaysia Terengganu, Terengganu, Malaysia

Abstract: In this paper, we propose an idea, as well as a method, for managing the convergence of Newton's method when its iteration process encounters a local extremum. The idea is to build the osculating circle at the local extremum and to use the radius of that osculating circle, known as the radius of curvature, as an offset from the local extremum: the sum of the local extremum and the radius of curvature at that local extremum is taken as an initial guess for finding a root close to that local extremum. Several examples demonstrating that our idea succeeds in fulfilling the aim of this paper will also be given.

Keywords: Newton's method, convergence, curvature function, radius of curvature

1. Introduction

One of the most frequently occurring problems in scientific work is to find the roots of equations of the form

$f(x) = 0$.   (1)

Iterative procedures for solving (1) are routinely employed. Starting with the classical Newton's method, a number of methods for finding roots of equations have come to exist, each of which has its own advantages and limitations.

The Newton’s method of root finding based on the iterative formula is given by

1

( )

'( )

kk k

k

f xx x

f x . (2)

Newton’s method displays a faster quadratic convergence near the root while it requires evaluation of

the function and its derivative at each step of the iteration.

However, when the derivative evaluated is zero, Newton’s method stalls ([1]).
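As a concrete illustration of iteration (2), here is a minimal Python sketch (our own; the function name, tolerance, and iteration cap are choices of ours, not part of the paper):

    def newton(f, df, x0, tol=1e-12, max_iter=50):
        """Minimal sketch of Newton's iteration (2); stops if f'(x_k) vanishes."""
        x = x0
        for _ in range(max_iter):
            d = df(x)
            if d == 0.0:
                raise ZeroDivisionError("f'(x) = 0: Newton's method stalls")
            x_next = x - f(x) / d
            if abs(x_next - x) < tol:
                return x_next
            x = x_next
        return x

    # f(x) = x**2 - 1 has roots -1 and 1; starting at 0.5 converges to 1.
    print(newton(lambda x: x**2 - 1, lambda x: 2.0 * x, 0.5))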

Newton’s method will face several obstacles if it has low values of the derivative, the Newton

iteration offshoots away from the current point of iteration and may possible converge to a root far away

from the intended domain. For certain forms of equations, Newton’s method diverges or oscillates and

fails to converge to the desire root. We observe these obstacles by considering the function with

expression 3 2( ) 3f x x x x

where its graph is given in Figure 1 ([2]). If we start the iteration at 0x where

0x is a fixed number in

the interval (1.5,1.6), then we will obtain infinite sequence like

0x ,

1x , 0x .

1x , …

which does not converge to *x .

Figure 1: the oscillating iterates $x_0, x_1, x_0, x_1, \ldots$   Figure 2: starting at $x_0 = 0.999\ldots$


If we start at $x_0 = 0.999\ldots$ as shown in Figure 2, we may get an $x_1$ whose magnitude exceeds the largest computer number, and therefore the algorithm cannot proceed.

In this paper, we would like to compute all the zeroes of a function whose graph is much like the one in Figure 3.

Figure 3: $f(x) = \sin(x) + \sin(2x/3)$, $x \in [3, 20]$

Furthermore, if the derivative at a point is zero, then the point is a critical point of the function; for this special case, we consider those critical points that are minimizers or maximizers (extrema). Our idea concerns how Newton's method can still be used when its initial point is a minimizer or maximizer.

When we apply Newton's method at a point which has zero derivative, the iterative process does not work, since the denominator equals zero. In order to ensure that Newton's method can still be used, we need to add a suitable number to that minimizer or maximizer so that the derivative at the shifted point does not equal zero. The question is how to obtain such a number. Of course we cannot add just any number; therefore, we use the theory of the curvature of a function at an extreme point to make Newton's method work.

Section 2 explains curvature and the radius of curvature. Section 3 contains the explanation of how the radius of curvature can be used to satisfy the Lipschitz property for Newton's method. We illustrate the idea of curvature through an example in Section 4. In Section 5, we show how to improve the idea of curvature when the curve bends very sharply. We have used 20 test examples to observe the impact of our algorithm, and these are given in Section 6. The conclusion is given in Section 7, which ends this paper.

2. Curvature of a Function

Curvature measures how sharply a curve bends. We would expect the curvature to be 0 for a straight line, and to be large for curves which bend sharply. If we move along a curve, we see that the direction of the tangent line does not change as long as the curve is flat. Its direction changes when the curve bends: the more the curve bends, the more the direction of the tangent line changes. As we know, the movement of Newton's method's search process depends on the tangent line at each iteration. We are thus led to the following definition and theorems, which are taken from [2].

Definition 1
Let the curve $C$ be given by the differentiable vector function $\mathbf{f}(t) = f_1(t)\,\mathbf{i} + f_2(t)\,\mathbf{j}$, and suppose that $\phi(t)$ denotes the direction of $\mathbf{f}'(t)$.
(i) The curvature of $C$, denoted by $\kappa(t)$, is the absolute value of the rate of change of the direction with respect to arc length $s$, that is, $\kappa(t) = \left|\frac{d\phi}{ds}\right|$; note that $\kappa(t) \geq 0$.
(ii) The radius of curvature $\rho(t)$ is defined by $\rho(t) = \frac{1}{\kappa(t)}$, if $\kappa(t) > 0$.


Theorem 2
If $T(t)$ denotes the unit tangent vector to $\mathbf{f}$, then $\kappa(t) = \left|\frac{dT}{ds}\right|$.

Theorem 3
If $C$ is a curve with equation $y = f(x)$, where $f$ is twice differentiable, then

$\kappa = \dfrac{|f''(x)|}{\left(1 + f'(x)^2\right)^{3/2}}$.
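To make Theorem 3 concrete, the following sketch (our own illustration, not from the paper; the finite-difference step h is an arbitrary choice) evaluates the curvature and radius of curvature of y = f(x) at a point:

    def curvature(f, x, h=1e-5):
        """Curvature of y = f(x) at x (Theorem 3), via central differences."""
        d1 = (f(x + h) - f(x - h)) / (2 * h)           # approximates f'(x)
        d2 = (f(x + h) - 2 * f(x) + f(x - h)) / h**2   # approximates f''(x)
        return abs(d2) / (1 + d1**2) ** 1.5

    def radius_of_curvature(f, x):
        """rho = 1/kappa, defined where the curvature is nonzero."""
        k = curvature(f, x)
        if k == 0:
            raise ValueError("curvature is zero: radius of curvature undefined")
        return 1.0 / k

    # At the extremum x = 0 of f(x) = x**2 - 1: f'' = 2, so kappa = 2, rho = 0.5.
    print(radius_of_curvature(lambda x: x**2 - 1, 0.0))  # ~0.5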

According to [3], when the curvature $\kappa(t) > 0$, the center of curvature lies along the direction of $N(t)$ at distance $1/\kappa$ from the point $\alpha(t)$; when $\kappa(t) < 0$, the center of curvature lies along the direction $-N(t)$ at distance $1/\kappa$ from $\alpha(t)$. In either case, the center of curvature is located at

$\gamma(t) = \alpha(t) + \dfrac{1}{\kappa(t)}\,N(t)$.

The osculating circle, when $\kappa \neq 0$, is the circle at the center of curvature with radius $1/\kappa$, which is called the radius of curvature. The osculating circle approximates the curve locally up to the second order (the illustration is in Figure 4).

Figure 4
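For a parametric plane curve, the center of curvature can be computed directly from these formulas; the following sketch (our own illustration, using the signed form of the curvature) checks it on the unit circle, whose osculating circle is the circle itself:

    def osculating_center(x, y, dx, dy, ddx, ddy):
        """Center of curvature gamma = alpha + N / kappa for a plane curve,
        given the point (x, y) and its first and second derivatives in t."""
        speed2 = dx ** 2 + dy ** 2
        kappa = (dx * ddy - dy * ddx) / speed2 ** 1.5  # signed curvature
        n_norm = speed2 ** 0.5
        nx, ny = -dy / n_norm, dx / n_norm             # unit normal N(t)
        return x + nx / kappa, y + ny / kappa

    # Unit circle alpha(t) = (cos t, sin t) at t = 0: kappa = 1 and the
    # center of curvature is the origin.
    print(osculating_center(1.0, 0.0, 0.0, 1.0, -1.0, 0.0))  # -> (0.0, 0.0)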

3. An Appropriate Initial Guess

We mentioned in Section 1 our idea of finding a promising number $\rho$ to be added to an extreme point, such that the new point is a good initial guess with which Newton's method succeeds in finding a zero of the function.

Consider Figure 5.

Figure 5

Basically, in order to make Newton's method converge to $x^*$, a zero of the function $f$, we need an initial estimate that is close enough to $x^*$. Therefore, we would like to find the number $\rho$ such that $x_k^* + \rho$ becomes the best initial estimate for Newton's method. We use the radius of curvature of $f$ at the extreme point $x_k^*$ as the number $\rho$; thus $x_k^* + \rho$ is the best initial estimate when we use Newton's method to find the root of $f$ nearest to $x_k^*$. We will prove that $\rho = |x_k^* - x^*|$ is the radius of the largest interval around $x^*$ in which the application of Newton's method to any point in $(x^* - \rho,\, x^* + \rho)$ converges to $x^*$. We will need the following definition.

Definition 4 ([4])
The function $f: D \subset \mathbb{R} \rightarrow \mathbb{R}$ is Lipschitz continuous with constant $\gamma$ in an interval $D$, written $f \in \mathrm{Lip}_{\gamma}(D)$, if for every $x, y \in D$,

$|f(x) - f(y)| \leq \gamma\,|x - y|$.

For the convergence of Newton's method, we need to show that $f' \in \mathrm{Lip}_{\gamma}(D)$. This condition has been shown in [4] through the following lemma.

Lemma 5 ([4])
If (i) $f: D \subset \mathbb{R} \rightarrow \mathbb{R}$ for an open interval $D$, and (ii) $f' \in \mathrm{Lip}_{\gamma}(D)$, then for any $x, y \in D$,

$|f(y) - f(x) - f'(x)(y - x)| \leq \dfrac{\gamma\,(y - x)^2}{2}$.

For most problems, Newton's method converges $q$-quadratically to the root of one nonlinear equation in one unknown. We now state the fundamental theorem of numerical mathematics.

Theorem 6 ([4])
If (i) $f: D \subset \mathbb{R} \rightarrow \mathbb{R}$ for an open interval $D$, (ii) $f' \in \mathrm{Lip}_{\gamma}(D)$, (iii) for some $\rho > 0$, $|f'(x)| \geq \rho$ for every $x \in D$, and (iv) $f(x) = 0$ has a solution $x^* \in D$, then there is some $\eta > 0$ such that if $|x_0 - x^*| < \eta$, then the sequence $\{x_n\}$ generated by

$x_{n+1} = x_n - \dfrac{f(x_n)}{f'(x_n)}, \quad n = 0, 1, 2, \ldots$

exists and converges to $x^*$. Furthermore, for $n = 0, 1, 2, \ldots$,

$|x_{n+1} - x^*| \leq \dfrac{\gamma}{2\rho}\,|x_n - x^*|^2$.   (3)
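As an informal numerical check of the bound (3) (our own experiment, not part of the paper), one can track the errors of Newton's method on $f(x) = x^2 - 1$ near $x^* = 1$:

    # Track |x_n - x*| for f(x) = x**2 - 1, x* = 1, starting at x0 = 1.5.
    f = lambda x: x ** 2 - 1
    df = lambda x: 2.0 * x

    x, errors = 1.5, []
    for _ in range(5):
        errors.append(abs(x - 1.0))
        x = x - f(x) / df(x)

    # Each new error is bounded by a constant times the previous error squared,
    # consistent with |x_{n+1} - x*| <= (gamma / (2 rho)) |x_n - x*|**2.
    for e_prev, e_next in zip(errors, errors[1:]):
        print(e_next, e_prev ** 2)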

Now we prove that $\rho = |\hat{x}_1^* - \hat{x}^*|$ is the radius of the largest interval around the solution of $f(x) = 0$ for which the conclusion of Theorem 6 holds. We prove this for $f(x)$ satisfying the hypotheses of Theorem 6.

Theorem 7
If (i) $f: D \subset \mathbb{R} \rightarrow \mathbb{R}$ is an objective function with a local minimizer $x_1^*$ of $f(x)$, (ii) $f' \in \mathrm{Lip}_{\gamma}(D)$, (iii) for some $\rho > 0$, $|f'(x)| \geq \rho$ for every $x \in D$, and (iv) $f(x) = 0$ has a solution $x^* \in D$, then there is some $\eta > 0$ such that if $|x_0 - x^*| < \eta$, then the sequence $\{x_n\}$ generated by

$x_{n+1} = x_n - \dfrac{f(x_n)}{f'(x_n)}, \quad n = 0, 1, 2, \ldots$

exists and converges to $x^*$. Furthermore, for $n = 0, 1, 2, \ldots$,

$|x_{n+1} - x^*| \leq \dfrac{\gamma}{2\rho}\,|x_n - x^*|^2$.   (4)


Proof
Let $\rho$ be the radius of curvature of $f(x)$ at $x_1^*$. Let $\hat{\eta}$ be the radius of the largest interval around $x^*$ that is contained in $D$, and define $\eta = \min\{\hat{\eta},\, 2\rho/\gamma\}$. We will show by induction that for $n = 0, 1, 2, \ldots$, equation (4) holds and $|x_{n+1} - x^*| \leq |x_n - x^*|$.

Take $\rho = |\hat{x}_k^* - \hat{x}^*|$ as the radius of the largest interval around $x^* \in D$, and let $x_0 = x_k^* \pm \rho$ be an initial point which is a lower bound or an upper bound of $(\hat{x}^* - \rho,\, \hat{x}^* + \rho)$. The proof simply shows, at each iteration, that the new error $|x_{n+1} - x^*|$ is bounded by a constant times the error the affine model makes in approximating $f$ at $x^*$.

For $n = 0$, using $f(x^*) = 0$,

$x_1 - x^* = x_0 - x^* - \dfrac{f(x_0)}{f'(x_0)} = \dfrac{f(x^*) - f(x_0) - f'(x_0)(x^* - x_0)}{f'(x_0)}$.

Taking absolute values and applying Lemma 5 gives

$|x_1 - x^*| \leq \dfrac{\gamma}{2\,|f'(x_0)|}\,|x_0 - x^*|^2$,

and by the assumption on $f'(x)$ we obtain

$|x_1 - x^*| \leq \dfrac{\gamma}{2\rho}\,|x_0 - x^*|^2$.

Since $|x_0 - x^*| \leq \eta \leq 2\rho/\gamma$, we have

$|x_1 - x^*| \leq \dfrac{\gamma}{2\rho}\,|x_0 - x^*|\,|x_0 - x^*| \leq |x_0 - x^*|$.

The proof of the induction step then proceeds identically.

Based on the above discussion, we now experiment on some example problems to demonstrate that the use of the radius of curvature is quite effective in making Newton's method converge to the desired solutions.


4. Computation Examples

In this section, we try to obtain the root nearest to an extreme point using the initial guesses $x_k^* + \rho$ and $x_k^* - \rho$, where $x_k^*$ is an extreme point and $\rho$ is the radius of curvature at this extreme point. We use the five examples (Exp.) given in Table 1, and try to obtain a root to the right and a root to the left of an extreme point of each function.
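A minimal sketch of this procedure (our own illustration, reusing the newton function sketched in Section 1; the helper name roots_near_extremum is hypothetical):

    def roots_near_extremum(f, df, x_extremum, rho):
        """Run Newton's method from the extremum shifted by the radius of
        curvature rho, once to the right and once to the left."""
        right = newton(f, df, x_extremum + rho)
        left = newton(f, df, x_extremum - rho)
        return left, right

    # Exp. 1 of Table 1: f(x) = x**2 - 1 has its extremum at 0 with rho = 0.5.
    print(roots_near_extremum(lambda x: x**2 - 1, lambda x: 2.0 * x, 0.0, 0.5))
    # -> (-1.0, 1.0), matching the Exp. 1 row of Table 2.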

Table 1: Data for the testing.

Exp | Function | Domain
1 | $f_1(x) = x^2 - 1$ | $[-2, 2]$
2 | $f_2(x) = x^2 + 2x - 3$ | $[-5, 3]$
3 | $f_3(x) = x^3 + x^2$ | $[-1.5, 1]$
4 | $f_4(x) = \sin(x) + \sin(2x/3)$ | $[3, 10]$
5 | $f_5(x) = \cos(x) + \cos(2x) + \sin(3x/5)$ | $[9, 13]$

Table 2 shows that the use of the initial guess $x_k^* \pm \rho$, where $x_k^*$ is a local extremum of a function and $\rho$ is the radius of curvature at that local extremum, makes Newton's method (N) converge (C) to a root of the function closest to the local extremum. However, when the radius of curvature is too small, the iterative process of Newton's method fails (F) to reach the expected solution; this can be seen in Exp. 5 of Table 2. To overcome this obstacle, we have made a modification to the radius of curvature, which is discussed in the next section.

Table 2: Results of five testing examples

Exp | $x_k^*$ | $\rho$ | $x^*$ (right) | $f(x^*)$ | $x^*$ (left) | $f(x^*)$ | N
1 | 0 | 0.5 | 1 | 0 | -1 | 0 | C
2 | -1 | 0.5 | 1 | 0 | -3 | 0 | C
3 | -2/3 | 0.5 | -1.75962e-162 | 0 | -1 | 0 | C
4 | 5.36225 | 1.01755 | 7.53982 | -1.01481e-016 | 3.76991 | 1.82363e-016 | C
5 | 10.9598 | 0.191837 | 12.1839 | 4.40999e-016 | 6.71118 | -1.89519 | F


5. An Improved Starting Point

Before going through this special case, consider Figure 6.

Figure 6: The behavior of Newton's method

In the unfortunate case when Newton's method encounters a trial guess near such a local extremum, it sends its iterates far away from the desired solution (see Figure 6). This situation happened in Exp. 5 of Table 2, where the radius of curvature added to the minimum point is not large enough to bring that point to the expected root.

In detail, in Exp. 5, $x_k^* = 10.9598$ is a minimizer of $f_5(x)$ and $\rho = 0.191837$ is the radius of curvature at $x_k^*$; then $x_k^* + \rho$ is the initial guess for finding the nearest root to the right of $x_k^*$, and $x_k^* - \rho$ for the nearest root to the left. Table 2 marks Exp. 5 as failing to find the root on the left side of $x_k^*$. If we double the radius of curvature to $2\rho$ and use $x_k^* - 2\rho$ as the new initial guess, we obtain $x^* = 9.93822$, which is the nearest root to the left of $x_k^*$. So we conclude that the failure in Exp. 5 was caused by the small radius of curvature. To overcome this obstacle, we decided to restrict the radius of curvature to $\rho \geq r$ for $r \in (0, 1)$.

The algorithm for the modified radius of curvature is described in Algorithm M.

Algorithm M
This simple algorithm computes $\rho$ using the data $(x_0, \epsilon, r, m)$, where $x_0$ is a local extremum of the function $f(x)$, $\epsilon$ is a tolerance, $r$ is a real number, and $m$ is the maximum number of iterations.

1. $\rho_j \leftarrow \dfrac{\left(1 + f'(x_0)^2\right)^{3/2}}{|f''(x_0)|}$
2. $\rho := \rho_j$
3. $i := 1$
4. while $\rho < r$ do
   4.1. $\rho_i := i\,\rho_j$
   4.2. $i := i + 1$
   4.3. $\rho := \rho_i$
5. return $\rho$.
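A runnable reading of Algorithm M in Python (a sketch under our interpretation of the pseudocode above; in particular, the scaling rule rho_i = i * rho_j is inferred from the doubling experiment in Exp. 5, and the tolerance epsilon is unused here):

    def algorithm_m(df, d2f, x0, r, m=100):
        """Modified radius of curvature: enlarge rho until rho >= r."""
        rho_j = (1.0 + df(x0) ** 2) ** 1.5 / abs(d2f(x0))  # Theorem 3 at x0
        rho, i = rho_j, 1
        while rho < r and i <= m:  # m caps the number of iterations
            rho = i * rho_j        # rho_i := i * rho_j (assumed scaling rule)
            i += 1
        return rho

    # Exp. 5: rho ~= 0.191837 at the minimizer 10.9598; with r = 0.2 this
    # returns 2*rho ~= 0.38, and Newton from 10.9598 - 2*rho reaches 9.93822.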


6. Numerical Results

In this section, we employ our method to solve some nonlinear equations. All experiments were performed on a personal computer with an AMD Dual-Core Processor E-350 (1.6 GHz) and 2 GB of memory. The operating system was Windows 7 Starter (32-bit), and the implementations were done in Microsoft Visual C++ 6.0.

We used the following 20 test functions and display the approximate zeros $x^*$.

$f_1(x) = \sin(x) + \sin(2x/3)$
$f_2(x) = \cos(x) + \cos(2x) + \sin(3x/5)$
$f_3(x) = \cos(x) + \sin(2x/5) + \cos(x/10)$
$f_4(x) = \sin(x)\,\sin(4x/9)$
$f_5(x) = \cos(x)/2 + \sin(2x)/3$
$f_6(x) = \sin(2x)\,\sin(x) + \sin(2x/3)$
$f_7(x) = \sum_{i=1}^{5} i\,\cos((i+1)x + i)$
$f_8(x) = \sin(x) + \sin(10x/3) + \ln(x) - 0.84x + 3$
$f_9(x) = \sum_{i=1}^{5} \exp(0.1x)\,((i+1)x + i)$
$f_{10}(x) = \cos(x) + (10/3)\cos(10x/3) + 1/x - 0.84$
$f_{11}(x) = \sum_{i=1}^{5} i\,\sin((i+1)x + i)$
$f_{12}(x) = \sin(x) + \sin(10x/3) + \ln(x) - 0.84x$
$f_{13}(x) = \sum_{i=1}^{5} \sin((i+1)x + i)$
$f_{14}(x) = x^2 - 1$
$f_{15}(x) = x^2 + 2x - 3$
$f_{16}(x) = x^3 + x^2$
$f_{17}(x) = 2x^2 - 1.05x^4 + x^6/6$
$f_{18}(x) = x^4 - 4x^3 + 4x^2 - 0.5$
$f_{19}(x) = x^3 - 3x$
$f_{20}(x) = x^6 - 22x^4 + 9x^2 + 102$

Table 3 : Computational results of twenty testing examples
(In each row, the first $r$, $x^*$, $f(x^*)$ block is the root found to the right of $x_k^*$; the second block is the root found to the left.)

| Exp | $x_k^*$ | $\rho$ | $r$ | $x^*$ (right) | $f(x^*)$ | $r$ | $x^*$ (left) | $f(x^*)$ |
|-----|---------|--------|-----|---------------|----------|-----|--------------|----------|
| 1 | 5.36225 | 1.01755 | 0 | 7.53982 | -1.01481e-016 | 0 | 3.76991 | 1.82363e-016 |
|   | 8.39609 | 1.73864 | 0 | 9.42478 | 1.22461e-016 | 0 | 7.53982 | -1.01481e-016 |
|   | 10.4535 | 1.73861 | 0 | 11.3097 | -2.71918e-016 | 0 | 9.42478 | 1.22461e-016 |
|   | 13.4873 | 1.01755 | 0 | 15.0796 | 7.69079e-016 | 0 | 11.3097 | -2.71918e-016 |
|   | 17.0392 | 0.721093 | 0 | 18.8496 | -1.22461e-015 | 0 | 15.0796 | 7.69079e-016 |
| 2 | 10.9598 | 0.191837 | 0 | 12.1839 | 4.40999e-016 | 0.2 | 9.93822 | -1.28993e-015 |
| 3 | -15.708 | 0.854701 | 0 | -13.4781 | -6.18754e-016 | 0 | -17.9378 | 1.50541e-016 |
|   | -12.1808 | 1.07705 | 0 | -10.5816 | -6.62719e-017 | 0 | -13.4781 | -6.18754e-016 |
|   | -9.5911 | 1.17664 | 0 | -8.66132 | -6.2244e-016 | 0 | -10.5816 | -6.62719e-017 |
|   | -6.4786 | 0.966118 | 0 | -4.59502 | -9.98415e-017 | 0 | -8.66132 | -6.2244e-016 |
|   | -3.06054 | 0.920927 | 0 | -1.44966 | 6.19554e-017 | 0 | -4.59502 | -9.98415e-017 |
|   | 0.0999216 | 1.00009 | 0 | 1.70283 | 3.91522e-017 | 0 | -1.44966 | 6.19554e-017 |
|   | 3.23845 | 1.10178 | 0 | 4.88978 | 2.5247e-016 | 0 | 1.70283 | 3.91533e-017 |
|   | 6.07014 | 1.05587 | 0 | 7.16913 | -2.27954e-016 | 0 | 4.88978 | 2.5247e-016 |
|   | 9.29931 | 0.88308 | 0 | 11.2033 | 8.2974e-016 | 0 | 7.16913 | -2.27954e-016 |
| 4 | -8.42662 | 1.03325 | 0 | -7.06858 | 8.65927e-017 | 0 | -9.42478 | -3.18162e-016 |
|   | -6.66794 | 1.12298 | 0 | -6.28319 | -8.3768e-017 | 0 | -7.06858 | 8.65927e-017 |
|   | -4.50953 | 0.877408 | 0 | -3.14159 | 1.206e-016 | 0 | -6.28319 | -8.3768e-017 |
|   | -1.93595 | 0.947863 | 0 | -1.08393e-162 | 0 | 0 | -3.14159 | 1.206e-016 |
|   | 1.93595 | 0.947863 | 0 | 3.14159 | 1.206e-016 | 0 | 1.08393e-016 | 0 |
|   | 4.50953 | 0.877408 | 0 | 6.28319 | -8.3768e-017 | 0 | 3.14159 | 1.206e-016 |
|   | 6.66794 | 1.12298 | 0 | 7.06858 | 8.65927e-017 | 0 | 6.28319 | -8.3768e-017 |
|   | 8.42662 | 1.03325 | 0 | 9.42478 | -3.18162e-016 | 0 | 7.06858 | 8.65927e-017 |
| 5 | 2.56634 | 0.61094 | 0 | 3.98965 | 5.55112e-017 | 0 | 1.5708 | 7.14354e-017 |
| 6 | 4.231 | 0.219526 | 0 | 4.71239 | -2.44921e-016 | 0 | 3.78726 | -3.99529e-016 |
|   | 5.19377 | 0.219526 | 0 | 5.63752 | -3.18051e-016 | 0 | 4.71239 | -2.44921e-016 |
| 7 | -9.28634 | 0.104062 | 0.1 | -9.03415 | -9.76996e-015 | 0.1 | -9.55476 | 8.43769e-015 |
|   | -8.79406 | 0.104051 | 0.1 | -8.57612 | 5.77316e-015 | 0.1 | -9.03415 | -9.76996e-015 |
|   | -8.29039 | 0.101354 | 0.1 | -8.05487 | 3.55271e-015 | 0.1 | -8.57612 | 5.77316e-015 |
|   | -7.70831 | 0.102709 | 0.1 | -7.40542 | -7.21645e-015 | 0.1 | -8.05487 | 3.55271e-015 |
|   | -7.08351 | 0.101927 | 0.1 | -6.73964 | -3.19744e-014 | 0.1 | -7.40542 | -7.21645e-015 |
|   | -6.47857 | 0.103276 | 0.1 | -6.15885 | -1.77636e-015 | 0.1 | -6.73964 | -3.19744e-014 |
|   | -5.94894 | 0.10492 | 0.1 | -5.70985 | -1.5099e-014 | 0.1 | -6.15885 | -1.77636e-015 |
|   | -5.4614 | 0.1052 | 0.1 | -5.19666 | -8.88178e-015 | 0.1 | -5.70985 | -1.5099e-014 |
|   | -4.96318 | 0.104799 | 0.1 | -4.71693 | -1.11011e-015 | 0.1 | -5.19666 | -8.88178e-015 |
|   | -4.47753 | 0.104027 | 0.1 | -4.23649 | 4.76789e-015 | 0.1 | -4.71693 | -1.11011e-015 |
|   | -3.98396 | 0.105011 | 0.1 | -3.73129 | 2.88658e-015 | 0.1 | -4.23649 | 4.76789e-015 |
|   | -3.49725 | 0.100153 | 0.1 | -3.27157 | -2.22045e-015 | 0.1 | -3.73129 | 2.88658e-015 |
|   | -3.00316 | 0.104062 | 0.1 | -2.75097 | -1.33227e-015 | 0.1 | -3.27157 | -2.22045e-015 |
|   | -2.51088 | 0.104052 | 0.1 | -2.29294 | 4.44089e-016 | 0.1 | -2.75097 | -1.33227e-015 |
| 8 | 5.19978 | 0.334964 | 0.3 | 5.83696 | 8.88178e-016 | 0.3 | 4.60891 | 8.88178e-016 |
|   | 6.15443 | 0.364976 | 0.3 | 6.48083 | 8.88178e-016 | 0.3 | 5.83696 | 8.88178e-016 |
|   | 7.06776 | 0.385188 | 0.3 | 7.67077 | 8.88178e-016 | 0.3 | 6.48083 | 8.88178e-016 |
|   | 7.9385 | 0.338984 | 0.3 | 8.18818 | -1.77636e-015 | 0.3 | 7.67077 | 8.88178e-016 |
| 9 | 4.48968 | 0.205434 | 0.2 | 4.74779 | -3.65861e-015 | 0.2 | 4.02655 | 5.14902e-016 |
|   | 5.16194 | 0.200876 | 0.2 | 5.53319 | 8.04363e-015 | 0.2 | 4.74779 | -3.65861e-015 |
|   | 5.85113 | 0.207187 | 0.2 | 6.31858 | -7.46456e-016 | 0.2 | 5.53319 | 8.04363e-015 |
| 10 | 12.2438 | 0.22176 | 0.2 | 12.7028 | 1.66533e-015 | 0.2 | 11.7764 | 1.11022e-016 |
|    | 13.179 | 0.21162 | 0.2 | 13.6398 | 2.44249e-015 | 0.2 | 12.7028 | 1.66533e-015 |
|    | 14.1643 | 0.216723 | 0.2 | 14.7318 | 6.99441e-015 | 0.2 | 13.6398 | 2.44249e-015 |
|    | 15.0633 | 0.221103 | 0.2 | 15.3878 | 8.65974e-015 | 0.2 | 14.7318 | 6.99441e-015 |
|    | 16.0141 | 0.210649 | 0.2 | 16.6224 | -1.02141e-014 | 0.2 | 15.3878 | 8.65974e-015 |
|    | 16.9904 | 0.218488 | 0.2 | 17.3735 | -1.18794e-014 | 0.2 | 16.6224 | -1.02141e-014 |
|    | 17.885 | 0.219978 | 0.2 | 18.3692 | 4.21885e-015 | 0.2 | 17.3735 | 7.77156e-015 |
|    | 18.8495 | 0.210323 | 0.2 | 19.3297 | 2.78666e-014 | 0.2 | 18.3692 | 4.21885e-015 |
| 11 | -9.03744 | 0.102605 | 0.1 | -8.79418 | -1.02141e-014 | 0.1 | -9.29755 | -2.35367e-014 |
|    | -8.54977 | 0.100094 | 0.1 | -8.33823 | 3.55271e-015 | 0.1 | -8.79418 | -1.02141e-014 |
|    | -8.00868 | 0.101775 | 0.1 | -7.73795 | 4.44089e-016 | 0.1 | -8.33823 | 3.55271e-015 |
|    | -7.39728 | 0.100627 | 0.1 | -7.06931 | -3.61933e-014 | 0.1 | -7.73795 | 4.44089e-016 |
|    | -6.77458 | 0.100028 | 0.1 | -6.42988 | -2.66454e-015 | 0.1 | -7.06931 | -3.61933e-014 |
|    | -6.20297 | 0.102166 | 0.1 | -5.93072 | 1.02141e-014 | 0.1 | -6.42988 | -2.66454e-015 |
|    | -5.70624 | 0.101291 | 0.1 | -5.4615 | -1.19904e-014 | 0.1 | -5.93072 | 1.02141e-014 |
| 12 | 0.62006 | 0.231592 | 0.2 | 0.925759 | -1.11022e-016 | 0.2 | 0.369959 | 2.22045e-016 |
|    | 1.41124 | 0.207872 | 0.2 | 1.88437 | 0 | 0.2 | 0.925759 | 1.11022e-016 |
|    | 2.26149 | 0.260227 | 0.1 | 2.59313 | 4.44089e-016 | 0.2 | 1.88437 | 0 |
| 13 | -8.08035 | 0.216855 | 0.2 | -7.81858 | 4.79548e-015 | 0.2 | -8.53982 | 6.58544e-016 |
|    | -7.40995 | 0.202454 | 0.2 | -7.03319 | -4.67072e-015 | 0.2 | -7.81858 | 4.79548e-015 |
|    | -6.72004 | 0.212611 | 0.2 | -6.24779 | 1.85789e-015 | 0.2 | -7.03319 | -4.67072e-015 |
|    | -6.14385 | 0.214841 | 0.2 | -6.02655 | -7.90844e-016 | 0.2 | -6.24779 | 1.85789e-015 |
|    | -5.72898 | 0.208749 | 0.2 | -5.46239 | -1.98916e-015 | 0.2 | -6.02655 | -7.90844e-016 |
| 14 | 0 | 0.5 | 0 | 1 | 0 | 0 | -1 | 0 |
| 15 | -1 | 0.5 | 0 | 1 | 0 | 0 | -3 | 0 |
| 16 | -0.66667 | 0.5 | 0 | -1.68131e-162 | 0 | 0 | -1 | 0 |
| 17 | 0.2704 | 0.322013 | 0.1 | 0.614543 | 0 | 0.1 | 0 | 0 |
|    | 0.94168 | 0.308503 | 0.1 | 1.2004 | 0 | 0.1 | 0.614543 | 0 |
|    | 1.75767 | 0.156304 | 0.1 | 2.023 | -4.44089e-016 | 0.1 | 1.2004 | 0 |
| 18 | -2 | 0.125 | 0 | -1.5412 | 0 | 0 | -2.30656 | 0 |
|    | -1 | 0.25 | 0 | -0.458804 | 0 | 0 | -1.5412 | 0 |
|    | 0 | 0.125 | 0 | 0.306563 | 0 | 0 | -0.458804 | 0 |
| 19 | -1 | 0.333333 | 0.2 | 0 | 0 | 0.2 | -1.73205 | -8.88178e-016 |
|    | 1 | 0.333333 | 0.2 | 1.73205 | 8.88178e-016 | 0.2 | 0 | 0 |
| 20 | -3.80252 | 0.300101 | 0.3 | -1.59114 | 0 | 0.3 | -4.62113 | -8.2423e-013 |
|    | 3.80252 | 0.300101 | 0.3 | 4.62113 | -8.2423e-013 | 0.3 | 1.59114 | 0 |

Table 3 shows that using the radius of curvature at the extreme point makes Newton's method converge consistently to the roots closest to that extreme point. A nonzero value of $r$ indicates that the function has a small radius of curvature at its extreme points.

7. Conclusion

In this paper, we have shown that the radius of curvature at a maximizer or minimizer can be used as an increment to that extremum point, in an attempt to place the starting point within the radius of convergence of Newton's method near the said maximizer or minimizer of a function. Numerical results show that our method succeeds in finding the desired solutions.


References

[1] T. T. Ababu, "A Two-Point Newton Method Suitable for Nonconvergent Cases and with Super-Quadratic Convergence", Advances in Numerical Analysis, Hindawi Publishing Corporation, Article ID 687383, http://dx.doi.org/10.1155/2013/687382, 2013.

[2] I. B. Mohd, The Width Is Unreachable, The Travel Is At The Speed Of Light, Siri Syarahan Inaugural KUSTEM: 6 (2002), Inaugural Lecture of Prof. Dr. Ismail Bin Mohd, 14th September 2002.

[3] S. I. Grossman, Calculus, Third Edition, Academic Press, 1984.

[4] J. E. Dennis and R. B. Schnabel, Numerical Methods for Unconstrained Optimization and Nonlinear Equations, Prentice-Hall, 1983.


Optimization Transportation Model System

Abdul Talib BONa*, Dayang Siti Atiqah HAMZAHb

a,b Faculty of Technology Management and Business, Universiti Tun Hussein Onn Malaysia
*[email protected]

Abstract: In today's globalized world, transportation is a main means for people to go anywhere, especially for university students. Transportation basically means "any device used to move an item from one location to another". Looking at our surroundings, almost everyone in this world has their own transportation. Regarding transportation management in Universiti Tun Hussein Onn Malaysia (UTHM), transportation is very important for the students. For students who live at the residential colleges, the bus is the main transportation to and from class, as they do not have any transportation of their own. It became an issue when students had to face transportation problems: the bus services cannot meet the current demand. A comparison between the Northwest Corner Method (NWCM) and the Vogel Approximation Method (VAM) is used to solve these issues. Observation, interviews, and data collection were carried out on the bus services that carry students to class from the residential colleges involved, namely the Residential College of Perwira and the Residential College of Taman Universiti, to ensure that the research objectives are achieved.

Keywords: Transportation, Transportation Model System, North West Corner Method (NWCM), Vogel Approximation Method (VAM).

1. Introduction

According to Tran and Kleiner (2005), public transportation is defined as a transportation service provided on an ongoing basis to the general public and to specific groups. It does not include school buses, charter buses, or bus services offering sightseeing. Examples of public transportation used by people other than buses are trains and ferries.

1.1 Background of Study

The study was conducted in Universiti Tun Hussein Onn Malaysia (UTHM) and involves the daily bus transportation that takes students to and from class. The study involved six (6) faculties. Students from the Faculty of Mechanical and Manufacturing Engineering (FKMP), the Faculty of Civil and Environmental Engineering (FKAAS), and the Faculty of Electrical and Electronic Engineering (FKEE) are mostly placed in the Residential College of Perwira, while the rest are in the Residential College of Tun Dr. Ismail and the Residential College of Tun Fatimah. As for students from the Faculty of Computer Science and Information Technology (FSKTM), the Faculty of Technical and Vocational Education (FPTV), and the Faculty of Technology Management and Business (FPTP), most are staying at the Residential College of Perwira, and only certain third-year and fourth-year students live in rented houses.

1.2 Problem Statement

Problems can be identified when the increase in the number of students living in the Residential College of Perwira and the Residential College of Taman Universiti means that the buses provided are no longer able to accommodate the students. This leads to buses being badly damaged every two months. The average estimated cost of bus maintenance per month is about RM 500, while the maintenance cost for a bus with less than five (5) years of service is at least RM 3000. Lastly, a bus with more than five (5) years of service takes about RM 6000 and above. When this happens, the parties involved have to spend a large amount of money repairing broken-down buses in order to recover them for immediate use. Bus travel times also play an important role, since they decide whether a student arrives at class sooner or later. Students often complain that the bus is always late in taking them to class, without knowing the exact causes of the delay.

1.3 Research Questions

These are the questions that emerge in this case:

a) Can the Northwest Corner Method (NWCM) and the Vogel Approximation Method (VAM), especially in terms of capacity, solve the problems that exist in the current transportation model system?

b) How can the frequency with which students use the daily bus service to go to class, particularly during peak hours, be determined?

c) What methods can be used to identify the best way to save bus journey time?

1.4 Objectives of Study

The objectives of this study are:

a) To identify the better method in terms of capacity for solving the problems that exist in the available transportation model system, either the Northwest Corner Method (NWCM) or the Vogel Approximation Method (VAM).

b) To determine the frequency with which students use the daily bus service to go to class in a day.

c) To determine the best route to save bus trip time through the "Minimum Spanning Tree" technique.

1.5 Scope of Study

This study covers the UTHM area and includes two (2) daily bus transportation providers, Colourplus and Sikun Jaya. Colourplus handles the journey from the Perwira Residential College to UTHM, while Sikun Jaya handles the journey from the Residential College of Taman Universiti to UTHM.

1.6 Significance of Study

This study helps to identify the problems that often occur in the daily bus transportation system and then helps to find the best solution to keep them from recurring. It is therefore very important to examine the models inherent in the transportation system in order to propose the model most appropriate for UTHM in the current situation. In addition, it is intended to ease the bus drivers' journeys by studying suitable routes for them, which can save delivery time and the daily operating cost of bus transportation. For students, the benefit is that shorter delivery times allow them to arrive at class early, and the university does not have to bear the high cost of adding more buses to accommodate the number of students going to class, particularly during peak periods.


2. Literature Review

2.1 Introduction

Transportation means the different types of vehicles used, whether by air, land, or water, to move or carry goods and passengers from one place to another.

2.2 Public Transportation

Public transport is defined as a system of motorized transport such as buses, taxis, and trains used by

people in a specific area with fixed fares (Kamus Dewan Bahasa dan Pustaka Edisi Keempat).

2.3 North West Corner Method (NWCM)

According to Sudirga (2009), after data on supply and demand have been received for a number of places, they are compiled into a table. The researcher should then determine the initial feasible solution of the transportation problem.

2.4 Vogel Approximation Method (VAM)

According to Samuel and Venkatachalapathy (2011), VAM was improved by using the total opportunity cost (TOC) matrix and by considering replacement costs. The total opportunity cost matrix is obtained by adding the row opportunity cost matrix (for each row of the actual transportation cost matrix, the smallest cost in that row is subtracted from every element in the same row) and the column opportunity cost matrix (for each column of the actual transportation cost matrix, the smallest cost in that column is subtracted from every element in the same column).

2.5 Minimum Spanning Tree Technique

According to Rahardiantoro (2006), the Minimal Spanning Tree technique is used to find a way to connect all the points in a network so that the total distance is minimized. The Minimal Spanning Tree problem is similar to the shortest path problem (Shortest Route), but the purpose of this technique is to link all the nodes in a network so that the total path length is minimized. The resulting network connects all nodes at the minimum total distance. The technical steps of the Minimal Spanning Tree, which are in effect Prim's algorithm (a code sketch follows these steps), are:

1. Select a node in the network in an arbitrary way.
2. Connect it to the closest node so as to minimize the total distance.
3. Check all the nodes to determine whether there are still nodes that have not been connected, and connect the closest unconnected node.

Repeat the third step until all nodes are connected.
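For illustration, here is a minimal C++ sketch of these steps on a dense distance matrix; it is a generic sketch under our own input assumptions, not code from the study:

```cpp
#include <vector>
#include <limits>

// Prim's algorithm: d[u][v] is the distance between nodes u and v
// (infinity if they are not directly connected). Returns the total
// length of the minimal spanning tree.
double minimal_spanning_tree(const std::vector<std::vector<double>>& d) {
    const double INF = std::numeric_limits<double>::infinity();
    const std::size_t n = d.size();
    std::vector<bool> inTree(n, false);
    std::vector<double> best(n, INF);  // cheapest edge linking each node to the tree
    best[0] = 0.0;                     // step 1: start from an arbitrary node
    double total = 0.0;
    for (std::size_t it = 0; it < n; ++it) {
        std::size_t u = n;
        for (std::size_t v = 0; v < n; ++v)      // step 2: closest unconnected node
            if (!inTree[v] && (u == n || best[v] < best[u])) u = v;
        inTree[u] = true;
        total += best[u];
        for (std::size_t v = 0; v < n; ++v)      // step 3: update remaining nodes
            if (!inTree[v] && d[u][v] < best[v]) best[v] = d[u][v];
    }
    return total;
}
```

Applied to the 14-node campus network of Figure 4.7 with the edge lengths listed in Section 4.8, this should reproduce the 5.05 km total reported there.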

2.6 Transportation Problem

According to Reeb and Leavengood (2002), the transportation problem is known as one of the most important and successful applications of quantitative analysis to business problems involving product distribution. In essence, it aims to minimize the cost of shipping goods from one location to another so that the needs of each area are met and every operating carrier moves its capacity of goods to a predetermined location.

2.7 Public Transportation Problem

According to Iles (2005) in his book "Public Transport in Developing Countries", public transportation is important to the broad majority who do not have private transport. The need for personal mobility, in particular access to job opportunities, combined with low income levels, is a common problem, and the service provided is often insufficient for the demand.

3. Research Methodology

3.1 Introduction

The research methodology is the most important aspect of chapter three (3) because it discusses the methods of the study conducted by the researchers and shows whether the study was done authentically. It is carefully constructed based on the guidance of the available reference resources to ensure that the data collection is systematic.

3.2 Design of Study

The research conducted is a case study done at Universiti Tun Hussein Onn Malaysia (UTHM) using qualitative methods. This approach was chosen to enable the researcher to understand deeply why it is necessary to build an effective transportation system model to solve the existing transport problems.

3.4 Sampling Method

Respondents for observation were randomly selected from the two residential colleges studied. This refers to data collected on a daily basis from one month to the next. For the interviews, the respondents were selected from among bus supervisors and bus drivers.

3.5 Population of Study

The selection is based on the students who inhabit the residential colleges served by daily bus transportation, namely the Residential College of Perwira and the Residential College of Taman Universiti. The total identifiable population of the residential colleges involved is 3267 students.

3.6 Sample of Study

The sample was selected from the students, boys and girls, who use this daily bus service as their main transportation to get to class. The main focus is on first-year students because they are the biggest users of the bus services. The study sample was taken from the two residential colleges involved. For the interviews, the sample consisted of people involved with the daily bus transportation, such as the supervisors and bus drivers.

3.7 Instrument of Study

The instruments used by the researcher in this study are observation and interview methods. Through observation, passenger volume data can be taken at any time and recorded accurately and systematically. Through interviews, the data obtained from the respondents, particularly supervisors and bus drivers, support the data obtained from the observations, and the data are recorded in detail.


3.8 Method of Data Collection

3.8.1 Primary Data

Primary data are collected through two methods, which are:

3.8.1.1 Observation Method

The observation was carried out daily, from one month to the next. The purpose of this method is to identify the estimated number of students who use the bus service on a daily basis, both at peak and at usual times. The data obtained reflect the fluctuations in the number of bus passengers at any time in a month.

3.8.1.2 Interview Method

The researchers also used the interview technique to support the data derived from observation. The interviews conducted were focused interviews, concentrating on the group of respondents involved with the daily bus transportation services. The researcher selected three respondents from the two daily bus transportation companies, the Colourplus-Tunas Tinggi Pt. Ltd. Company and the Sikun Jaya Pt. Ltd. Company, to be interviewed on the issue of bus transportation problems.

3.8.2 Secondary Data

Secondary data are data obtained from studies that have already been done. The data are divided into printed and non-printed sources. Printed resources are available through magazines, articles, and books in the library. For non-printed sources, the researcher acquired existing references through the internet and obtained the desired journals through sites such as Emerald, EBSCOhost, ProQuest, Science Direct, and IEEE.

3.9 Method of Data Analysis

Data analysis is based on the data obtained through observations made from one month to the next and on the structured interviews conducted with the respondents. The analysis of the data is very important because it determines whether the study fulfills the research objectives. In this study, the researchers use the Production and Operation Management (POM) software to calculate the overall data. In addition, the researchers use the QSR NVivo 10 software to translate the results obtained in the form of words into statistics.

4. Data Analysis

4.1 Introduction

The data analysis discusses the calculation steps of the two models used for comparison in deciding which model is best, how to calculate the frequency of respondents using the daily bus transportation service, and how to identify the best route to save the cost and time of a bus journey.


4.2 Data Analysis

The analysis of the data is made based on the three (3) main objectives that need to be achieved, which are:

a) To identify the better method in terms of capacity for solving the problems inherent in the existing transportation model system, either the North West Corner Method (NWCM) or the Vogel Approximation Method (VAM).

b) To determine the frequency with which students use the daily bus service to go to class.

c) To identify the best route to save bus trip time through the Minimum Spanning Tree technique.

4.3 Northwest Corner Method (NWCM)

4.3.1 Data Transferred to the Table

Table 4.3.1: Tabulated Data

|              | Susur Gajah G3 | Library | KK TDI/TF | Supply |
|--------------|----------------|---------|-----------|--------|
| KK Perwira   | 50             | 10      | 0         | 60     |
| KK Taman U   | 4              | 30      | 20        | 54     |
| Bus Stop ATM | 0              | 4       | 40        | 44     |
| Demand       | 54             | 44      | 60        |        |

The table is generated after the data are entered. It shows clearly how the available transport supply meets the current student demand (a code sketch of the allocation step follows below).
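For illustration, here is a minimal C++ sketch of the Northwest Corner allocation rule (a generic sketch, not the POM software's implementation):

```cpp
#include <vector>
#include <algorithm>

// Northwest Corner Method: starting from the top-left cell, allocate as
// much as possible, then move down when a row's supply is exhausted or
// right when a column's demand is satisfied. Totals are assumed balanced.
std::vector<std::vector<double>> northwest_corner(std::vector<double> supply,
                                                  std::vector<double> demand) {
    std::vector<std::vector<double>> alloc(
        supply.size(), std::vector<double>(demand.size(), 0.0));
    std::size_t i = 0, j = 0;
    while (i < supply.size() && j < demand.size()) {
        double q = std::min(supply[i], demand[j]);
        alloc[i][j] = q;
        supply[i] -= q;
        demand[j] -= q;
        if (supply[i] == 0.0) ++i;   // row exhausted: move down
        else ++j;                    // column satisfied: move right
    }
    return alloc;
}
```

With the Table 4.3.1 totals (supply 60, 54, 44; demand 54, 44, 60), this yields the starting allocation 54, 6, 38, 16, 44, which is exactly Iteration 1 of Table 4.3.5 below.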

4.3.2 Transportation Shipment through the Northwest Corner Method (NWCM)

Table 4.3.2: Transportation Shipment (Optimal Cost = RM 392)

|                     | Susur Gajah G3 | Library | KK TDI/TF |
|---------------------|----------------|---------|-----------|
| KK Perwira          |                |         | 60        |
| KK Taman Universiti | 54             |         | 0         |
| Bus Stop ATM        | 0              | 44      |           |

The optimal cost obtained from the analysis using the Northwest Corner Method (NWCM) is RM 392.00.

4.3.3 Marginal Cost

Table 4.3.3: Marginal Cost

|                     | Susur Gajah G3 | Library | KK TDI/TF |
|---------------------|----------------|---------|-----------|
| KK Perwira          | 66             | 22      |           |
| KK Taman Universiti |                | 22      |           |
| Bus Stop ATM        |                |         | 24        |

These marginal costs arise from the analysis with the Production and Operation Management (POM) software.

4.3.4 Final Solution Table

Table 4.3.4: Final Solution Table

|                     | Susur Gajah G3 | Library | KK TDI/TF |
|---------------------|----------------|---------|-----------|
| KK Perwira          | [66]           | [22]    | 60        |
| KK Taman Universiti | 54             | [22]    | 0         |
| Bus Stop ATM        | 0              | 44      | [24]      |


Here it can be seen how the costs incurred by the bus transportation journeys are calculated by means of the Northwest Corner Method.

4.3.5 Iteration

Table 4.3.5: Iteration

| Iteration 1         | Susur Gajah G3 | Library | KK TDI/TF |
|---------------------|----------------|---------|-----------|
| KK Perwira          | 54             | 6       | (0)       |
| KK Taman Universiti | (-66)          | 38      | 16        |
| Bus Stop ATM        | (-90)          | (-46)   | 44        |

| Iteration 2         | Susur Gajah G3 | Library | KK TDI/TF |
|---------------------|----------------|---------|-----------|
| KK Perwira          | 16             | 44      | (-90)     |
| KK Taman Universiti | (24)           | (90)    | 54        |
| Bus Stop ATM        | 38             | (44)    | 6         |

| Iteration 3         | Susur Gajah G3 | Library | KK TDI/TF |
|---------------------|----------------|---------|-----------|
| KK Perwira          | 10             | 44      | 6         |
| KK Taman Universiti | (-66)          | (0)     | 54        |
| Bus Stop ATM        | 44             | (44)    | (90)      |

| Iteration 4         | Susur Gajah G3 | Library | KK TDI/TF |
|---------------------|----------------|---------|-----------|
| KK Perwira          | (66)           | 44      | 16        |
| KK Taman Universiti | 10             | (0)     | 44        |
| Bus Stop ATM        | 44             | (-22)   | (24)      |

| Iteration 5         | Susur Gajah G3 | Library | KK TDI/TF |
|---------------------|----------------|---------|-----------|
| KK Perwira          | (66)           | (22)    | 60        |
| KK Taman Universiti | 54             | (22)    | (0)       |
| Bus Stop ATM        | (0)            | 44      | (24)      |

The Northwest Corner Method (NWCM), analysed with the Production and Operation Management (POM) software, takes five (5) iterations, as shown in the table above.

4.3.6 Shipment with Cost

Table 4.3.6: Shipment with Cost

|                     | Susur Gajah G3 | Library  | KK TDI/TF |
|---------------------|----------------|----------|-----------|
| KK Perwira          |                |          | 60/RM0    |
| KK Taman Universiti | 54/RM216       |          | 0/RM0     |
| Bus Stop ATM        | 0/RM0          | 44/RM176 |           |

The table above shows the student delivery schedule with costs: from Perwira Residential College to Tun Dr. Ismail/Tun Fatimah Residential College, 60/RM0; from Taman Universiti Residential College to "Susur Gajah G3", 54/RM216; from Taman Universiti Residential College to Tun Dr. Ismail/Tun Fatimah Residential College, 0/RM0; from the ATM Bus Stop to "Susur Gajah G3", 0/RM0; and lastly from the ATM Bus Stop to the Library, 44/RM176.


4.3.7 Shipping List

Table 4.3.7: Shipping List

| From                | To             | Shipment | Cost per Unit | Shipping Cost |
|---------------------|----------------|----------|---------------|---------------|
| KK Perwira          | KK TDI/TF      | 60       | 0             | 0             |
| KK Taman Universiti | Susur Gajah G3 | 54       | 4             | 216           |
| KK Taman Universiti | KK TDI/TF      | 0        | 20            | 0             |
| Bus Stop ATM        | Susur Gajah G3 | 0        | 0             | 0             |
| Bus Stop ATM        | Library        | 44       | 4             | 176           |

Through this list, the researcher can identify the shipping cost per unit: from Taman Universiti Residential College to "Susur Gajah G3" it is RM 4.00, from Taman Universiti Residential College to Tun Dr. Ismail/Tun Fatimah Residential College it is RM 20.00, and from the ATM Bus Stop to the Library it is RM 4.00. The total shipping cost, 54 × RM4 + 44 × RM4 = RM216 + RM176 = RM392, matches the optimal cost above.

4.4 Vogel Approximation Method (VAM)

4.4.1 Data Table

Table 4.4.1: Transfer Data to Table

|                     | Susur Gajah G3 | Library | KK TDI/TF | Supply |
|---------------------|----------------|---------|-----------|--------|
| KK Perwira          | 0              | 40      | 20        | 60     |
| KK Taman Universiti | 14             | 0       | 40        | 54     |
| Bus Stop ATM        | 40             | 4       | 0         | 44     |
| Demand              | 54             | 44      | 60        |        |

These data enable the researcher to compute the optimal total cost required for the bus trips around the university (a code sketch of the Vogel allocation follows below).
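For illustration, here is a minimal C++ sketch of the Vogel allocation loop (a generic sketch under our own conventions, not the POM software's implementation):

```cpp
#include <vector>
#include <limits>
#include <algorithm>

// Vogel Approximation Method: repeatedly compute, for every open row and
// column, the penalty = difference between its two smallest remaining
// costs; allocate as much as possible in the cheapest cell of the line
// with the largest penalty. Totals are assumed balanced.
std::vector<std::vector<double>> vogel(std::vector<double> s, std::vector<double> d,
                                       std::vector<std::vector<double>> c) {
    const double INF = std::numeric_limits<double>::infinity();
    const std::size_t m = s.size(), n = d.size();
    std::vector<std::vector<double>> a(m, std::vector<double>(n, 0.0));
    // Penalty of one row (row = true) or column (row = false); arg = cheapest cell.
    auto penalty = [&](bool row, std::size_t k, std::size_t& arg) {
        double m1 = INF, m2 = INF;
        for (std::size_t t = 0, len = row ? n : m; t < len; ++t) {
            double v = row ? c[k][t] : c[t][k];
            if (v < m1) { m2 = m1; m1 = v; arg = t; }
            else if (v < m2) { m2 = v; }
        }
        return (m2 == INF) ? m1 : m2 - m1;  // single open cell: use its cost
    };
    for (;;) {
        double bestP = -1.0; bool isRow = true; std::size_t line = 0, other = 0;
        for (std::size_t i = 0; i < m; ++i)
            if (s[i] > 0) { std::size_t g = 0; double p = penalty(true, i, g);
                            if (p > bestP) { bestP = p; isRow = true; line = i; other = g; } }
        for (std::size_t j = 0; j < n; ++j)
            if (d[j] > 0) { std::size_t g = 0; double p = penalty(false, j, g);
                            if (p > bestP) { bestP = p; isRow = false; line = j; other = g; } }
        if (bestP < 0.0) break;                      // everything allocated
        const std::size_t i = isRow ? line : other, j = isRow ? other : line;
        const double q = std::min(s[i], d[j]);
        a[i][j] += q; s[i] -= q; d[j] -= q;
        if (s[i] == 0) for (std::size_t t = 0; t < n; ++t) c[i][t] = INF;  // close row
        if (d[j] == 0) for (std::size_t t = 0; t < m; ++t) c[t][j] = INF;  // close col
    }
    return a;
}
```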

4.4.2 Transportation Shipment through the Vogel Approximation Method (VAM)

Table 4.4.2: Transportation Shipment (Optimal Cost = RM 460)

|                     | Susur Gajah G3 | Library | KK TDI/TF |
|---------------------|----------------|---------|-----------|
| KK Perwira          | 44             |         | 16        |
| KK Taman Universiti | 10             | 44      |           |
| Bus Stop ATM        |                |         | 44        |

The optimal cost derived from the transport delivery is RM 460.00.

4.4.3 Marginal Cost

Table 4.4.3: Marginal Cost

|                     | Susur Gajah G3 | Library | KK TDI/TF |
|---------------------|----------------|---------|-----------|
| KK Perwira          |                | 54      |           |
| KK Taman Universiti |                |         | 6         |
| Bus Stop ATM        | 60             | 38      |           |

These marginal costs result from the analysis with the Production and Operation Management (POM) software.


4.4.4 Final Solution Table

Table 4.4.4: Final Solution

|                     | Susur Gajah G3 | Library | KK TDI/TF |
|---------------------|----------------|---------|-----------|
| KK Perwira          | 44             | [54]    | 16        |
| KK Taman Universiti | 10             | 44      | [6]       |
| Bus Stop ATM        | [60]           | [38]    | 44        |

Here it can be seen how the costs involved in the bus transportation journeys are calculated.

4.4.5 Iteration

Table 4.4.5: Iteration

| Iteration 1         | Susur Gajah G3 | Library | KK TDI/TF |
|---------------------|----------------|---------|-----------|
| KK Perwira          | 54             | (60)    | 6         |
| KK Taman Universiti | (-6)           | 44      | 10        |
| Bus Stop ATM        | (60)           | (44)    | 44        |

| Iteration 2         | Susur Gajah G3 | Library | KK TDI/TF |
|---------------------|----------------|---------|-----------|
| KK Perwira          | 44             | (54)    | 16        |
| KK Taman Universiti | 10             | 44      | (6)       |
| Bus Stop ATM        | (60)           | (38)    | 44        |

The Vogel Approximation Method (VAM), analysed with the Production and Operation Management (POM) software, takes two (2) iterations, as shown in the table above.

4.4.6 Shipment with Cost

Table 4.4.6: Shipment with Cost

|                     | Susur Gajah G3 | Library | KK TDI/TF |
|---------------------|----------------|---------|-----------|
| KK Perwira          | 44/RM0         |         | 16/RM320  |
| KK Taman Universiti | 10/RM140       | 44/RM0  |           |
| Bus Stop ATM        |                |         | 44/RM0    |

The table above shows the student delivery schedule with costs: from Perwira Residential College to "Susur Gajah G3" the cost is 44/RM0, while from Taman Universiti Residential College to "Susur Gajah G3" it is 10/RM140. From Taman Universiti Residential College to the Library the cost is 44/RM0, and from Perwira Residential College to Tun Dr. Ismail/Tun Fatimah Residential College it is 16/RM320. Lastly, from the ATM Bus Stop to Tun Dr. Ismail/Tun Fatimah Residential College the cost is 44/RM0.


4.4.7 Shipping List

Table 4.4.7: Shipping List

| From                | To             | Shipment | Cost per Unit | Shipping Cost |
|---------------------|----------------|----------|---------------|---------------|
| KK Perwira          | Susur Gajah G3 | 44       | 0             | 0             |
| KK Perwira          | KK TDI/TF      | 16       | 20            | 320           |
| KK Taman Universiti | Susur Gajah G3 | 10       | 14            | 140           |
| KK Taman Universiti | Library        | 44       | 0             | 0             |
| Bus Stop ATM        | KK TDI/TF      | 44       | 0             | 0             |

Through this list, the researcher can identify the shipping cost per unit: from Perwira Residential College to Tun Dr. Ismail/Tun Fatimah Residential College it is RM 20.00, while from Taman Universiti Residential College to "Susur Gajah G3" it is RM 14.00. The total, 16 × RM20 + 10 × RM14 = RM320 + RM140 = RM460, matches the optimal cost above.

4.5 Determination of the Best Model

The researcher decided to propose that the university use the Northwest Corner Method model, because with this model the university can save cost by paying the minimum optimal cost of RM 392 to the daily bus transportation contractor for one day of bus operation. This is because the shipping cost along the provided transportation routes is only incurred from Taman Universiti Residential College to "Susur Gajah G3" and from the ATM Bus Stop to the Library.

4.6 Total Frequency of Students Using Buses in a Day

4.6.1 Colourplus Bus

Perwira Residential College

Figure 4.6.1: Total Daily Frequency (histogram)

[Histogram: No. of Students (Persons) versus Mid Point (Hours).]


The graph above shows that the highest frequency occurs during the first bus operation, from 7.00 am to 8.00 am, with 512 students. It also shows a low frequency in the last hour of bus operation, for example from 10.00 pm to 11.00 pm.

4.6.2 Sikun Jaya Bus

Taman Universiti Residential College

Figure 4.6.2 Total Daily Frequency

The figure above shows that the highest recorded daily frequency is in the first hour of bus service operation. During the peak range between 7.00 am and 8.00 am, 260 students used the bus service to go to class, while the lowest frequency, in the sixth hour of operation, was 49 students. The sixth hour is identified as covering 12.10 pm to 1.00 pm.

4.7 Technique for Minimum Spanning Tree

C = {1, 2, 4, 5, 6, 7, 9, 10, 11, 12, 13, 8, 14, 3}; remaining set = {}

Figure 4.7: Minimum Spanning Tree



4.8 Final Solution

Total Distance = 0.3 + 0.2 + 0.5 + 0.2 + 0.6 + 0.9 + 0.2 + 0.2 + 0.05 + 0.2 + 0.5 + 0.9 + 0.3 = 5.05 km

5. Conclusions and Recommendations

5.1 Introduction

In this chapter, the researcher describes and discusses the findings obtained from the data analysis. The findings come from the comparison of the models involved, the Northwest Corner Method model and the Vogel Approximation Method model. The researcher also studied the frequency of students who use the bus services daily and subsequently identified the best route to shorten the travel distance and save cost. The researcher has also taken the opportunity to highlight some recommendations relevant to the research topic.

5.2 Recommendations

The researcher has identified a number of recommendations that need attention, and these recommendations require action by the specific parties to ensure that the daily bus transportation for students runs smoothly.

5.3 Recommendations to Students

5.3.1 Time Planning Aspect

The researcher suggests that each student should start the journey to class as early as possible in the morning, as they all already know that 7.30 am until 8.10 am is the peak time. Students may wait for the bus as early as 7.10 in the morning to avoid the congestion.

5.3.2 Student's Behaviour Aspect

The researchers suggest that students themselves must change their attitude, be more disciplined, and not take matters too lightly. The research also suggests that students need not return to the residential college or home if they only have a short time off.



5.4 Recommendations to the Bus Drivers

5.4.1 Accuracy of Time Aspect

The existing schedule is, at most times, suitable for the drivers and acts as a guide, so the drivers should follow it as closely as possible. With a given schedule, some of the congestion caused by the many students waiting for the arrival of the bus can be reduced. If the drivers keep well to the given times, the problem of students complaining about late buses will not arise.

5.4.2 Behaviour of Driver Aspect

The researcher emphasizes the element of respect and tolerance. The researchers found that some students complained that when they asked some of the bus drivers questions, the drivers railed at them. In addition, most of the bus drivers work based on their mood or feelings. This situation can be changed if a tolerant attitude towards each other is nurtured.

5.5 Recommendations to the University

5.5.1 Security Aspect

At the front of the F2 examination hall, the researcher suggests that a security officer is required to keep traffic running smoothly, so that the movement of vehicles goes well, especially in the morning when staff and students want to enter the university and in the evening when they want to go home.

5.5.2 Give a Briefing to the New Students

The researchers suggest that the management of UTHM play an important role in preventing this matter from recurring. An in-depth briefing on transportation should be given as early as possible during the "Minggu Haluan Siswa" (MHS).

5.7 Conclusion

Overall, this study has achieved the three (3) objectives set by the researcher at the start of her research. The researcher hopes that the Northwest Corner Method (NWCM) model suggested here can help UTHM solve its cost problem, thus saving budget, and that UTHM can also provide better transport facilities.

References

Baxter, P. and Susan, J. (2008). Qualitative Case Study Methodology: Study Design and Implementation for Novice Researchers. The Qualitative Report, Vol 13(4), 544-559.

http://www.scribd.com/doc/46305862/Maksud-Pengangkutan. Retrieved May 2, 2012.

http://dickyrahardi.blogspot.com/2008/05/minimal-spanning-tree.html. Retrieved December 11, 2012.

Iles, R. (2005). Public Transport in Developing Countries. 1st ed. United Kingdom: Elsevier.

Reeb, J. and Leavengood, S. (2002). Transportation Problem: A Special Case for Linear Programming Problems. Operational Research, Vol 1, 1-36.

Samuel, A. E. and Venkatachalapathy, M. (2011). Modified Vogel's Approximation Method for Fuzzy Transportation Problems. Applied Mathematical Sciences, Vol 5(28), 1367-1372.

Sudirga, R. S. (2009). Perbandingan Pemecahan Masalah Transportasi Antara Metode Northwest Corner Rule Dan Stepping-Stone Method Dengan Assignment Method. Business & Management, Vol 4(1), 29-50.

Tran, T. and Kleiner, B. H. (2005). Managing for Excellence in Public Transportation. Importance of Public Transportation, Volume 28, 154-163.

Venkatasubbaiah, K. et al. (2011). Fuzzy Goal Programming Method for Solving Multi-Objective Transportation Problems. Engineering, Vol 11, Issue 3, Version 1.0, 5-10.

Nuruljannah Samsudin (2009). Mengkaji Kualiti Perkhidmatan Pengangkutan Bas Dan Van Dari Perspektif Pelanggan: Sebuah Kajian Kes Di Universiti Tun Hussein Onn Malaysia. Universiti Tun Hussein Onn Malaysia: Bachelor's Degree Thesis.


PRESENTERS


Outstanding Claims Reserve Estimates by

Using Bornhutter-Ferguson Method

Agus SUPRIATNAa*, Dwi SUSANTIb

a,bDepartment of Mathematics FMIPA Universitas Padjadjaran, Indonesia a*Email : [email protected]

Abstract: Outstanding claims reserves are an important part of an insurance company. If the outstanding claims reserve liability is not determined correctly, it can lead to inaccuracies in the reserves set up to cover losses and claims, which can disrupt the stability of the insurance company. Several methods can be used to estimate the outstanding claims reserve. This paper discusses the estimation of outstanding claims reserves and the calculation of the prediction error using the Bornhuetter-Ferguson method.

Keywords: Outstanding claims reserves, Insurance, Bornhuetter-Ferguson method, Prediction Error.

1. Introduction

The uncertainty of events that will happen in the future carries risk. When the risk is high or potentially difficult to control, most people or companies prefer to shift the risk to an insurance company. The insurance company takes over or bears some of the risk, and in return the policyholder must pay insurance premiums. For the insurance company, if the risk occurs, the company must pay the claim to the insured.

In practice, the premium amounts are sometimes not balanced by the number of claims filed by the insureds. If too many claims are filed, the stability of the insurance company is threatened. Therefore, insurance companies require a solution, and one way is to determine the outstanding claims reserves.

Several methods can be used to determine the amount of the outstanding claims reserves. In this paper, the method used is the Bornhuetter-Ferguson method.

2. Methodology

2.1 Bornhuetter-Ferguson Algorithm

Without loss of generality, assume that the data consist of claim increments in the form of a run-off triangle. The claim increments can be written as $\{S_{i,k} : i = 1,\dots,n;\ k = 1,\dots,n+1-i\}$, where $i$ is the year in which the incident occurred, called the accident year, and $k$ is the number of periods until payment completion, called the development year.

From the run-off triangle, summing each row over consecutive development periods gives, mathematically,

$$C_{i,k} = \sum_{j=1}^{k} S_{i,j}. \qquad (2.1)$$

$C_{i,k}$ expresses the cumulative claims amount of accident year $i$ after $k$ development years, $1 \le i,k \le n$.
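For illustration, here is a minimal C++ sketch of this cumulation step (an assumption-level sketch; the ragged storage convention is ours, not the paper's):

```cpp
#include <vector>

// Build the cumulative run-off triangle of equation (2.1):
// C[i][k] = S[i][0] + ... + S[i][k] (0-based), where row i of the ragged
// triangle S holds the n+1-i observed increments of accident year i+1.
std::vector<std::vector<double>> cumulate(const std::vector<std::vector<double>>& S) {
    std::vector<std::vector<double>> C(S.size());
    for (std::size_t i = 0; i < S.size(); ++i) {
        double run = 0.0;
        for (double s : S[i]) {
            run += s;              // running sum over development years
            C[i].push_back(run);
        }
    }
    return C;
}
```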

The Bornhuetter-Ferguson method avoids the dependence on the current claims amount by setting the claims reserve to

$$\hat R_i^{BF} = \hat U_i \big(1 - \hat z_{n+1-i}\big) \qquad (2.2)$$

where

$$\hat U_i = v_i \hat q_i \qquad (2.3)$$

and $\hat z_k \in [0,1]$.


2.2 Initial Estimates of the Ultimate Claims Ratio

The Bornhuetter-Ferguson method aims at constructing an estimate for $q_i$ that does not directly depend on the cumulative claims amount $C_{i,n+1-i}$. The first step is to consider the average claim-increase ratios

$$\hat m_k = \frac{\sum_{i=1}^{n+1-k} S_{i,k}}{\sum_{i=1}^{n+1-k} v_i} \qquad (2.4)$$

of the development years $k$ observed to date. Then a weighted average $r_i$ of the individual ratios $S_{i,k}/v_i$ and $\hat m_k$ is used, that is

$$r_i = \sum_{k=1}^{n+1-i} \frac{\hat m_k}{\sum_{j=1}^{n+1-i} \hat m_j} \cdot \frac{S_{i,k}/v_i}{\hat m_k} = \sum_{k=1}^{n+1-i} \frac{S_{i,k}}{v_i \sum_{j=1}^{n+1-i} \hat m_j} = \frac{\sum_{k=1}^{n+1-i} S_{i,k}}{v_i \sum_{j=1}^{n+1-i} \hat m_j} = \frac{C_{i,n+1-i}}{v_i \sum_{j=1}^{n+1-i} \hat m_j}. \qquad (2.5)$$

Because the development year is denoted by $k$, the sum $\sum_{j=1}^{n+1-i} \hat m_j$ in $r_i$ is rewritten to give

$$r_i = \frac{C_{i,n+1-i}}{v_i \sum_{k=1}^{n+1-i} \hat m_k}. \qquad (2.6)$$

Thus $r_i$ is a ratio of the individual claims $C_{i,n+1-i}/v_i$, and the paid and incurred ratios are combined as

$$\tilde q_i = \sqrt{r_i^{\,\text{paid}} \cdot r_i^{\,\text{incurred}}}. \qquad (2.7)$$

The average claim-increase ratios $\hat m_k$, which are based on the unadjusted premium volumes, must then be adjusted. Therefore $\hat m_k$ is replaced by

$$\hat m_k^* = \frac{\sum_{i=1}^{n+1-k} S_{i,k}}{\sum_{i=1}^{n+1-k} v_i r_i^*}, \qquad (2.8)$$

$$\hat q^* = \hat m_1^* + \dots + \hat m_n^* + \hat m_{n+1}^*. \qquad (2.9)$$

This finally results in the prior estimates

$$\hat q_i = r_i^* \hat q^* \qquad (2.10)$$

for the ultimate claims ratio of accident year $i$, and the corresponding prior ultimate claims amounts

$$\hat U_i = v_i \hat q_i = v_i r_i^* \hat q^*. \qquad (2.11)$$

2.3 Estimates of the Development Pattern

If a prior estimate $\hat U_i$ is already available, the development pattern can be estimated. From equation (2.2) it can be deduced that

$$\hat z_{n+1-i} = 1 - \frac{\hat R_i}{\hat U_i} = \frac{\hat U_i - \hat R_i}{\hat U_i} \approx \frac{C_{i,n+1-i}}{\hat U_i}, \qquad (2.12)$$

which suggests the estimator

$$\hat z_k = \frac{\sum_{i=1}^{n+1-k} C_{i,k}}{\sum_{i=1}^{n+1-k} \hat U_i}. \qquad (2.13)$$

To avoid reversals in this pattern, the corresponding increments are used, i.e.

$$\hat\zeta_k = \frac{\sum_{i=1}^{n+1-k} S_{i,k}}{\sum_{i=1}^{n+1-k} \hat U_i}, \qquad (2.14)$$

$$\hat z_k = \hat\zeta_1 + \dots + \hat\zeta_k, \qquad (2.15)$$

with the increments summing so that $\hat z_{n+1} = 1$. Moreover,

$$\hat\zeta_k = \frac{\sum_{i=1}^{n+1-k} S_{i,k}}{\sum_{i=1}^{n+1-k} v_i r_i^* \hat q^*} = \frac{\hat m_k^*}{\hat q^*}. \qquad (2.16)$$

Defining

$$\hat\zeta_k^* = \frac{\hat m_k^*}{\hat q^*} \qquad (2.17)$$

with $\hat\zeta_1^* + \dots + \hat\zeta_n^* + \hat\zeta_{n+1}^* = 1$, equation (2.15) becomes

$$\hat z_k^* = \hat\zeta_1^* + \dots + \hat\zeta_k^* = \frac{\hat m_1^*}{\hat q^*} + \dots + \frac{\hat m_k^*}{\hat q^*} = \frac{\hat m_1^* + \dots + \hat m_k^*}{\hat q^*} = \frac{\hat m_1^* + \dots + \hat m_k^*}{\hat m_1^* + \dots + \hat m_{n+1}^*}. \qquad (2.18)$$

Because $\hat z_k$ has been estimated by $\hat z_k^*$, equation (2.2) is converted into

$$\hat R_i^{BF} = \hat U_i \big(1 - \hat z_{n+1-i}^*\big) \qquad (2.19)$$

and the total outstanding claims reserve is

$$\hat R^{BF} = \hat R_1^{BF} + \dots + \hat R_n^{BF}. \qquad (2.20)$$
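As an illustration, here is a minimal C++ sketch of equations (2.19)-(2.20), assuming the prior ultimates $\hat U_i$ and the pattern $\hat z_k^*$ have already been estimated (the indexing convention is ours):

```cpp
#include <vector>

// Bornhuetter-Ferguson reserves: R[i] = U[i] * (1 - zstar[n-1-i]) for the
// 0-based accident year i, i.e. z*_{n+1-i} in the paper's 1-based notation;
// returns the total reserve R^BF of equation (2.20).
double bf_total_reserve(const std::vector<double>& U,
                        const std::vector<double>& zstar,
                        std::vector<double>& R) {
    const std::size_t n = U.size();
    R.assign(n, 0.0);
    double total = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        R[i] = U[i] * (1.0 - zstar[n - 1 - i]);  // equation (2.19)
        total += R[i];
    }
    return total;
}
```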


The underlying stochastic model takes $E(S_{i,k}) = x_i y_k$ and $Var(S_{i,k}) = x_i s_k^2$ with known $x_1,\dots,x_n$, so that

$$E(R_i) = x_i (y_{n+2-i} + \dots + y_{n+1}) = x_i (1 - z_{n+1-i}) \qquad (2.21)$$

with $z_k = y_1 + \dots + y_k$, which shows that the expected claims reserve has the same shape as the Bornhuetter-Ferguson reserve estimate. From this model one obtains

$$Var(R_i) = x_i (s_{n+2-i}^2 + \dots + s_{n+1}^2). \qquad (2.22)$$

Furthermore, because $x_1,\dots,x_n$ are known,

$$\hat y_k = \frac{\sum_{i=1}^{n+1-k} S_{i,k}}{\sum_{i=1}^{n+1-k} x_i} \qquad (2.23)$$

is the best linear unbiased estimate of $y_k$ for $1 \le k \le n$, and

$$\hat s_k^2 = \frac{1}{n-k} \sum_{i=1}^{n+1-k} \frac{(S_{i,k} - x_i \hat y_k)^2}{x_i} \qquad (2.24)$$

is an unbiased estimate of $s_k^2$ for $1 \le k \le n-1$. As starting values for the minimization one can use

$$\hat y_k = \frac{\sum_{i=1}^{n+1-k} S_{i,k}}{\sum_{i=1}^{n+1-k} \hat U_i}. \qquad (2.25)$$

From equation (2.23), the estimates of $s_k^2$ are obtained as

$$\hat s_k^2 = \frac{1}{n-k} \sum_{i=1}^{n+1-k} \frac{(S_{i,k} - \hat U_i \hat\zeta_k^*)^2}{\hat U_i}, \qquad (2.26)$$

with $1 \le k \le n-1$.
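For illustration, here is a minimal C++ sketch of the unbiased estimators (2.23)-(2.24) (our own sketch, with the exposures $x_i$ assumed known and positive):

```cpp
#include <vector>

// Estimate y_k = sum_i S_{i,k} / sum_i x_i over the n+1-k observed accident
// years, and s_k^2 = (1/(n-k)) sum_i (S_{i,k} - x_i y_k)^2 / x_i. S is the
// ragged triangle (0-based row i has n-i entries), x is the exposure vector.
void estimate_y_s2(const std::vector<std::vector<double>>& S,
                   const std::vector<double>& x,
                   std::vector<double>& y, std::vector<double>& s2) {
    const std::size_t n = x.size();
    y.assign(n, 0.0);
    s2.assign(n, 0.0);
    for (std::size_t k = 0; k < n; ++k) {         // development year k+1
        const std::size_t rows = n - k;           // accident years observed at k
        double num = 0.0, den = 0.0;
        for (std::size_t i = 0; i < rows; ++i) { num += S[i][k]; den += x[i]; }
        y[k] = num / den;                         // equation (2.23)
        if (rows >= 2) {                          // (2.24) is defined for k <= n-1
            double ss = 0.0;
            for (std::size_t i = 0; i < rows; ++i) {
                const double r = S[i][k] - x[i] * y[k];
                ss += r * r / x[i];
            }
            s2[k] = ss / static_cast<double>(rows - 1);
        }
    }
}
```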

2.4 Prediction Error

The mean square error of prediction is

$$msep(\hat R_i^{BF}) = E\big((\hat R_i^{BF} - R_i)^2\big) = Var(\hat R_i^{BF} - R_i) + \big(E(\hat R_i^{BF}) - E(R_i)\big)^2 = Var(\hat R_i^{BF}) + Var(R_i). \qquad (2.27)$$

The mean square error of prediction is thus the sum of the estimation error $Var(\hat R_i^{BF})$ and the process error $Var(R_i)$. For the process error one obtains

$$Var(R_i) = Var(S_{i,n+2-i}) + \dots + Var(S_{i,n+1}) = x_i (s_{n+2-i}^2 + \dots + s_{n+1}^2), \qquad (2.28)$$

to be estimated by

$$\widehat{Var}(R_i) = \hat U_i (\hat s_{n+2-i}^{2*} + \dots + \hat s_{n+1}^{2*}). \qquad (2.29)$$

For the estimation error of $\hat R_i^{BF} = \hat U_i (1 - \hat z_{n+1-i}^*)$ one obtains

$$Var(\hat R_i^{BF}) = \big(E(\hat U_i)\big)^2 Var(\hat z_{n+1-i}^*) + Var(\hat U_i)\,Var(\hat z_{n+1-i}^*) + Var(\hat U_i)\big(1 - E(\hat z_{n+1-i}^*)\big)^2 = \big(x_i^2 + Var(\hat U_i)\big) Var(\hat z_{n+1-i}^*) + Var(\hat U_i)(1 - z_{n+1-i})^2. \qquad (2.30)$$

The standard error $s.e.(\hat U_i)$ is an estimate of $\sqrt{Var(\hat U_i)}$. Like $\hat U_i$ itself, $s.e.(\hat U_i)$ is obtained from the best repricing of the business. The formula

$$\big(s.e.(\hat U_i)\big)^2 = \frac{v_i}{n-1} \sum_{j=1}^{n} v_j \left(\frac{\hat U_j}{v_j} - \bar q\right)^2 \qquad (2.31)$$

with $\bar q = \sum_{j=1}^{n} \hat U_j \big/ \sum_{j=1}^{n} v_j$ applies only if the prior estimates $\hat U_j$ can be assumed to be uncorrelated. In the exact formula, the divisor $n-1$ is replaced by

$$n - \sum_{i,j} \rho_{i,j}^{U} \sqrt{\frac{v_i}{v_+}\,\frac{v_j}{v_+}} \qquad (2.32)$$

with $v_+ = \sum_{i=1}^{n} v_i$.

Furthermore, it should be decided how to estimate

$$Var(1 - \hat z_{n+1-i}^*) = Var(\hat z_{n+1-i}^*) = Var(\hat\zeta_1^* + \dots + \hat\zeta_{n+1-i}^*) = Var(\hat\zeta_{n+2-i}^* + \dots + \hat\zeta_{n+1}^*). \qquad (2.33)$$

Because $\hat\zeta_1^*, \dots, \hat\zeta_n^*, \hat\zeta_{n+1}^*$ are pairwise negatively correlated,

$$Var(\hat z_k^*) = \min\big(Var(\hat\zeta_1^*) + \dots + Var(\hat\zeta_k^*),\ Var(\hat\zeta_{k+1}^*) + \dots + Var(\hat\zeta_{n+1}^*)\big). \qquad (2.34)$$


Because $\hat\zeta_k^* \approx \hat\zeta_k \approx \sum_{j=1}^{n+1-k} S_{j,k} \big/ \sum_{j=1}^{n+1-k} x_j$, it can be assumed that

$$Var(\hat\zeta_k^*) \approx Var\!\left(\frac{\sum_{j=1}^{n+1-k} S_{j,k}}{\sum_{j=1}^{n+1-k} x_j}\right) = \frac{s_k^2}{\sum_{j=1}^{n+1-k} x_j}, \qquad (2.35)$$

for $1 \le k \le n$. Therefore $Var(\hat\zeta_k^*)$ is estimated by

$$\big(s.e.(\hat\zeta_k^*)\big)^2 = \frac{\hat s_k^{2*}}{\sum_{j=1}^{n+1-k} \hat U_j}, \qquad (2.36)$$

for $1 \le k \le n$. Altogether, an estimate $\big(s.e.(\hat z_k^*)\big)^2$ for $Var(\hat z_k^*)$ is

$$\big(s.e.(\hat z_k^*)\big)^2 = \min\Big(\big(s.e.(\hat\zeta_1^*)\big)^2 + \dots + \big(s.e.(\hat\zeta_k^*)\big)^2,\ \big(s.e.(\hat\zeta_{k+1}^*)\big)^2 + \dots + \big(s.e.(\hat\zeta_{n+1}^*)\big)^2\Big). \qquad (2.37)$$

So finally the estimator for the mean square error of prediction is (the factor $(s.e.(\hat U_i))^2$ in the last term follows from equation (2.41))

$$msep(\hat R_i^{BF}) = \hat U_i(\hat s_{n+2-i}^{2*} + \dots + \hat s_{n+1}^{2*}) + \big(\hat U_i^2 + (s.e.(\hat U_i))^2\big)\big(s.e.(\hat z_{n+1-i}^*)\big)^2 + \big(s.e.(\hat U_i)\big)^2 (1 - \hat z_{n+1-i}^*)^2, \qquad (2.38)$$

the prediction error is

$$PE(\hat R_i^{BF}) = \sqrt{msep(\hat R_i^{BF})}, \qquad (2.39)$$

and the percentage prediction error is

$$\%PE(\hat R_i^{BF}) = \frac{PE(\hat R_i^{BF})}{\hat R_i^{BF}} \times 100\%. \qquad (2.40)$$

To check the significance of the difference between alternative reserve estimates, or to build a confidence interval for $E(U_i)$, only the pure estimation error is needed:

$$\big(s.e.(\hat R_i^{BF})\big)^2 = \big(\hat U_i^2 + (s.e.(\hat U_i))^2\big)\big(s.e.(\hat z_{n+1-i}^*)\big)^2 + \big(s.e.(\hat U_i)\big)^2 (1 - \hat z_{n+1-i}^*)^2. \qquad (2.41)$$

For the total reserve $R = R_1 + \dots + R_n$, the unbiased total estimate $\hat R^{BF} = \hat R_1^{BF} + \dots + \hat R_n^{BF}$ is obtained as above. The mean square error of prediction of the total reserve is

$$msep(\hat R^{BF}) = Var(\hat R^{BF}) + Var(R), \qquad (2.42)$$

$$\widehat{Var}(R) = \sum_{i=1}^{n} \hat U_i (\hat s_{n+2-i}^{2*} + \dots + \hat s_{n+1}^{2*}). \qquad (2.43)$$

More work is needed for the estimation error $Var(\hat R^{BF})$, because $\hat R_1^{BF}, \dots, \hat R_n^{BF}$ are positively correlated through the parameter estimates $\hat\zeta_k^*$ (and also through $\hat U_i$). One obtains

$$Var(\hat R^{BF}) = \sum_{i=1}^{n} Var(\hat R_i^{BF}) + 2\sum_{i<j} Cov(\hat R_i^{BF}, \hat R_j^{BF}). \qquad (2.44)$$

For $Cov(\hat R_i^{BF}, \hat R_j^{BF}) = Cov\big(\hat U_i(1-\hat z_{n+1-i}^*),\ \hat U_j(1-\hat z_{n+1-j}^*)\big)$ one obtains

$$Cov\big(\hat U_i(1-\hat z_{n+1-i}^*),\ \hat U_j(1-\hat z_{n+1-j}^*)\big) = \rho_{i,j}^{U}\sqrt{Var(\hat U_i)\,Var(\hat U_j)}\;E(1-\hat z_{n+1-i}^*)\,E(1-\hat z_{n+1-j}^*) + \rho_{i,j}^{z}\sqrt{Var(1-\hat z_{n+1-i}^*)\,Var(1-\hat z_{n+1-j}^*)}\;E(\hat U_i)\,E(\hat U_j) \qquad (2.45)$$

with correlation coefficients

$$\rho_{i,j}^{U} = \frac{Cov(\hat U_i, \hat U_j)}{\sqrt{Var(\hat U_i)\,Var(\hat U_j)}}, \qquad (2.46)$$

$$\rho_{i,j}^{z} = \frac{Cov(1-\hat z_{n+1-i}^*,\ 1-\hat z_{n+1-j}^*)}{\sqrt{Var(1-\hat z_{n+1-i}^*)\,Var(1-\hat z_{n+1-j}^*)}}. \qquad (2.47)$$

If no data-based estimates for $\rho_{i,j}^{U}$ and $\rho_{i,j}^{z}$ are available, one uses the estimate $\hat\rho_{i,j}^{U}$ as in equation (2.32) and

$$\hat\rho_{i,j}^{z} = \frac{\hat z_{n+1-j}^*(1-\hat z_{n+1-i}^*)}{\hat z_{n+1-i}^*(1-\hat z_{n+1-j}^*)} \qquad (2.48)$$

for $i<j$ and $\hat z_1^* \le \dots \le \hat z_{n+1}^*$. So we get

$$\big(s.e.(\hat R^{BF})\big)^2 = \sum_{i=1}^{n} \big(s.e.(\hat R_i^{BF})\big)^2 + 2\sum_{i<j} \widehat{Cov}(\hat R_i^{BF}, \hat R_j^{BF}) \qquad (2.49)$$


with

$$\widehat{Cov}(\hat R_i^{BF}, \hat R_j^{BF}) = \hat\rho_{i,j}^{U}\, s.e.(\hat U_i)\, s.e.(\hat U_j)\,(1-\hat z_{n+1-i}^*)(1-\hat z_{n+1-j}^*) + \hat\rho_{i,j}^{z}\, s.e.(\hat z_{n+1-i}^*)\, s.e.(\hat z_{n+1-j}^*)\,\hat U_i \hat U_j. \qquad (2.50)$$

So finally the mean square error of prediction of the total claims reserve is

$$msep(\hat R^{BF}) = Var(\hat R^{BF}) + Var(R) = \sum_{i=1}^{n} \hat U_i(\hat s_{n+2-i}^{2*} + \dots + \hat s_{n+1}^{2*}) + \sum_{i=1}^{n}\big(s.e.(\hat R_i^{BF})\big)^2 + 2\sum_{i<j} \widehat{Cov}(\hat R_i^{BF}, \hat R_j^{BF}),$$

the total prediction error is

$$PE(\hat R^{BF}) = \sqrt{msep(\hat R^{BF})}, \qquad (2.51)$$

and the percentage total prediction error is

$$\%PE(\hat R^{BF}) = \frac{PE(\hat R^{BF})}{\hat R^{BF}}. \qquad (2.52)$$

3. Result and Analysis

The data used in this paper are taken from Mack (2006). The data are the claim increments over the 13 years from 1992 to 2004, with 13 development years.

Table 3.1: Data of Incurred Claim Increments

| $i$ | $v_i$ | k=1 | k=2 | k=3 | k=4 | k=5 | k=6 | k=7 | k=8 | k=9 | k=10 | k=11 | k=12 | k=13 |
|-----|--------|-------|-------|-------|-------|-------|------|------|------|------|------|------|------|------|
| 1  | 41020  | 7362  | 3981  | 4881  | 5080  | 3806  | 2523 | 792  | 731  | -1   | 241  | -347 | 3    | -115 |
| 2  | 57547  | 5400  | 7208  | 7252  | 4946  | 4394  | 3198 | 3039 | -771 | 988  | -495 | -182 | 1251 |      |
| 3  | 60940  | 2215  | 12914 | 6494  | 5585  | 2211  | 3363 | 2126 | 445  | 421  | 118  | 849  |      |      |
| 4  | 63034  | 1109  | 6581  | 5833  | 4827  | 5672  | 8638 | 12   | 146  | 4054 | -625 |      |      |      |
| 5  | 61256  | 6220  | 10065 | 10343 | 11259 | 9032  | 1207 | 26   | 4221 | 378  |      |      |      |      |
| 6  | …      | …     |       |       |       |       |      |      |      |      |      |      |      |      |
| 7  | …      | …     |       |       |       |       |      |      |      |      |      |      |      |      |
| 8  | 96925  | 8563  | 47206 | 59695 | 60043 | 50458 | 5129 |      |      |      |      |      |      |      |
| 9  | 167021 | 11771 | 48696 | 84750 | 77361 | 39404 |      |      |      |      |      |      |      |      |
| 10 | 148494 | 11259 | 27000 | 38648 | 51890 |       |      |      |      |      |      |      |      |      |
| 11 | 165410 | 11855 | 27183 | 25927 |       |       |      |      |      |      |      |      |      |      |
| 12 | 228239 | 6236  | 18214 |       |       |       |      |      |      |      |      |      |      |      |
| 13 | 226454 | 7818  |       |       |       |       |      |      |      |      |      |      |      |      |

(The entries for accident years 6 and 7 are not legible in the source.)


Table 3.2: Data of Paid Claim Increments

| $i$ | $v_i$ | k=1 | k=2 | k=3 | k=4 | k=5 | k=6 | k=7 | k=8 | k=9 | k=10 | k=11 | k=12 | k=13 |
|-----|--------|------|-------|-------|-------|-------|-------|-------|------|------|------|------|------|------|
| 1  | 41020  | 234  | 4643  | 6249  | 3530  | 6539  | 2737  | 2546  | 1815 | 335  | 110  | 18   | 26   | -1   |
| 2  | 57547  | 1994 | 4936  | 4825  | 6180  | 7659  | 1951  | 5110  | 611  | 776  | 409  | 48   | 1327 |      |
| 3  | 60940  | -75  | 3208  | 7853  | 7127  | 5360  | 3876  | 3426  | 1440 | 1283 | 67   | 1616 |      |      |
| 4  | 63034  | 236  | 2202  | 4125  | 5003  | 4189  | 9064  | 2202  | 2064 | 3244 | 1179 |      |      |      |
| 5  | 61256  | 976  | 4719  | 9397  | 13253 | 6106  | 4975  | 3049  | 4719 | 2715 |      |      |      |      |
| 6  | 57231  | -730 | 3353  | 12904 | 10642 | 16491 | 8886  | 7228  | 8512 |      |      |      |      |      |
| 7  | 91137  | 539  | 5238  | 14901 | 24865 | 20274 | 17769 | 32934 |      |      |      |      |      |      |
| 8  | 96925  | 725  | 14900 | 34676 | 43595 | 52621 | 27480 |       |      |      |      |      |      |      |
| 9  | 167021 | 312  | 6442  | 43596 | 88702 | 38812 |       |       |      |      |      |      |      |      |
| 10 | 148494 | 2988 | 9921  | 20357 | 34585 |       |       |       |      |      |      |      |      |      |
| 11 | 165410 | 260  | 7181  | 22202 |       |       |       |       |      |      |      |      |      |      |
| 12 | 228239 | 994  | 3049  |       |       |       |       |       |      |      |      |      |      |      |
| 13 | 226454 | 2411 |       |       |       |       |       |       |      |      |      |      |      |      |

Table 3.3: Average claim-increase ratios and estimated development patterns for the incurred-claims data

| $k$ | $\hat m_k$ (2.4) | $\hat m_k$ (2.8) | $\hat m_k^*$ | $\sum \hat m_k^*$ | $\hat\zeta_k$ | $\hat z_k$ | $\hat\zeta_k^*$ | $\hat z_k^*$ |
|-----|--------|---------|--------|--------|---------|--------|--------|--------|
| 1  | 0.0593  | 0.0692  | 0.0692 | 0.0692 | 0.0502  | 0.0502 | 0.0502 | 0.0502 |
| 2  | 0.1844  | 0.1998  | 0.1998 | 0.2689 | 0.1449  | 0.1950 | 0.1449 | 0.1950 |
| 3  | 0.2804  | 0.2752  | 0.2752 | 0.5441 | 0.1996  | 0.3946 | 0.1996 | 0.3946 |
| 4  | 0.3225  | 0.3006  | 0.3006 | 0.8448 | 0.2180  | 0.6126 | 0.2180 | 0.6126 |
| 5  | 0.2243  | 0.2039  | 0.2039 | 1.0487 | 0.1479  | 0.7604 | 0.1479 | 0.7604 |
| 6  | 0.1157  | 0.1166  | 0.1166 | 1.1653 | 0.0845  | 0.8450 | 0.0845 | 0.8450 |
| 7  | 0.0962  | 0.1258  | 0.1258 | 1.2911 | 0.0912  | 0.9362 | 0.0912 | 0.9362 |
| 8  | 0.0143  | 0.0230  | 0.05   | 1.3411 | 0.0167  | 0.9528 | 0.0363 | 0.9724 |
| 9  | 0.0206  | 0.0381  | 0.02   | 1.3611 | 0.0276  | 0.9805 | 0.0145 | 0.9869 |
| 10 | -0.0034 | -0.0069 | 0.01   | 1.3711 | -0.0050 | 0.9755 | 0.0073 | 0.9942 |
| 11 | 0.0020  | 0.0039  | 0.005  | 1.3761 | 0.0028  | 0.9783 | 0.0036 | 0.9978 |
| 12 | 0.0127  | 0.0238  | 0.002  | 1.3781 | 0.0173  | 0.9956 | 0.0015 | 0.9993 |
| 13 | -0.0028 | -0.0049 | 0.001  | 1.3791 | -0.0036 | 0.9921 | 0.0007 | 1      |
| Tail |       |         | 0      | 1.3791 |         |        | 0      | 1      |


Table 3.4: Average claim-increase ratios and estimated development patterns for the paid-claims data

| $k$ | $\hat m_k$ (2.4) | $\hat m_k$ (2.8) | $\hat m_k^*$ | $\sum \hat m_k^*$ | $\hat\zeta_k$ | $\hat z_k$ | $\hat\zeta_k^*$ | $\hat z_k^*$ |
|-----|-----------|-----------|--------|--------|--------|--------|--------|--------|
| 1  | 0.0074    | 0.0086    | 0.0086 | 0.0086 | 0.0063 | 0.0063 | 0.0063 | 0.0063 |
| 2  | 0.0564    | 0.0611    | 0.0611 | 0.0697 | 0.0443 | 0.0505 | 0.0443 | 0.0505 |
| 3  | 0.1793    | 0.1760    | 0.1760 | 0.2457 | 0.1276 | 0.1782 | 0.1276 | 0.1782 |
| 4  | 0.2812    | 0.2621    | 0.2621 | 0.5078 | 0.1901 | 0.3682 | 0.1901 | 0.3682 |
| 5  | 0.2270    | 0.2064    | 0.2064 | 0.7143 | 0.1497 | 0.5179 | 0.1497 | 0.5179 |
| 6  | 0.1450    | 0.1462    | 0.1462 | 0.8604 | 0.1060 | 0.6239 | 0.1060 | 0.6239 |
| 7  | 0.1307    | 0.1709    | 0.1709 | 1.0313 | 0.1239 | 0.7478 | 0.1239 | 0.7478 |
| 8  | 0.0562    | 0.0903    | 0.11   | 1.1413 | 0.0655 | 0.8133 | 0.0798 | 0.8276 |
| 9  | 0.0294    | 0.0545    | 0.07   | 1.2113 | 0.0395 | 0.8529 | 0.0508 | 0.8784 |
| 10 | 0.0079    | 0.0159    | 0.05   | 1.2613 | 0.0115 | 0.8644 | 0.0363 | 0.9146 |
| 11 | 0.0105    | 0.0205    | 0.03   | 1.2913 | 0.0149 | 0.8793 | 0.0218 | 0.9364 |
| 12 | 0.0137    | 0.0257    | 0.02   | 1.3113 | 0.0186 | 0.8979 | 0.0145 | 0.9509 |
| 13 | -0.000031 | -0.000043 | 0.02   | 1.3313 | 0.0000 | 0.8979 | 0.0145 | 0.9654 |
| Tail |         |           | 0.0478 | 1.3791 |        |        | 0.0346 | 1      |

Table 3.5: Individual claims ratios, prior ultimate claims ratios, ultimate claims, and estimated outstanding claims reserves

| $i$ | $r_i$ (Incurred) | $r_i$ (Paid) | $\tilde q_i$ | $r_i^*$ | $\hat q_i$ | $\hat U_i$ | $\hat R_i^{BF}$ (Incurred) | $\hat R_i^{BF}$ (Paid) |
|-----|--------|--------|--------|--------|--------|-------------|-------------|-------------|
| 1  | 0.5319 | 0.6129 | 0.5710 | 0.5710 | 0.7874 | 32299.9191  | 0           | 1118.5948   |
| 2  | 0.4737 | 0.5438 | 0.5075 | 0.5075 | 0.6999 | 40279.1124  | 29.2069     | 1979.0648   |
| 3  | 0.4581 | 0.5104 | 0.4835 | 0.4835 | 0.6668 | 40634.6177  | 88.3941     | 2585.8263   |
| 4  | 0.4375 | 0.4744 | 0.4556 | 0.4556 | 0.6283 | 39604.2625  | 229.7407    | 3381.7862   |
| 5  | 0.6536 | 0.7323 | 0.6918 | 0.6918 | 0.9540 | 58440.5745  | 762.7689    | 7109.0110   |
| 6  | 0.9787 | 1.0853 | 1.0307 | 1.0307 | 1.4214 | 81346.9434  | 2241.4592   | 14024.4629  |
| 7  | 1.3554 | 1.2448 | 1.2989 | 1.2989 | 1.7914 | 163258.6555 | 10417.5331  | 41168.2102  |
| 8  | 2.0094 | 2.0028 | 2.0061 | 2.0061 | 2.7666 | 268150.6343 | 41569.6714  | 100850.1239 |
| 9  | 1.4647 | 1.4174 | 1.4409 | 1.4409 | 1.9871 | 331893.0957 | 79509.4046  | 160000.5501 |
| 10 | 1.0245 | 0.8716 | 0.9450 | 0.9450 | 1.3032 | 193519.8073 | 74977.1006  | 122259.4617 |
| 11 | 0.7494 | 0.7373 | 0.7433 | 0.7433 | 1.0251 | 169559.7233 | 102656.5288 | 139350.9305 |
| 12 | 0.4395 | 0.2777 | 0.3494 | 0.5    | 0.6895 | 157381.5643 | 126690.1157 | 149426.7861 |
| 13 | 0.5819 | 1.4354 | 0.9139 | 0.5    | 0.6895 | 156150.7225 | 148318.1288 | 155171.5582 |
| Total |      |        |        |        |        |             | 587490.0529 | 898426.3667 |


Table 3.6: Variability constants, standard error of $\hat\zeta_k^*$, and standard error of $\hat z_k^*$ for the incurred claim increments

| $k$ | $\hat s_k^2$ | $\hat s_k^{2*}$ | $(s.e.(\hat\zeta_k^*))^2$ | $s.e.(\hat\zeta_k^*)$ | $(s.e.(\hat z_k^*))^2$ | $s.e.(\hat z_k^*)$ |
|-----|----------|----------|----------|--------|----------|--------|
| 1  | 157.9638 | 157.9638 | 0.000091 | 0.0095 | 0.000091 | 0.0095 |
| 2  | 258.7319 | 258.7319 | 0.000164 | 0.0128 | 0.000255 | 0.0160 |
| 3  | 241.0382 | 241.0382 | 0.000170 | 0.0130 | 0.000425 | 0.0206 |
| 4  | 193.4240 | 193.4240 | 0.000155 | 0.0124 | 0.000580 | 0.0241 |
| 5  | 793.3281 | 793.3281 | 0.000751 | 0.0274 | 0.001331 | 0.0365 |
| 6  | 677.8112 | 444.7212 | 0.000614 | 0.0248 | 0.001946 | 0.0441 |
| 7  | 359.4531 | 570.0033 | 0.001250 | 0.0354 | 0.001655 | 0.0407 |
| 8  | 74.5968  | 73.8335  | 0.000252 | 0.0159 | 0.001402 | 0.0374 |
| 9  | 80.2661  | 32.8787  | 0.000156 | 0.0125 | 0.001247 | 0.0353 |
| 10 | 12.3862  | 25.1074  | 0.000164 | 0.0128 | 0.001082 | 0.0329 |
| 11 | 10.7283  | 21.9404  | 0.000194 | 0.0139 | 0.000889 | 0.0298 |
| 12 | 35.3697  | 20.2354  | 0.000279 | 0.0167 | 0.000610 | 0.0247 |
| 13 |          | 19.6970  | 0.000610 | 0.0247 | 0        | 0      |
| Tail |        | 19.1729  | 0        | 0      | 0        | 0      |

Table 3.7: Variability constants, standard error of $\hat\zeta_k^*$, and standard error of $\hat z_k^*$ for the paid claim increments

| $k$ | $\hat s_k^2$ | $\hat s_k^{2*}$ | $(s.e.(\hat\zeta_k^*))^2$ | $s.e.(\hat\zeta_k^*)$ | $(s.e.(\hat z_k^*))^2$ | $s.e.(\hat z_k^*)$ |
|-----|----------|----------|-----------|--------|----------|--------|
| 1  | 12.5625  | 12.5625  | 0.000007  | 0.0027 | 0.000007 | 0.0027 |
| 2  | 97.2829  | 97.2829  | 0.000062  | 0.0079 | 0.000069 | 0.0083 |
| 3  | 80.2233  | 80.2233  | 0.000057  | 0.0075 | 0.000125 | 0.0112 |
| 4  | 359.6533 | 359.6533 | 0.000288  | 0.0170 | 0.000413 | 0.0203 |
| 5  | 204.5555 | 281.9872 | 0.000267  | 0.0163 | 0.000680 | 0.0261 |
| 6  | 111.6348 | 121.1375 | 0.000167  | 0.0129 | 0.000848 | 0.0291 |
| 7  | 283.9844 | 171.3740 | 0.000376  | 0.0194 | 0.001224 | 0.0350 |
| 8  | 69.3003  | 72.9480  | 0.000249  | 0.0158 | 0.001473 | 0.0384 |
| 9  | 36.7752  | 41.6315  | 0.000197  | 0.0140 | 0.001639 | 0.0405 |
| 10 | 37.5430  | 31.4504  | 0.000206  | 0.0143 | 0.001434 | 0.0379 |
| 11 | 22.3647  | 23.7591  | 0.000210  | 0.0145 | 0.001224 | 0.0350 |
| 12 | 19.7605  | 20.6506  | 0.000285  | 0.0169 | 0.000939 | 0.0306 |
| 13 |          | 20.6506  | 0.000639  | 0.0253 | 0.000300 | 0.0173 |
| Tail |        | 30.4780  | 0.0002998 | 0.0173 | 0        | 0      |


Table 3.8: Prediction Error

| $i$ | $msep(\hat R_i^{BF})$ (Incurred) | Prediction Error | % Prediction Error | $msep(\hat R_i^{BF})$ (Paid) | Prediction Error | % Prediction Error |
|-----|--------------|----------|-------|--------------|----------|------|
| 1  | 619284.47    | 786.95   | -     | 1297249.43   | 1138.97  | 102% |
| 2  | 2555015.90   | 1598.44  | 5473% | 3583137.19   | 1892.92  | 96%  |
| 3  | 3868987.40   | 1966.97  | 2225% | 4937260.15   | 2221.99  | 86%  |
| 4  | 4907526.03   | 2215.29  | 964%  | 6032263.14   | 2456.07  | 73%  |
| 5  | 10461552.01  | 3234.43  | 424%  | 13020219.45  | 3608.35  | 51%  |
| 6  | 20589594.47  | 4537.58  | 202%  | 23463809.80  | 4843.95  | 35%  |
| 7  | 78854859.37  | 8880.03  | 85%   | 72052710.17  | 8488.39  | 21%  |
| 8  | 349821064.24 | 18703.50 | 45%   | 171686165.24 | 13102.91 | 13%  |
| 9  | 406476203.50 | 20161.26 | 25%   | 212002021.29 | 14560.29 | 9%   |
| 10 | 412808001.31 | 20317.68 | 27%   | 173405204.58 | 13168.34 | 11%  |
| 11 | 387687171.17 | 19689.77 | 19%   | 202962993.31 | 14246.51 | 10%  |
| 12 | 392755269.86 | 19818.05 | 16%   | 199370541.87 | 14119.86 | 9%   |
| 13 | 426033714.02 | 20640.58 | 14%   | 211484103.75 | 14542.49 | 9%   |

Table 3.9 Prediction of Total Error

                         Incurred         Paid
Estimation Error         2106610420       1084353423
Process Error            393816229        210991508.5
msep(R̂BF)                2500426649       1295344931
Prediction Error         50004.2663       35990.90068
% Prediction Error       8.51%            4.01%

4. Conclusion

The following can be inferred from the results of the discussion. The outstanding claims reserve estimated with the Bornhuetter-Ferguson method is 587,490.0529 for the incurred claims data and 898,426.3667 for the paid claims data. This means that the estimated outstanding claims reserve that must be provided by the insurance company amounts to 587,490.0529 based on incurred claims and 898,426.3667 based on paid claims. The prediction error of the Bornhuetter-Ferguson estimate of the outstanding claims reserve is 8.51% for the incurred claims data and 4.01% for the paid claims data.

5. References

Mack, Thomas. 2006. Parameter Estimation for Bornhuetter/Ferguson. Casualty Actuarial Society Forum, Fall 2006, 141-157.

Mack, Thomas. 2008. The Prediction Error of Bornhuetter/Ferguson. Astin Bulletin, 38, 87-103.

Verrall, R. J. 2004. A Bayesian Generalized Linear Model for The Bornhuetter-Ferguson Method of

Claims Reserving. North American Actuarial Journal, 8, 67-89.


Bankruptcy Prediction of Corporate Coupon

Bond with Modified First Passage Time

Approach

Di Asih I MARUDDANIa*, Dedi ROSADIb, GUNARDIb & ABDURAKHMANb
a Ph.D Student, Mathematics Department, Gadjah Mada University, Indonesia
b Mathematics Department, Gadjah Mada University, Indonesia

*[email protected]

Abstract: Most corporations considering debt liabilities issue risky coupon bonds for a finite maturity which typically matches the expected life of the assets being financed. For valuing these coupon bonds, we can consider the common stock and the coupon bonds as a compound option. Another problem is that bond indenture provisions often include safety covenants that give bond investors the right to reorganize a firm if its value falls below a given barrier. This paper shows how to value bonds with coupons based on the first passage time approach. We construct a formula for the probability of default at the maturity date by computing the historical low of firm values. Using Indonesian corporate coupon bond data, we predict the bankruptcy of a firm.

Keywords: safety covenants, default barrier, probability of default, compound option

1. Introduction

Credit risk management is one of the most important recent developments in the finance industry. It has been the subject of considerable research interest in the banking and finance communities, and has recently drawn the attention of statistical researchers. Credit risk is the risk induced by credit events such as a credit rating change, restructuring, failure to pay, or bankruptcy. More formally, credit risk is the distribution of financial losses due to unexpected changes in the credit quality of a counterparty in a financial agreement (Giesecke, 2004). Central to credit risk is the default event, which occurs if the debtor is unable to meet its legal obligation according to the debt contract.

Merton (1974) first built a model based on the capital structure of the firm, which became the basis of the structural approach. He assumes that the firm is financed by equity and a zero coupon bond with face value K and maturity date T. In this approach, the company defaults at the bond maturity time T if its asset value falls below the face value of the bond at time T.

Black and Cox (1976) extended the definition of the default event and generalized Merton's method into the first passage approach. In this approach, the firm defaults when the historical low of the firm's asset value falls below some barrier D. Thus the default event can take place before the maturity date T. This theory still needs the assumption that the corporation issues only one zero coupon bond. Reisz and Perlich (2004) point out that if the barrier is below the bond's face value, then the default time definition of the Black and Cox theory no longer reflects economic reality. In their paper, they modified the classic first passage time approach and redefined the formula for the default time.

Up to this time, most corporations tend to issue risky coupon bonds. At every coupon date until the final payment, the firm has to pay the coupon. At the maturity date, the bondholder receives the face value of the bond. The bankruptcy of the firm occurs when the firm fails to pay the coupon at a coupon payment date and/or the face value of the bond at the maturity date. Geske (1977) derived formulas for valuing coupon bonds. In a later paper, Geske (1979) suggested that when a company has coupon bonds outstanding, the common stock and coupon bonds can be viewed as a compound option.

In this paper we propose a method for unifying the theories above. We want to produce a theory in credit risk that fulfills the assumptions of the real finance industry. We will derive a probability of default


formula for a risky coupon bond with the modified first passage time approach. We construct the formula by computing the historical low of firm values.

2. Theoretical Framework

2.1 Merton’s Model

Consider a firm with market value at time t, Vt, which is financed by equity and a zero coupon bond with face value K and maturity date T. The firm's contractual obligation is to repay the amount K to the bondholder at time T. Merton and Black & Scholes (1973) indicated that most corporate liabilities may be viewed as options. They derived a formula for valuing a call option and discussed the pricing of a firm's common stock and bonds when the stock is viewed as an option on the value of the firm. Thus, valuing the equity price of the firm is identical to the equation for valuing a European call option.

The firm is assumed to default at the bond maturity date T if the total asset value of the firm is not sufficient to pay its obligation to the bondholder. Thus the default time τ is a discrete random variable given by

$$\tau = \begin{cases} T & \text{if } V_T < K \\ \infty & \text{if } V_T \geq K \end{cases} \tag{1}$$

Figure 1 shows the default event graphically

Figure 1. Default Event in the Merton’s Model

To calculate the probability of default, we assume that the standard model for the evolution of asset prices over time is geometric Brownian motion:

$$dV_t = \mu V_t\,dt + \sigma V_t\,dW_t, \quad V_0 > 0 \tag{2}$$

where μ is a drift parameter, σ > 0 is a volatility parameter, and W is a standard Brownian motion. Setting $m = \mu - \tfrac{1}{2}\sigma^2$, Itô's lemma implies that

$$V_t = V_0 \exp(mt + \sigma W_t) \tag{3}$$

Since $W_T$ is normally distributed with mean zero and variance T, the probability of default is given by

$$P(\tau = T) = P[V_T < K] = P[\sigma W_T < \log L - mT] = \Phi\!\left(\frac{\log L - mT}{\sigma\sqrt{T}}\right) \tag{4}$$

where $L = K/V_0$ and $\Phi$ is the cumulative standard normal distribution function.
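As a minimal illustration of equation (4), the following R sketch evaluates the Merton probability of default (R is the language used for the computations in Section 4; the inputs here are illustrative assumptions, not data from this paper):

# Merton probability of default, equation (4).
merton_pd <- function(V0, K, mu, sigma, T) {
  m <- mu - sigma^2 / 2              # drift of log-assets
  L <- K / V0                        # leverage ratio
  pnorm((log(L) - m * T) / (sigma * sqrt(T)))
}

merton_pd(V0 = 100, K = 70, mu = 0.05, sigma = 0.2, T = 1)   # illustrative inputs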


2.2 Classic First Passage Time Model (Black & Cox's Model)

In Merton's model, the firm can only default at the maturity date T. As noted by Black & Cox (1976), bond indenture provisions often include safety covenants that give the bondholder the right to reorganize a firm if its value falls below a given barrier.

We still use geometric Brownian motion to model the total assets of the firm, Vt. Suppose the default barrier B is a constant valued in (0, V0); then the default time τ is modified to

$$\tau = \inf\{t > 0 : V_t < B\} \tag{5}$$

This definition says a default takes place when the assets of the firm fall to some positive level B for the first time. The firm is assumed not to be in default at time t = 0.

So the probability of default is calculated as

$$P(\tau \le T) = P[M_T < B] = P\!\left[\min_{s\le T}(ms + \sigma W_s) < \log\!\left(\frac{B}{V_0}\right)\right]$$

where M is the historical low of firm values, $M_t = \min_{s\le t} V_s$. Since the distribution of the historical low of an arithmetic Brownian motion is inverse Gaussian, the probability of default can be calculated explicitly by

$$P(\tau \le T) = \Phi\!\left(\frac{\ln(B/V_0) - mT}{\sigma\sqrt{T}}\right) + \left(\frac{B}{V_0}\right)^{2m/\sigma^2} \Phi\!\left(\frac{\ln(B/V_0) + mT}{\sigma\sqrt{T}}\right) \tag{6}$$
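A corresponding R sketch of equation (6), under the same GBM assumptions and with illustrative inputs, is:

# Black & Cox first-passage probability of default, equation (6).
fpt_pd <- function(V0, B, mu, sigma, T) {
  m  <- mu - sigma^2 / 2
  d1 <- (log(B / V0) - m * T) / (sigma * sqrt(T))
  d2 <- (log(B / V0) + m * T) / (sigma * sqrt(T))
  pnorm(d1) + (B / V0)^(2 * m / sigma^2) * pnorm(d2)
}

fpt_pd(V0 = 100, B = 50, mu = 0.05, sigma = 0.2, T = 1)   # illustrative inputs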

Figure 2 shows the default event graphically for Black & Cox’s model.

Figure 2. Default Event in the Black & Cox’s Model

2.3 Coupon Bond (Geske's Model)

In practice, the most common form of debt instrument is a coupon bond. In the U.S. and in many other countries, coupon bonds pay coupons every six months and the face value at maturity. Suppose the firm has only common stock and a coupon bond outstanding. The coupon bond has n interest payments of c dollars each. The firm is assumed to default at a coupon date if the total asset value of the firm is not sufficient to pay the coupon to the bondholder. At the maturity date, the firm defaults if the total assets are below the face value of the bond. In this case, if the firm defaults on a coupon payment, then all subsequent coupon payments (and the payment of the face value) are also defaulted on.

Geske (1979) proposed a theory for valuing risky coupon bonds. When the corporation has coupon bonds outstanding, the common stock can be considered a compound option (Geske, 1977). A compound option is an option on an option; in other words, the underlying asset is another option (Wee, 2010). For a coupon bond, valuing the equity price is identical to valuing a European call option on a call option.

At every coupon date until the final payment, the firm has the option of paying the coupon or forfeiting the firm to the bondholders. The final firm option is to repurchase the claims on the firm from the


bondholders by paying off the principal at maturity. The financing arrangements for making or missing

the interest payments are specified in the indenture conditions of the bond. In Figure 3 we illustrate the

default event of Geske’s model.

Figure 3. Default Event in the Geske’s Model

3. Valuation of Coupon Bond with Modified First Passage Time Approach

3.1 Modified First Passage Time Approach (Reisz & Perlich’s Model)

In their paper, Reisz & Perlich (2004) point out that if the barrier is below the face value of the bond, then our earlier definition (5) no longer reflects economic reality. It does not capture the situation where the firm is in default because VT < K although MT > B.

They therefore proposed to redefine default as the firm value falling below the barrier B < K at any time before maturity, or the firm value falling below the face value K at maturity. Formally, the default time is now given by

$$\tau = \min(\tau_1, \tau_2) \tag{7}$$

where
τ1 = the maturity time T if assets VT < K at T
τ2 = the first passage time of the assets to the barrier B
In other words, the default time is defined as the minimum of the first passage default time (5) and Merton's default time (1). This definition of default is consistent with the payoffs to equity and bonds. Even if the firm value does not fall below the barrier, the firm defaults if the assets are below the bond's face value at maturity. The default event for Reisz & Perlich's model is shown in Figure 4.

Assuming that the firm can neither repurchase shares nor issue new senior debt, the payoffs to

the firm’s liabilities at debt maturity T are summarized in Table 1 and Table 2.

Table 1. Payoffs at Maturity in the Modified First Passage Time Approach for B ≥ K

State of the firm             Bond    Equity
No default (MT > B)           K       VT − K
Default (MT ≤ B), B > K       B       0
Default (MT ≤ B), B = K       K       0


Table 2. Payoffs at Maturity in the Modified First Passage Time Approach for B < K

State of the firm             Bond    Equity
No default (MT > B, VT ≥ K)   K       VT − K
Default (MT > B, VT < K)      VT      0
Default (MT ≤ B)              B       0

Figure 4. Default Event in the Reisz & Perlich’s Model

3.2 Valuation of Coupon Bond with Modified First Passage Time Approach

In this section, we begin with the assumption that the firm has asset value Vt, financed by equity and a single coupon bond with face value K and only one coupon payment, at time tc, over the bond period.

Suppose the default barrier B is a constant valued in (0, V0) and c < B < K; then the default time τ is given by

$$\tau = \min(\tau_1, \tau_2, \tau_3, \tau_4, \tau_5) \tag{8}$$

where
τ1 = the maturity time T if assets VT < K at T
τ2 = the first passage time of the assets to the barrier B in (tc, T) = inf{tc < t < T : Vt < B}
τ3 = the coupon payment date tc if the assets satisfy Vtc < B or Vtc < c at time tc
τ4 = the first passage time of the assets to the barrier B in (0, tc) = inf{0 < t ≤ tc : Vt < B}
τ5 = ∞, otherwise

With the definition above, we can summarize the default time by

$$\tau = \min(\tau_1, \tau_2^*) \tag{9}$$

where
τ2* = the first passage time of the assets to the barrier B in (0, T) = inf{0 < t < T : Vt < B}
The default event is shown in Figure 5.


Figure 5. Default Event for Coupon Bond with Modified First Passage Time Approach

We have to check whether this default definition is consistent with the payoffs to investors. We need to consider two scenarios.

1. B ≥ c + K
a. If the firm value never falls below the barrier B over the term of the bond (MT > B), then at the coupon payment the bond investor receives the coupon c, and at the maturity date receives the face value K (in total c + K), K < V0. The equity holders receive the remaining VT − (c + K) at the maturity date.
b. If the firm value falls below the barrier at some point during the bond's term (MT ≤ B), then the firm defaults. In this case, the firm stops operating, the bond investors take over its assets B, and the equity investors receive nothing. The bond investor is fully protected: they receive at least the face value and coupon c + K upon default, and the bond is not subject to default risk anymore.
c. If the asset value VT is less than c + K, the ownership of the firm is transferred to the bondholders, who lose the amount (c + K) − VT. Equity is worthless because of limited liability.

2. B < c + K
This anomaly does not occur if we assume B < c + K, so that the bondholder is both exposed to some default risk and compensated for bearing that risk.
a. If the firm value never falls below the barrier B over the term of the bond (MT > B) and VT ≥ c + K, then at the coupon payment the bond investor receives the coupon c, and at the maturity date receives the face value K (in total c + K), K < V0. The equity holders receive the remaining VT − (c + K) at the maturity date.
b. If MT > B but VT < c + K, then the firm defaults, since the remaining assets are not sufficient to pay off the debt in full. The bondholders collect the remaining assets VT and equity becomes worthless.
c. If MT ≤ B, then the firm defaults as well. Bond investors receive B < K at default and equity becomes worthless.

To calculate the probability of default in this case, we first define M as the historical low of firm values, that is,

$$M_t = \min_{s \le t} V_s$$

Then we get the corresponding probability of default as

$$P(\tau \le T) = P(\min(\tau_1, \tau_2^*) \le T) = 1 - P(\min(\tau_1, \tau_2^*) > T) = 1 - P(\tau_1 > T, \tau_2^* > T) = 1 - P(M_T > B, V_T \ge K)$$

Using the joint distribution of an arithmetic Brownian motion and its running minimum, we get


$$P(\tau \le T) = \Phi\!\left(\frac{\ln(K/V_0) - mT}{\sigma\sqrt{T}}\right) + \left(\frac{B}{V_0}\right)^{2m/\sigma^2} \Phi\!\left(\frac{\ln\!\left(\frac{B^2}{KV_0}\right) + mT}{\sigma\sqrt{T}}\right) \tag{10}$$

The probability of default for a coupon bond with the modified first passage time approach is higher than the corresponding probability of the classical approach, equation (6).
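This comparison can be checked numerically. A short R sketch, reusing fpt_pd from Section 2.2 with illustrative parameters satisfying B < K:

# Compare the classical (6) and modified (10) default probabilities.
V0 <- 100; K <- 70; B <- 50; mu <- 0.05; sigma <- 0.25; T <- 5
m  <- mu - sigma^2 / 2

pd6  <- fpt_pd(V0, B, mu, sigma, T)                               # equation (6)
pd10 <- pnorm((log(K / V0) - m * T) / (sigma * sqrt(T))) +
        (B / V0)^(2 * m / sigma^2) *
        pnorm((log(B^2 / (K * V0)) + m * T) / (sigma * sqrt(T)))  # equation (10)

pd10 > pd6   # TRUE: the modified approach gives a higher default probability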

4. Empirical Study in Indonesian Bond Market

In this case study we use data sets from the Indonesian Bond Market Directory 2011, published by the Indonesian Stock Exchange (IDX) and the Indonesian Bond Pricing Agency (IBPA). We use a bond issued by PT Bank Lampung (BPD Lampung), namely Obligasi II Bank Lampung Tahun 2007, with code number BLAM02 IDA000035208. The profile structure of this bond is given in Table 3. The total assets data of the firm, published by Bank Indonesia, are given in Table 4.

Table 3. Profile Structure of Obligasi II Bank Lampung Tahun 2007

Outstanding Listing Date Maturity Date Issue Term Coupon Structure

300,000,000,000 Nov 12, 2007 Nov 9, 2012 5 years Fixed 11.85 %

Table 4. Total Asset Value of PT Bank Lampung Tbk in the Last 2 Years

Years Month Total Assets Value Years Month Total Assets Value

2011 January 2,454,583,000,000 2012 January 3,178,145,000,000

February 2,583,697,000,000 February 3,402,327,000,000

March 2,854,050,000,000 March 3,611,431,000,000

April 2,837,780,000,000 April 3,845,560,000,000

May 2,846,133,000,000 May 4,085,869,000,000

June 3,020,574,000,000 June 4,082,279,000,000

July 3,129,483,000,000 July 3,978,328,000,000

August 2,919,372,000,000 August 3,447,473,000,000

September 3,115,696,000,000 September 4,047,746,000,000

October 3,000,063,000,000 October 4,026,786,000,000

November 3,043,340,000,000 November 4,295,751,000,000

December 3,130,048,000,000 December 4,221,274,000,000

To derive the probability of default of the bond, we construct the formula by computing the historical low of firm values. All computations are done in R. In this study, we use a fixed barrier level of 2,000,000,000,000.

Using formula (10), the probability of default for Obligasi II Bank Lampung Tahun 2007 is 0.00003627191. This probability of default is very small because the outstanding amount of the bond is much lower than the total asset value. It can be seen from Table 3 and Table 4 that the face value of the bond is 300,000,000,000 while the total asset value at the end of 2012 is 4,221,274,000,000. In a normal situation, the total asset value is more than sufficient to pay the principal of the bond.
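The paper states only that the computation was done in R; a minimal sketch of how such a computation might look is given below. The estimation scheme (annualized monthly log-returns, latest asset value as V0) and the variable names are assumptions for illustration, not the authors' exact code.

# Sketch: probability of default via equation (10) from monthly asset data.
# `V` would hold the 24 monthly total-asset values of Table 4, in time order.
pd_modified_fpt <- function(V, K, B, T) {
  r     <- diff(log(V))          # monthly log-returns of total assets
  m     <- mean(r) * 12          # annualized drift of log-assets (m = mu - sigma^2/2)
  sigma <- sd(r) * sqrt(12)      # annualized volatility
  V0    <- V[length(V)]          # latest observed asset value
  d1 <- (log(K / V0) - m * T) / (sigma * sqrt(T))
  d2 <- (log(B^2 / (K * V0)) + m * T) / (sigma * sqrt(T))
  pnorm(d1) + (B / V0)^(2 * m / sigma^2) * pnorm(d2)
}
# Call shape (V from Table 4; K, B in rupiah; T = remaining maturity in years):
# pd_modified_fpt(V, K = 300000000000, B = 2000000000000, T = 1)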

Acknowledgements

We would like to thank the Hibah Disertasi Doktor research grant from DIKTI, 2013.


References

Black, F. & Cox, J., 1976, Valuing Corporate Securities: Some Effects of Bond Indenture Provisions,

Journal of Finance, 31 (2), 351-367.

Black, F. & Scholes, M., 1973, The Pricing of Option and Corporate Liabilities, Journal of Political

Economy, 81, 637-654.

Geske, R., 1977, The Valuation of Corporate Liabilities as Compound Options, Journal of Financial

and Quantitative Analysis, 12, 541-552.

Geske, R., 1979, The Valuation of Compound Options, Journal of Financial Economics, 7, 63-81.

Giesecke, K., 2004, Credit Risk Modeling and Valuation: An Introduction, Credit Risk: Models and

Management, Vol.2, D. Shimko (Ed.), Wiley. New York.

Merton, R., 1974, On The Pricing of Corporate Debt: The Risk Structure of Interest Rates, Journal of Finance, 29 (2), 449-470.

Reisz, A. & Perlich, C., 2004, A Market-Based Framework for Bankruptcy Prediction, Working Paper, Baruch College and New York University.

Wee, L.T., 2010, Compound Option, Teaching Note.

Website of Bank Indonesia (BI), 2013, Data Total Aset Bank, www.bi.go.id. [May 20, 2013]

Website of Bursa Efek Indonesia (BEI), 2012, Indonesian Bond Market Directory 2011, www.idx.co.id. [May 20, 2013]

Website of Indonesia Bond Pricing Agency (IBPA), 2012, Data Obligasi, www.ibpa.co.id. [May 20, 2013]


Subdirect Sums of Nonsingular M-Matrices

and of Their Inverses

Eddy DJAUHARIa, Euis HARTINIb
a,bDepartment of Mathematics, FMIPA, Universitas Padjadjaran, Indonesia
Email: a [email protected], b [email protected]

Abstract: In this paper, we discuss the question of when the subdirect sum of two nonsingular Z-matrices is a nonsingular M-matrix, and how to obtain its inverse. By checking that the entries of the inverse are nonnegative, it is shown that the matrix is a nonsingular M-matrix. In particular, for two block lower and upper triangular nonsingular M-matrices, the k-subdirect sum of the two matrices is a nonsingular M-matrix. The properties and theorems covering all of the above cases are given in this paper, together with examples.

Keywords: Subdirect sum, M-matrices, Inverse of M-matrix

1. Introduction

Let A and B be two square matrices of order n1 and n2, respectively, and let k be an integer such that 1 ≤ k ≤ min(n1, n2). Let A and B be partitioned into 2 × 2 blocks as follows:

$$A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}, \quad B = \begin{bmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{bmatrix} \tag{1.1}$$

where A22 and B11 are square matrices of order k. The k-subdirect sum of A and B, denoted C = A ⊕k B, is

$$C = A \oplus_k B = \begin{bmatrix} A_{11} & A_{12} & 0 \\ A_{21} & A_{22}+B_{11} & B_{12} \\ 0 & B_{21} & B_{22} \end{bmatrix} \tag{1.2}$$

where C is a square matrix of order n = n1 + n2 − k.

We are interested in the case when A and B are nonsingular matrices. We partition the inverses of A and B conformably to (1.1) and denote their blocks as follows:

$$A^{-1} = \begin{bmatrix} \hat{A}_{11} & \hat{A}_{12} \\ \hat{A}_{21} & \hat{A}_{22} \end{bmatrix}, \quad B^{-1} = \begin{bmatrix} \hat{B}_{11} & \hat{B}_{12} \\ \hat{B}_{21} & \hat{B}_{22} \end{bmatrix} \tag{1.3}$$

where Â22 and B̂11 are square matrices of order k.

In the following result we show that nonsingularity of the matrix Â22 + B̂11 is a necessary and sufficient condition for the k-subdirect sum C to be nonsingular. The proof is based on the use of the relation n = n1 + n2 − k to properly partition the indicated matrices.

Some definitions, properties and theorems will be given in this paper. A nonsingular Z-matrix is a matrix which has positive diagonal entries and nonpositive off-diagonal entries. A nonsingular M-matrix is a nonsingular Z-matrix whose inverse has nonnegative entries. In this paper, we want to form the k-subdirect sum of two nonsingular Z-matrices and to determine, by using some definitions, properties and theorems, whether it is a nonsingular M-matrix or not. Furthermore, we give some examples which help illustrate the theoretical results.

2. The k-subdirect sum of nonsingular M-matrices

For a nonsingular matrix of order n to be a nonsingular M-matrix, the following properties and theorems must hold:

Properties 1.


(i) The diagonal of a nonsingular M-matrix is positive.
(ii) If B is a Z-matrix and M is a nonsingular M-matrix with M ≤ B, then B is also a nonsingular M-matrix. In particular, any matrix obtained from a nonsingular M-matrix by setting certain off-diagonal entries to zero is also a nonsingular M-matrix.
(iii) A matrix M is a nonsingular M-matrix if and only if each principal submatrix of M is a nonsingular M-matrix.

Theorem 2.1. Let A and B be nonsingular matrices of order n1 and n2, respectively, and let k be an integer such that 1 ≤ k ≤ min(n1, n2). Let A and B be partitioned as in (1.1) and their inverses be partitioned as in (1.3). Let C = A ⊕k B; then C is nonsingular if and only if Ĥ = Â22 + B̂11 is nonsingular.

Proof. Let Im be the identity matrix of order m. The theorem follows from the following relation:

$$\begin{bmatrix} A^{-1} & 0 \\ 0 & I_{n-n_1} \end{bmatrix} C \begin{bmatrix} I_{n-n_2} & 0 \\ 0 & B^{-1} \end{bmatrix} = \begin{bmatrix} \hat{A}_{11} & \hat{A}_{12} & 0 \\ \hat{A}_{21} & \hat{A}_{22} & 0 \\ 0 & 0 & I_{n-n_1} \end{bmatrix} \begin{bmatrix} A_{11} & A_{12} & 0 \\ A_{21} & A_{22}+B_{11} & B_{12} \\ 0 & B_{21} & B_{22} \end{bmatrix} \begin{bmatrix} I_{n-n_2} & 0 & 0 \\ 0 & \hat{B}_{11} & \hat{B}_{12} \\ 0 & \hat{B}_{21} & \hat{B}_{22} \end{bmatrix} = \begin{bmatrix} I_{n-n_2} & \hat{A}_{12} & 0 \\ 0 & \hat{H} & \hat{B}_{12} \\ 0 & 0 & I_{n-n_1} \end{bmatrix} \tag{1.4}$$

We first consider the k-subdirect sum of nonsingular Z-matrices. From (1.4) we can explicitly write

$$C^{-1} = \begin{bmatrix} I_{n-n_2} & 0 \\ 0 & B^{-1} \end{bmatrix} \begin{bmatrix} I_{n-n_2} & -\hat{A}_{12}\hat{H}^{-1} & \hat{A}_{12}\hat{H}^{-1}\hat{B}_{12} \\ 0 & \hat{H}^{-1} & -\hat{H}^{-1}\hat{B}_{12} \\ 0 & 0 & I_{n-n_1} \end{bmatrix} \begin{bmatrix} A^{-1} & 0 \\ 0 & I_{n-n_1} \end{bmatrix}$$

from which we can obtain

$$C^{-1} = \begin{bmatrix} \hat{A}_{11} - \hat{A}_{12}\hat{H}^{-1}\hat{A}_{21} & \hat{A}_{12} - \hat{A}_{12}\hat{H}^{-1}\hat{A}_{22} & \hat{A}_{12}\hat{H}^{-1}\hat{B}_{12} \\ \hat{B}_{11}\hat{H}^{-1}\hat{A}_{21} & \hat{B}_{11}\hat{H}^{-1}\hat{A}_{22} & -\hat{B}_{11}\hat{H}^{-1}\hat{B}_{12} + \hat{B}_{12} \\ \hat{B}_{21}\hat{H}^{-1}\hat{A}_{21} & \hat{B}_{21}\hat{H}^{-1}\hat{A}_{22} & -\hat{B}_{21}\hat{H}^{-1}\hat{B}_{12} + \hat{B}_{22} \end{bmatrix} \tag{2.1}$$

and therefore we can state the following immediate result.

Example 2.2.

$$A = \begin{bmatrix} 2 & 1 \\ 2 & 2 \end{bmatrix} \Rightarrow A^{-1} = \frac{1}{2}\begin{bmatrix} 2 & -1 \\ -2 & 2 \end{bmatrix} = \begin{bmatrix} 1 & -\tfrac{1}{2} \\ -1 & 1 \end{bmatrix}$$

where Â11 = 1, Â12 = −1/2, Â21 = −1, Â22 = 1.

$$B = \begin{bmatrix} 1 & 1 \\ 2 & 3 \end{bmatrix} \Rightarrow B^{-1} = \frac{1}{1}\begin{bmatrix} 3 & -1 \\ -2 & 1 \end{bmatrix} = \begin{bmatrix} 3 & -1 \\ -2 & 1 \end{bmatrix}$$

where B̂11 = 3, B̂12 = −1, B̂21 = −2, B̂22 = 1, and Ĥ = Â22 + B̂11 = 1 + 3 = 4 is nonsingular.

$$C = A \oplus_1 B = \begin{bmatrix} A_{11} & A_{12} & 0 \\ A_{21} & A_{22}+B_{11} & B_{12} \\ 0 & B_{21} & B_{22} \end{bmatrix} = \begin{bmatrix} 2 & 1 & 0 \\ 2 & 2+1 & 1 \\ 0 & 2 & 3 \end{bmatrix} = \begin{bmatrix} 2 & 1 & 0 \\ 2 & 3 & 1 \\ 0 & 2 & 3 \end{bmatrix}$$

From here we have

$$\begin{bmatrix} A^{-1} & 0 \\ 0 & I_1 \end{bmatrix} C \begin{bmatrix} I_1 & 0 \\ 0 & B^{-1} \end{bmatrix} = \begin{bmatrix} 1 & -\tfrac{1}{2} & 0 \\ -1 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 2 & 1 & 0 \\ 2 & 3 & 1 \\ 0 & 2 & 3 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & 3 & -1 \\ 0 & -2 & 1 \end{bmatrix} = \begin{bmatrix} 1 & -\tfrac{1}{2} & -\tfrac{1}{2} \\ 0 & 2 & 1 \\ 0 & 2 & 3 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & 3 & -1 \\ 0 & -2 & 1 \end{bmatrix} = \begin{bmatrix} 1 & -\tfrac{1}{2} & 0 \\ 0 & 4 & -1 \\ 0 & 0 & 1 \end{bmatrix}$$

Thus, C is nonsingular.


Theorem 2.3. Let A and B be nonsingular Z-matrices of order n1 and n2, respectively, and let k be an integer such that 1 ≤ k ≤ min(n1, n2). Let A and B be partitioned as in (1.1) and their inverses be partitioned as in (1.3). Let C = A ⊕k B, and let Ĥ = Â22 + B̂11 be nonsingular. Then C is a nonsingular M-matrix if and only if every entry of C^{-1} is nonnegative.
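Theorem 2.3 can also be checked numerically. The following R sketch (an illustration, not part of the original paper) builds C = A ⊕k B directly from the block pattern of (1.2) and tests the sign condition; the matrices are those of Example 2.4 below.

# k-subdirect sum C = A (+)_k B: A occupies the leading n1 x n1 block,
# B the trailing n2 x n2 block, and the two overlap on a summed k x k block.
subdirect_sum <- function(A, B, k) {
  n1 <- nrow(A); n2 <- nrow(B)
  C <- matrix(0, n1 + n2 - k, n1 + n2 - k)
  C[1:n1, 1:n1] <- A
  idx <- (n1 - k + 1):(n1 + n2 - k)
  C[idx, idx] <- C[idx, idx] + B
  C
}

A <- matrix(c(2, -4, -1, 1), 2, 2, byrow = TRUE)   # Z-matrix of Example 2.4
B <- matrix(c(4, -3, -2, 1), 2, 2, byrow = TRUE)
C <- subdirect_sum(A, B, k = 1)
all(solve(C) >= 0)   # FALSE: C is not a nonsingular M-matrix (Theorem 2.3)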

Example 2.4.

$$A = \begin{bmatrix} 2 & -4 \\ -1 & 1 \end{bmatrix} \quad \text{and} \quad B = \begin{bmatrix} 4 & -3 \\ -2 & 1 \end{bmatrix}$$

are nonsingular Z-matrices. We have

$$A^{-1} = \frac{1}{-2}\begin{bmatrix} 1 & 4 \\ 1 & 2 \end{bmatrix} = \begin{bmatrix} -\tfrac{1}{2} & -2 \\ -\tfrac{1}{2} & -1 \end{bmatrix}$$

where Â11 = −1/2, Â12 = −2, Â21 = −1/2, Â22 = −1, and

$$B^{-1} = \frac{1}{-2}\begin{bmatrix} 1 & 3 \\ 2 & 4 \end{bmatrix} = \begin{bmatrix} -\tfrac{1}{2} & -\tfrac{3}{2} \\ -1 & -2 \end{bmatrix}$$

where B̂11 = −1/2, B̂12 = −3/2, B̂21 = −1, B̂22 = −2, and Ĥ = Â22 + B̂11 = −1 − 1/2 = −3/2, so Ĥ^{-1} = −2/3.

The k-subdirect sum of A and B for k = 1 is

$$C = A \oplus_1 B = \begin{bmatrix} 2 & -4 & 0 \\ -1 & 5 & -3 \\ 0 & -2 & 1 \end{bmatrix}$$

and from (2.1) we have

$$C^{-1} = \begin{bmatrix} \tfrac{1}{6} & -\tfrac{2}{3} & -2 \\ -\tfrac{1}{6} & -\tfrac{1}{3} & -1 \\ -\tfrac{1}{3} & -\tfrac{2}{3} & -1 \end{bmatrix}$$

We can see from here that the entries of C^{-1} are not all nonnegative. Thus, C is not a nonsingular M-matrix.

Example 2.5.

$$A = \begin{bmatrix} 1 & 0 & 0 \\ -1 & 3 & -1 \\ -2 & -1 & 1 \end{bmatrix} \quad \text{and} \quad B = \begin{bmatrix} 1 & 0 & 0 \\ -2 & 4 & -2 \\ -1 & -1 & 1 \end{bmatrix}$$

are nonsingular Z-matrices. We have

$$A^{-1} = \frac{1}{2}\begin{bmatrix} 2 & 0 & 0 \\ 3 & 1 & 1 \\ 7 & 1 & 3 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ \tfrac{3}{2} & \tfrac{1}{2} & \tfrac{1}{2} \\ \tfrac{7}{2} & \tfrac{1}{2} & \tfrac{3}{2} \end{bmatrix}, \quad \hat{A}_{11} = \begin{bmatrix} 1 \end{bmatrix}, \quad \hat{A}_{12} = \begin{bmatrix} 0 & 0 \end{bmatrix}, \quad \hat{A}_{21} = \begin{bmatrix} \tfrac{3}{2} \\ \tfrac{7}{2} \end{bmatrix}, \quad \hat{A}_{22} = \begin{bmatrix} \tfrac{1}{2} & \tfrac{1}{2} \\ \tfrac{1}{2} & \tfrac{3}{2} \end{bmatrix}$$

$$B^{-1} = \frac{1}{2}\begin{bmatrix} 2 & 0 & 0 \\ 4 & 1 & 2 \\ 6 & 1 & 4 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 2 & \tfrac{1}{2} & 1 \\ 3 & \tfrac{1}{2} & 2 \end{bmatrix}, \quad \hat{B}_{11} = \begin{bmatrix} 1 & 0 \\ 2 & \tfrac{1}{2} \end{bmatrix}, \quad \hat{B}_{12} = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \quad \hat{B}_{21} = \begin{bmatrix} 3 & \tfrac{1}{2} \end{bmatrix}, \quad \hat{B}_{22} = \begin{bmatrix} 2 \end{bmatrix}$$

Since the entries of A^{-1} and B^{-1} are nonnegative, the matrices A and B are also nonsingular M-matrices.

$$\hat{H} = \hat{A}_{22} + \hat{B}_{11} = \begin{bmatrix} \tfrac{1}{2} & \tfrac{1}{2} \\ \tfrac{1}{2} & \tfrac{3}{2} \end{bmatrix} + \begin{bmatrix} 1 & 0 \\ 2 & \tfrac{1}{2} \end{bmatrix} = \begin{bmatrix} \tfrac{3}{2} & \tfrac{1}{2} \\ \tfrac{5}{2} & 2 \end{bmatrix}, \quad \hat{H}^{-1} = \frac{4}{7}\begin{bmatrix} 2 & -\tfrac{1}{2} \\ -\tfrac{5}{2} & \tfrac{3}{2} \end{bmatrix} = \begin{bmatrix} \tfrac{8}{7} & -\tfrac{2}{7} \\ -\tfrac{10}{7} & \tfrac{6}{7} \end{bmatrix}$$

The k-subdirect sum of A and B for k = 2 is

$$C = A \oplus_2 B = \begin{bmatrix} 1 & 0 & 0 & 0 \\ -1 & 3+1 & -1+0 & 0 \\ -2 & -1-2 & 1+4 & -2 \\ 0 & -1 & -1 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ -1 & 4 & -1 & 0 \\ -2 & -3 & 5 & -2 \\ 0 & -1 & -1 & 1 \end{bmatrix}$$

and from (2.1) we have


$$C^{-1} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ \tfrac{5}{7} & \tfrac{3}{7} & \tfrac{1}{7} & \tfrac{2}{7} \\ \tfrac{13}{7} & \tfrac{5}{7} & \tfrac{4}{7} & \tfrac{8}{7} \\ \tfrac{18}{7} & \tfrac{8}{7} & \tfrac{5}{7} & \tfrac{17}{7} \end{bmatrix}$$

Since the entries of C^{-1} are nonnegative, C is a nonsingular M-matrix.

Example 2.6.

$$A = \begin{bmatrix} 2 & 0 & -1 \\ -2 & 1 & -3 \\ 0 & -1 & 1 \end{bmatrix} \quad \text{and} \quad B = \begin{bmatrix} 1 & -1 & 0 \\ -1 & 2 & -1 \\ -2 & -4 & 3 \end{bmatrix}$$

are nonsingular Z-matrices but not nonsingular M-matrices.

$$A^{-1} = \frac{1}{-6}\begin{bmatrix} -2 & 1 & 1 \\ 2 & 2 & 8 \\ 2 & 2 & 2 \end{bmatrix} = \begin{bmatrix} \tfrac{1}{3} & -\tfrac{1}{6} & -\tfrac{1}{6} \\ -\tfrac{1}{3} & -\tfrac{1}{3} & -\tfrac{4}{3} \\ -\tfrac{1}{3} & -\tfrac{1}{3} & -\tfrac{1}{3} \end{bmatrix}$$

where

$$\hat{A}_{11} = \begin{bmatrix} \tfrac{1}{3} \end{bmatrix}, \quad \hat{A}_{12} = \begin{bmatrix} -\tfrac{1}{6} & -\tfrac{1}{6} \end{bmatrix}, \quad \hat{A}_{21} = \begin{bmatrix} -\tfrac{1}{3} \\ -\tfrac{1}{3} \end{bmatrix}, \quad \hat{A}_{22} = \begin{bmatrix} -\tfrac{1}{3} & -\tfrac{4}{3} \\ -\tfrac{1}{3} & -\tfrac{1}{3} \end{bmatrix}$$

$$B^{-1} = \frac{1}{-3}\begin{bmatrix} 2 & 3 & 1 \\ 5 & 3 & 1 \\ 8 & 6 & 1 \end{bmatrix} = \begin{bmatrix} -\tfrac{2}{3} & -1 & -\tfrac{1}{3} \\ -\tfrac{5}{3} & -1 & -\tfrac{1}{3} \\ -\tfrac{8}{3} & -2 & -\tfrac{1}{3} \end{bmatrix}$$

where

$$\hat{B}_{11} = \begin{bmatrix} -\tfrac{2}{3} & -1 \\ -\tfrac{5}{3} & -1 \end{bmatrix}, \quad \hat{B}_{12} = \begin{bmatrix} -\tfrac{1}{3} \\ -\tfrac{1}{3} \end{bmatrix}, \quad \hat{B}_{21} = \begin{bmatrix} -\tfrac{8}{3} & -2 \end{bmatrix}, \quad \hat{B}_{22} = \begin{bmatrix} -\tfrac{1}{3} \end{bmatrix}$$

$$\hat{H} = \hat{A}_{22} + \hat{B}_{11} = \begin{bmatrix} -1 & -\tfrac{7}{3} \\ -2 & -\tfrac{4}{3} \end{bmatrix}, \quad \hat{H}^{-1} = -\frac{3}{10}\begin{bmatrix} -\tfrac{4}{3} & \tfrac{7}{3} \\ 2 & -1 \end{bmatrix} = \begin{bmatrix} \tfrac{2}{5} & -\tfrac{7}{10} \\ -\tfrac{3}{5} & \tfrac{3}{10} \end{bmatrix}$$

The k-subdirect sum of A and B for k = 2 is

$$C = A \oplus_2 B = \begin{bmatrix} 2 & 0 & -1 & 0 \\ -2 & 1+1 & -3-1 & 0 \\ 0 & -1-1 & 1+2 & -1 \\ 0 & -2 & -4 & 3 \end{bmatrix} = \begin{bmatrix} 2 & 0 & -1 & 0 \\ -2 & 2 & -4 & 0 \\ 0 & -2 & 3 & -1 \\ 0 & -2 & -4 & 3 \end{bmatrix}$$

and from (2.1) we have


$$C^{-1} = \begin{bmatrix} \tfrac{11}{30} & -\tfrac{2}{15} & -\tfrac{1}{10} & -\tfrac{1}{30} \\ -\tfrac{1}{6} & -\tfrac{1}{6} & -\tfrac{1}{2} & -\tfrac{1}{6} \\ -\tfrac{4}{15} & -\tfrac{4}{15} & -\tfrac{1}{5} & -\tfrac{1}{15} \\ -\tfrac{7}{15} & -\tfrac{7}{15} & -\tfrac{3}{5} & \tfrac{2}{15} \end{bmatrix}$$

Since the entries of C^{-1} are not all nonnegative, C is not a nonsingular M-matrix.

In the special case of A and B being block lower and block upper triangular nonsingular M-matrices, respectively, the corresponding result is easy to establish. Let

$$A = \begin{bmatrix} A_{11} & 0 \\ A_{21} & A_{22} \end{bmatrix} \quad \text{and} \quad B = \begin{bmatrix} B_{11} & B_{12} \\ 0 & B_{22} \end{bmatrix} \tag{2.2}$$

with A22 and B11 square matrices of order k.

Theorem 2.7. Let A and B be block lower and block upper triangular nonsingular M-matrices, respectively, partitioned as in (2.2). Then C = A ⊕k B is a nonsingular M-matrix.

In this particular case of block triangular matrices we have Â12 = 0, B̂21 = 0, Â22 = A22^{-1}, B̂11 = B11^{-1}; with A22 = B11 this gives Ĥ = A22^{-1} + B11^{-1}, and

$$A^{-1} = \begin{bmatrix} \hat{A}_{11} & 0 \\ \hat{A}_{21} & \hat{A}_{22} \end{bmatrix}, \quad B^{-1} = \begin{bmatrix} \hat{B}_{11} & \hat{B}_{12} \\ 0 & \hat{B}_{22} \end{bmatrix}$$

For C = A ⊕1 B (so that the blocks can be treated as scalars) we have

$$C = \begin{bmatrix} A_{11} & 0 & 0 \\ A_{21} & A_{22}+B_{11} & B_{12} \\ 0 & 0 & B_{22} \end{bmatrix}, \quad |C| = A_{11}(A_{22}+B_{11})B_{22}, \quad adj(C) = \begin{bmatrix} C_{11} & C_{21} & C_{31} \\ C_{12} & C_{22} & C_{32} \\ C_{13} & C_{23} & C_{33} \end{bmatrix}$$

where the cofactors are C11 = (A22 + B11)B22, C21 = 0, C31 = 0, C12 = −A21B22, C22 = A11B22, C32 = −A11B12, C13 = 0, C23 = 0, C33 = A11(A22 + B11). Hence

$$adj(C) = \begin{bmatrix} (A_{22}+B_{11})B_{22} & 0 & 0 \\ -A_{21}B_{22} & A_{11}B_{22} & -A_{11}B_{12} \\ 0 & 0 & A_{11}(A_{22}+B_{11}) \end{bmatrix}$$

and therefore

$$C^{-1} = \frac{1}{|C|}adj(C) = \frac{1}{A_{11}(A_{22}+B_{11})B_{22}}\begin{bmatrix} (A_{22}+B_{11})B_{22} & 0 & 0 \\ -A_{21}B_{22} & A_{11}B_{22} & -A_{11}B_{12} \\ 0 & 0 & A_{11}(A_{22}+B_{11}) \end{bmatrix}$$

Since A22 = B11, so that A22 + B11 = 2A22, this simplifies to

$$C^{-1} = \begin{bmatrix} A_{11}^{-1} & 0 & 0 \\ -\tfrac{1}{2}A_{22}^{-1}A_{21}A_{11}^{-1} & \tfrac{1}{2}A_{22}^{-1} & -\tfrac{1}{2}A_{22}^{-1}B_{12}B_{22}^{-1} \\ 0 & 0 & B_{22}^{-1} \end{bmatrix} \tag{2.3}$$

Example 2.8.


$$A = \begin{bmatrix} 2 & 0 \\ -1 & 3 \end{bmatrix} \quad \text{and} \quad B = \begin{bmatrix} 3 & -3 \\ 0 & 2 \end{bmatrix}$$

are block lower and block upper triangular nonsingular M-matrices, with A22 = B11 = 3. Then

$$A^{-1} = \frac{1}{6}\begin{bmatrix} 3 & 0 \\ 1 & 2 \end{bmatrix} = \begin{bmatrix} \tfrac{1}{2} & 0 \\ \tfrac{1}{6} & \tfrac{1}{3} \end{bmatrix}, \quad B^{-1} = \frac{1}{6}\begin{bmatrix} 2 & 3 \\ 0 & 3 \end{bmatrix} = \begin{bmatrix} \tfrac{1}{3} & \tfrac{1}{2} \\ 0 & \tfrac{1}{2} \end{bmatrix}$$

so that A11^{-1} = 1/2, A22^{-1} = 1/3, and B22^{-1} = 1/2.

The k-subdirect sum of A and B for k = 1 is

$$C = A \oplus_1 B = \begin{bmatrix} 2 & 0 & 0 \\ -1 & 3+3 & -3 \\ 0 & 0 & 2 \end{bmatrix} = \begin{bmatrix} 2 & 0 & 0 \\ -1 & 6 & -3 \\ 0 & 0 & 2 \end{bmatrix}$$

Then from (2.3) we obtain

$$C^{-1} = \begin{bmatrix} A_{11}^{-1} & 0 & 0 \\ -\tfrac{1}{2}A_{22}^{-1}A_{21}A_{11}^{-1} & \tfrac{1}{2}A_{22}^{-1} & -\tfrac{1}{2}A_{22}^{-1}B_{12}B_{22}^{-1} \\ 0 & 0 & B_{22}^{-1} \end{bmatrix} = \begin{bmatrix} \tfrac{1}{2} & 0 & 0 \\ -\tfrac{1}{2}\cdot\tfrac{1}{3}\cdot(-1)\cdot\tfrac{1}{2} & \tfrac{1}{2}\cdot\tfrac{1}{3} & -\tfrac{1}{2}\cdot\tfrac{1}{3}\cdot(-3)\cdot\tfrac{1}{2} \\ 0 & 0 & \tfrac{1}{2} \end{bmatrix} = \begin{bmatrix} \tfrac{1}{2} & 0 & 0 \\ \tfrac{1}{12} & \tfrac{1}{6} & \tfrac{1}{4} \\ 0 & 0 & \tfrac{1}{2} \end{bmatrix}$$

and therefore C is a nonsingular M-matrix, as expected.
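The closed form (2.3) can be cross-checked against the direct numerical inverse by reusing the subdirect_sum sketch given after Theorem 2.3 (an illustration, not part of the original paper):

A <- matrix(c(2, 0, -1, 3), 2, 2, byrow = TRUE)   # matrices of Example 2.8
B <- matrix(c(3, -3, 0, 2), 2, 2, byrow = TRUE)
C <- subdirect_sum(A, B, k = 1)
solve(C)              # matches (2.3): rows (1/2, 0, 0), (1/12, 1/6, 1/4), (0, 0, 1/2)
all(solve(C) >= 0)    # TRUE: C is a nonsingular M-matrix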

3. Conclusion

From the results of the discussion above, we draw the following conclusions:

1. The k-subdirect sum C = A ⊕k B of two nonsingular matrices A and B partitioned into 2 × 2 blocks is nonsingular if and only if Ĥ = Â22 + B̂11 is nonsingular.

2. The k-subdirect sum C = A ⊕k B of two nonsingular Z-matrices A and B is a nonsingular M-matrix if and only if each entry of C^{-1} is nonnegative, where C^{-1} is obtained from formula (2.1).

3. The k-subdirect sum C = A ⊕k B of two nonsingular M-matrices A and B is also a nonsingular M-matrix, because each entry of C^{-1} is nonnegative.

4. The k-subdirect sum C = A ⊕k B of a block lower triangular and a block upper triangular nonsingular M-matrix is also a nonsingular M-matrix; in this case Â12 = 0, B̂21 = 0, Â22 = A22^{-1}, B̂11 = B11^{-1}, and A22 = B11, and C^{-1} is obtained from formula (2.3).

Acknowledgment

We would like to thank all the people who helped with this paper and the Department of Mathematics, which organized this seminar.


References

Ayres, Frank Jr. (1974). Matriks. Translated by I Nyoman Susila (1994). Jakarta: Erlangga.

Bru, Rafael, Francisco Pedroche, and Daniel B. Szyld. (2005). Subdirect Sums of Nonsingular M-Matrices and of Their Inverses. Electronic Journal of Linear Algebra, 13, 162-174. Retrieved June 1, 2013, from http://hermite.cii.fc.ul.pt/iic/ela/ela-articles/13.html.

Fallat, S.M. and Johnson, C.R. (1999). Sub-direct sums and positivity classes of matrices. Linear Algebra Appl., 288, 149-173.


Teaching Quotient Group Using GAP

Ema CARNIAa*, Isah AISAHa & Sisilia SYLVIANIa
aDepartment of Mathematics, Faculty of Mathematics and Natural Sciences, Universitas Padjadjaran, Indonesia

*[email protected]

Abstract: The quotient group, as one of the subjects in abstract algebra, is often perceived as difficult by most undergraduate students. Therefore, we need a method that can help students understand the concept of the quotient group. One alternative is to use the GAP (Groups, Algorithms, and Programming) software as an instrument in learning quotient groups. GAP can make the presentation of the quotient group concept more attractive. By using GAP, students are expected to reach a better understanding of the quotient group concept.

Keywords: Quotient group, GAP

Keywords: Quotient group, GAP

1. Introduction

The quotient group is one of the materials studied in the abstract algebra course at the undergraduate level. Teachers often encounter students' difficulties in accepting and understanding the material. Most students have difficulty understanding the concept of a set whose members are themselves sets. Orit Hazzan, in her paper (1999), also mentioned that many abstract algebra teachers reported students' difficulties in understanding the material. In 1994, Dubinsky, Dautermann, Leron, and Zazkis pointed out that the major difficulties in understanding group theory appear to begin with the concepts leading up to Lagrange's Theorem and quotient groups: cosets, coset multiplication, and normality. Therefore, this paper emphasizes the teaching of quotient groups.

Various attempts have been made to assist students in understanding the material, for example, conducting tutorials and explaining quotient group materials in detail accompanied by more concrete examples. Many researchers have worked on developing materials for teaching abstract algebra. Brown (1990), Kiltinen and Mansfield (1990), Czerwinski (1994), and Leganza (1995) all provided examples of specific abstract algebra tasks for students and then examined the responses given by the students. Dubinsky, Dautermann, Leron, and Zazkis, in 1994, conducted a study on the development of learning some topics in abstract algebra, including cosets, normality, and quotient groups. In 1997, Asiala, Dubinsky, Mathews, and Morics conducted research concentrating on developing students' understanding of the coset, normality, and quotient group materials.

Some researchers have used programming languages for teaching abstract algebra materials. For example, in 1976 Gallian used a computer program written in the Fortran programming language to investigate finite groups (Gallian, 1976). There were also researchers who used "Exploring Small Groups" (Geissinger, 1989) and "Cayley" (O'Bryan & Sherman, 1992). Some of them used software packages that do not specialize in computation in discrete abstract algebra, such as Matlab (Mackiw, 1996). On the contrary, to help teach abstract algebra, Dubinsky and Leron (1994) used a software package that does specialize in computation in discrete abstract algebra, namely ISETL. However, over time, the use of ISETL became less effective, because the program has many limitations in terms of its functions and library. In addition, this program is designed specifically for teaching, so it cannot be used for research purposes. When we teach abstract algebra we should bear in mind that we educate undergraduate students who may become graduate students in the future, so we had better provide them with tools they can use to do research in the future.

In this paper, to address the same problem, the GAP software is used as a tool in teaching the concept of the quotient group to undergraduate students, and we review the role of GAP in deepening students' understanding of the material. GAP is a software package used for computation in discrete abstract algebra (The GAP Group, 2013). Compared with ISETL, GAP has many advantages. Besides being usable as a tool in teaching abstract algebra materials, it can also


be used for research purposes. Another advantage of GAP is that this software is still being developed today.

2. About GAP

GAP stands for Group, Algorithm, and Programming. GAP is a free, open, and extensible software

package which is used for computation in abstract algebra, with particular emphasis on Computational

Group Theory. This software is used in research and teaching for studying groups and their

representations, rings, vector spaces, algebras, combinatorial structures, and more (The GAP Group,

2013).

GAP was first developed in 1985 at Lehrstuhl D für Mathematik, RWTH Aachen Germany by

Joachim Neubüser, Johannes Meier, Alice Niemayer, Werner Nickel, and Martin Schönert. The first

version of GAP, which was released to the public in 1988, is version 2.4. Then, in 1997 coordination

of GAP development was moved to St. Andrews, Scotland. GAP version 4.1, which was released in

July 1999, was a complete internal redesign and almost complete rewrite of the system. In 2008, GAP received the ACM/SIGSAM Richard Dimick Jenks Memorial Prize for outstanding software engineering applied to computer algebra. GAP is still being developed at the University of St Andrews, in St Andrews, Scotland (The GAP Group, 2013). The current version of GAP is 4.6.5, which was released on 20 July 2013. Figure 1 shows the user interface of GAP version 4.6.5.

Figure 1. GAP version 4.6.5 user interface.

Alexander Hulpke developed a GAP installer for version 4.4.12. The installer installs GAP and GGAP, a graphical user interface for the system. Figure 2 shows the GGAP user interface.

Figure 2. GGAP user interface.

Although GGAP looks better in terms of appearance, it still uses GAP version 4.4. Consequently there are some drawbacks. There are many changes between GAP versions 4.4 and 4.6.5: GAP 4.6.5 has more packages and improved functionality than 4.4, and some bugs found in 4.4, which could lead to incorrect results, are fixed in version 4.6.5. Therefore, although we can still use GGAP to execute some specific commands, the use of GAP version 4.6.5 is recommended.


GAP has more than 100 packages that provide algorithms, methods, and libraries. From a programming standpoint, this software has a lot of functions and operations. GAP currently has more than 800 default functions for studying topics in algebra, so GAP can be used to provide many examples, from simple to complex ones, in a relatively short time compared with manual search. There are at least five ways in which GAP can be a useful educational tool: GAP can be used as a fancy calculator, as a way to provide large or complicated examples, as a way for students to write simple computer algorithms, as a way of producing large amounts of data so that students can formulate a conjecture, and as a means for students to work in collaboration (Gallian, 2010).

GAP is an interactive system based on a "read-eval-print" loop: the system takes in user input, given in text form, evaluates this input (which typically will do the calculations), and then prints the result of this calculation (Hulpke, 2011).

The interactive nature of GAP allows the user to write an expression or command and see its value immediately. The user can define a function and apply it to arguments to see how the function works (The GAP Group, 2006).

3. Using GAP in Teaching Quotient Group Concept

When students have sufficient knowledge about groups and subgroups, they can be given an explanation of the concept of right and left relations in group theory. The rules imposed on these relations lead to a necessary and sufficient condition for a subset to be a subgroup. Both the right and the left relation are equivalence relations. Under these relations, the group is partitioned into equivalence classes called cosets. In particular, the left relation results in the formation of left cosets, and the right relation generates the right cosets.

Figure 3. Finding left and right cosets one at a time.


Figure 3 gives an overview of the working steps that students can follow to find cosets. First, in the GAP worksheet, the students are asked to define a group and its subgroup. After that, by using GAP, the students are assigned to find all the right and left cosets based on the understanding they have gained. Students can also view and compare the definitions of right and left cosets.

The teacher can give some examples to help the students understand how to use GAP to find cosets. For example, first define G as the group generated by the permutation (1 2 3 4) and use a GAP command to print all of its elements. Then, in a similar way, define H, the subgroup of G generated by the permutation (1 3)(2 4), and print all of its elements. Now, find all left and right cosets of H in G one at a time, as in the session sketched below. After that, the teacher can give the students another example of finding cosets.
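A minimal GAP session along these lines might look as follows (a sketch in GAP 4 syntax; printed outputs are abbreviated, and the left coset is built by hand since GAP provides right cosets directly):

gap> G := Group( (1,2,3,4) );;
gap> Elements( G );
[ (), (1,2,3,4), (1,3)(2,4), (1,4,3,2) ]
gap> H := Subgroup( G, [ (1,3)(2,4) ] );;
gap> RightCoset( H, (1,2,3,4) );                     # one right coset, H*(1,2,3,4)
gap> RightCosets( G, H );                            # all right cosets at once
gap> Set( List( Elements(H), h -> (1,2,3,4)*h ) );   # the left coset (1,2,3,4)*H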

By using GAP, students can be trained to find cosets in a more pleasant way, because they can directly see the concrete form of all the cosets they are looking for. This makes it easier for them to understand the coset concept as previously given.

Once they are able to find the cosets one by one, the teacher can raise the question of what to do if we want to produce all of these cosets in just one GAP command line. This question is raised with the intention that the students not only understand the coset concept theoretically, but can also apply the concepts they have acquired in a computer algorithm. This exercises their creativity in learning mathematics.

Next, the students are asked to observe all the cosets they obtained. The question that can be raised is whether all the cosets they obtained differ from each other, or whether some cosets have something in common. There are several ways to answer these questions. One of them is manually comparing the cosets they have obtained, i.e., checking whether each right coset is the same as the corresponding left coset. In this approach, they first compare the cosets they obtained and then write down what conclusions they can draw. From this, the teacher can raise the question of whether this is still an effective way to proceed for cosets with many elements. The answer to that question can bring up another way to check whether the right cosets are equal to the left cosets. This, too, exercises their creativity, because to answer the question the students are required to translate the formal definition they have learned into a computer algorithm. Figure 4 shows another way of checking whether every right coset is a left coset; one GAP formulation of such a check is sketched after the figure caption below.

Figure 4. Finding the left and right cosets
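One possible GAP formulation of this check, reusing G and H from the session above (a sketch; it compares every right coset Hg with the corresponding left coset gH):

gap> ForAll( G, g -> Set( List( Elements(H), h -> h*g ) )
>                    = Set( List( Elements(H), h -> g*h ) ) );
true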


Once the students understand how to find the right and left cosets, as well as how to check that the right and left cosets coincide, the teacher can introduce normal subgroups. A normal subgroup is a subgroup whose right and left cosets are the same. The students can thus understand the concept of normal subgroups more easily, because they have indirectly applied the definition in GAP.

From the exercises above, the students already know what is meant by a coset, left and right cosets, and a normal subgroup. Thus they have enough knowledge to face the quotient group material.

To introduce the concept of the quotient group G/H, which is the set of cosets Hg or gH, the teacher shows the necessary and sufficient condition on the subgroup H for the operation on G/H to be well defined. The necessary and sufficient condition is that H is a normal subgroup of the group G (each right coset equals the corresponding left coset).

After introducing the concept of the quotient group, the students are shown some examples of quotient groups. However, sometimes students still cannot understand very well how to find the quotient group. Therefore, after doing such examples manually, the students are assigned to work in the GAP worksheet. By using GAP, students are able to see and understand the concept of the quotient group more easily. Figure 5 shows how to obtain a quotient group and its elements using GAP.

Figure 5. Obtaining quotient group and its elements using GAP.


For example, define the symmetric group S4. Then, using a GAP command, find all of its elements. After that, define N, the subgroup of S4 generated by (1 2)(3 4) and (1 3)(2 4), and find all elements of N using a GAP command. Check whether N is a normal subgroup of S4. If N is a normal subgroup of S4, then print the quotient group S4/N; otherwise, find another subgroup of S4 that is normal in S4. A sketch of such a session is given below.
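A possible GAP session for this example (a sketch; output lines are abbreviated):

gap> G := SymmetricGroup( 4 );;
gap> Elements( G );;                                   # the 24 permutations of S4
gap> N := Subgroup( G, [ (1,2)(3,4), (1,3)(2,4) ] );;  # the Klein four-group
gap> IsNormal( G, N );
true
gap> Q := FactorGroup( G, N );;                        # the quotient group S4/N
gap> Size( Q );
6
gap> Elements( Q );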

Based on the explanation above, it can be concluded that by using GAP the teacher can introduce the concept of the quotient group to the students in an enjoyable way. By using GAP, the students are also trained to improve their creativity, particularly in mathematics, because they are trained to express the algebraic concepts they have learned in the form of programming algorithms. They can also understand a new concept more easily, because they can directly see concrete instances of the concept. The use of GAP in learning abstract algebra is expected to provide motivation to learn the subject in a way that is not monotonous, which in turn can increase students' understanding of the course.

4. Conclusion

Abstract algebra is one of the subjects often perceived as difficult by most students. Dubinsky et al. found that the great difficulties encountered by students start with the concepts leading up to the quotient group. Therefore we need an innovative method for learning the quotient group concept. One of the things that can be done is to use GAP as an instrument in learning quotient groups. The use of GAP in learning the concept of the quotient group is expected to reduce the abstractness of the concept, thus helping students to understand it.

Acknowledgements

We would like to thank all the people who prepared and revised previous versions of this document.


References

Asiala, M., Dubinsky, E., Mathews, D. M., Morics, S. and Oktaç, A. (1997). Development of Students’

Understanding of Cosets, Normality, and Quotient groups. Journal of Mathematical Behavior, 16(3), 241–

309.

Dubinsky, Ed & Leron, Uri. (1994). Learning Abstract Algebra with ISETL. New York: Springer-Verlag.

Gallian, J.A. (1976). Computers in Group Theory. Mathematics Magazines, 49, 69-73.

Gallian, J. A. (2010), Abstract Algebra with GAP for Contemporary Abstract Algebra 7th edition. Brooks/Cole,

Cengage Learning. Boston.

Geissinger, Ladnor. (1989). Exploring Small Groups (Ver1.2B). San Diego: Harcourt Brace Jovanovich.

Hazzan, O. (1999). Reducing Abstraction Level When Learning Abstract Algebra Concepts. Education Studies

in Mathematics, 40 (1), 71-90.

Hulpke, A. (2011). Abstract Algebra in GAP. The Creative Commons Attribution-Noncommercial-Share Alike

3.0. United States, California.

Mackiw, George. (1996). Computing in abstract algebra. The College Mathematics Journal, 27, 136-142.

O’Bryan, John & Sherman, Gary. (1992). Undergraduates, CAYLEY, and mathematics. PRIMUS, 2, 289-308.

Rainbolt, J.G. (2002). Teaching Abstract Algebra with GAP. Saint Louis.

The GAP Group. (2013). GAP – Groups, Algorithms, and Programming, Version 4.6.5. http://www.gap-

system.org.


Analysis of Factors Affecting the Inflation in

Indonesia by Using Error Correction Model

Emah SURYAMAHa*, Sukonob, Satria BUANAc
a,b,cDepartment of Mathematics, FMIPA, Universitas Padjadjaran
*[email protected]

Abstract: Inflation is an economic ill that cannot be ignored, because it can bring economic instability, slow economic growth, and increase unemployment. Usually, inflation becomes a target of government policy. Failure or shocks will cause price fluctuations in the domestic market and end with inflation in the economy (Baasir, 2003:265). There are several factors that cause inflation, such as the money supply, the fuel price, the exchange rate, and the BI rate. Because inflation fluctuates, controlling it in order to maintain economic stability is hard to do. Finding the most influential factor of inflation is one method of controlling it. The theory of inflation and Keynes' theory are used to analyze the inflation factors. The error correction model method is used to find the most influential factor of inflation; that factor is the fuel price.

Keywords: Inflation, Error Correction Model, Inflation Factors, Economic Stability

1. Introduction

Inflation is a general and continuous increase in prices associated with the market mechanism, and it can be caused by various factors. The process of a continuous decrease in the value of the currency is also called inflation. Inflation is an economic disease that cannot be ignored, because it can cause economic instability, slow economic growth, and ever-rising unemployment.

Not infrequently, inflation becomes a target of government policy. Failure or shocks in the country will lead to price fluctuations in the domestic market and end with inflation in the economy (Baasir, 2003:265).

The fluctuation of the inflation rate in Indonesia, together with the variety of factors that affect it, makes inflation all the more difficult to control, so to control it the government must know the factors forming inflation. Inflation in Indonesia is not only a short term phenomenon, as in the quantity theory and Keynes' theory of inflation, but also a long term phenomenon (Baasir, 2003:267).

Because inflation fluctuates, controlling it is very difficult, and controlling it is necessary to maintain stability in the economy. Therefore, an attempt to keep inflation stable is important. One way to control it is by looking at the factor with the most influence on inflation. This requires further analysis in the search for the causes of inflation, through studying which factors affect inflation in Indonesia and the influence of these factors in the long term.

This paper examines the relationship between inflation and the factors that cause it, namely the money supply, national income, the exchange rate, and the interest rate, in the long term, so as to identify which factor has the most influence on inflation.

2. Methodology

2.1 Data types

This paper uses data in the form of monthly time series for the years 2005 to 2012. The data used in this paper are secondary data obtained from institutions or agencies, among others Bank Indonesia (BI) and the Central Statistics Agency (BPS). The data used are:
1. Data on inflation in Indonesia in 2005-2012.
2. Data on the money supply (M2) in Indonesia in 2005-2012.

2. Data of the money supply (M2) in Indonesia in 2005-2012 years.

Proceedings of the International Conference on Mathematical and Computer Sciences

Jatinangor, October 23rd-24th , 2013

68

3. Data of the fuel price (BBM) in Indonesia in 2005-2012 years.

4. Data of the interest rate in Indonesia in 2005 – 2012 years.

5. Data of the U.S. dollar exchange rate in 2005 – 2012 years.

2.2 Analysis of Error Correction Model

The ECM is an econometric method used in the analysis of time series data. The ECM involves the use of an econometric measurement method called cointegration. The ECM method is used to look at the short term dynamic movement, so that the short term equilibrium can be seen, while cointegration is used to look at the long term equilibrium. Before discussing the ECM, we first discuss the concept of stationarity.

2.2.1 Stationarity Test

A stationarity test on the data is performed via unit root tests, to see whether the time series data used are stationary or not. The stationarity test in this study uses the Augmented Dickey-Fuller (ADF) test. This test compares the value of ADFtest with the MacKinnon critical values at 1%, 5%, and 10%, based on the following equation (Gujarati, 2003:817):

$$\Delta Z_t = \beta_0 + \beta_1 T + \delta Z_{t-1} + \sum_{i=1}^{m} \alpha_i \Delta Z_{t-i} + \varepsilon_t \tag{1}$$

where β0 is a deterministic function of the time index t and ΔZt = Zt − Zt−1 is the first difference; in practice β0 can be zero or a constant. The t-ratio statistic is

$$ADF_{test} = \frac{\hat{\delta}}{std(\hat{\delta})} \tag{2}$$

where δ̂ is the estimator of δ; this is referred to as the Augmented Dickey-Fuller unit root test. The hypotheses used in this study are H0: δ = 0, the time series data contain a unit root, and H1: δ ≠ 0, the time series data do not contain a unit root. If ADFtest < MacKinnon critical value then reject H0, whereas if ADFtest > MacKinnon critical value then accept H0.
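The paper runs these tests in Eviews 6 (Section 3); for illustration only, an equivalent check can be sketched in R, assuming the tseries package and a simulated series:

library(tseries)            # provides adf.test()
set.seed(1)
z <- cumsum(rnorm(96))      # simulated I(1) series, 96 monthly observations
adf.test(z)                 # H0: unit root; large p-value, so H0 is accepted
adf.test(diff(z))           # first difference; small p-value, so H0 is rejected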

Data Transformation to Become Stationary. To transform the data into stationary series, a differencing process is used. The equation used, in general, is

$$Z_t = \beta_1 + \beta_2 t + \beta_3 Z_{t-1} + \varepsilon_t \tag{3}$$

If β1 = 0, β2 = 0 and β3 = 1, then the model becomes a random walk without intercept:

$$Z_t = Z_{t-1} + \varepsilon_t \quad \text{or} \quad \Delta Z_t = \varepsilon_t \tag{4}$$

So E(ΔZt) = 0 and Var(ΔZt) = σ², which means that the model becomes stationary through the first differencing process.

If β1 ≠ 0, β2 = 0 and β3 = 1, then the model becomes a random walk with an intercept, which is not stationary:

$$Z_t = \beta_1 + Z_{t-1} + \varepsilon_t \tag{5}$$

so Zt − Zt−1 = β1 + εt, or ΔZt = β1 + εt. Then E(ΔZt) = β1 and Var(ΔZt) = σ²; the mean and variance become constant, which means that the model becomes stationary through the first differencing process.

If β1 ≠ 0, β2 ≠ 0 and β3 = 0, then the model becomes a model with a deterministic trend:

$$Z_t = \beta_1 + \beta_2 t + \varepsilon_t \tag{6}$$


where the mean is E(Zt) = β1 + β2t and the variance is Var(Zt) = σ². If Zt is reduced by its mean, the time series data become stationary. This process is called a trend stationary process.

2.2.2 Cointegration test

The cointegration test was popularized by Engle and Granger (1987) (Gujarati, 2009). The cointegration approach is closely related to testing for the long-term equilibrium relationship between economic variables required by economic theory. The cointegration approach can also be seen as a test of theory and is an important part of the formulation and estimation of a dynamic model (Engle and Granger, 1987).

In the concept of cointegration, two or more non-stationary time series variables are cointegrated if a linear combination of them moves together over time, even though each variable individually is not stationary. When the time series variables are cointegrated, there is a stable relationship in the long run. If the two non-stationary series X_t and Z_t are cointegrated, then there is a special representation as follows:

Z_t = β_0 + β_1 X_t + ε_t    (7)

The hypotheses used to test cointegration according to equation (7) are as follows:

H_0: δ = 0, meaning that the residuals of equation (7) contain a unit root (no cointegration)

H_1: δ ≠ 0, meaning that the residuals of equation (7) do not contain a unit root (cointegration)

If H_0 is rejected, the residuals do not contain a unit root and the variables are cointegrated; if H_0 is accepted, the residuals contain a unit root and the variables are not cointegrated.
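A short sketch of this Engle-Granger residual-based test using statsmodels' coint function; the two series below are simulated stand-ins, cointegrated by construction:

```python
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(2)
x = np.cumsum(rng.normal(size=300))          # an I(1) regressor
z = 2.0 + 0.8 * x + rng.normal(size=300)     # cointegrated with x by construction

t_stat, pvalue, crit = coint(z, x)           # ADF test on the residuals of (7)
print(t_stat, pvalue)                        # small p-value -> cointegrated
```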

2.2.3 Error Correction Model

In the long term, a time series model can be shown to be a cointegrating regression, i.e., an equilibrium (stable) relationship; but in the short term the model may experience disturbances of this balance caused by the error term (ε_t). Adjustment for short-term deviations is done by inserting an error correction term derived from the residuals of the long-term equation. Correcting short-term imbalances toward the long-term equilibrium is called the Error Correction Mechanism.

The ECM model of the relationship between the independent variable (X) and the dependent variable (Y) has the form:

ΔY_t = a_0 + a_1 ΔX_t + a_2 ε_{t-1} + e_t    (10)

where ε_{t-1} is the cointegration error at lag 1. When ε_{t-1} is not zero, the model is out of equilibrium. If ε_{t-1} is positive, a_2 ε_{t-1} is negative, causing ΔY_t to be negative so that Y_t falls back to correct the equilibrium error; whereas if ε_{t-1} is negative, a_2 ε_{t-1} is positive, causing ΔY_t to be positive so that Y_t rises in period t to correct the equilibrium error. The absolute value of a_2 describes how quickly the equilibrium value is restored.
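The two-stage estimation of equation (10), described later as the EG two-step method, can be sketched as follows; x and y are hypothetical stand-ins for the study's variables:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
x = np.cumsum(rng.normal(size=300))
y = 1.0 + 0.5 * x + rng.normal(size=300)

# Step 1: long-run (cointegrating) regression, save residuals e_t
long_run = sm.OLS(y, sm.add_constant(x)).fit()
e = long_run.resid

# Step 2: regress dY on dX and the lagged residual e_{t-1}
dy, dx, e_lag = np.diff(y), np.diff(x), e[:-1]
ecm = sm.OLS(dy, sm.add_constant(np.column_stack([dx, e_lag]))).fit()
print(ecm.params)   # a0, a1, a2; a2 < 0 pulls Y back toward equilibrium
```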

2.3 Diagnostic Test of Ordinary Least Squares (OLS)

An important step in estimating a model is to test whether the estimated model deserves to be used or not. This feasibility testing, or diagnostic test, tests for serial correlation between the residuals at several lags. In this study, the diagnostic test used is the Portmanteau test, with the following test statistic (Anastia, 2012):

Q = n(n + 2) Σ_{k=1}^{m} r̂_k² / (n − k)    (11)


which is distributed as χ²(m − p), where r̂_k is the residual autocorrelation at lag k, m is the number of residual lags, p is the order of the VAR model, and n is the number of residuals.

The hypotheses used at this stage are:

H_0: the residuals are homoscedastic

H_1: the residuals are heteroscedastic

If Q > χ²(m − p), then reject H_0; however, if Q ≤ χ²(m − p), then accept H_0.
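A sketch of how the Q statistic of equation (11) can be computed with statsmodels (the residual series here is simulated white noise, not the study's residuals):

```python
import numpy as np
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(4)
resid = rng.normal(size=200)       # white-noise residuals

lb = acorr_ljungbox(resid, lags=[10])
print(lb)   # lb_stat is Q in (11); a large lb_pvalue means H0 is not rejected
```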

3. Results and Analysis

3.1 Stationarity Test Results

Stationarity of the time series data for the inflation rate, the money supply, the fuel price, the U.S. dollar exchange rate, and the interest rate is tested in this study using graphs and the Augmented Dickey-Fuller (ADF) test of equation (2), with the help of the software EViews 6. The hypotheses used are:

H_0: ADF_test > MacKinnon critical value (there is a unit root at level)

H_1: ADF_test < MacKinnon critical value (there is no unit root at level)

Table 1. Stationarity test results at level

Variable | Critical value (1%) | Critical value (5%) | Critical value (10%) | ADF_test | Prob. | Conclusion
Inflation rate | -2.595340 | -1.945081 | -1.614017 | -6.451073 | 0.0000 | H_0 rejected
Money supply | -2.600471 | -1.945823 | -1.613589 | -0.161921 | 0.6240 | H_0 accepted
Price of BBM | -2.595340 | -1.945081 | -1.614017 | -8.402825 | 0.0000 | H_0 rejected
USD exchange rate | -2.595340 | -1.945081 | -1.614017 | -7.906546 | 0.0000 | H_0 rejected
Interest rate | -2.595340 | -1.945081 | -1.614017 | -3.308546 | 0.0012 | H_0 rejected

Based on Table 1, only the money supply has ADF_test > MacKinnon critical value; thus the money supply variable is not stationary at level at the 1%, 5%, and 10% significance levels. The inflation rate, fuel price, U.S. dollar exchange rate, and interest rate have ADF_test < MacKinnon critical value, so these variables are already stationary at level at the 1%, 5%, and 10% significance levels.

3.2 Results of the Data Transformation to Stationarity

Because the money supply variable is not stationary at level, the data are transformed to become stationary and re-tested using the Augmented Dickey-Fuller (ADF) test of equation (3), with the help of the software EViews 6, at the 1st difference. The hypotheses used are:

H_0: ADF_test > MacKinnon critical value (there is a unit root at 1st difference)

H_1: ADF_test < MacKinnon critical value (there is no unit root at 1st difference)

Table 2. Stationarity test results at 1st difference

Variable | Critical value (1%) | Critical value (5%) | Critical value (10%) | ADF_test | Prob. | Conclusion
Money supply | -2.600471 | -1.945823 | -1.613589 | -6.902877 | 0.0000 | H_0 rejected


Based on Table 2, ADF_test < MacKinnon critical value for the money supply variable, so the money supply is stationary at the 1st difference.

3.3 Cointegration Test Results

Since the variables are not all stationary at level but are stationary at 1st difference, cointegration is likely to occur, which means there may be a long-term relationship between the variables. To find out whether the variables are indeed cointegrated, they are tested with the Augmented Engle-Granger test using equation (7), with the help of the software EViews 6, which also yields the long-term cointegration model.

Table 3. Results of the cointegration test

Null Hypothesis: RESID01 has a unit root
Exogenous: None
Lag Length: 0 (Automatic based on SIC, MAXLAG=1)

 | t-Statistic | Prob.*
Augmented Dickey-Fuller test statistic | -7.382618 | 0.0000
Test critical values: 1% level | -2.595340 |
 5% level | -1.945081 |
 10% level | -1.614017 |
*MacKinnon (1996) one-sided p-values.

Augmented Dickey-Fuller Test Equation
Dependent Variable: D(RESID01)
Method: Least Squares
Date: 08/15/13 Time: 09:16
Sample (adjusted): 7/04/2005 10/18/2005
Included observations: 77 after adjustments

Variable | Coefficient | Std. Error | t-Statistic | Prob.
RESID01(-1) | -0.83549 | 0.11317 | -7.382618 | 0.0000

R-squared 0.41761; Adjusted R-squared 0.41761; S.E. of regression 0.132578; Sum squared resid 1.335853; Log likelihood 46.82979; Durbin-Watson stat 1.918074; Mean dependent var -0.00120; S.D. dependent var 0.17373; Akaike info criterion -1.19038; Schwarz criterion -1.15995; Hannan-Quinn criter. -1.17821.

Table 3 shows that, for this null hypothesis, ADF_test < MacKinnon critical value at the 1%, 5%, and 10% significance levels. It can be concluded that the inflation, money supply, fuel price, U.S. dollar exchange rate, and interest rate variables are cointegrated.

3.4 Results of the Error Correction Model

The error correction model proposed by Engle and Granger requires two stages, and is therefore called the EG two-step method. The first stage calculates the residuals of the cointegrating regression in Table 3. The second stage is a regression analysis that includes the residuals from the first stage. The result of the first stage is the residual series from the cointegration.


Table 4. Results of the second stage

Dependent Variable: D(INFLASI)
Method: Least Squares
Date: 07/24/13 Time: 17:58
Sample (adjusted): 2005M08 2011M12
Included observations: 77 after adjustments

Variable | Coefficient | Std. Error | t-Statistic | Prob.
C | -0.00089 | 0.015498 | -0.057655 | 0.9542
D(B1) | -0.78552 | 0.574968 | -1.366192 | 0.1762
D(B2) | 0.855093 | 0.138204 | 6.187179 | 0.0000
D(B3) | -0.23725 | 0.314182 | -0.755122 | 0.4527
D(B4) | 0.950098 | 0.681905 | 1.393299 | 0.1679
RESID01(-1) | -0.81397 | 0.11981 | -6.793862 | 0.0000

R-squared 0.54037; Adjusted R-squared 0.508002; S.E. of regression 0.135946; Sum squared resid 1.312167; Log likelihood 47.51855; F-statistic 16.69442; Durbin-Watson stat 1.927857; Mean dependent var -0.00129; S.D. dependent var 0.193813; Akaike info criterion -1.0784; Schwarz criterion -0.89577; Hannan-Quinn criter. -1.00535.

In Table 4 it can be seen that several variables are not significant. Therefore, the non-significant variables are removed and the model is re-estimated, starting from the least significant variable (the one with the largest prob. value) greater than the significance level α = 5%: first the constant, then the variables d(B3), d(B1), and d(B4). The re-estimation results after eliminating the constant, obtained with the help of the software EViews 6, are given in Table 5.

Table 5. Re-estimation results after removing the constant C

Dependent Variable: INFLASI
Method: Least Squares
Date: 08/20/13 Time: 19:25
Sample (adjusted): 7/04/2005 10/18/2005
Included observations: 77 after adjustments

Variable | Coefficient | Std. Error | t-Statistic | Prob.
D(B1) | -0.64150 | 0.652986 | -0.982414 | 0.3292
D(B2) | 0.467592 | 0.156990 | 2.978486 | 0.0039
D(B3) | -0.18125 | 0.356888 | -0.507857 | 0.6131
D(B4) | 0.084239 | 0.774466 | 0.108770 | 0.9137
RESID01(-1) | 0.222794 | 0.136092 | 1.637089 | 0.1060

R-squared 0.145509; Adjusted R-squared 0.098037; S.E. of regression 0.154425; Sum squared resid 1.716984; Log likelihood 37.16632; Durbin-Watson stat 1.573165; Mean dependent var -0.0107; S.D. dependent var 0.162601; Akaike info criterion -0.83549; Schwarz criterion -0.68329; Hannan-Quinn criter. -0.77461.

In Table 5 the variables d(B1), d(B3), and d(B4) are not significant, so the model should be re-estimated by eliminating the least significant variable, d(B4) (the one with the greatest prob. value). The result of the re-estimation after eliminating d(B4), obtained with the help of the software EViews 6, is given in Table 6.


Table 6. Re-estimation results after removing variable d(B4)

Dependent Variable: INFLASI
Method: Least Squares
Date: 08/20/13 Time: 19:34
Sample (adjusted): 7/04/2005 10/18/2005
Included observations: 77 after adjustments

Variable | Coefficient | Std. Error | t-Statistic | Prob.
D(B1) | -0.64514 | 0.647698 | -0.996056 | 0.3225
D(B2) | 0.465220 | 0.154412 | 3.012849 | 0.0036
D(B3) | -0.17732 | 0.352649 | -0.502835 | 0.6166
RESID01(-1) | 0.226110 | 0.131732 | 1.716444 | 0.0903

R-squared 0.145369; Adjusted R-squared 0.110247; S.E. of regression 0.153376; Sum squared resid 1.717266; Log likelihood 37.16; Durbin-Watson stat 1.573502; Mean dependent var -0.0107; S.D. dependent var 0.162601; Akaike info criterion -0.8613; Schwarz criterion -0.73954; Hannan-Quinn criter. -0.8126.

In Table 6 the variables d(B1) and d(B3) are not significant, so the model should be re-estimated by eliminating the least significant variable, d(B3) (the one with the greatest prob. value). The result of the re-estimation after eliminating d(B3), obtained with the help of the software EViews 6, is given in Table 7.

Table 7. Re-estimation results after removing variable d(B3)

Dependent Variable: INFLASI
Method: Least Squares
Date: 08/20/13 Time: 19:38
Sample (adjusted): 7/04/2005 10/18/2005
Included observations: 77 after adjustments

Variable | Coefficient | Std. Error | t-Statistic | Prob.
D(B1) | -0.68294 | 0.640067 | -1.066977 | 0.2895
D(B2) | 0.457851 | 0.152937 | 2.993721 | 0.0037
RESID01(-1) | 0.220293 | 0.130559 | 1.687309 | 0.0958

R-squared 0.142409; Adjusted R-squared 0.119230; S.E. of regression 0.152600; Sum squared resid 1.723214; Log likelihood 37.02688; Durbin-Watson stat 1.576709; Mean dependent var -0.0107; S.D. dependent var 0.162601; Akaike info criterion -0.88382; Schwarz criterion -0.79250; Hannan-Quinn criter. -0.84729.


In Table 7 the variable d(B1) is not significant, so the model should be re-estimated by eliminating the variable d(B1). The result of the re-estimation after eliminating d(B1), obtained with the help of the software EViews 6, is given in Table 8.

Table 8. Re-estimation results after removing variable d(B1)

Dependent Variable: D(INFLASI)
Method: Least Squares
Date: 08/15/13 Time: 09:49
Sample (adjusted): 7/04/2005 10/18/2005
Included observations: 77 after adjustments

Variable | Coefficient | Std. Error | t-Statistic | Prob.
D(B2) | 0.799148 | 0.136112 | 5.871237 | 0.0000
RESID01(-1) | -0.77097 | 0.116539 | -6.615506 | 0.0000

R-squared 0.510323; Adjusted R-squared 0.503794; S.E. of regression 0.136526; Sum squared resid 1.397948; Log likelihood 45.08054; Durbin-Watson stat 1.965013; Mean dependent var -0.00129; S.D. dependent var 0.193813; Akaike info criterion -1.11898; Schwarz criterion -1.0581; Hannan-Quinn criter. -1.09462.

From Table 8, the following model is obtained:

d(Y) = 0.799148 d(X_2) − 0.77097 ε_{t-1}

The model above explains that if the fuel price variable d(X_2) increases by 1%, the inflation variable d(Y) will increase by 0.799148%. The term ε_{t-1} shows the magnitude of the error correction residual at lag 1, namely 0.77097.

To find out whether the model above deviates from the classical assumption requirements of the regression model, where the regression model must be homoscedastic rather than heteroscedastic (i.e., the residual variance is constant), a diagnostic test is performed. The hypotheses used are:

H_0: the residuals are homoscedastic

H_1: the residuals are heteroscedastic

If Q > χ²(m − p), then reject H_0; however, if Q ≤ χ²(m − p), then accept H_0.

The results of the diagnostic test with the help of software EViews 6 are given in Table 9.


Table 9. Diagnostic test results (EViews 6 output)

Heteroskedasticity Test: White
F-statistic 0.03552, Prob. F(3,73) 0.9910
Obs*R-squared 0.112236, Prob. Chi-Square(3) 0.9903
Scaled explained SS 0.781481, Prob. Chi-Square(3) 0.8539

Test Equation:
Dependent Variable: RESID^2
Method: Least Squares
Date: 09/22/13 Time: 14:18
Sample: 2005M08 2011M12
Included observations: 77

Variable | Coefficient | Std. Error | t-Statistic | Prob.
C | 0.018956 | 0.008708 | 2.176860 | 0.0327
(D(B2))^2 | -0.02705 | 0.163647 | -0.165294 | 0.8692
(D(B2))*RESID01(-1) | 0.054855 | 1.031982 | 0.053155 | 0.9578
RESID01(-1)^2 | -0.02517 | 0.123145 | -0.204395 | 0.8386

R-squared 0.001458; Adjusted R-squared -0.03958; S.E. of regression 0.071385; Sum squared resid 0.371993; Log likelihood 96.05018; F-statistic 0.03552; Prob(F-statistic) 0.990959; Durbin-Watson stat 2.009134; Mean dependent var 0.018155; S.D. dependent var 0.070013; Akaike info criterion -2.39091; Schwarz criterion -2.26916; Hannan-Quinn criter. -2.34221.

Looking at the Obs*R-squared value of 0.112236 < 9.48773 (the critical chi-square value at α = 5%), it can be concluded that the estimated model is homoscedastic. Another way is to look at the chi-square probability: in the result above, the probability value of 0.9903 means that heteroscedasticity does not occur at the 1%, 5%, or 10% significance levels; the larger the probability value, the stronger the evidence that heteroscedasticity is absent.

Thus the model explains that if the fuel price variable d(X_2) increases by 1%, the inflation variable d(Y) will increase by 0.799148%, and the error correction residual at lag 1 of 0.770966 means that the short-term equilibrium will be reached.
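For readers without EViews, the same White test can be sketched with statsmodels as follows; resid and exog are illustrative stand-ins, not the study's residuals:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_white

rng = np.random.default_rng(5)
x = rng.normal(size=(200, 1))
exog = sm.add_constant(x)
resid = rng.normal(size=200)     # homoscedastic by construction

lm_stat, lm_pvalue, f_stat, f_pvalue = het_white(resid, exog)
print(lm_stat, lm_pvalue)        # lm_stat is Obs*R-squared; large p keeps H0
```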

4. Conclusion

The conclusions that can be drawn from this study are as follows. For the variable d(X_2): if the fuel price increases by 1%, inflation will increase by 0.799148% in both the short term and the long term. As for the variables d(X_1), d(X_3), and d(X_4): the money supply, the U.S. dollar exchange rate, and the interest rate affect inflation in the long term but have no effect in the short term. The error correction residual at lag 1, ε_{t-1}, is 0.77097. Because the variables d(B1), d(B3), and d(B4) were removed, the money supply, the U.S. dollar exchange rate, and the interest rate are not significant enough to be used in the model. Only the fuel price is significant in the model, so the fuel price is the variable most influential on inflation.


5. References

Agustina, R. 2012. Analisis Hubungan Kausalitas dan Keseimbangan Jangka Panjang Pertumbuhan Penduduk dan Pertumbuhan Ekonomi Jawa Barat Menggunakan Pendekatan Model Vector Autoregressive (VAR). Unpublished undergraduate thesis. Bandung: Jurusan Matematika FMIPA Universitas Padjadjaran.

Anastia, J. N. 2012. Perbandingan Tiga Uji Statistik Dalam Verifikasi Model Runtun Waktu. Unpublished undergraduate thesis. Bandung: Jurusan Matematika Universitas Pendidikan Indonesia.

Cryer, J.D. 1986. Time Series Analysis. Boston: PWS-KENT Publishing Company.

Gujarati, D. 2003. Basic Econometrics. Second Edition. New York: McGraw-Hill.

Rosadi, D. 2012. Ekonometrika & Analisis Runtun Waktu Terapan dengan EViews. Yogyakarta: Andi.

Wei, W.W.S. 2006. Time Series Analysis: Univariate and Multivariate Methods. Second Edition. USA: Addison-Wesley Publishing Company.


Network Flows and Integer Programming

Models for The Two Commodities Problem

Lesmana E.

Department of Mathematics Faculty of Mathematics and Natural Science, Universitas Padjadjaran

Jl. Raya Bandung-Sumedang km 21 Jatinangor

[email protected]

Abstract: Network flow models generally describe problems under the assumption that a single commodity is sent through a network. Sometimes the network can carry different types of commodities. The multi-commodity problem aims to minimize the total cost when different types of goods are shipped through the same network. Commodities can be distinguished by their physical characteristics or only by certain attributes. This paper focuses on network flow and integer programming models for the two-commodity problem.

Keywords: network, integer programming, commodities

1. Introduction

Network problems often arise in transportation, electricity, telephony, and communication. Networks can also be used in production, distribution, project planning, layout planning, resource management, and financial planning. One network optimization model is the minimum cost flow problem. This problem concerns flow through a network whose arc capacities are limited, and, like the shortest path problem, it takes into account the cost or distance of flow through the arcs.

a. Networks

A network is defined as a collection of points (nodes) and a collection of lines (arcs) joining these points. There is normally some flow along these lines, going from one point (node) to another.

Figure 1.1 Basic elements of a network (nodes and arcs)

b. Nodes and Arcs

In a network, the points (circles) are called nodes, and the lines are referred to as arcs, links, or arrows.

c. Flow

Flow is the amount of goods, vehicles, flights, passengers, and so on that moves from one node to another, e.g., x_ij = 100 passengers from node i to node j.

Figure 1.2 Flow between two nodes


If the flow through an arc is allowed in only one direction, then the arc is said to be a directed arc. Directed arcs are graphically represented by arrows pointing in the direction of the flow.

Figure 1.3 Directed flow

When the flow on an arc ( between two nodes ) can move in either direction, it is called an

undirected arc. Undirected arcs are graphically represented by a single line ( without arrows )

connecting the two nodes.

d. Arc capacity

Arc capacity is the maximum amount of flow on an arc. Examples include restrictions on the number of flights between two cities.

e. Supply Nodes

Supply nodes are nodes where the amount of flow leaving them is greater than the amount of flow arriving, i.e., nodes with positive net flow (e.g., +115).

Figure 1.4 Supply nodes

f. Demand Nodes

Demand nodes are nodes with negative net flow, i.e., inflow greater than outflow (e.g., -50).

Figure 1.5 Demand nodes

g. Transshipment Nodes

Transshipment nodes are nodes with the same amount of flow arriving and leaving, i.e., nodes with zero net flow.

Figure 1.6 Transshipment nodes

h. Path

A path is a sequence of distinct arcs that connects two nodes.

Figure 1.7 A network with 3 paths from A to G

Node A is the source (starting node); node G is the destination (last node).

i. Cycle

A cycle is a sequence of directed arcs that begins and ends at the same node.

Figure 1.8 A cycle

j. Connected Network

A connected network is a network in which every two nodes are linked by at least one path.

Figure 1.9 Connected network



2. Model Formulation

This section first explains the basic assumptions of network problems, then lists the input parameters and decision variables of the minimum cost flow problem; finally, a mathematical formulation is built in stages.

a. Minimum Cost Flow Problem

The minimum cost flow problem on a network attempts to minimize the total cost of shipping the available supply through the network to meet demand. It often arises in transportation, transshipment, and shortest path problems. This problem assumes that we know the cost per unit of flow and the capacity associated with each arc. In general, the minimum cost flow problem can be described by the following assumptions:

1. The network is directed and connected.
2. At least one node is a supply node.
3. At least one other node is a demand node.
4. All remaining nodes are transshipment nodes.
5. Flow through an arc is allowed only in the direction indicated by the arrow, and the maximum amount of flow is given by the arc capacity.
6. The network has enough arc capacity to direct all the flow generated at the supply nodes to the demand nodes.
7. The cost of flow through each arc is proportional to a known cost per unit.
8. The goal is to minimize the total cost of shipping.

The decision variables are:

x_ij = the flow through arc i-j

The known information includes:

c_ij = cost per unit of flow through arc i-j
u_ij = arc capacity for arc i-j
b_i = net flow generated at node i

b_i depends on the type of node, where:

b_i > 0 if i is a supply node
b_i < 0 if i is a demand node
b_i = 0 if i is a transshipment node

With the aim of minimizing the total cost of delivery through the network while meeting demand, the mathematical model is as follows:

Minimize Z = Σ_{i=1}^{n} Σ_{j=1}^{n} c_ij x_ij    (2-1)

s.t. Σ_{j=1}^{n} x_ij − Σ_{j=1}^{n} x_ji = b_i for each node i,

and 0 ≤ x_ij ≤ u_ij for each arc i-j.


The first summation in the node constraint is the total flow out of node i, while the second is the total flow into node i, so their difference is the net flow generated at this node. In practice b_i and u_ij are integers, and then all basic variables in every basic feasible solution, including the optimal solution, are integers. Hence the optimal solution of this problem is typically found using integer programming.

Example:

Consider the network presented in Figure 2.1 (adapted from Anderson et al., 2003). An airline is tasked with transporting goods from nodes 1 and 2 to nodes 5, 6, and 7. The airline does not have direct flights from the source nodes to the destination nodes; instead, they are connected through its hubs at nodes 3 and 4. The numbers next to the nodes represent the supply/demand in tons. The numbers on the arcs represent the unit costs of transporting the goods from sources to destinations, and the goal is to minimize the total cost. The aircraft flying to and from node 4 can carry a maximum of 50 tons of cargo.

Figure 2.1 Network presentation for the minimum cost flow problem

To formulate this problem, consider the following decision variable :

xij = amount of flow from node i to node j

The objective function is :

Minimize Z = 5 x1,3 + 8 x1,4 + 7 x2,3 + 4x2,4

+ 1 x3,5 + 5 x3,6 + 8 x3,7 +3x4,5

+ 4 x4,6 + 4 x4,7

We need to write one constraint for each node. For example, for node 1 we have :

x1,3 + x1,4 ≤ 75

for node 2 we have :

x2,3 + x2,4 ≤ 75

Similarly, we write constraints for the other nodes. Note that the net flow for nodes 3 and 4 should be

zero as these are transshipment nodes.

All the flights to and from node 4 can carry a maximum of 50 tons. Therefore, all the flow to and from this node must be limited to 50, as follows:

x1,4 ≤ 50

x2,4 ≤ 50

x4,5 ≤ 50

x4,6 ≤ 50

x4,7 ≤ 50

Non negative constraint is xij ≥ 0 , and xij is integer.

Solving this problem using the software POM-QM for Windows generates a total minimum cost of $1,250. The solution for this problem is presented in Figure 2.2.


Figure 2.2 Solution to the minimum cost flow problem
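As a cross-check on the POM-QM result, the same LP can be written with SciPy's linprog; the arc data below are read off the objective and constraints above:

```python
from scipy.optimize import linprog

# Arcs in fixed order: (1,3),(1,4),(2,3),(2,4),(3,5),(3,6),(3,7),(4,5),(4,6),(4,7)
c = [5, 8, 7, 4, 1, 5, 8, 3, 4, 4]           # unit shipping costs

# Supply limits at nodes 1 and 2 (<= 75 each)
A_ub = [[1, 1, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 1, 1, 0, 0, 0, 0, 0, 0]]
b_ub = [75, 75]

# Flow conservation at hubs 3, 4 and exact demand at 5, 6, 7
A_eq = [[1, 0, 1, 0, -1, -1, -1, 0, 0, 0],   # node 3: in - out = 0
        [0, 1, 0, 1, 0, 0, 0, -1, -1, -1],   # node 4: in - out = 0
        [0, 0, 0, 0, 1, 0, 0, 1, 0, 0],      # node 5 demand
        [0, 0, 0, 0, 0, 1, 0, 0, 1, 0],      # node 6 demand
        [0, 0, 0, 0, 0, 0, 1, 0, 0, 1]]      # node 7 demand
b_eq = [0, 0, 50, 60, 40]

# Arcs touching node 4 are capped at 50 tons; the others are unbounded above
cap = {1: 50, 3: 50, 7: 50, 8: 50, 9: 50}    # arc indices of (1,4),(2,4),(4,5),(4,6),(4,7)
bounds = [(0, cap.get(k, None)) for k in range(10)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")
print(res.fun)   # should print 1250.0, matching the POM-QM result above
print(res.x)     # the optimal vertex here is integer-valued, as expected
```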

The general model is mathematically expressed as follows (Bazaraa et al 1990) :

Sets

M = set of nodes

Index

i,j,k = index for nodes

Parameters

ci,j = unit cost of flow from node i to node j

bi = amount of supply/demand for node i

Li,j = lower bound on flow through arc (i,j)

Ui,j = upper bound on flow through arc (i,j)

Decision variable

xi,j = amount of flow from node i to node j

Objective function

Minimize Z = Σ_{i∈M} Σ_{j∈M} c_{i,j} x_{i,j}    (2.2)

Subject to

Σ_{j∈M} x_{i,j} − Σ_{k∈M} x_{k,i} = b_i, for every node i ∈ M,

L_{i,j} ≤ x_{i,j} ≤ U_{i,j}    (2.3)

The objective function (2.2) minimizes the total cost over the network. The constraints (2.3) satisfy the requirements of each node by balancing the inflow and outflow at that node, and impose the lower and upper bound restrictions along the arcs.

3. Minimum Cost Flow for Two Commodities

In general, network models describe problems under the assumption that one type of commodity or entity is sent through the network. Sometimes the network can carry different types of commodities. The minimum cost flow problem for two commodities attempts to minimize the total cost when different types of goods are shipped through the same network. The two commodities can be distinguished by their physical characteristics or only by certain attributes. Two-commodity problems are widely used in the transportation industry; in the airline industry, two-commodity models are adopted to formulate crew pairing and fleet assignment models.



Example

We modify the example presented for the minimum cost flow problem discussed earlier to explain the two-commodity model formulation. Figure 3.1 presents the modified example.

Figure 3.1 Network presentation for the two-commodity problem

As seen in this figure, the scenario is very similar to the earlier case. The only difference is that instead of having only one type of cargo, we now have two types (two commodities). The numbers next to each node represent the supply/demand of each cargo at that node; for example, node 1 supplies 40 and 35 tons of cargoes 1 and 2, respectively. The transportation costs per ton are also similar. We want to determine how much of each cargo should be routed on each arc so that the total transportation cost is minimized.

To formulate this problem we use the following decision variable:

x_{i,j,k} = amount of flow from node i to node j for commodity k

where the indices i and j represent the nodes (i, j = 1, 2, 3, ..., 7) and k represents the type of commodity (k = 1, 2). The objective function is therefore:

Minimize Z = 5x1,3,1 + 5x1,3,2 + 8x1,4,1 + 8x1,4,2 + 7x2,3,1 + 7x2,3,2 + 4x2,4,1 + 4x2,4,2 + 1x3,5,1 + 1x3,5,2 + 5x3,6,1 + 5x3,6,2 + 8x3,7,1 + 8x3,7,2 + 3x4,5,1 + 3x4,5,2 + 4x4,6,1 + 4x4,6,2 + 4x4,7,1 + 4x4,7,2

Subject to :

x1,3,1 + x1,4,1 ≤ 40

x1,3,2 + x1,4,2 ≤ 35

x2,3,1 + x2,4,1 ≤ 50

x2,3,2 + x2,4,2 ≤ 25

x3,5,1 + x4,5,1 ≤ 30

x3,5,2 + x4,5,2 ≤ 20

x3,6,1 + x4,6,1 ≤ 30

x3,6,2 + x4,6,2 ≤ 30

x3,7,1 + x4,7,1 ≤ 30

x3,7,2 + x4,7,2 ≤ 10

Recall that all the flights to and from node 4 can carry a maximum of 50 tons. Therefore:



x1,4,1 + x1,4,2 ≤ 50

x2,4,1 + x2,4,2 ≤ 50

x4,5,1 + x4,5,2 ≤ 50

x4,6,1 + x4,6,2 ≤ 50

x4,7,1 + x4,7,2 ≤ 50

xi,j,k ≥ 0 and xi,j,k integer

Solving this problem using the software POM-QM (Production and Operations Management Quantitative Methods) version 3.0 generates a total minimum cost of $1,150. The solution for this problem is presented in Figure 3.2.

Figure 3.2 Solution of the minimum cost flow for the two-commodity problem

The general model is mathematically expressed as follows (Ahuja et al. 1993) :

Sets

M = set of nodes

K = set of commodities

Indices

i,j = index for nodes

k = index for commodities

Parameters

ci,j,k = unit cost of flow from node i to j for

commodity k

bi,k = amount of supply / demand at node i

for commodity k

ui,j = flow capacity on arc (i,j)

Decision variable

xi,j,k = amount of flow from node i to node j

for commodity k

Objective function



Minimize Z = Σ_{k∈K} Σ_{i∈M} Σ_{j∈M} c_{i,j,k} x_{i,j,k}    (3.1)

Subject to:

Σ_{t∈M} x_{i,t,k} − Σ_{t∈M} x_{t,i,k} = b_{i,k}, for all i ∈ M and k ∈ K    (3.2)

Σ_{k∈K} x_{i,j,k} ≤ u_{i,j}, for all i ∈ M and j ∈ M    (3.3)

x_{i,j,k} ≥ 0, and x_{i,j,k} integer.
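The following sketch assembles model (3.1)-(3.3) with SciPy using the Figure 3.1 data. The paper states its supply and demand rows as inequalities, so the value returned depends on how those rows are encoded (below, supplies are upper bounds and demands are met exactly); treat this as a structural illustration rather than a guaranteed reproduction of the $1,150 POM-QM figure:

```python
import numpy as np
from scipy.optimize import linprog

arcs = [(1,3),(1,4),(2,3),(2,4),(3,5),(3,6),(3,7),(4,5),(4,6),(4,7)]
cost = [5, 8, 7, 4, 1, 5, 8, 3, 4, 4]        # same unit cost for both cargoes
supply = {1: (40, 35), 2: (50, 25)}          # per node: (cargo 1, cargo 2) tons
demand = {5: (30, 20), 6: (30, 30), 7: (30, 10)}
bundle = {(1,4): 50, (2,4): 50, (4,5): 50, (4,6): 50, (4,7): 50}

nA, K = len(arcs), 2
def var(a, k):                               # column index of x[arc a, cargo k]
    return a + nA * k

c = cost * K
A_eq, b_eq, A_ub, b_ub = [], [], [], []
for k in range(K):
    for i in (3, 4):                         # hubs: inflow - outflow = 0, per cargo
        row = np.zeros(nA * K)
        for a, (s, t) in enumerate(arcs):
            row[var(a, k)] = (t == i) - (s == i)
        A_eq.append(row); b_eq.append(0.0)
    for i, d in demand.items():              # demand met exactly, per cargo
        row = np.zeros(nA * K)
        for a, (s, t) in enumerate(arcs):
            row[var(a, k)] = float(t == i)
        A_eq.append(row); b_eq.append(d[k])
    for i, s_ in supply.items():             # shipments limited by supply
        row = np.zeros(nA * K)
        for a, (s, t) in enumerate(arcs):
            row[var(a, k)] = float(s == i)
        A_ub.append(row); b_ub.append(s_[k])
for a, arc in enumerate(arcs):               # (3.3): bundled capacity on node-4 arcs
    if arc in bundle:
        row = np.zeros(nA * K)
        row[var(a, 0)] = row[var(a, 1)] = 1.0
        A_ub.append(row); b_ub.append(bundle[arc])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (nA * K), method="highs")
print(res.fun, res.x.round(1))               # total cost and per-cargo arc flows
```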

References

Ahuja, R., Magnanti, T., and Orlin, J. 1993. Network Flows: Theory, Algorithms and Applications. Prentice Hall.

Anderson, D., Sweeney, D., and Williams, T. 2003. Quantitative Methods for Business. 9th Edition. South-Western.

Bazaraa, M., Jarvis, J., and Sherali, H. 1990. Linear Programming and Network Flows. John Wiley.

Bazargan, M. 2010. Airline Operations and Scheduling. 2nd Edition. MPG Books Group.

Hillier, F. and Lieberman, G. 2001. Introduction to Operations Research. 7th Edition. McGraw-Hill.


Necessary Conditions for Convergence

of Ratio Sequence of Generalized Fibonacci

Endang RUSYAMANa*& Kankan PARMIKANTIb

a,bDept of Mathematic FMIPA, Padjadjaran University , Indonesia

*[email protected]

Abstract: The generalized Fibonacci sequence is a sequence with the rule

y_n = α y_{n-1} + β y_{n-2}

with initial conditions y_0 and y_1, where the real constants α and β are both non-zero. In this paper, we discuss problems relating to the convergence of the ratio sequence of two successive terms of the generalized Fibonacci sequence. The first issue discussed is determining the initial conditions for which the generalized Fibonacci ratio sequence is well defined. Furthermore, we discuss how to determine the values of α and β, and the relationship between them, as a necessary condition for the convergence of the ratio sequence. Using these conditions, the last problem addressed in this paper is proving the convergence of the generalized Fibonacci ratio sequence and determining its convergence point. In fact, the convergence of this sequence also depends on the signs of α and β, which influence the signs of the terms of the ratio sequence; hence the convergence proof proceeds case by case, partly via contractive sequences. This study also obtains the result that the numbers banned from use as initial conditions of the ratio sequence themselves form a sequence whose convergence point is the same as the convergence point of the ratio sequence.

Keywords: Generalized Fibonacci, ratio sequence, convergence, contractive, necessary conditions.

1. Introduction

Just as the continuity of a function matters for the convergence of sequences of functions [4], for any sequence of numbers one also needs to study the conditions that must be satisfied for the sequence to converge. To support research related to the convergence of sequences, projections, continuity of functions, existence of convergence points, and fractional derivatives, this paper discusses the convergence of number sequences through a geometric approach.

As is well known, the Fibonacci sequence is a sequence (x_n) with the recursion x_n = x_{n-1} + x_{n-2} and initial conditions x_0 = 1 and x_1 = 1. Among the features of the Fibonacci sequence: if the greatest common divisor of the numbers m and n is k, then the greatest common divisor of the m-th term x_m and the n-th term x_n is the k-th term x_k; similarly, x_k is always a divisor of x_{nk} for all natural numbers n [5]. Another feature is that any four consecutive Fibonacci numbers w, x, y, z always form a Pythagorean triple, namely wz, 2xy, and (yz − xw) [6]. Besides these three facts, it is also known that although the Fibonacci sequence itself is not convergent, the Fibonacci ratio sequence converges to a number called the Golden Ratio [5].

The generalization of the Fibonacci sequence is (y_n) with the rule

y_n = α y_{n-1} + β y_{n-2}    (1)

where the real constants α and β are both non-zero, and the initial conditions are y_0 and y_1 [2].

In [8], J. M. Tuwankotta has shown that the sequence (y_n) with β = 1 − α and 0 < α < 2 is a contractive sequence, hence convergent in R (the set of real numbers) [2], with convergence point

L(α) = y_0 + (y_1 − y_0) / (2 − α).

In this paper, the author discusses (r_n), the ratio sequence of two successive terms of the generalized Fibonacci sequence (1), in the form

r_n = y_n / y_{n-1},    (2)


where y_n ≠ 0 for all n.

By substituting (1) into (2), we obtain the ratio sequence in the form of the recursion

r_n = α + β / r_{n-1},  n = 2, 3, 4, ...    (3)

The first problem studied in this paper is the conditions on the constants α and β for the sequence (r_n) to be well defined, and the relationship between α and β required for (r_n) to converge. Furthermore, in addition to the proof of convergence, the convergence point of the sequence is also determined.
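As a small numeric illustration of (3) (not part of the original argument), iterating the recursion for α = β = 1 reproduces the classical golden-ratio limit:

```python
# For alpha = beta = 1 (the classical Fibonacci case), the ratio sequence
# should approach the golden ratio, the positive root of r^2 - r - 1 = 0.
alpha, beta = 1.0, 1.0
r = 1.0                                   # initial condition r1
for n in range(2, 30):
    r = alpha + beta / r                  # r_n = alpha + beta / r_{n-1}
print(r)                                  # ~1.6180339887, the golden ratio
print((alpha + (alpha**2 + 4*beta) ** 0.5) / 2)   # fixed point of (3)
```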

2. Condition for (r_n) to be well defined

Suppose a ratio sequence as in (3) is given. The sequence (r_n) is well defined if r_n ≠ 0 for all n. Thus we must choose the initial condition r_1 such that r_2 ≠ 0, r_3 ≠ 0, r_4 ≠ 0, and so on.

From (3), r_n can also be expressed as the continued fraction

r_n = α + β / (α + β / (α + β / (α + ⋯ + β / (α + β / r_1))))    (with n − 1 divisions)

Hence:

1. r_1 = −β/α results in r_2 = 0;
2. r_1 = −β/(α + β/α) results in r_3 = 0;
3. r_1 = −β/(α + β/(α + β/α)) results in r_4 = 0;

and so on.

Thus (r_n) is well defined if the initial condition r_1 ∉ CF(α, β), where CF(α, β) is the set of continued fractions

CF(α, β) = { −β/α,  −β/(α + β/α),  −β/(α + β/(α + β/α)),  ... }.

In particular, if the sequence CF(α, β) = (f_n) converges to f, then

lim_{n→∞} f_n = f = −β / (α − lim_{n→∞} f_n) = −β / (α − f),

thus we obtain f² − α f − β = 0, and hence

f = (α ± √(α² + 4β)) / 2.

In the case α > 0 and β > 0, f_n < 0 for all n, so the value that satisfies this is

f = (α − √(α² + 4β)) / 2 < 0.

3. Necessary Condition for Convergence

The necessary condition for convergence of the sequence (r_n) is determined by the relationship between α and β. If the sequence (r_n) is assumed to converge to a number r, then from equation (3) we obtain

lim_{n→∞} r_n = lim_{n→∞} (α + β / r_{n-1}),

resulting in the equation

r = α + β/r  or  r² − α r − β = 0.    (4)

In this case, r has a real value only if α² + 4β ≥ 0. Hence, the necessary condition for convergence of (r_n) is α² + 4β ≥ 0.

4. Proof of convergence

To prove that (r_n) is a convergent sequence, it will be shown that (r_n) is a contractive sequence, i.e., that there is a constant C with 0 < C < 1 such that for every n:

|r_n − r_{n-1}| < C |r_{n-1} − r_{n-2}|.

Using equation (3), we obtain

|r_n − r_{n-1}| = |(α + β/r_{n-1}) − (α + β/r_{n-2})| = |β/r_{n-1} − β/r_{n-2}| = (|β| / (|r_{n-1}| |r_{n-2}|)) |r_{n-1} − r_{n-2}|,    (5)

where

r_{n-1} r_{n-2} = (y_{n-1}/y_{n-2}) (y_{n-2}/y_{n-3}) = y_{n-1}/y_{n-3} = (α y_{n-2} + β y_{n-3}) / y_{n-3} = α (y_{n-2}/y_{n-3}) + β = α r_{n-2} + β,

so that (5) becomes

|r_n − r_{n-1}| = (|β| / |α r_{n-2} + β|) |r_{n-1} − r_{n-2}|.    (6)

Thus, whether or not the sequence (r_n) is contractive depends on α and β. The author divides this into the following cases:

Case 1: α > 0, β > 0, and r_n > 0 ∀n.
Case 2: α > 0, β > 0, and r_n < 0 ∀n.
Case 3: α > 0, β < 0, and r_n > 0 ∀n.
Case 4: α > 0, β < 0, and r_n < 0 ∀n.
Case 5: α < 0, β > 0, and r_n > 0 ∀n.
Case 6: α < 0, β > 0, and r_n < 0 ∀n.
Case 7: α < 0, β < 0, and r_n > 0 ∀n.
Case 8: α < 0, β < 0, and r_n < 0 ∀n.

For Case 1 and Case 6 above, we obtain |α r_{n-2} + β| > |β|, so if we let

C = |β| / |α r_{n-2} + β|, then 0 < C < 1.

This proves that (r_n) is contractive. By the theorem that a contractive sequence is Cauchy, and a Cauchy sequence in R is convergent [2], it follows that (r_n) is convergent.

For Case 4, α r_{n-2} + β > β would hold if r_{n-2} < 0 ∀n, but this is not possible, because if r_i < 0 for some i, then r_{i+1} = α + β/r_i > 0, so that (r_n) is not contractive.

Similarly for Case 7, the inequality α r_{n-2} + β > β would hold if r_{n-2} > 0 ∀n, but this is not possible, because if r_i > 0 for some i, then r_{i+1} = α + β/r_i < 0, so that (r_n) is not contractive.

For Cases 2, 3, 5, and 8, we have |α r_{n-2} + β| < |β|, so C = |β| / |α r_{n-2} + β| > 1. Thus (r_n) is not contractive, and convergence is not guaranteed.

5. Convergence point

We next determine the value of the convergence point of the sequence (r_n). If (r_n) converges to r, then from equation (4) we obtain

r = (α ± √(α² + 4β)) / 2,

so there are two possible values of r, namely:

r* = (α + √(α² + 4β)) / 2  or  r** = (α − √(α² + 4β)) / 2.    (7)

In Case 1 above, where α > 0 and β > 0, we have r* > 0 and r** < 0. But since r_n > 0 for all n, lim_{n→∞} r_n ≥ 0 [1], and this means that (r_n) converges to r*.

Similarly in Case 6, where α < 0 and β > 0, we have r* > 0 and r** < 0. But because r_n < 0 for all n, lim_{n→∞} r_n ≤ 0 [1], and this means that (r_n) converges to r**.

For the other cases, the convergence of (r_n) still needs further investigation, including via a geometric approach.
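A quick numeric sketch (with illustrative parameter values, not from the paper) confirming which of the two roots in (7) is reached in Case 1 and Case 6:

```python
def limit(alpha, beta, r1, n=500):
    # iterate recursion (3) from the initial condition r1
    r = r1
    for _ in range(n):
        r = alpha + beta / r
    return r

def roots(alpha, beta):
    # the two candidate convergence points r*, r** of equation (7)
    d = (alpha**2 + 4*beta) ** 0.5
    return (alpha + d) / 2, (alpha - d) / 2

print(limit(1.0, 1.0, 2.0), roots(1.0, 1.0))     # Case 1: converges to r*
print(limit(-1.0, 1.0, -1.0), roots(-1.0, 1.0))  # Case 6: converges to r**
```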

6. Geometric approach

As noted earlier, the convergence point of (r_n), i.e., r* or r**, depends on the values of α and β, so in this geometric approach the eight cases above can be simplified into four cases, namely:

case-1: α > 0, β > 0; case-2: α < 0, β > 0; case-3: α > 0, β < 0; and case-4: α < 0, β < 0.

A geometric approach can be used to see the convergence of (r_n), by comparing the recurrence relation in (3) with the hyperbolic function

y = α + β/x  or  y = (α x + β) / x.    (8)

If we set r_1 = x, then substituting r_1 into (3), or x into (8), yields r_2 = y. Furthermore, by projecting the value y = r_2 onto the x-axis through the line y = x, we set r_2 = x; then substituting r_2 into (3), or x into (8), yields r_3 = y. The process continues, so that r_1, r_2, r_3, ... are all located on the x-axis and move toward the abscissa of the convergence point on the curve (8).

Because there are two points, r* and r**, either of which may be the convergence point, the graph shows the r_n values moving toward one of the two points. These changes depend on α and β.

In the case-1 α > 0 and β > 0, from (7) we have r * is positive and r ** is negative number.

Proceedings of the International Conference on Mathematical and Computer Sciences

Jatinangor, October 23rd-24th , 2013

90

From (3) rn = α + 𝛽

𝑟𝑛−1 for n 2. So if r1 > 0 then rn > 0 n , and if r1 < 0 then there exist

a natural number k so that rn > 0 for all n > k . Hence for case-1, (rn) will converge to r* > 0.

In Figure-1, hyperbole in equation (8) has a horizontal asymptote y = α > 0 and the intersection point

with the x-axis is x = −

< 0 where r1 , r2 , r3 , . . . move towards r *, which means that (rn)

converges to r *.

Figure-1 Figure-2

In the case-2, α < 0 and β > 0 from (7) will be obtained r * is positive, and r ** is negative,

so that (rn) will converges to r ** < 0. Hyperbole has a horizontal asymptote y = α < 0 and the

intersection point with the x-axis is x = −

> 0. Graph as in Figure-2 above.

In the case-3 α > 0 and β < 0 will be obtained r * > r ** both are positive, so that (rn) will converges

to r * <0. Hyperbole has a horizontal asymptote y = α > 0 where the intersection point with the x-axis

is x = −

> 0. Graph as in Figure-3 below.

Figure-3 Figure-4

In the case-4 α < 0 and β < 0 will be obtained r * > r ** both are negative, so that (rn) will converge

to r ** < 0. The Hyperbole has a horizontal asymptote y = α < 0 where the intersection point with

the x-axis is x = −

< 0. Graphs as in Figure-4 above.


7. Conclusion

Based on the description above, a necessary condition for convergence of the generalized Fibonacci ratio sequence (r_n) = (y_n / y_{n-1}) is α² + 4β ≥ 0, where the convergence points are

r* = (α + √(α² + 4β)) / 2 for α > 0, β > 0 or α > 0, β < 0, and

r** = (α − √(α² + 4β)) / 2 for α < 0, β > 0 or α < 0, β < 0.

One interesting fact is that the sequence of numbers in CF(α, β), the numbers "banned" as initial conditions r_1, itself converges to a number f which is one of the convergence points of (r_n). Moreover, for the case α < 0 and β > 0, the sequence (r_n) and CF(α, β) have the same convergence point, i.e., r** = f.

From the above discussion it is also seen that the convergence point does not depend on the initial term r_1, but only on α and β. Similarly, the graphs indicate that the convergence point is the intersection point at which the curve has the gentler slope; that is, if f′(r*) < f′(r**) then (r_n) converges to r*, and if f′(r*) > f′(r**) then (r_n) converges to r**.

8. Acknowledgement

This work is fully supported by Universitas Padjadjaran under the Program of Penelitian

Unggulan Perguruan Tinggi Program Hibah Desentralisasi No. 2002/UN6.RKT/KU/2013.

9. References

[1] Apostol, Introduction to Mathematical Analysis, Addison-Wesley, 1974.
[2] Bartle, R.G. & Sherbert, Introduction to Real Analysis, 2nd ed., John Wiley & Sons, Inc., 1992.
[3] Dominic & Vella, Alfred, When is a Member of a Pythagorean Triple?, [email protected], 2002.
[4] Endang Rusyaman, Kankan Parmikanti, Eddy Djauhari, and Ema Carnia, Syarat Kekontinuan Fungsi Konvergensi Pada Barisan Fungsi Turunan Berorde Fraksional, Seminar Nasional Sains dan Teknologi Nuklir, Bandung, 2013.
[5] Kalman & Menna, Robert, The Fibonacci Numbers Exposed, Math Magazine, 2003 (3:167-181).
[6] Parmikanti, K., Pendekatan Geometri Untuk Masalah Konvergensi Barisan, Seminar Nasional Matematika, Unpad, 2006.
[7] Rusyaman, E., Konvergensi Barisan Fibonacci yang Diperumum, Seminar Nasional Matematika, Unpad, 2006.
[8] Tuwankotta, J.M., Contractive Sequence, ITB, 2005.


Mean-Variance Portfolio Optimization on Some

Islamic Stocks by Using Non Constant Mean

and Volatility Models Approaches

Endang SOERYANAa*, Ismail BIN MOHDb, Mustafa MAMATc,

Sukonod, Endang RUSYAMANe a,d,eDepartment of Mathematics FMIPA Universitas Padjadjaran, Indonesia

b,cDepartment of Mathematics FST Universiti Malaysia Terengganu, Malaysia *Email : [email protected]

Abstract: When investing in Islamic stocks, investors are also faced with risk, because daily Islamic stock prices fluctuate. To minimize the level of risk, investors usually form an investment portfolio. Forming a portfolio consisting of several Islamic stocks is intended to obtain the optimal composition of the investment portfolio. This paper discusses Mean-Variance optimization of an investment portfolio of Islamic stocks using non-constant mean and volatility approaches. The non-constant mean is analyzed using Autoregressive Moving Average (ARMA) models, while the non-constant volatility is analyzed using Generalized Autoregressive Conditional Heteroscedasticity (GARCH) models. The optimization process is performed using the Lagrangian multiplier technique. As a numerical illustration, the method is used to analyze some Islamic stocks in Indonesia. The expected result is the proportion of investment in each Islamic stock analyzed.

Keywords: Investment risk, Mean-Variance portfolio, ARMA models, GARCH models, Lagrangian multiplier.

1. Introduction

Investment is basically placing some capital into some form of instrument (asset), which can be either fixed assets or financial assets. Investing in financial assets can generally be done by buying shares in the stock market. When investing in stocks, investors are exposed to a risk whose magnitude grows along with the magnitude of the expected return (Kheirollah & Bjarnbo, 2007): the greater the expected return, generally the greater the risk to be faced. Investment risk, describing the rise and fall of stock price changes over time, can be measured by the value of the variance (Sukono et al., 2011).

A strategy often used by investors in the face of investment risk is to form an investment portfolio. Forming an investment portfolio essentially allocates capital across a few selected stocks, often referred to as diversifying the investment (Panjer et al., 1998). The purpose of forming the investment portfolio is to obtain a certain return with a minimum level of risk, or to obtain a maximum return with limited risk. To achieve these objectives, the investor needs to conduct an optimal portfolio selection analysis, which can be done with portfolio optimization techniques (Shi-Jie Deng, 2004).

Therefore, this paper studies the Mean-Variance portfolio optimization model, where the average (mean) and volatility (variance) are assumed to be non-constant and are analyzed using a time series model approach. The non-constant mean is analyzed using Autoregressive Moving Average (ARMA) models, whereas the non-constant volatility is analyzed using Generalized Autoregressive Conditional Heteroscedasticity (GARCH) models (Shi-Jie Deng, 2004). These methods are then used to analyze Islamic stocks in Indonesia. The purpose of the analysis is to obtain the proportions of investment capital allocated to the analyzed Islamic stocks that provide a maximum return at a certain level of risk.


2. Methodology

This section discusses the stages of analysis, including the calculation of stock returns, mean modeling, volatility modeling, and portfolio optimization.

2.1 Stock Returns

Suppose P_it is the price of Islamic stock i at time t, and r_it is the return of Islamic stock i at time t. The value of r_it can be calculated using the following equation:

r_it = ln(P_it / P_{i,t-1}),    (1)

where i = 1, ..., N, with N the number of stocks analyzed, and t = 1, ..., T, with T the number of stock price data observed (Tsay, 2005; Sukono et al., 2011).
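Equation (1) amounts to one line of NumPy; the prices below are illustrative, not the study's data:

```python
import numpy as np

prices = np.array([100.0, 101.5, 99.8, 102.3, 103.0])   # closing prices P_t
returns = np.log(prices[1:] / prices[:-1])              # r_t = ln(P_t / P_{t-1})
print(returns)
```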

2.2 Mean Models

Suppose {r_it}, the return of Islamic stock i at time t, is stationary; it follows an ARMA(p, q) model if for every t the following equation holds:

r_it = φ_1 r_{i,t-1} + ... + φ_p r_{i,t-p} + a_it − θ_1 a_{i,t-1} − ... − θ_q a_{i,t-q},    (2)

where {a_it} ~ WN(0, σ_i²), which means the residual sequence {a_it} is normally distributed white noise with mean 0 and variance σ_i². The sequence {r_it} is an ARMA(p, q) model with mean μ_it if {r_it − μ_it} is an ARMA(p, q) model (Gujarati, 2004; Shewhart et al., 2004).

The stages of mean modeling include: (i) identification of the model, (ii) parameter estimation, (iii) diagnostic tests, and (iv) prediction (Tsay, 2005).

2.3 Volatility Models

Volatility models for time series data can generally be analyzed using GARCH models. Suppose {r_it}, the return of Islamic stock i at time t, is stationary; the residual of the mean model for Islamic stock i at time t is a_it = r_it − μ_it. The residual sequence {a_it} follows a GARCH(g, s) model if for each t the following equations hold:

a_it = σ_it ε_it,

σ_it² = α_i0 + Σ_{k=1}^{g} α_ik a_{i,t-k}² + Σ_{j=1}^{s} β_ij σ_{i,t-j}²,    (3)

where {ε_it} is the sequence of volatility model residuals, namely a sequence of independent and identically distributed (IID) random variables with mean 0 and variance 1. The parameter coefficients satisfy α_i0 > 0, α_ik ≥ 0, β_ij ≥ 0, and Σ_{k=1}^{max(g,s)} (α_ik + β_ik) < 1 (Shi-Jie Deng, 2004; Tsay, 2005).

The volatility modeling process includes: (i) estimation of the mean model, (ii) test for ARCH effects, (iii) identification of the model, (iv) estimation of the volatility model, (v) diagnostic tests, and (vi) prediction (Tsay, 2005).

2.4 Prediction of l Steps Ahead

Using the mean and volatility models, we calculate the predicted mean μ̂_it = r̂_ih(l) and volatility σ̂_it² = σ̂_ih²(l), for l periods ahead of the forecast origin h (Tsay, 2005; Febrian & Herwany, 2009). The predicted mean and volatility values are then used in the portfolio optimization calculations below.
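A minimal sketch of such an ARMA(1,0)-GARCH(1,1) fit and one-step-ahead prediction using the third-party arch package (the paper itself used EViews 6; the return series below is simulated):

```python
import numpy as np
from arch import arch_model

rng = np.random.default_rng(6)
returns = rng.normal(scale=0.01, size=1000)      # placeholder return series

am = arch_model(returns, mean="AR", lags=1, vol="Garch", p=1, q=1)
res = am.fit(disp="off")
fc = res.forecast(horizon=1)
print(fc.mean.iloc[-1, 0], fc.variance.iloc[-1, 0])  # mu_hat(1), sigma2_hat(1)
```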

2.5 Portfolio Optimization Model

Suppose r_it is the return of Islamic stock i at time t, where i = 1, ..., N with N the number of stocks analyzed, and t = 1, ..., T with T the number of Islamic stock price data observed. Suppose also that w' = (w_1, ..., w_N) is the weight vector, r_t' = (r_1t, ..., r_Nt) the vector of stock returns, and e' = (1, ..., 1) the unit vector. The portfolio return can be expressed as r_p = w'r with w'e = 1 (Zhang, 2006; Panjer et al., 1998). With μ' = (μ_1t, ..., μ_Nt), the expectation of portfolio p can be expressed as:

μ_p = E[r_p] = w'μ.    (4)

Suppose the covariance matrix Σ = (σ_ij), i, j = 1, ..., N, is given, where σ_ij = Cov(r_it, r_jt). The variance of the portfolio return can be expressed as:

σ_p² = w'Σw.    (5)

Definition 1 (Panjer et al., 1998). A portfolio p* is called (mean-variance) efficient if there is no portfolio p with μ_p ≥ μ_{p*} and σ_p² < σ²_{p*}.

To obtain an efficient portfolio, one typically maximizes the objective function

2τ μ_p − σ_p², τ ≥ 0,

where the parameter τ is the investor's risk tolerance. That is, for an investor with risk tolerance τ (τ ≥ 0), the portfolio problem

Maximize 2τ w'μ − w'Σw    (6)

subject to w'e = 1

must be solved. Note that the solutions of (6), for all τ ∈ [0, ∞), form the complete set of efficient portfolios. The set of all points in the (σ_p², μ_p) diagram related to efficient portfolios is the so-called efficient frontier (Goto & Yan Xu, 2012; Yoshimoto, 1996).

Equation (6) is a convex quadratic optimization problem (Panjer et al., 1998). The Lagrangian function of the portfolio optimization problem is given by

L(w, λ) = 2τ w'μ − w'Σw + λ(1 − w'e).    (7)

Based on the Kuhn-Tucker theorem, the optimality conditions for equation (7) are ∂L/∂w = 0 and ∂L/∂λ = 0. Solving these two optimality conditions yields the optimal portfolio weights:

w* = (Σ⁻¹e) / (e'Σ⁻¹e) + τ { Σ⁻¹μ − (e'Σ⁻¹μ / e'Σ⁻¹e) Σ⁻¹e }.    (8)

Furthermore, substituting w* into equations (4) and (5) yields the expectation and variance of the portfolio, respectively (Sukono et al., 2011). As a numerical illustration, some Islamic stocks are analyzed in the following.
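A sketch of evaluating the closed-form weights (8) in NumPy, using the one-step-ahead means and variances reported later in Table-1; the zero off-diagonal covariances and the value of τ are assumptions for illustration only, since the paper's covariance matrix is not fully reproduced here:

```python
import numpy as np

mu = np.array([0.001407, 0.004919, 0.014656, 0.004940, 0.004406])  # Table-1 means
Sigma = np.diag([0.001200, 0.001840, 0.001078, 0.000956, 0.001399])  # assumed diagonal
e = np.ones(5)
tau = 0.5                                        # illustrative risk tolerance

Si = np.linalg.inv(Sigma)
w = Si @ e / (e @ Si @ e) + tau * (Si @ mu - (e @ Si @ mu) / (e @ Si @ e) * Si @ e)
print(w, w.sum())                                # weights sum to 1 by construction
```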

3. Illustrations

This section discusses the application of the method and the results of the analysis, covering the Islamic stock data, the calculation of Islamic stock returns, mean modeling, volatility modeling, prediction of the mean and variance values, and the optimization process.


3.1 Islamic Stocks Data

The data used in this study are secondary data, in the form of daily time series of Islamic stock prices for the following Islamic stocks: AKRA, CPIN, ITMG, MYOR, and TLKM. The data used are the closing Islamic stock prices, covering 1360 days from January 1, 2008 to June 30, 2013, downloaded from www.finance.yahoo.com (Rifqiawan, 2008). The stock price data are then processed using the statistical software EViews 6 and Matlab.

3.2 Islamic Stocks Return and Stationarity

The returns of the five Islamic stocks in this study were calculated using equation (1). Figure-1 shows the return charts of the five Islamic stocks analyzed (AKRA, CPIN, ITMG, MYOR, TLKM).

Figure-1 Return charts of the five Islamic stocks

Visual inspection of the charts in Figure-1 suggests that the five Islamic stock return series analyzed are stationary. Stationarity testing is done using the ADF test statistic; the resulting values are, respectively: -34.24848, -33.79008, -30.20451, -40.04979, and -28.36974. Furthermore, at a significance level of α = 5%, the critical value is -2.863461. It is clear that the ADF test statistics of all Islamic stocks analyzed fall in the rejection region, so all the return series are stationary.

3.3 Mean Modeling of Islamic Stocks Return

The Islamic stock return data are used to estimate mean models using the software EViews 6. First, the identification and estimation of the mean models are carried out through the sample autocorrelation function (ACF) and partial autocorrelation function (PACF). Based on the ACF and PACF patterns, tentative models are specified for each Islamic stock return. The estimation results indicate that the best models are, respectively, ARMA(1,0), ARMA(1,0), ARMA(1,0), ARMA(7,0), and ARMA(2,0). Second, significance tests for the parameters and for the models indicate that the mean models for all Islamic stocks analyzed are significant. Third, diagnostic tests for these models are done using the residual correlogram and Ljung-Box hypothesis tests. The test results show that the residuals of the models are white noise, and the residual normality test indicates a normal distribution. Therefore the residuals of all analyzed Islamic stock models are white noise.

The resulting mean model equations for the five Islamic stocks are written together with the volatility model of each Islamic stock, estimated in the following.

3.4 Volatility Modeling of Islamic Stocks Return

Modeling of volatility in this section is done by using statistical software of Eviews-6. First, carried out

the detection elements of autoregressive conditional heteroscedasticity (ARCH) to the residual ta ,

using the ARCH-LM test statistic. Statistical value of the results obtained

2 (obs*R-Square) each of

Islamic stock returns AKRA, CPIN, ITMG, MYOR, and TLKM respectively are: 31.76757; 76.75431;

40.55526; 48.58576; 9.270781, and 125.2410 by probability each of 12:05 0.0000 or smaller, which

means that there are elements of ARCH.

Second, the identification and estimation of the volatility models is carried out. This study uses generalized autoregressive conditional heteroscedasticity (GARCH) models, which refer to equation (3).


Based on the correlogram of the squared residuals $a_t^2$, i.e. the ACF and PACF graphs of each series, tentative volatility models are selected. The volatility model of each Islamic stock return is estimated simultaneously with its mean model. After significance tests for the parameters and for the models, all equations written below are significant. The resulting best models are, respectively:

Islamic stock AKRA follows the model ARMA(1,0)-GARCH(1,1) with equations:

$r_t = 0.073891\,r_{t-1} + a_t$ and $\sigma_t^2 = 0.000014 + 0.040015\,a_{t-1}^2 + 0.9404318\,\sigma_{t-1}^2$

Islamic stock CPIN follows the model ARMA(1,0)-GARCH(1,1) with equations:

$r_t = 0.089639\,r_{t-1} + a_t$ and $\sigma_t^2 = 0.000052 + 0.134049\,a_{t-1}^2 + 0.820716\,\sigma_{t-1}^2$

Islamic stock ITMG follows the model ARMA(1,0)-GARCH(1,1) with equations:

$r_t = 0.193825\,r_{t-1} + a_t$ and $\sigma_t^2 = 0.000012 + 0.066024\,a_{t-1}^2 + 0.923108\,\sigma_{t-1}^2$

Islamic stock MYOR follows the model ARMA(7,0)-GARCH(1,1) with equations:

$r_t = 0.102007\,r_{t-7} + a_t$ and $\sigma_t^2 = 0.000009 + 0.044332\,a_{t-1}^2 + 0.945801\,\sigma_{t-1}^2$

Islamic stock TLKM follows the model ARMA(2,0)-GARCH(1,1) with equations:

$r_t = 0.084289\,r_{t-2} + a_t$ and $\sigma_t^2 = 0.000019 + 0.139166\,a_{t-1}^2 + 0.824540\,\sigma_{t-1}^2$

Based on the ARCH-LM test statistics, the residuals of the models for the Islamic stocks AKRA, CPIN, ITMG, MYOR, and TLKM contain no remaining ARCH elements and are white noise. The mean and volatility models are then used to calculate the values $\hat{r}_t = \hat{r}_t(l)$ and $\hat{\sigma}_t^2 = \hat{\sigma}_t^2(l)$ recursively.
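A minimal sketch of this estimation and one-step-ahead forecasting workflow, assuming the Python `arch` package as a stand-in for the Eviews-6 procedure described above (the data file is a placeholder, and the AR(1)-GARCH(1,1) specification shown is the AKRA case):

```python
# Minimal sketch of the AR(1)-GARCH(1,1) estimation and one-step-ahead
# forecast using the Python `arch` package in place of Eviews-6; the file
# name is a placeholder and the specification shown is the AKRA case.
import numpy as np
import pandas as pd
from arch import arch_model

prices = pd.Series(np.loadtxt("akra_close.txt"))           # hypothetical prices
returns = 100 * np.log(prices / prices.shift(1)).dropna()  # percent log returns

# AR(1) mean equation with a GARCH(1,1) conditional variance equation.
model = arch_model(returns, mean="AR", lags=1, vol="GARCH", p=1, q=1)
result = model.fit(disp="off")
print(result.summary())

# One-step-ahead prediction of the mean and variance (cf. Section 3.5).
forecast = result.forecast(horizon=1)
print("mean forecast:", forecast.mean.iloc[-1, 0])
print("variance forecast:", forecast.variance.iloc[-1, 0])
```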

3.5 Prediction of Mean and Variance Values

The mean and volatility models estimated for the five Islamic stocks in the previous stage are then used to perform one-step-ahead prediction of the mean and variance values. The predicted mean and variance values are given in Table-1 below.

Table-1. Predictive Values of Mean and Variance One Period Ahead for Each Islamic Stock

Islamic Stocks   Model of Mean-Volatility   Mean ($\hat{\mu}_t$)   Variance ($\hat{\sigma}_t^2$)
AKRA             ARMA(1,0)-GARCH(1,1)       0.001407               0.001200
CPIN             ARMA(1,0)-GARCH(1,1)       0.004919               0.001840
ITMG             ARMA(1,0)-GARCH(1,1)       0.014656               0.001078
MYOR             ARMA(7,0)-GARCH(1,1)       0.004940               0.000956
TLKM             ARMA(2,0)-GARCH(1,1)       0.004406               0.001399

3.6 Mean-Variance Portfolio Optimization

In this part the portfolio optimization calculations are carried out. The portfolio optimization refers to equation (6). The data used for the optimization process are the mean and variance values given in Table-1. The values in the mean column $\hat{\mu}_t$ of Table-1 form the mean vector

$\boldsymbol{\mu}^T = (0.001407\ \ 0.004919\ \ 0.014656\ \ 0.004940\ \ 0.004406)$,

and since five Islamic stocks are analyzed, the unit vector is $\mathbf{e}^T = (1\ \ 1\ \ 1\ \ 1\ \ 1)$.

Furthermore, the variance values in column $\hat{\sigma}_t^2$ of Table-1, together with the covariances between the Islamic stocks, form the covariance matrix $\boldsymbol{\Sigma}$ and its inverse $\boldsymbol{\Sigma}^{-1}$ as follows:


$$\boldsymbol{\Sigma} = \begin{pmatrix}
0.001200 & 0.000136 & 0.000251 & 0.000113 & 0.000401 \\
0.000136 & 0.001840 & 0.000092 & 0.000315 & 0.000225 \\
0.000251 & 0.000092 & 0.001078 & 0.000512 & 0.000133 \\
0.000113 & 0.000315 & 0.000512 & 0.000956 & 0.000075 \\
0.000401 & 0.000225 & 0.000133 & 0.000075 & 0.001399
\end{pmatrix}$$

and

$$\boldsymbol{\Sigma}^{-1} = 10^3 \times \begin{pmatrix}
0.9613 & -0.0345 & -0.2020 & 0.0257 & -0.2522 \\
-0.0345 & 0.5904 & 0.0738 & -0.2237 & -0.0801 \\
-0.2020 & 0.0738 & 1.3037 & -0.6955 & -0.0406 \\
0.0257 & -0.2237 & -0.6955 & 1.4880 & 0.0150 \\
-0.2522 & -0.0801 & -0.0406 & 0.0150 & 0.8030
\end{pmatrix}$$

The optimization is done in order to determine the composition of the portfolio weights; the portfolio weight vector is determined using equation (8). In the weight vector calculation process, the risk tolerance value τ is varied by simulation, starting from τ = 0.000 with increments of 0.001. Assuming that short sales are not allowed, the simulation is stopped at τ = 0.036, because at that point at least one portfolio weight becomes negative. The portfolio weight calculation results are given in Table-2; a computational sketch follows the table.

Table-2. Process of Mean-Variance Portfolio Optimization

τ        w1       w2       w3       w4       w5       eᵀw    μ̂p      σ̂p²         μ̂p − σ̂p²     μ̂p/σ̂p²
         AKRA     CPIN     ITMG     MYOR     TLKM     Sum    Mean    Variance    Maximum      Ratio

0.000 0.2150 0.1406 0.1895 0.2629 0.1920 1 0.0059 0.00043136 0.00546864 13.7161

0.001 0.2093 0.1411 0.2025 0.2555 0.1916 1 0.0061 0.00043151 0.00566849 14.0507

0.002 0.2036 0.1417 0.2155 0.2480 0.1913 1 0.0062 0.00043195 0.00576805 14.6893

0.003 0.1978 0.1422 0.2285 0.2406 0.1909 1 0.0064 0.00043268 0.00596732 14.6893

0.004 0.1921 0.1428 0.2414 0.2331 0.1905 1 0.0065 0.00043370 0.00606630 14.9921

0.005 0.1864 0.1433 0.2544 0.2257 0.1902 1 0.0066 0.00043502 0.00616498 15.2832

0.006 0.1807 0.1439 0.2674 0.2182 0.1898 1 0.0068 0.00043663 0.00636337 15.5621

0.007 0.1750 0.1444 0.2803 0.2108 0.1894 1 0.0069 0.00043853 0.00646147 15.8284

0.008 0.1693 0.1450 0.2933 0.2033 0.1891 1 0.0071 0.00044073 0.00665927 16.0817

0.009 0.1636 0.1455 0.3063 0.1959 0.1887 1 0.0072 0.00044322 0.00675678 16.3217

0.010 0.1579 0.1461 0.3193 0.1884 0.1883 1 0.0074 0.00044600 0.00695400 16.5481

0.011 0.1522 0.1466 0.3322 0.1810 0.1880 1 0.0075 0.00044907 0.00705093 16.7608

0.012 0.1465 0.1472 0.3452 0.1736 0.1876 1 0.0077 0.00045344 0.00724656 16.9597

0.013 0.1407 0.1477 0.3582 0.1661 0.1872 1 0.0078 0.00045610 0.00734390 17.1445

0.014 0.1350 0.1483 0.3711 0.1587 0.1587 1 0.0080 0.00046005 0.00753995 17.3154

0.015 0.1293 0.1488 0.3841 0.1512 0.1865 1 0.0081 0.00046430 0.00763570 17.4724

0.016 0.1236 0.1494 0.3971 0.1438 0.1461 1 0.0083 0.00046884 0.00783116 17.6155

0.017 0.1179 0.1499 0.4101 0.1363 0.1858 1 0.0084 0.00047367 0.00792633 17.7449

0.018 0.1122 0.1505 0.4230 0.1289 0.1854 1 0.0086 0.00047879 0.00812121 17.8608

0.019 0.1065 0.1510 0.4360 0.1214 0.1850 1 0.0087 0.00048421 0.00821579 17.9633

0.020 0.1008 0.1516 0.4490 0.1140 0.1847 1 0.0088 0.00048992 0.00831008 18.0528

0.021 0.0951 0.1521 0.4619 0.1065 0.1843 1 0.0090 0.00049592 0.00850408 18.1295

0.022 0.0893 0.1527 0.4749 0.0991 0.1840 1 0.0091 0.00050221 0.00859779 18.1937

0.023 0.0836 0.1533 0.4879 0.0916 0.1836 1 0.0093 0.00050880 0.00879120 18.2459

0.024 0.0779 0.1538 0.5009 0.0842 0.1832 1 0.0094 0.00051568 0.00888432 18.2863

0.025 0.0722 0.1544 0.5138 0.0767 0.1829 1 0.0096 0.00052285 0.00907715 18.3154

0.026 0.0665 0.1549 0.5268 0.0693 0.1825 1 0.0097 0.00053032 0.00916968 18.3336

0.027 0.0608 0.1555 0.5398 0.0619 0.1821 1 0.0099 0.00053808 0.00936192 18.3413

0.028 0.0551 0.1560 0.5527 0.0544 0.1818 1 0.0100 0.00054613 0.00945387 18.3390

0.029 0.0494 0.1566 0.5657 0.0470 0.1814 1 0.0102 0.00055447 0.00964553 18.3270

0.030 0.0437 0.1571 0.5787 0.0395 0.1810 1 0.0103 0.00056311 0.00973689 18.3059

0.031 0.0380 0.1577 0.5917 0.0321 0.1807 1 0.0105 0.00057204 0.00992796 18.2760

0.032 0.0322 0.1582 0.6046 0.0246 0.1803 1 0.0106 0.00058126 0.01001874 18.2379

0.033 0.0265 0.1588 0.6176 0.0172 0.1799 1 0.0107 0.00059078 0.01010922 18.1919

0.034 0.0208 0.1593 0.6306 0.0097 0.1796 1 0.0109 0.00060059 0.01029941 18.1386

0.035 0.0151 0.1599 0.6435 0.0023 0.1792 1 0.0110 0.00061069 0.01038931 18.0783

0.036 0.0094 0.1604 0.6565 -0.0052 0.1788 1 0.0112 0.00062108 0.01057892 18.0115
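As an illustrative cross-check (a sketch, not the authors' Matlab code), the simulation of Table-2 can be reproduced from the reconstructed $\boldsymbol{\mu}$ and $\boldsymbol{\Sigma}$. Here equation (8) is assumed to take the standard Markowitz form $\mathbf{w}(\tau) = \boldsymbol{\Sigma}^{-1}(\tau\boldsymbol{\mu} + \lambda\mathbf{e})$ with $\lambda = (1 - \tau\,\mathbf{e}^T\boldsymbol{\Sigma}^{-1}\boldsymbol{\mu})/(\mathbf{e}^T\boldsymbol{\Sigma}^{-1}\mathbf{e})$, which reproduces the tabulated weights:

```python
# Minimal sketch of the risk-tolerance simulation, assuming equation (8)
# takes the form w(tau) = Sinv(tau*mu + lam*e) with lam chosen so e'w = 1;
# mu and Sigma are the values reconstructed above.
import numpy as np

mu = np.array([0.001407, 0.004919, 0.014656, 0.004940, 0.004406])
Sigma = 1e-3 * np.array([
    [1.200, 0.136, 0.251, 0.113, 0.401],
    [0.136, 1.840, 0.092, 0.315, 0.225],
    [0.251, 0.092, 1.078, 0.512, 0.133],
    [0.113, 0.315, 0.512, 0.956, 0.075],
    [0.401, 0.225, 0.133, 0.075, 1.399],
])
e = np.ones(5)
Sinv = np.linalg.inv(Sigma)

tau = 0.0
while True:
    lam = (1 - tau * (e @ Sinv @ mu)) / (e @ Sinv @ e)
    w = Sinv @ (tau * mu + lam * e)
    if w.min() < 0:            # stop once short selling would be required
        break
    mean_p, var_p = w @ mu, w @ Sigma @ w
    print(f"tau={tau:.3f} w={np.round(w, 4)} mean={mean_p:.4f} "
          f"var={var_p:.8f} ratio={mean_p / var_p:.4f}")
    tau += 0.001
```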

Based on the results of the optimization process given in Table-2, the pairs of points ($\hat{\sigma}_p^2$, $\hat{\mu}_p$) of efficient portfolios can be formed into the so-called efficient frontier, as given in Figure-2.a. This graph shows the efficient frontier, i.e. the region of suitable investments for investors with different levels of risk tolerance. Also using the optimization results in Table-2, the ratio of $\hat{\mu}_p$ to $\hat{\sigma}_p^2$ can be calculated for each level of risk tolerance. The ratio calculation results are shown in Figure-2.b. This ratio shows the relationship between the expected return of the optimum portfolio and the variance as a measure of risk.

Figure-2.a. Efficient Frontier; Figure-2.b. Ratio of Mean-Variance

Based on the portfolio optimization calculations, the optimum is achieved at the portfolio risk tolerance value τ = 0.027. That portfolio produces a mean value of $\hat{\mu}_p$ = 0.0099 with a risk value, measured as the variance, of $\hat{\sigma}_p^2$ = 0.00053808.

The weight composition of the maximum portfolio is, respectively: 0.0608, 0.1555, 0.5398, 0.0619, and 0.1821. This provides a reference for investors who invest in the Islamic stocks AKRA, CPIN, ITMG, MYOR, and TLKM: in order to achieve the maximum portfolio value, the portfolio weights should be composed as above.

4. Conclusions

In this paper we analyzed Mean-Variance portfolio optimization with non-constant mean and volatility model approaches for several Islamic stocks traded in the Islamic capital market in Indonesia. The analysis shows that all the Islamic stocks analyzed follow ARMA(p,q)-GARCH(g,s) models. Furthermore, based on the portfolio optimization calculations, the optimum is achieved when the composition of the portfolio investment weights in the Islamic stocks AKRA, CPIN, ITMG, MYOR, and TLKM is, respectively: 0.0608, 0.1555, 0.5398, 0.0619, and 0.1821. This weight composition produces a portfolio with a mean value of 0.0099 and a risk value, measured as the variance, of 0.00053808.

References

Febrian, E. & Herwany, A. (2009). Volatility Forecasting Models and Market Co-Integration: A Study on South-East Asian Markets. Working Paper in Economics and Development Studies. Department of Economics, Padjadjaran University.

Goto, S. & Yan Xu. (2012). On Mean Variance Portfolio Optimization: Improving Performance Through Better Use of Hedging Relations. Working Paper. Moore School of Business, University of South Carolina. email: [email protected].

Gujarati, D.N. (2004). Basic Econometrics. Fourth Edition. The McGraw-Hill Companies, Arizona.

Kheirollah, A. & Bjarnbo, O. (2007). A Quantitative Risk Optimization of Markowitz Model: An Empirical Investigation on Swedish Large Cap List. Master Thesis in Mathematics/Applied Mathematics, Department of Mathematics and Physics, Sweden. www.mdh.se/polopoly_fs/1.16205!MasterTheses.pdf.


Panjer, H.H. (Ed.), et al. (1998). Financial Economics: With Applications to Investments, Insurance, and Pensions. Schaumburg, Ill.: The Actuarial Foundation.

Rifqiawan, R.A. (2008). Analisis Perbedaan Volume Perdagangan Saham-Saham yang Optimal Pada Jakarta Islamic Index (JII) di Bursa Efek Indonesia (BEI). Tesis Program Magister. Program Studi Magister Sains Akuntansi, Program Pascasarjana, Universitas Diponegoro, Semarang, 2008.

Shewhart, Walter A. and Samuel S. Wilks. (2004). Applied Econometric Time Series. John Wiley & Sons, Inc., United States of America.

Shi-Jie Deng. (2004). Heavy-Tailed GARCH Models: Pricing and Risk Management Applications in Power Market. IMA Control & Pricing in Communication & Power Networks, 7-17 Mar. http://www.ima.umn.edu/talks/.../deng/power_workshop_ima032004-deng.pdf.

Sukono, Subanar & Dedi Rosadi. (2011). Pengukuran VaR Dengan Volatilitas Tak Konstan dan Efek Long Memory. Disertasi. Program Studi S3 Statistika, Jurusan Matematika, Fakultas Matematika dan Ilmu Pengetahuan Alam, Universitas Gadjah Mada, Yogyakarta, 2011.

Tsay, R.S. (2005). Analysis of Financial Time Series, Second Edition. USA: John Wiley & Sons, Inc.

Yoshimoto, A. (1996). The Mean-Variance Approach to Portfolio Optimization Subject to Transaction Costs. Journal of the Operations Research Society of Japan, Vol. 39, No. 1, March 1996.

Zhang, D. (2006). Portfolio Optimization with Liquidity Impact. Working Paper, Center for Computational Finance and Economic Agents, University of Essex. www.orfe.princeton.edu/oxfordprinceton5/slides/yu.pdf.


Application of Robust Statistics to Portfolio Optimization

Epha Diana SUPANDI(a,b), Dedi ROSADI(c), ABDURAKHMAN(d)
(a) Mathematics Doctoral Student, FMIPA, Gadjah Mada University, Indonesia.
(b) Mathematics Department, FSAINTEK, State Islamic University, Indonesia.
(c,d) Mathematics Department, FMIPA, Gadjah Mada University, Indonesia.
[email protected]. [email protected]. [email protected]

Abstract: The purpose of this article is to assess whether the use of robust statistics in the construction of mean-variance portfolios (MVP) allows better investment results to be achieved in comparison with portfolios defined using classical MLE estimators. The empirical analysis shows that the resulting robust portfolio compositions are more stable than those of the classical portfolios. Moreover, this paper also confirms that the robust portfolios are superior in improving portfolio performance.

Keywords: Portfolio Optimization, Robust Statistics

1. Introduction

The objective of portfolio selection is to find the right asset mix that provides the appropriate combination of return and risk and allows investors to achieve their financial goals. Portfolio selection problems were first formulated by Markowitz in 1952. In the proposed model, the return is measured by the expected value of the random portfolio return, while the risk is quantified by the variance of the portfolio (the mean-variance portfolio).

The mean-variance portfolio (MVP) only requires estimation of the mean 𝝁 and covariance matrix 𝚺 of asset returns. Traditionally, the sample mean and covariance matrix have been used for this purpose. However, because of estimation error, policies constructed using these estimators are extremely unstable: the resulting portfolio weights fluctuate substantially over time; see Chopra and Ziemba (1993), Broadie (1993), Bengtsson (2004), and Ceria and Stubbs (2006).

The instability of mean-variance portfolios can be explained by the fact that the sample mean and covariance matrix are maximum likelihood estimators under normality. These estimators possess desirable statistical properties under the true model. However, their asymptotic breakdown point is equal to zero (Maronna et al., 2006), i.e. they are badly affected by atypical observations.

Several techniques have been suggested to reduce the sensitivity of the mean-variance portfolio. One of them is robust statistics. The theory of robust statistics is concerned with the construction of statistical procedures that are stable even when the empirical (sample) distribution deviates from the assumed (normal) distribution (see Huber 2004, Staudte and Sheather 1990, Maronna et al. 2006). Other researchers have proposed portfolio policies based on robust estimation techniques; see Lauprette (2002), Vaz-de Melo and Camara (2003), Perret-Gentil and Victoria-Feser (2004), Welsch and Zhou (2007), DeMiguel and Nogales (2009), and Hu (2012).

Based on the previous analysis, this paper examines portfolio policies using robust estimators. These policies should be less sensitive than the traditional policies to deviations of the empirical distribution of returns from normality. We focus on certain robust estimators with a high breakdown point, known as the Minimum Volume Ellipsoid (MVE) and the Fast Minimum Covariance Determinant (Fast-MCD) (see Rousseeuw and Van Driessen, 1999).


2. Robust Statistics

Robust statistics is an extension of classical statistics that takes into account the possibility of model misspecification (including outliers). In this case, the parametric model is the multivariate normal model with parameters 𝝁 and 𝚺. Robust estimators for location and scale of multivariate data were first proposed by Gnanadesikan and Kettenring (1972). One desirable property is affine equivariance, fulfilled by location estimators $\hat{T}(\mathbf{r})$ and scale estimators $\hat{C}(\mathbf{r})$ that satisfy (see Maronna et al. 2006):

$\hat{T}(\mathbf{A}\mathbf{r} + \mathbf{b}) = \mathbf{A}\hat{T}(\mathbf{r}) + \mathbf{b}$  (1)

$\hat{C}(\mathbf{A}\mathbf{r} + \mathbf{b}) = \mathbf{A}\hat{C}(\mathbf{r})\mathbf{A}'$  (2)

The most widely used estimators of this type are the minimum volume ellipsoid (MVE) estimator of Rousseeuw (1985) and the fast minimum covariance determinant (Fast-MCD) estimator constructed by Rousseeuw and Van Driessen (1999).

2.1 The Minimum Volume Ellipsoid

Rousseeuw (1985) introduced a highly robust estimator, the minimum volume ellipsoid estimator $(\mathbf{T}, \mathbf{C})$, where $\mathbf{T}$ is taken to be the center of the minimum volume ellipsoid covering at least half of the observations, and $\mathbf{C}$ is a $p \times p$ matrix representing the shape of the ellipsoid.

This approach looks for the ellipsoid with the smallest volume that covers $h$ data points, where $n/2 \le h \le n$. Consider a data set $\mathbf{r} = (\mathbf{r}_1, \mathbf{r}_2, \dots, \mathbf{r}_n)$ of $p$-variate observations. Formally, the estimates are the $\mathbf{T}_{MVE}, \mathbf{C}_{MVE}$ that minimize $|\mathbf{C}|$ subject to:

$\#\{i;\ (\mathbf{r}_i - \mathbf{T})'\mathbf{C}^{-1}(\mathbf{r}_i - \mathbf{T}) \le c^2\} \ge [(n + p + 1)/2]$  (3)

The constant $c^2$ is chosen as $\chi^2_{p,0.5}$ and $\#$ denotes cardinality.

2.2 The Minimum Covariance Determinant

The minimum covariance determinant (MCD) method of Rousseeuw is a highly robust estimator of multivariate location and scatter. Its objective is to find $h$ observations (out of $n$) whose covariance matrix has the lowest determinant. The MCD estimate of location is then the average of these $h$ points, and the MCD estimate of scatter is their covariance matrix. The MCD location and scatter estimates $\mathbf{T}_{MCD}, \mathbf{C}_{MCD}$ are defined as follows:

$\mathbf{T}_{MCD} = \frac{1}{h}\sum_{i=1}^{h} \mathbf{r}_i$  (4)

and

$\mathbf{C}_{MCD} = \frac{1}{h}\sum_{i=1}^{h} (\mathbf{r}_i - \mathbf{T}_{MCD})(\mathbf{r}_i - \mathbf{T}_{MCD})'$  (5)

The computation of the MCD estimator is far from trivial. A naive algorithm would proceed by exhaustively investigating all subsets of size $h$ out of $n$ to find the subset with the smallest determinant of its covariance matrix, but this is feasible only for very small data sets. In 1999, Rousseeuw and Van Driessen constructed a very fast algorithm to calculate the MCD estimator. The new algorithm is called FAST-MCD and is based on the C-step.

Theorem 1. C-Step (Rousseeuw and Van Driessen, 1999):
Consider the data set $\mathbf{r} = (\mathbf{r}_1, \mathbf{r}_2, \dots, \mathbf{r}_n)$ of $p$-variate observations. Let $H_1 \subset \{1, 2, \dots, n\}$ with $|H_1| = h$, and put $\mathbf{T}_1 := \frac{1}{h}\sum_{i \in H_1} \mathbf{r}_i$ and $\mathbf{C}_1 := \frac{1}{h}\sum_{i \in H_1} (\mathbf{r}_i - \mathbf{T}_1)(\mathbf{r}_i - \mathbf{T}_1)'$. If $\det(\mathbf{C}_1) \ne 0$, define the relative distances $d_1(i) = \sqrt{(\mathbf{r}_i - \mathbf{T}_1)'\mathbf{C}_1^{-1}(\mathbf{r}_i - \mathbf{T}_1)}$, $i = 1, 2, \dots, n$. Now take $H_2$ such that $\{d_1(i);\ i \in H_2\} := \{(d_1)_{1:n}, (d_1)_{2:n}, \dots, (d_1)_{h:n}\}$, where $(d_1)_{1:n} \le (d_1)_{2:n} \le \dots \le (d_1)_{h:n}$ are the ordered distances, and compute $\mathbf{T}_2$ and $\mathbf{C}_2$ based on $H_2$. Then $\det(\mathbf{C}_2) \le \det(\mathbf{C}_1)$, with equality if and only if $\mathbf{T}_1 = \mathbf{T}_2$ and $\mathbf{C}_1 = \mathbf{C}_2$.


The FAST-MCD algorithm looks as follows (a code sketch of the C-step is given after the algorithm):

1. If n is small (say, n ≤ 600), then:
   a. Repeat 500 times:
      - construct an initial h-subset H1;
      - carry out two C-steps.
   b. For the 10 results with lowest det(C3):
      - carry out C-steps until convergence.
   c. Report the solution with the lowest det(C).
2. If n is large (say, n > 600), then:
   a. Construct up to five disjoint random subsets of size n_sub.
   b. Inside each subset, repeat 100 times:
      - construct an initial subset H1 of size h_sub;
      - carry out two C-steps, using n_sub and h_sub;
      - keep the 10 best results (T_sub, C_sub).
   c. Pool the subsets, yielding the merged set of size n_merged.
   d. In the merged set, repeat for each of the 50 solutions (T_sub, C_sub):
      - carry out two C-steps, using n_merged and h_merged;
      - keep the 10 best results (T_merged, C_merged).
   e. In the full data set, repeat for the n_full best results:
      - take several C-steps, using n and h;
      - keep the best final result (T_full, C_full).
   f. To obtain consistency when the data come from a multivariate normal distribution, set
      $\mathbf{T}_{MCD} = \mathbf{T}_{full}$ and $\mathbf{C}_{MCD} = \dfrac{\mathrm{med}_i\, d^2_{(\mathbf{T}_{full},\mathbf{C}_{full})}(i)}{\chi^2_{p,0.5}}\, \mathbf{C}_{full}$.
   g. A one-step reweighted estimate is obtained by
      $\mathbf{T}_1 = \dfrac{\sum_{i=1}^n u_i \mathbf{r}_i}{\sum_{i=1}^n u_i}$ and $\mathbf{C}_1 = \dfrac{\sum_{i=1}^n u_i (\mathbf{r}_i - \mathbf{T}_1)(\mathbf{r}_i - \mathbf{T}_1)'}{\sum_{i=1}^n u_i - 1}$,
      where $u_i = 1$ if $d_{(\mathbf{T}_{MCD},\mathbf{C}_{MCD})}(i) \le \sqrt{\chi^2_{p,0.975}}$ and $u_i = 0$ otherwise.
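As a minimal sketch (on synthetic data, and not a full implementation of the subsampling scheme above), the C-step iteration at the core of FAST-MCD can be written as:

```python
# Minimal sketch of the C-step at the core of FAST-MCD (Theorem 1), on
# synthetic data; the full subsampling scheme above is not reproduced.
import numpy as np

def c_step(r: np.ndarray, subset: np.ndarray, h: int) -> np.ndarray:
    """Return the h points with smallest Mahalanobis distance to the
    location/scatter computed from the current h-subset."""
    T = r[subset].mean(axis=0)                      # T1: subset mean
    C = np.cov(r[subset], rowvar=False, bias=True)  # C1: subset scatter (1/h)
    d2 = np.einsum("ij,jk,ik->i", r - T, np.linalg.inv(C), r - T)
    return np.argsort(d2)[:h]                       # indices of H2

rng = np.random.default_rng(0)
r = rng.normal(size=(200, 3))
r[:10] += 8                                         # planted outliers
h = (200 + 3 + 1) // 2
subset = rng.choice(200, size=h, replace=False)
while True:                                         # iterate to convergence
    new = c_step(r, subset, h)
    if set(new) == set(subset):
        break
    subset = new
T_mcd, C_mcd = r[subset].mean(axis=0), np.cov(r[subset], rowvar=False)
```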

3. Optimal Portfolio

Let the random vector $\mathbf{r} = (r_1, r_2, \dots, r_N)'$ denote the random returns of the N risky assets, with mean vector 𝝁 and covariance matrix 𝚺, and let $\mathbf{w} = (w_1, w_2, \dots, w_N)'$ denote the proportions of the portfolio invested in the N risky assets. The target of the investor is to choose an optimal portfolio $\mathbf{w}$ that lies on the mean-risk efficient frontier. In the Markowitz model, the "mean" of a portfolio is defined as the expected value of the portfolio return, $\mathbf{w}^T\mathbf{r}$, and the "risk" is defined as the variance of the portfolio return, $\mathbf{w}^T\boldsymbol{\Sigma}\mathbf{w}$.

Mathematically, minimizing the variance subject to target and budget constraints leads to the formulation:

min $\mathbf{w}^T\boldsymbol{\Sigma}\mathbf{w}$  (6)
subject to: $\mathbf{w}^T\boldsymbol{\mu} \ge \mu_0$  (7)
$\mathbf{e}^T\mathbf{w} = 1$  (8)
$\mathbf{w} \ge 0$  (9)

where $\mu_0$ is the minimum expected return, $\mathbf{e}^T\mathbf{w} = 1$ is the budget constraint, and $\mathbf{w} \ge 0$ stands for no short selling.

In the above formulation, if the parameters are known then the optimization problem (6)-(9) can be solved numerically. However, the parameters are never known in practice and have to be estimated from an unknown distribution with limited data. Traditionally, the maximum likelihood estimators (MLE), namely the sample mean and covariance matrix, have been used.

If the data follow a multivariate normal distribution, then $\hat{\boldsymbol{\mu}}_{MLE}$ and $\hat{\boldsymbol{\Sigma}}_{MLE}$ are the optimal estimators for the solution of problem (6)-(9). But in actual financial markets the Gaussian model may be unsatisfactory, since the empirical distribution of asset returns may in fact be asymmetric, skewed, and heavy-tailed.

Robust statistics can deal with data that are not fully compatible with the distribution implied by the assumed model, i.e. when model misspecification exists, and in particular in the presence of outlying observations. The optimal portfolio weights based on robust estimators can then be obtained by solving:

min $\mathbf{w}^T\hat{\boldsymbol{\Sigma}}_{rob}\mathbf{w}$  (10)
subject to: $\mathbf{w}^T\hat{\boldsymbol{\mu}}_{rob} \ge \mu_0$  (11)
$\mathbf{e}^T\mathbf{w} = 1$  (12)
$\mathbf{w} \ge 0$  (13)
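A minimal sketch of solving (10)-(13) numerically, assuming SciPy's SLSQP solver (the paper does not specify an optimizer) and robust estimates supplied by an MVE or Fast-MCD fit:

```python
# Minimal sketch of solving (10)-(13) with SciPy's SLSQP solver; the paper
# does not specify its optimizer, and mu_rob / Sigma_rob are assumed to
# come from an MVE or Fast-MCD fit.
import numpy as np
from scipy.optimize import minimize

def robust_mv_weights(mu_rob: np.ndarray, Sigma_rob: np.ndarray, mu0: float):
    n = len(mu_rob)
    constraints = [
        {"type": "eq", "fun": lambda w: w.sum() - 1.0},       # e'w = 1
        {"type": "ineq", "fun": lambda w: w @ mu_rob - mu0},  # w'mu >= mu0
    ]
    bounds = [(0.0, 1.0)] * n                                 # w >= 0
    res = minimize(lambda w: w @ Sigma_rob @ w,               # min w'Sigma w
                   np.full(n, 1.0 / n), method="SLSQP",
                   bounds=bounds, constraints=constraints)
    return res.x
```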

4. Research Methodology

The research utilizes historical daily rates of return for 8 companies from the Jakarta Islamic Index (JII): Alam Sutera Realty Tbk (ASRI), Indofood Sukses Makmur Tbk (INDF), Jasa Marga Tbk (JSMR), Telekomunikasi Indonesia (TLKM), Timah Tbk (TINS), AKR Corporindo Tbk (AKRA), Charoen Pokphand Indonesia Tbk (CPIN), and XL Axiata Tbk (EXCL). The data are taken from January 2012 to December 2012 (see www.finance.yahoo.com).

The classical portfolio, established using the sample mean and covariance matrix, is compared with the following robust methods: Minimum Volume Ellipsoid (MVE) and Fast Minimum Covariance Determinant (FMCD). For both robust estimators, the fraction of rejected observations is set at 10%.

In this study, the Sharpe ratio is employed to evaluate the performance of the three portfolios. This ratio measures the additional return (or risk premium) per unit of dispersion of the investment asset or trading strategy, which is considered as risk. The Sharpe ratio of a portfolio is defined as:

$SR = \dfrac{E(R_p) - R_f}{\sigma_p}$

where $R_p$ is the portfolio return, $R_f$ is the risk-free return, and $\sigma_p$ is the standard deviation of the excess portfolio return. In practice, the higher the Sharpe ratio, the better the portfolio performance.
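As a minimal worked illustration with made-up numbers (not data from the paper), the Sharpe ratio can be computed as:

```python
# Minimal worked illustration of the Sharpe ratio with made-up numbers
# (not data from the paper); R_f is assumed constant per day.
import numpy as np

portfolio_returns = np.array([0.0012, -0.0008, 0.0021, 0.0005, 0.0013])
rf = 0.0002
excess = portfolio_returns - rf
sharpe = excess.mean() / excess.std(ddof=1)  # SR = (E(Rp) - Rf) / sigma_p
print(f"Sharpe ratio = {sharpe:.4f}")
```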

5. Result

The data consist of 259 historical daily arithmetic returns (January 3, 2012 - December 31, 2012) of the eight stocks chosen from the Jakarta Islamic Index. These data are used to compare the performance of the three portfolios (MV, MVE, and Fast-MCD). Table 1 presents the mean of the eight stocks, together with the standard deviation.


Table 1. Mean and Standard Deviation of the Eight Stocks

        ASRI      INDF      JSMR      TLKM      TINS      AKRA      CPIN      EXCL
Mean    0.001257  0.001925  0.001183  0.001062  0.001484  0.001348  0.002193  0.001139
Stdev   0.021540  0.018905  0.014681  0.016836  0.018956  0.021157  0.021050  0.024674

The performance of the eight stocks can be viewed directly in Table 1. CPIN has the best performance in terms of the mean, while JSMR has the smallest standard deviation.

In order to construct a robust portfolio, the initial step that needs to be performed is to test the normality of the data. The normality test results for the eight stocks are presented in the table below:

Table 2. Kolmogorov-Smirnov Test

                          ASRI   INDF   JSMR   TLKM   TINS   AKRA   CPIN   EXCL
Kolmogorov-Smirnov Test   2.652  2.504  2.773  2.008  1.700  1.739  2.312  2.323
Sig. (2-tailed)           0.000  0.000  0.000  0.001  0.006  0.005  0.000  0.000

As expected, from Table 2 it can be observed that the eight return series are not normally distributed. This is indicated by Sig. < 0.05, which means there is not sufficient evidence to accept H0 (normal distribution of the data).

5.1. Portfolio Composition

In this section, the portfolio composition of the classical portfolio is compared with that of the robust portfolios. The optimal portfolios are established for various values of µ0, namely 0.0013 - 0.0021. Tables 3, 4, and 5 show the composition of each portfolio.

Table 3. The Classical Portfolio Composition

µ0 ASRI INDF JSMR TLKM TINS AKRA CPIN EXCL

0.0013 0.0618 0.1685 0.3097 0.2383 0.1079 0.047 0.0049 0.0619

0.0014 0.0507 0.1988 0.2781 0.2135 0.1078 0.0449 0.0536 0.0526

0.0015 0.0358 0.2398 0.2353 0.1799 0.1077 0.0419 0.1196 0.040

0.0016 0.0208 0.2807 0.1925 0.1464 0.1076 0.039 0.1855 0.0275

0.0017 0.0059 0.3217 0.1497 0.1129 0.1074 0.0361 0.2515 0.0149

0.0018 0 0.362 0.1045 0.0768 0.1047 0.0319 0.3179 0.0021

0.0019 0 0.4001 0.0542 0.0349 0.0976 0.0263 0.3868 0

0.0020 0 0.4373 0 0 0.0871 0.0165 0.459 0

0.0021 0 0.347 0 0 0 0 0.653 0

It can be noticed that increasing µ0 causes an increase in both the INDF and CPIN weights. Meanwhile, increasing µ0 causes a decrease in the weights of ASRI, JSMR, TLKM, TINS, AKRA, and EXCL.

The MVE portfolio obtains the following results:


Table 4. The MVE Portfolio Composition

µ0 ASRI INDF JSMR TLKM TINS AKRA CPIN EXCL

0.0013 0.0463 0.166 0.2747 0.181 0.0894 0.0436 0.0637 0.1353

0.0014 0.0382 0.1643 0.2513 0.1741 0.0772 0.0196 0.107 0.1684

0.0015 0.0292 0.1625 0.2264 0.1652 0.0635 0 0.151 0.2022

0.0016 0.0163 0.1605 0.1952 0.1475 0.0436 0 0.1977 0.2392

0.0017 0.0034 0.1585 0.1641 0.1298 0.0237 0 0.2444 0.2762

0.0018 0 0.1545 0.1291 0.1096 0 0 0.2923 0.3145

0.0019 0 0.1465 0.0806 0.0797 0 0 0.3405 0.3527

0.0020 0 0.1385 0.0321 0.0598 0 0 0.3887 0.3909

0.0021 0 0.0046 0 0 0 0 0.4889 0.5065

It can be seen that for the optimal MVE portfolio at µ0 = 0.0013, the weight of each asset is as follows: ASRI 4.63%, INDF 16.6%, JSMR 27.47%, TLKM 18.1%, TINS 8.94%, AKRA 4.36%, CPIN 6.37%, and EXCL 13.53%. Interestingly, increasing µ0 causes a rise in both the CPIN and EXCL weights, while the other assets' weights decrease.

The optimal portfolios established through Fast-MCD are presented in the table below:

Table 5. The Fast-MCD Portfolio Composition

µ0 ASRI INDF JSMR TLKM TINS AKRA CPIN EXCL

0.0013 0 0 0.1166 0.0119 0 0 0.1488 0.7228

0.0014 0 0.1040 0.0821 0 0 0 0.2615 0.5524

0.0015 0 0.104 0.0821 0 0 0 0.2615 0.5524

0.0016 0 0.1638 0.0663 0 0 0 0.3125 0.4575

0.0017 0 0.2236 0.0505 0 0 0 0.3634 0.3625

0.0018 0 0.2943 0.0465 0 0.0559 0 0.3875 0.2158

0.0019 0 0.3719 0.0434 0 0.1188 0.0213 0.3998 0.0446

0.002 0 0.4107 0 0 0.1109 0.0049 0.4734 0

0.0021 0 0.3470 0 0 0 0 0.653 0

Table 5 presents the optimal Fast-MCD portfolio for various expected returns. It can be observed that ASRI is never involved in the formation of the portfolio, as indicated by a 0% weight, while TLKM, TINS, and AKRA enter only occasionally and with small weights. Table 5 also shows that the CPIN, EXCL, and INDF stocks contribute the dominant share compared to the other stocks.

Based on the analysis of the portfolio weights, we can conclude that the three approaches yield different portfolio weights. At various levels of expected return, the classical model diversifies the portfolio in contrast with the robust models. However, the difference between the classical and robust portfolio compositions becomes smaller as the given return increases.

5.2. Performance Analysis

In this section, the performance in terms of risk and Sharpe ratio is compared between the classical and robust models. The results are presented in the following table:


Table 6. Standard Deviation and Sharpe Ratio for Given Expected Return

µ0 | MV: Stdev, Sharpe | MVE: Stdev, Sharpe | Fast-MCD: Stdev, Sharpe

0.0013 0.009274 0.010783 0.009165 0.010911 0.006633 0.015080

0.0014 0.009381 0.021320 0.009487 0.021081 0.007483 0.026730

0.0015 0.009695 0.030944 0.009798 0.030618 0.008367 0.035860

0.0016 0.010392 0.038491 0.010250 0.039024 0.009487 0.042160

0.0017 0.011225 0.044543 0.010954 0.045645 0.010583 0.047250

0.0018 0.012247 0.048992 0.011662 0.051449 0.011747 0.051077

0.0019 0.013191 0.053066 0.012490 0.056045 0.012961 0.054008

0.0020 0.014142 0.056569 0.013416 0.059630 0.014142 0.056569

0.0021 0.016125 0.058140 0.015492 0.058090 0.015492 0.058090

Table 6 presents the risk and Sharpe ratio at different expected returns. It shows that the Fast-MCD portfolio generally gives the smallest risk of the three. Similarly, the Sharpe ratio performance of Fast-MCD is the highest. The greater the Sharpe ratio, the better the portfolio, since the Sharpe ratio measures the expected return per unit of risk. Therefore, in the context of risk and Sharpe ratio, we can conclude that the Fast-MCD portfolio is superior to the classical and MVE portfolios.

Another way to look at the performance of the portfolios is to draw the efficient frontier. An efficient frontier is the curve that shows all efficient portfolios in a risk-return framework. An efficient portfolio is defined as the portfolio that maximizes the expected return for a given level of risk (standard deviation), or the portfolio that minimizes the risk subject to a given expected return. The following figure shows the behaviour of the efficient frontier for each portfolio.

Figure 1. Efficient frontier for the three portfolios

Based on Figure 1, it can be observed that the Fast-MCD efficient frontier is superior to the MVE and MV efficient frontiers.

6. Conclusion

This study mainly compares the performance of three different portfolios: the Mean-Variance (MV) portfolio, the MVE portfolio, and the Fast-MCD portfolio. The empirical results show that, for a set of given returns, the compositions of the three portfolios are different; however, there is no significant difference between the MV portfolio and the Fast-MCD portfolio as the expected return grows.

Through the comparison of the risk (standard deviation), the Sharpe ratio, and the efficient frontier, it is clear that the Fast-MCD portfolio performs better than the Mean-Variance portfolio and the MVE portfolio.


References

Bengtsson, C. (2004). The Impact of Estimation Error on Portfolio Selection for Investors with Constant Relative Risk Aversion.

Best, M. J., and Grauer, R. R. (1991). On the sensitivity of mean-variance efficient portfolios to changes in asset means: some analytical and computational results. Review of Financial Studies, 4(2), 315-342.

Broadie, M. (1993). Computing efficient frontiers using estimated parameters. Annals of Operations Research, 45, 21-58.

Ceria, S., and Stubbs, R. A. (2006). Incorporating estimation errors into portfolio selection: robust portfolio construction. Journal of Asset Management, 7(2), 109-127.

Chopra, V. K. and Ziemba, W. T. (1993). The effects of errors in means, variances, and covariances on optimal portfolio choice. Journal of Portfolio Management, 19(2), 6-11.

DeMiguel, V. and Nogales, F. J. (2008). Portfolio selection with robust estimation. Technical Report, London Business School.

Perret-Gentil, C. and Victoria-Feser, M.-P. (2004). Robust Mean-Variance Portfolio Selection. Working Paper 173, National Centre of Competence in Research NCCR FINRISK.

Hu, J. (2012). An Empirical Comparison of Different Approaches in Portfolio Selection. U.U.D.M. Project Report 2012:7.

Huber, P. J. (1981). Robust Statistics. New York: Wiley.

Lauprete, G.J. (2001). Portfolio risk minimization under departures from normality. PhD thesis, Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA.

Markowitz, H. M. (1952). Portfolio selection. Journal of Finance, 7: 77-91.

Rousseeuw, P.J. and K. Van Driessen (1999). A Fast Algorithm for the Minimum Covariance Determinant Estimator. Technometrics, 41, 212-223.

Staudte, R.G. and Sheather, S.J. (1990). Robust Estimation and Testing. John Wiley and Sons Inc.

Vaz-de Melo, B., and R. P. Camara. (2003). Robust modeling of multivariate financial data. Coppead Working Paper Series 355, Federal University at Rio de Janeiro, Rio de Janeiro, Brazil.

Welsch, R.E. and Zhou, X. (2007). Application of Robust Statistics to Asset Allocation Models. REVSTAT Statistical Journal, 5(1): 97-114.


Multivariate Models for Predicting Efficiency of Financial Performance in the Insurance Company
(Case Study in the Insurance Companies Listed on the Indonesia Stock Exchange)

Iin IRIANINGSIH(a), SUKONO(b), Deti RAHMAWATI(c)
(a,b,c) Department of Mathematics, FMIPA, Universitas Padjadjaran
Jl. Raya Bandung-Sumedang km 21 Jatinangor
(a) Email: [email protected]

ABSTRACT: This research discusses multivariate models for predicting the efficiency of the financial performance of insurance companies. The multivariate models used are the discriminant model and the logistic regression model. The change in net profit is the basis for grouping the data into two categories, because profit is often used as an indicator of company performance. The predictor variables are represented by 7 financial ratios. A multivariate model is obtained by comparing the results of discriminant analysis and logistic regression analysis. Five of the seven financial ratios significantly influence the prediction of the efficiency of the financial performance of insurance companies.

Keywords: financial ratio, financial performance of insurance companies, discriminant analysis, logistic regression analysis

1. Introduction

Profit is one indicator of the performance of a company. Earnings growth that increases steadily from year to year can give a positive signal about the prospects of the company's future performance (Margaretta, 2010). Financial ratio analysis can be used as a tool for predicting the financial performance of a company. The financial performance of a company is a picture drawn from its financial statements, since the financial statements contain accounts such as assets, liabilities, capital, and profits. One use of financial statements is to give a picture of the company's growth or decline from one period to the next, and to allow comparison with other companies in similar industries.

Beaver (1966) used financial ratios as predictors of failure and stated that their usefulness can only be tested in relation to some specific purpose. Ratios are now widely used as predictors of failure. BarNiv and Hershbarger (1990) presented a model that incorporates variables designed to identify the financial solvency of life insurers. Three multivariate analyses (multidiscriminant, nonparametric, and logit) have been used to examine the implementation and efficiency of alternative multivariate models for life insurance solvency (Mahmoud, 2008).

In this study the authors present multivariate models to predict the efficiency of the financial performance, based on net profit, of insurance companies listed on the Stock Exchange, using financial ratios. The multivariate models used are the discriminant model and the logistic regression model.

2. Literature Review

2.1 Earnings and Earnings Growth

Profit or gain in accounting is defined as the difference between the selling price and the cost of production (Wikipedia, 2011). Corporate profit growth is the profit in year t minus the profit in year t-1, divided by the profit in year t-1 (Zainuddin and Jogiyanto, 1992). Earnings growth forecasts are often used by investors, creditors, companies, and governments to advance their business. The earnings growth formula is:

$Y = \dfrac{X_t - X_{t-1}}{X_{t-1}}$  (1)

where $X_t$ is the profit in year t, $X_{t-1}$ is the profit in year t-1, and Y is the earnings growth.

2.2 Discriminant Analysis

Discriminant analysis is a multivariate technique of the dependence type, i.e. it distinguishes dependent and independent variables (Santoso, 2005). The general purpose of discriminant analysis is to determine whether there is a clear difference between groups in the dependent variable and, if there is a difference, which independent variables in the discriminant function make the difference. The assumptions in discriminant analysis that must be met so that the discriminant model can be used are:

1. Multivariate normality: the independent variables should be normally distributed.
2. The covariance matrices of all the independent variables should be the same (equal).
3. There is no correlation between the independent variables.
4. There are no very extreme data (outliers) on the independent variables.

2.2.1 Multivariate Normality Test

According to Hajarisman (2008), a graphical method can be used to assess the multivariate normality of a cluster of data, based on the squared generalized distance defined by:

$d_j^2 = (x_j - \bar{x})'\, S^{-1} (x_j - \bar{x})$,  j = 1, 2, ..., n  (2)

where $x_1, x_2, \dots, x_n$ are the sample observations. The distances $d_1^2, d_2^2, \dots, d_n^2$ are random variables with a Chi-Square distribution (Johnson and Wichern, 1982). Although the distances are not independent and not exactly Chi-Square distributed, it is still very useful to plot them. The result of this plot is known as a Chi-Square plot. The algorithm for the formation of the Chi-Square plot is as follows (see the sketch after this list):

1. Sort the squared distances from the calculation in equation (2), from the smallest to the largest.
2. Plot the pairs $\left(d_{(j)}^2,\ \chi_p^2\!\left(\tfrac{j - \frac{1}{2}}{n}\right)\right)$, where $\chi_p^2\!\left(\tfrac{j - \frac{1}{2}}{n}\right)$ is the $100(j - \tfrac{1}{2})/n$ percentile of the Chi-Square distribution with p degrees of freedom.
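A minimal sketch of this Chi-Square plot construction with NumPy/SciPy, on placeholder data rather than the paper's financial ratios:

```python
# Minimal sketch of the Chi-Square plot construction of Section 2.2.1
# (equation (2)); the data matrix X here is a placeholder.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 7))                         # placeholder data
xbar = X.mean(axis=0)
S_inv = np.linalg.inv(np.cov(X, rowvar=False))
d2 = np.einsum("ij,jk,ik->i", X - xbar, S_inv, X - xbar)  # squared distances

n, p = X.shape
d2_sorted = np.sort(d2)                              # step 1: sort d_j^2
q = chi2.ppf((np.arange(1, n + 1) - 0.5) / n, df=p)  # step 2: percentiles
# Under multivariate normality the points (q, d2_sorted) lie near a line.
for qi, di in zip(q[:5], d2_sorted[:5]):
    print(f"{qi:8.4f}  {di:8.4f}")
```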

2.2.2 Mean Vector Test

In practice, p variables are measured on each sampling unit in the two samples, and the hypothesis H0: 𝝁1 = 𝝁2 versus H1: 𝝁1 ≠ 𝝁2 is tested with the test statistic:

$T^2 = \dfrac{n_1 n_2}{n_1 + n_2}\, (\bar{x}_1 - \bar{x}_2)'\, S_{gab}^{-1}\, (\bar{x}_1 - \bar{x}_2)$  (3)

where $S_{gab}$ is the pooled covariance matrix. Equation (3) follows the distribution $T^2_{p,\, n_1+n_2-2}$ when H0 is true. Reject H0 when $T^2 > T^2_{\alpha;\, p,\, n_1+n_2-2}$. The statistic $T^2$ can be transformed to an F statistic by

$F = \dfrac{n_1 + n_2 - p - 1}{(n_1 + n_2 - 2)\,p}\, T^2 \sim F_{p,\, n_1+n_2-p-1}$  (4)

where p, the dimension of the statistic $T^2$, is the first number of degrees of freedom of the F statistic.
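A minimal sketch of this two-sample $T^2$ test, equations (3)-(4), on placeholder group samples:

```python
# Minimal sketch of the two-sample Hotelling T^2 test, equations (3)-(4),
# on placeholder group samples X1 and X2.
import numpy as np
from scipy.stats import f

rng = np.random.default_rng(3)
X1 = rng.normal(0.0, 1.0, size=(20, 4))
X2 = rng.normal(0.3, 1.0, size=(15, 4))
n1, n2, p = len(X1), len(X2), X1.shape[1]

diff = X1.mean(axis=0) - X2.mean(axis=0)
S_gab = ((n1 - 1) * np.cov(X1, rowvar=False) +        # pooled covariance
         (n2 - 1) * np.cov(X2, rowvar=False)) / (n1 + n2 - 2)
T2 = (n1 * n2) / (n1 + n2) * diff @ np.linalg.solve(S_gab, diff)

F_stat = (n1 + n2 - p - 1) / ((n1 + n2 - 2) * p) * T2  # equation (4)
p_value = 1 - f.cdf(F_stat, p, n1 + n2 - p - 1)
print(f"T2 = {T2:.4f}, F = {F_stat:.4f}, p = {p_value:.4f}")
```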

2.2.3 Test of Similarity of Variance-Covariance Matrices

The similarity test of the variance-covariance matrices with Box's M test can be carried out with the IBM SPSS Statistics 19 software. The hypotheses are formulated as follows: H0: the group covariance matrices are relatively equal, versus H1: the group covariance matrices are significantly different. The test criterion used is: if Sig. > 0.05 then H0 is accepted, and if Sig. < 0.05 then H0 is rejected.


2.2.4 Fisher Linear Discriminant Function

Fisher classifies an observation based on the score calculated from the linear function $Y = \boldsymbol{\ell}'X$, where $\boldsymbol{\ell}' = [\ell_1, \ell_2, \dots, \ell_p]$ is the vector of coefficients of the explanatory variables that forms the linear equation of the response variable, and

$X = \begin{bmatrix} X_1 \\ X_2 \end{bmatrix}$,

with $X_k$ the data matrix of the k-th group:

$X_k = \begin{bmatrix} x_{11k} & x_{12k} & \cdots & x_{1pk} \\ x_{21k} & x_{22k} & \cdots & x_{2pk} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n1k} & x_{n2k} & \cdots & x_{npk} \end{bmatrix}$,  i = 1, 2, ..., n;  j = 1, 2, ..., p;  k = 1, 2,

where $x_{ijk}$ is the i-th observation of the j-th variable in the k-th group.

The best linear combination according to Fisher maximizes the ratio between the squared distance of the means of Y obtained from X of groups 1 and 2 and the variance of Y, formulated as:

$\dfrac{(\mu_{1Y} - \mu_{2Y})^2}{\sigma_Y^2} = \dfrac{\boldsymbol{\ell}'(\boldsymbol{\mu}_1 - \boldsymbol{\mu}_2)(\boldsymbol{\mu}_1 - \boldsymbol{\mu}_2)'\boldsymbol{\ell}}{\boldsymbol{\ell}'\boldsymbol{\Sigma}\boldsymbol{\ell}}$  (5)

If $(\boldsymbol{\mu}_1 - \boldsymbol{\mu}_2) = \boldsymbol{\delta}$, then equation (5) becomes $\dfrac{(\boldsymbol{\ell}'\boldsymbol{\delta})^2}{\boldsymbol{\ell}'\boldsymbol{\Sigma}\boldsymbol{\ell}}$. Because 𝚺 is a positive definite matrix, by the Cauchy-Schwarz inequality this ratio is maximized when $\boldsymbol{\ell} = c\,\boldsymbol{\Sigma}^{-1}\boldsymbol{\delta} = c\,\boldsymbol{\Sigma}^{-1}(\boldsymbol{\mu}_1 - \boldsymbol{\mu}_2)$. Choosing c = 1 produces the so-called Fisher linear combination:

$Y = \boldsymbol{\ell}'X = (\boldsymbol{\mu}_1 - \boldsymbol{\mu}_2)'\boldsymbol{\Sigma}^{-1}X$  (6)
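A minimal sketch of computing the Fisher linear discriminant direction of equation (6), with sample estimates substituted for the population quantities and placeholder data:

```python
# Minimal sketch of the Fisher linear discriminant direction of equation (6),
# with sample estimates in place of the population mu_1, mu_2, Sigma;
# the two group samples are placeholders.
import numpy as np

rng = np.random.default_rng(2)
X1 = rng.normal(0.0, 1.0, size=(20, 4))       # group 1 observations
X2 = rng.normal(0.5, 1.0, size=(15, 4))       # group 2 observations

m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
n1, n2 = len(X1), len(X2)
S = ((n1 - 1) * np.cov(X1, rowvar=False) +    # pooled covariance S_gab
     (n2 - 1) * np.cov(X2, rowvar=False)) / (n1 + n2 - 2)

ell = np.linalg.solve(S, m1 - m2)             # l = Sigma^{-1}(mu_1 - mu_2)
scores = X1 @ ell                             # discriminant scores Y = l'X
cutoff = 0.5 * (m1 + m2) @ ell                # classify by side of midpoint
```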

2.3 Multiple Logistic Regression Analysis

Multiple logistic regression analysis is used for logistic regression with more than one independent variable. Suppose there are k independent variables $X_1, X_2, \dots, X_k$, and denote the probability of the dependent variable by $\pi(X) = P(Y = 1 \mid X)$. The logit function is:

$g(X) = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \dots + \beta_k X_k$  (7)

and the multiple logistic regression model is:

$\pi(X) = \dfrac{e^{g(X)}}{1 + e^{g(X)}}$  (8)

2.3.1 Parameter Estimation of the Multiple Logistic Regression Model

The method used to estimate the parameters of the multiple logistic regression model is the maximum likelihood method. The likelihood function used is:

$\ell(\boldsymbol{\beta}) = \prod_{i=1}^{n} \pi(x_i)^{y_i}\,[1 - \pi(x_i)]^{1 - y_i}$  (9)

with

$\pi(X) = \dfrac{e^{g(X)}}{1 + e^{g(X)}} = \dfrac{e^{\beta_0 + \beta_1 X_1 + \beta_2 X_2 + \dots + \beta_k X_k}}{1 + e^{\beta_0 + \beta_1 X_1 + \beta_2 X_2 + \dots + \beta_k X_k}}$  (10)

The maximum likelihood principle determines the parameter values that maximize the likelihood function. The estimated parameter values can be obtained with the IBM SPSS 19 statistical software.
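A minimal sketch of this maximum likelihood estimation with Python's statsmodels in place of IBM SPSS 19 (the data are placeholders, not the paper's actual ratios):

```python
# Minimal sketch of the maximum likelihood fit of model (7)-(8) using
# statsmodels in place of IBM SPSS 19; X and y are placeholders for the
# 7 financial ratios and the 0/1 performance code.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 7))                 # placeholder ratio matrix
y = (rng.random(40) < 0.5).astype(int)       # placeholder labels

model = sm.Logit(y, sm.add_constant(X))      # add_constant supplies beta_0
result = model.fit(disp=False)               # Newton-type MLE
print(result.summary())                      # per-coefficient Wald z-tests
print("log-likelihood L =", result.llf)      # enters the G statistic (11)
```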


2.3.2 Significance Test for Multiple Logistic Regression Model Parameters

a. Likelihood Ratio Test

The likelihood ratio test is used to test the joint significance of all parameters in the multiple logistic regression model. The statistic used is:

$G = 2\left\{\sum_{i=1}^{n}\left[y_i \ln \hat{\pi}_i + (1 - y_i)\ln(1 - \hat{\pi}_i)\right] - \left[n_1 \ln n_1 + n_0 \ln n_0 - n \ln n\right]\right\}$  (11)

This statistic follows a Chi-Square distribution with degrees of freedom equal to the number k of independent variables. The hypotheses for the likelihood ratio test are:

H0: $\beta_1 = \beta_2 = \dots = \beta_k = 0$ (the variables $X_1, X_2, \dots, X_k$ jointly have no effect on Y)
H1: $\beta_i \ne 0$ for some $i = 1, 2, \dots, k$ (the variables $X_1, X_2, \dots, X_k$ jointly affect Y)

with the criteria: if $G \le \chi^2_{(1-\alpha),k}$ then H0 is accepted and H1 rejected; if $G > \chi^2_{(1-\alpha),k}$ then H0 is rejected and H1 accepted.

b. Wald Test

The Wald test is used to test partially (individually) which independent variables are significant and which are not significant in the multiple logistic regression model. The Wald test uses the statistic Z, which follows the standard normal distribution:

$Z = \dfrac{\hat{\beta}_i}{\widehat{SE}(\hat{\beta}_i)}$  (12)

where $\hat{\beta}_i$ is the estimator of the parameter $\beta_i$ and $\widehat{SE}(\hat{\beta}_i)$ is the estimator of the standard error of the coefficient $\beta_i$. The hypotheses for the Wald test are:

H0: $\beta_i = 0$ (the variable $X_i$ has no effect on Y)
H1: $\beta_i \ne 0$ (the variable $X_i$ has an effect on Y)

with the criteria: if $-Z_{1-\alpha/2} \le Z \le Z_{1-\alpha/2}$ then H0 is accepted and H1 rejected; if $Z < -Z_{1-\alpha/2}$ or $Z > Z_{1-\alpha/2}$ then H0 is rejected and H1 accepted.

c. Hosmer and Lemeshow Test

The Hosmer and Lemeshow test is used to test the fit of the logistic regression model to the data. The Hosmer and Lemeshow statistic follows a Chi-Square distribution with g - 2 degrees of freedom. The hypotheses for the Hosmer and Lemeshow test are:

H0: there is no difference between the observed values and the model-predicted values
H1: there is a difference between the observed values and the model-predicted values

with the criteria: if the P value ≥ α then H0 is accepted and H1 rejected; if the P value < α then H0 is rejected and H1 accepted. The Hosmer and Lemeshow value can be obtained with the IBM SPSS 19 statistical software.

d. R-Square

The value of $R^2$ in logistic regression analysis shows the strength of the relationship between the independent variables and the dependent variable. The value of $R^2$ is determined by the formula:

$R^2 = 1 - \exp\left(\dfrac{2L}{n}\right)$  (13)

where L is the log-likelihood value of the model and n is the number of data.

3. Data and Prediction Variables

The data in this study are secondary data obtained from the Indonesian Capital Market Directory (ICMD), concerning the annual financial statements (as of December 31) of the insurance companies listed on the Stock Exchange for the period 2006-2009, published by ICMD in 2007, 2008, 2009, and 2010 and obtained through the internet (http://www.martintobing.com, 2011).

In this study, the dependent variable (Y) is a categorical variable based on the change in net income. Code 1 is given to insurance companies whose earnings change is positive, which is assumed to show good financial performance efficiency. Code 0 is given to insurance companies whose earnings change is negative, which is assumed to show poor financial performance efficiency:

y = 1 if the financial performance is good; y = 0 if the financial performance is bad.

The independent variables (X) are the following financial ratios:

• Current Ratio (X1)

• Return On Investment (X2)

• Return On Equity (X3)

• Solvency Ratio (X4)

• Price Earning Ratio (X5)

• Expenses Ratio (X6)

• Loss Ratio (X7)

4. Result and Analysis

4.1 Discriminant Analysis Results

4.1.1 Normality Test of the Independent Variables

The normality test for the independent variables is done with the IBM SPSS 19 statistical software. The normality test results are given in Table 1.

Table 1. Normality Test Results of the Independent Variable


From Table 1 it can be concluded that in the group coded 0 only the independent variables ROI (x2), PER (x5), and LR (x7) are normally distributed, and in the group coded 1 only the independent variables ROI (x2), ROE (x3), and ER (x6) are normally distributed. Thus, most of the independent variables are not normally distributed.

4.1.2 Multicollinearity Checking

In this section we check whether or not there are multicollinearity relationships between the independent variables. The multicollinearity check is performed with the IBM SPSS 19 statistical software. The results are given in Table 2.

Table 2. Correlation Matrix

CR ROI ROE SR PER ER LR

CR 1.000 0.128 0.004 0.135 0.074 0.212 0.060

ROI 0.128 1.000 0.870 -0.027 0.013 0.046 0.180

ROE 0.004 0.870 1.000 -0.195 -0.039 -0.058 0.151

SR 0.135 -0.027 -0.195 1.000 -0.117 0.532 0.568

PER 0.074 0.013 -0.039 -0.117 1.000 0.093 -0.216

ER 0.212 0.046 -0.058 0.532 0.093 1.000 0.652

LR 0.060 0.180 0.151 0.568 -0.216 0.652 1.000

From Table 2 it can be seen that there is some multicollinearity between the independent variables, as follows:

- Correlation between the independent variables ROI (x2) and ROE (x3) is 0.870 > 0.5 (strong

correlation)

- Correlation between the independent variables SR (x4) and ER (x6) is 0.532 > 0.5 (strong correlation)

- Correlation between the independent variables SR (x4) and LR(x7) is 0.568 > 0.5 (strong correlation)

- Correlation between the independent variables ER (x6) and LR (x7) is 0.652 > 0.5 (strong correlation)

4.1.3 Average Value Vector Test

The mean vector test is conducted to determine whether or not there is a difference between the groups. The test is done with the IBM SPSS 19 statistical software. The test results for the mean vector are given in Table 3.

Table 3. Test Results of the Mean Vector

Independent Variables (x)   F        Sig    Conclusions
CR                          0.288    0.595  There was no difference
ROI                         16.099   0.000  There is a difference
ROE                         16.107   0.000  There is a difference
SR                          0.485    0.490  There was no difference
PER                         2.524    0.120  There was no difference
ER                          0.522    0.474  There was no difference
LR                          0.463    0.500  There was no difference

From Table 3 it can be seen that, of the seven independent variables, only two differ significantly between the two discriminant groups of the efficiency of the financial performance of the insurance companies, namely ROI (x2) and ROE (x3).


4.1.4 Similarity Test of Variance Covariance Matrix

The equality of the variance-covariance matrices is tested with Box's M test using the IBM SPSS 19 statistical software. The result of the variance-covariance equality test is given in Table 4.

Table 4. Test Results of the Variance-Covariance Matrix Similarity

Box's M     93.441
F Approx.   2.509
df1         28
df2         2070.719
Sig.        0.000

Because Sig. = 0.000 < 0.05, the hypothesis H0 is rejected, meaning that the group covariance matrices are significantly different. Based on the results of the normality tests of the independent variables, the multicollinearity check, and the variance-covariance similarity test, the assumptions are apparently violated. In the case of only two groups/categories, if the group covariance matrices differ significantly, the process should not be continued (Santoso, 2005). This is also supported by an earlier study by Cooper and Emory (1995): the implication of most of the independent variables not being normally distributed is that testing with parametric analyses such as the t-test, Z-test, ANOVA, and discriminant analysis is not appropriate (Almilia, 2004).

Data that do not meet the assumption of multivariate normality can cause problems in the estimation of the discriminant function; therefore, if possible, logistic regression analysis can be used as an alternative (Gessner, et al., 1998; Huberty, 1984; Johnson and Wichern, 1982).

4.2 Multiple Logistic Regression Analysis

4.2.1 Parameter Estimation and Significance Tests

In this section, first the logistic regression parameters are estimated, and second, significance tests are performed on the parameter estimation results. Both the parameter estimation and the significance tests are done with the IBM SPSS 19 statistical software. The results are as follows.

Table 5. Model Summary

Step   -2 Log likelihood   Cox & Snell R Square   Nagelkerke R Square
1      32.805              0.357                  0.498

From Table 5 it can be seen that the log-likelihood value is L = -16.4025 (= -32.805/2).



Table 6. Parameter Estimation and Results of the Wald Test

Independent Variables (xi)   β̂i       SE(β̂i)   Z        Conclusions
CR (x1)                      -0.022   0.102    -0.2157  Not Significant
ROI (x2)                     27.891   40.896   0.6820   Significant
ROE (x3)                     16.267   20.892   0.7786   Significant
SR (x4)                      -0.011   0.349    -0.0315  Not Significant
PER (x5)                     -0.034   0.034    -1       Significant
ER (x6)                      2.431    2.142    1.1349   Significant
LR (x7)                      -2.286   2.309    -0.9900  Significant
Constant                     -1.702   1.574    0.5741   Significant

From Table 6 the following provisional multiple logistic regression model is obtained:

$g(x) = -1.702 - 0.022x_1 + 27.891x_2 + 16.267x_3 - 0.011x_4 - 0.034x_5 + 2.431x_6 - 2.286x_7$

$\hat{\pi}(x) = \dfrac{e^{g(x)}}{1 + e^{g(x)}}$

a. Likelihood Ratio Test

H0: $\beta_1 = \beta_2 = \dots = \beta_7 = 0$ (the variables $x_1, x_2, \dots, x_7$ jointly have no effect)
H1: $\beta_i \ne 0$ for some $i = 1, 2, \dots, 7$ (the variables $x_1, x_2, \dots, x_7$ jointly have an effect)

The statistic G is determined as follows:

$G = 2\left\{\sum_{i=1}^{n}\left[y_i \ln \hat{\pi}_i + (1 - y_i)\ln(1 - \hat{\pi}_i)\right] - \left[n_1 \ln n_1 + n_0 \ln n_0 - n \ln n\right]\right\}$
$= 2\{-16.4025 - [27\ln 27 + 13\ln 13 - 40\ln 40]\}$
$= 2\{-16.4025 - [88.9876 + 33.3443 - 147.5552]\}$
$= 17.6416$

With $\chi^2_{(1-0.05),7} = 2.1674$ and G = 17.6416 > 2.1674, H0 is rejected.

b. Wald Test

Based on the Wald test results in Table 6, it can be concluded that the independent variables Current Ratio (x1) and Solvency Ratio (x4) do not partially (individually) have a significant effect on the efficiency of the insurance companies' financial performance. The independent variables that are partially significant for the efficiency of the financial performance of the insurance companies are ROI (x2), ROE (x3), PER (x5), Expenses Ratio (x6), and Loss Ratio (x7). Since the Current Ratio (x1) and Solvency Ratio (x4) are not significant, they are removed from the model, and the parameter estimation is repeated with the five significant independent variables.

Table 7. Model Summary

Step   -2 Log likelihood   Cox & Snell R Square   Nagelkerke R Square
1      32.851              0.356                  0.497

From Table 7 it can be seen that the log-likelihood value is L = -16.4255.



Table 8. Parameter Estimation and Results of the Wald Test

Independent Variables (xi)   β̂i       SE(β̂i)   Z        Conclusions
ROI (x2)                     25.704   39.413   0.6522   Significant
ROE (x3)                     16.927   18.573   0.9114   Significant
PER (x5)                     -0.034   0.032    -1.0625  Significant
ER (x6)                      2.306    2.016    1.1438   Significant
LR (x7)                      -2.230   2.014    -1.1078  Significant
Constant                     -1.686   1.470    0.1536   Significant

From Table 8 the following multiple logistic regression model is obtained:

$g(x) = -1.686 + 25.704x_2 + 16.927x_3 - 0.034x_5 + 2.306x_6 - 2.230x_7$

$\hat{\pi}(x) = \dfrac{e^{g(x)}}{1 + e^{g(x)}}$

a. Likelihood Ratio Test

H0: $\beta_2 = \beta_3 = \beta_5 = \beta_6 = \beta_7 = 0$ (the variables $x_2, x_3, x_5, x_6, x_7$ jointly have no effect)
H1: $\beta_i \ne 0$ for some $i = 2, 3, 5, 6, 7$ (the variables $x_2, x_3, x_5, x_6, x_7$ jointly have an effect)

The statistic G is:

$G = 2\left\{\sum_{i=1}^{n}\left[y_i \ln \hat{\pi}_i + (1 - y_i)\ln(1 - \hat{\pi}_i)\right] - \left[n_1 \ln n_1 + n_0 \ln n_0 - n \ln n\right]\right\}$
$= 2\{-16.4255 - [27\ln 27 + 13\ln 13 - 40\ln 40]\}$
$= 2\{-16.4255 - [88.9876 + 33.3443 - 147.5552]\}$
$= 17.5956$

With $\chi^2_{(1-0.05),5} = 1.1455$ and G = 17.5956 > 1.1455, H0 is rejected.

b. Wald Test

Based on the results in Table 8, after re-estimating the parameters and testing all the independent variables partially with the Wald test, the five independent variables ROI (x2), ROE (x3), PER (x5), Expenses Ratio (x6), and Loss Ratio (x7) have a significant partial effect on the efficiency of the financial performance of the insurance companies.

c. Hosmer and Lemeshow Test

H0: there is no difference between the observed values and the model-predicted values
H1: there is a difference between the observed values and the model-predicted values

Since the P value = 0.578 and α = 0.05, P value ≥ α (0.578 ≥ 0.05), so H0 is accepted. With a 95% confidence level, there is no difference between the observed values and the model-predicted values, so the resulting logistic regression model is fit for use.

d. R-Square

The value of $R^2$ in the logistic regression analysis shows the strength of the relationship between the independent variables and the dependent variable. The value of $R^2$ is determined as follows:



R² = 1 − exp( −L² / n ) = 1 − exp( −(−16.4255)² / 40 ) = 1 − exp(−6.7449) = 0.9988

Since R² = 0.9988, or 99.88%, the independent variables ROI (x2), ROE (x3), PER (x5), Expenses Ratio (x6), and Loss Ratio (x7) have a strong relationship with the efficiency of the financial performance of insurance companies.
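The arithmetic can be reproduced directly (assuming, as the computation above suggests, that the formula used is R² = 1 − exp(−L²/n)):

import math

L, n = -16.4255, 40
print(round(1 - math.exp(-(L ** 2) / n), 4))  # 0.9988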

4.2.3 Classification Accuracy of the Multiple Logistic Regression Model
The classification accuracy results of the logistic regression model are shown in Table 9.

Table 9 Logistic Regression Model Classification

                                    Predicted
Observed                      kodelaba          Percentage
                              0        1        Correct
Step 1   kodelaba      0      7        6        53.8
                       1      3        24       88.9
         Overall Percentage                     77.5

From Table 9, the classification accuracy of the logistic regression model in predicting the efficiency of the financial performance of insurance companies is 77.5%.

4.3 Comparison of the Discriminant Model and the Logistic Regression Model
A comparison between the discriminant model and the logistic regression model was conducted to determine which model is more appropriate for predicting the efficiency of the financial performance of insurance companies, the more appropriate model being the one with the greater classification accuracy. However, because the assumptions of discriminant analysis were violated, the process of establishing the discriminant model and its classification accuracy could not proceed. Therefore, the logistic regression model is the more appropriate model for predicting the efficiency of the financial performance of insurance companies listed on the Indonesia Stock Exchange.

5. Conclusions
In this case, the assumptions of multivariate normality, absence of multicollinearity, and equality of the variance-covariance matrices were violated in the discriminant analysis. The appropriate model for predicting the efficiency of the financial performance of insurance companies listed on the Indonesia Stock Exchange is therefore the logistic regression model, with a model accuracy of 77.5%; the rest is influenced by other factors. The variables that affect the efficiency of financial performance, based on changes in the income of insurance companies listed on the Indonesia Stock Exchange, are ROI (x2), ROE (x3), PER (x5), Expenses Ratio (x6), and Loss Ratio (x7).


6. References
Agresti, A. 1996. An Introduction to Categorical Data Analysis. New York: John Wiley & Sons, Inc.
Almilia, L.S. 2004. Analisis Faktor-Faktor yang Mempengaruhi Kondisi Financial Distress Suatu Perusahaan yang Terdaftar di Bursa Efek Jakarta. Jurnal Riset Akuntansi Indonesia. Vol.7 No.1. Januari. Hal 1-22.
Hajarisman, N. 2008. Seri Buku Ajar Statistika Multivariat. Bandung: Program Studi Statistika Universitas Islam Bandung.
Johnson, R.A., and Wichern, D.W. 1982. Applied Multivariate Statistical Analysis. New Jersey: Prentice-Hall, Inc., Englewood Cliffs.
Mahmoud, O.H. 2008. A Multivariate Model for Predicting the Efficiency of Financial Performance for Property and Liability Egyptian Insurance Companies. Casualty Actuarial Society.
Margaretta, Y. 2010. Analisis Rasio Keuangan, Kebijakan Deviden dengan Ukuran Perusahaan sebagai Variabel Kontrol dalam Memprediksi Pertumbuhan Laba pada Perusahaan Manufaktur yang Terdaftar di Bursa Efek Indonesia. Surabaya: Skripsi Program S1 Akuntansi STIE PERBANAS.
Santoso, S. 2005. Menggunakan SPSS untuk Statistika Multivariat. Jakarta: Elex Media Komputindo.
Santoso, S. 2010. Statistika Multivariat Konsep dan Aplikasinya. Jakarta: Elex Media Komputindo.
Zainudin dan Jogiyanto, H. 1999. Manfaat Rasio Keuangan dalam Memprediksi Pertumbuhan Laba (Studi Empiris pada Perusahaan Perbankan yang Terdaftar di Bursa Efek Jakarta). Jurnal Riset Ekonomi dan Akuntansi Indonesia. Vol.2 No.1. Januari. Hal 66-90.
Tobing, M. 2011. http://www.martintobing.com/view/214 (accessed 31 January 2012).
_____________. http://cafe-ekonomi.blogspot.com/2009/09/artikel-tentang-laba.html (accessed 27 March 2012).


A Property of 𝒛−𝟏𝑭𝒎 [[𝒛−𝟏]] Subspace

Isah AISAH a*, Sisilia SYLVIANI a
a Department of Mathematics, Faculty of Mathematics and Natural Sciences, Universitas Padjadjaran, Indonesia

*[email protected]

Abstract: The set z⁻¹Fᵐ[[z⁻¹]] is a vector space consisting of power series in z⁻¹ with coefficients in Fᵐ. The subspaces of z⁻¹Fᵐ[[z⁻¹]] have many properties that are often used in behavior theory. In this paper we discuss one of the many properties of z⁻¹Fᵐ[[z⁻¹]] subspaces: the necessary and sufficient condition for the annihilator's preannihilator of a subspace of z⁻¹Fᵐ[[z⁻¹]] to be equal to the subspace itself.

Keywords: Annihilator, Behavior, Preannihilator

1. Introduction
Behavior theory is an interesting subject to study from an algebraic point of view. Willems [1] defined a behavior as the set of all trajectories of a dynamical system. From the algebraic point of view, a behavior can be seen as a linear, shift-invariant, and complete subspace of z⁻¹Fᵐ[[z⁻¹]]. In this paper we do not focus on behavior theory itself; rather, we discuss a property of z⁻¹Fᵐ[[z⁻¹]] subspaces that is useful for studying behavior theory from the algebraic point of view.

2. Preliminaries

Let F be an arbitrary field and Fᵐ the space of all m-vectors with coordinates in F. The set F((z⁻¹)) is defined as follows:

F((z⁻¹)) = { f = Σ_{i=−∞}^{n_f} f_i zⁱ | f_i ∈ F, n_f ∈ ℤ }.   (1)

If f(z) = Σ_{i=−∞}^{n_f} f_i zⁱ and g(z) = Σ_{i=−∞}^{n_g} g_i zⁱ are both elements of F((z⁻¹)), then the operations of addition and multiplication are defined by

(f + g)(z) = Σ_{i=−∞}^{max(n_f, n_g)} (f_i + g_i) zⁱ

and

(fg)(z) = Σ_{k=−∞}^{n_f + n_g} h_k zᵏ,

where h_k = Σ_i f_i g_{k−i} (P.A. Fuhrmann, 2010).

The set F[z], the polynomial ring over the field F, is defined as follows:

F[z] = { f = Σ_{i=0}^{n_f} f_i zⁱ | f_i ∈ F, n_f a nonnegative integer },

that is, the set of all polynomials of the form Σ_{i=0}^{n_f} f_i zⁱ where f_i ∈ F and n_f is a nonnegative integer. Identifying each polynomial with the element of F((z⁻¹)) whose coefficients vanish for i < 0, F[z] can also be expressed as a subset of F((z⁻¹)) [2].

The set of all formal power series in z⁻¹ with coefficients in the field F, denoted F[[z⁻¹]], can be expressed as follows:

F[[z⁻¹]] = { f = Σ_{i=0}^{∞} f_i z⁻ⁱ | f_i ∈ F, i = 0, 1, 2, … }.

The set z⁻¹F[[z⁻¹]] is defined by

z⁻¹F[[z⁻¹]] = { f = Σ_{i=1}^{∞} f_i z⁻ⁱ | f_i ∈ F, i = 1, 2, … }.

We defined the sets F((z⁻¹)), F[z], and F[[z⁻¹]] at the beginning of this section. Now we discuss the sets Fᵐ((z⁻¹)), Fᵐ[z], and Fᵐ[[z⁻¹]], which are vector spaces over the field F. The set Fᵐ((z⁻¹)) is defined as follows:

Fᵐ((z⁻¹)) = { f = Σ_{i=−∞}^{n_f} f_i zⁱ | f_i ∈ Fᵐ, n_f ∈ ℤ }.

Elements of F[z]ᵐ can be identified with elements of the set Fᵐ[z], that is,

Fᵐ[z] = { f = Σ_{i=0}^{n_f} f_i zⁱ | f_i ∈ Fᵐ, n_f ∈ ℕ ∪ {0} }.

The set Fᵐ[z] is a module over the ring F[z]. The set Fᵐ[[z⁻¹]] is defined as

Fᵐ[[z⁻¹]] = { f = Σ_{i=0}^{∞} f_i z⁻ⁱ | f_i ∈ Fᵐ, i = 0, 1, 2, … },

and the set z⁻¹Fᵐ[[z⁻¹]] is defined by

z⁻¹Fᵐ[[z⁻¹]] = { f = Σ_{i=1}^{∞} f_i z⁻ⁱ | f_i ∈ Fᵐ, i = 1, 2, … }.

The set z⁻¹Fᵐ[[z⁻¹]] is a vector space over the field F.

Definition 1 (P.A. Fuhrmann, 2002) For all n ∈ ℕ, let Pₙ be the projection on z⁻¹Fᵐ[[z⁻¹]] defined as follows:

Pₙ : z⁻¹Fᵐ[[z⁻¹]] → z⁻¹Fᵐ[[z⁻¹]],   Σ_{i=1}^{∞} h_i z⁻ⁱ ↦ Σ_{i=1}^{n} h_i z⁻ⁱ.   (2)

A subset B ⊆ z⁻¹Fᵐ[[z⁻¹]] is complete if, for any w = Σ_{i=1}^{∞} w_i z⁻ⁱ ∈ z⁻¹Fᵐ[[z⁻¹]], P_N(w) ∈ P_N(B) for each positive integer N implies w ∈ B.

3. The Characteristic of 𝒛−𝟏𝑭𝒎 [[𝒛−𝟏]] Subspace

Let E and F be vector spaces over a field K. A mapping Φ : E × F → K that satisfies

Φ(x₁ + x₂, y) = Φ(x₁, y) + Φ(x₂, y),   x₁, x₂ ∈ E, y ∈ F,

and

Φ(x, y₁ + y₂) = Φ(x, y₁) + Φ(x, y₂),   x ∈ E, y₁, y₂ ∈ F,

is called a bilinear function on E × F (Greub, G.H., 1967).

The bilinear form on Fᵐ((z⁻¹)) is defined by

[·, ·] : Fᵐ((z⁻¹)) × Fᵐ((z⁻¹)) → F,   (3)

with the following rule: for all f, g ∈ Fᵐ((z⁻¹)), where f(z) = Σ_{j=−∞}^{n_f} f_j zʲ and g(z) = Σ_{j=−∞}^{n_g} g_j zʲ with f_j, g_j ∈ Fᵐ,

[f, g] = Σ_{j=−∞}^{∞} g_jᵀ f_{−1−j},

where g_jᵀ is the transpose of g_j ∈ Fᵐ. Note that

Σ_j g_jᵀ f_{−1−j} = g_{n_g}ᵀ f_{−1−n_g} + ⋯ + g₀ᵀ f_{−1} + g_{−1}ᵀ f₀ + ⋯ + g_{−1−n_f}ᵀ f_{n_f},   (4)

which is a finite sum; this means [·, ·] is well defined. We can also show that [·, ·] is a bilinear form on Fᵐ((z⁻¹)) × Fᵐ((z⁻¹)). Next, consider the following definitions and propositions.
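As a concrete illustration of this pairing (ours, not part of the original text), the following Python sketch evaluates [f, g] for series with finitely many nonzero coefficients, taking F = ℝ and m = 2; the dictionary representation of the coefficients is our own choice:

import numpy as np

def pairing(f, g):
    """[f, g] = sum_j g_j^T f_{-1-j}, for series stored as {power: vector in R^m}
    dictionaries with finitely many nonzero coefficients."""
    return sum(np.dot(g[j], f[-1 - j]) for j in g if (-1 - j) in f)

# Example with m = 2: f = f_{-1} z^{-1} + f_{-2} z^{-2} and g = g_0 + g_1 z.
f = {-1: np.array([1.0, 2.0]), -2: np.array([0.0, 3.0])}
g = {0: np.array([4.0, 5.0]), 1: np.array([1.0, 1.0])}
print(pairing(f, g))  # g_0^T f_{-1} + g_1^T f_{-2} = 14.0 + 3.0 = 17.0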

Definition 2 (Fuhrmann P.A., 2002) Let M ⊆ Fᵐ[z] be a subspace. We define the annihilator of M by

M⊥ = { f ∈ z⁻¹Fᵐ[[z⁻¹]] | [f, g] = 0 for all g ∈ M },

which is also a subspace of z⁻¹Fᵐ[[z⁻¹]]. Let V ⊆ z⁻¹Fᵐ[[z⁻¹]] be a subspace; the preannihilator of V is defined as

⊥V = { k ∈ Fᵐ[z] | [h, k] = 0 for all h ∈ V },

which is also a subspace of Fᵐ[z].

Lemma 3 (Fuhrmann P.A., 2002) Let (Fᵐ[z])* = z⁻¹Fᵐ[[z⁻¹]], let A = P_{n₀}(z⁻¹Fᵐ[[z⁻¹]]), and let A* denote its dual. Every functional in A* can be identified with an element of Fᵐ[z].

Proposition 4 (Fuhrmann P.A., 2002) Let V ⊆ z⁻¹Fᵐ[[z⁻¹]] be a subspace. Then (⊥V)⊥ = V if and only if for any h ∈ z⁻¹Fᵐ[[z⁻¹]] \ V there exists k ∈ ⊥V such that [h, k] ≠ 0.

Proof. [⇒] Assume (⊥V)⊥ = V holds. We will show that for any h ∈ z⁻¹Fᵐ[[z⁻¹]] \ V there exists k ∈ ⊥V such that [h, k] ≠ 0. Let h be an arbitrary element of z⁻¹Fᵐ[[z⁻¹]] \ V. From the assumption above, we have h ∉ (⊥V)⊥. This implies that there exists k ∈ ⊥V such that [h, k] ≠ 0.

[⇐] We prove this implication by contradiction. Assume (⊥V)⊥ ≠ V. It is obvious that V ⊆ (⊥V)⊥, so there exists h ∈ (⊥V)⊥ \ V such that for all k ∈ ⊥V we have [h, k] = 0. This contradicts the assumption that for all h ∈ z⁻¹Fᵐ[[z⁻¹]] \ V there exists k ∈ ⊥V such that [h, k] ≠ 0. Thus we have (⊥V)⊥ = V. ∎


Proposition 5 (Fuhrmann P.A., 2002) Let V ⊆ z⁻¹Fᵐ[[z⁻¹]] be a subspace. Then V is complete if and only if (⊥V)⊥ = V.

Proof. Assume that V is complete. We will show that (⊥V)⊥ = V, using Proposition 4; it suffices to show that for any h ∈ z⁻¹Fᵐ[[z⁻¹]] \ V there exists k ∈ ⊥V such that [k, h] ≠ 0. Let h ∈ z⁻¹Fᵐ[[z⁻¹]] \ V. We will show that there exists f ∈ ⊥V such that [f, h] ≠ 0. Since V is complete and h ∉ V, there exists an index n₀ such that P_{n₀}(h) ∉ P_{n₀}(V). Let X = Fᵐ[z]; then X* = z⁻¹Fᵐ[[z⁻¹]]. We also define X*_{n₀} = P_{n₀}(X*), which is a finite-dimensional vector space. Let y = P_{n₀}(h) and V_{n₀} = P_{n₀}(V). Based on the assumptions above, we have y ∈ X*_{n₀}, V_{n₀} ⊆ X*_{n₀}, and y ∉ V_{n₀}, so that span{V_{n₀}, y} properly contains V_{n₀}. Let B = {b₁, b₂, …, b_k} be a basis for V_{n₀}; then B ∪ {y} is a basis for span{V_{n₀}, y}. Hence there exists a functional φ : span{V_{n₀}, y} → F with φ(b_i) = 0 for all i and φ(y) = 1. In other words, there exists φ such that φ(V_{n₀}) = 0 and φ(y) ≠ 0. Extending φ to φ : X*_{n₀} → F gives φ ∈ (X*_{n₀})* = (P_{n₀}(X*))*. By Lemma 3, we can identify φ with an element of Fᵐ[z]; in other words, there exists f ∈ ⊥V such that [f, h] ≠ 0. Thus, by Proposition 4, we have proved that V is complete if and only if (⊥V)⊥ = V. ∎

4. Conclusion

The set z⁻¹Fᵐ[[z⁻¹]] is a vector space consisting of power series in z⁻¹ with coefficients in Fᵐ. The subspaces of z⁻¹Fᵐ[[z⁻¹]] have many properties that are often used in behavior theory. One of them is described in Proposition 5: a subspace of z⁻¹Fᵐ[[z⁻¹]] is complete if and only if its annihilator's preannihilator is equal to the subspace itself.

Acknowledgements We would like to thank all the people who prepared and revised previous versions of this document.

References

Fuhrmann, P.A. (2002). A Study of Behavior. Linear Algebra and Its Applications, vol. 351-352, pp. 303-380.
Fuhrmann, P.A. (2010). A Polynomial Approach to Linear Algebra. Springer.
Greub, G.H. (1967). Linear Algebra. Springer-Verlag.
Willems, J.C. (1986). From Time Series to Linear Systems. Part I: Finite-Dimensional Linear Time Invariant Systems. Automatica 22.


Fractional Colorings in The Mycielski Graphs

Mochamad SUYUDI 1), Ismail BIN MOHD 2), Sudradjat SUPIAN 3), Asep K. SUPRIATNA 4), Sukono 5)

1,3,4,5) Jurusan Matematika FMIPA Universitas Padjadjaran, Indonesia
2) Jabatan Matematik FST Universiti Malaysia Terengganu, Malaysia

1) E-mail: [email protected]

Abstract: In this paper, we investigate a 1995 graph coloring result of Michael Larsen, James Propp and Daniel Ullman, namely "The Fractional Chromatic Number of Mycielski's Graphs": the fractional clique number of a graph G is bounded below by the integer clique number, and it is equal to the fractional chromatic number, which is bounded above by the integer chromatic number. In other words,

𝜔(𝐺) ≤ 𝜔𝐹(𝐺) = 𝜒𝐹(𝐺) ≤ 𝜒(𝐺).

Given this relationship, a natural question arises: can the differences 𝜔𝐹(𝐺) − 𝜔(𝐺) and 𝜒(𝐺) − 𝜒𝐹(𝐺) be made arbitrarily large? The question is answered in the affirmative, by exhibiting a sequence of graphs in which both differences increase without limit. Fractional colorings and the fractional chromatic number are treated in two different ways: first in an intuitive, combinatorial manner characterized in terms of graph homomorphisms, and then in terms of independent sets, with calculations using linear programming. In this second context, fractional cliques are defined, and their relation to fractional colorings is examined. The relationship between fractional colorings and fractional cliques is the key to the proof of Larsen, Propp, and Ullman.

Keywords: fractional clique number, fractional chromatic number, Mycielski graphs, linear programming.

1. Introduction

In this paper, we discuss a result about graph colorings from 1995. The paper we will be investigating

is "The Fractional Chromatic Number of Mycielski's Graphs," by Michael Larsen, James Propp and

Daniel Ullman [3].

We will begin with some preliminary definitions, examples, and results about graph colorings.

Then we will define fractional colorings and the fractional chromatic number, which are the focus of

Larsen, Propp and Ullman's paper. We will define fractional colorings in two different ways: first in a

fairly intuitive, combinatorial manner that is characterized in terms of graph homomorphisms, and then

in terms of independent sets, which as we shall see, lends itself to calculation by means of linear

programming. In this second context, we shall also define fractional cliques, and see how they relate to

fractional colorings. This connection between fractional colorings and fractional cliques is the key to

Larsen, Propp and Ullman's proof.

2. Graphs and graph colorings

2.1 Basic definitions

A graph is defined as a set of vertices and a set of edges joining pairs of vertices. The precise definition

of a graph varies from author to author; in this paper, we will consider only finite, simple graphs, and

shall tailor our definition accordingly.

A graph G is an ordered pair (V (G), E(G)), consisting of a vertex set, V (G), and an edge set,

E(G). The vertex set can be any finite set, as we are considering only finite graphs. Since we are only

considering simple graphs, and excluding loops and multiple edges, we can define E(G) as a subset of

the set of all unordered pairs of distinct elements of V (G).


2.1.1 Independent sets and cliques

If u and v are elements of V (G), and {u, v} ∈ E(G), then we say that u and v are adjacent, denoted u ~

v. Adjacency is a symmetric relation, and in the case of simple graphs, anti-reflexive. A set of pairwise

adjacent vertices in a graph is called a clique and a set of pairwise non-adjacent vertices is called an

independent set.

For any graph G, we define two parameters: 𝛼(G), the independence number, and 𝜔(G), the

clique number. The independence number is the size of the largest independent set in V (G), and the

clique number is the size of the largest clique.

2.1.2 Examples

As examples, we define two families of graphs, the cycles and the complete graphs.

The cycle on n vertices (n > 1), denoted Cn, is a graph with V (Cn) = {1, . . . , n} and x ~ y in Cn if and only if 𝑥 − 𝑦 ≡ ±1 (mod n). We often depict Cn as a regular n-gon. The independence and clique numbers are easy to calculate: we have 𝛼(Cn) = ⌊n/2⌋ and 𝜔(Cn) = 2 (except for C3, which has a clique number of 3).

The complete graph on n vertices, Kn, is a graph with V (Kn) = {1,. . . , n}and x ~ y in V (Kn) for

all x ≠ y. It is immediate that 𝛼(Kn) = 1 and 𝜔(Kn) = n. The graphs C5 and K5 are shown in Figure 1.
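These parameters are easy to verify computationally for small graphs; the following Python sketch (illustrative, not from the paper) brute-forces α and ω for C5 and K5:

from itertools import combinations

def cycle_edges(n):
    return {frozenset((i, i % n + 1)) for i in range(1, n + 1)}

def complete_edges(n):
    return {frozenset(e) for e in combinations(range(1, n + 1), 2)}

def alpha_omega(n, edges):
    """Brute-force independence and clique numbers of a graph on vertices 1..n."""
    alpha = omega = 1
    for k in range(2, n + 1):
        for S in combinations(range(1, n + 1), k):
            pairs = {frozenset(p) for p in combinations(S, 2)}
            if not (pairs & edges):
                alpha = max(alpha, k)   # S is pairwise non-adjacent
            if pairs <= edges:
                omega = max(omega, k)   # S is pairwise adjacent
    return alpha, omega

print(alpha_omega(5, cycle_edges(5)))     # (2, 2) for C5
print(alpha_omega(5, complete_edges(5)))  # (1, 5) for K5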


Figure 1: Cycle and complete graph on five vertices

2.1.3 Graph colorings and graph homomorphisms

A proper n-coloring (or simply a proper coloring) of a graph G can be thought of as a way of assigning,

from a set of n "colors", one color to each vertex, in such a way that no adjacent vertices have the same

color. A more formal definition of a proper coloring relies on the idea of graph homomorphisms.

If G and H are graphs, a graph homomorphism from G to H is a mapping 𝜙 : V (G) → V (H) such

that u ~ v in G implies 𝜙(u) ~ 𝜙(v) in H. A bijective graph homomorphism whose inverse is also a graph

homomorphism is called a graph isomorphism.

Now we may define a proper n-coloring of a graph G as a graph homomorphism from G to Kn.

This is equivalent to our previous, informal definition, which can be seen as follows. Given a "color"

for each vertex in G, with adjacent vertices always having different colors, we may define a

homomorphism that sends all the vertices of the same color to the same vertex in Kn. Since adjacent

vertices have different colors assigned to them, they will be mapped to different vertices in Kn, which

are adjacent. Conversely, any homomorphism from G to Kn assigns to each vertex of G an element of

{1, 2, . . . , n}, which may be viewed as colors. Since no vertex in Kn is adjacent to itself, no adjacent

vertices in G will be assigned the same color.

In a proper coloring, if we consider the inverse image of a single vertex in Kn, i.e., the set of all

vertices in G with a certain color, it will always be an independent set. This independent set is called a

color class associated with the proper coloring. Thus, a proper n-coloring of a graph G can be thought

of as a covering of the vertex set of G with independent sets.

We define a graph parameter 𝜒(G), the chromatic number of G, as the smallest positive integer n

such that there exists a proper n-coloring of G. Equivalently, the chromatic number is the smallest

number of independent sets required to cover V (G). Any finite graph with k vertices can certainly be

colored with k colors, so we see that 𝜒(G) is well-defined for a finite graph G, and bounded from above

by |𝑉 (𝐺)|. It is also clear that, if we have a proper n-coloring of G, then 𝜒(G) ≤ n.


We can establish some inequalities relating the chromatic number to the other parameters we

have defined. First, 𝜔(G) ≤ 𝜒(G), since all the vertices in a clique must be different colors. Also, since

each color class is an independent set, we have |𝑉(𝐺)|/𝛼(𝐺) ≤ 𝜒(𝐺), where equality is attained if and only if each color class in an optimal coloring is the size of the largest independent set.

We can calculate the chromatic number for our examples. For the complete graphs, we have

𝜒(Kn) = n, and for the cycles we have 𝜒(Cn) = 2 for n even and 3 for n odd. In Figure 2, we see C5 and

K5 colored with three and five colors, respectively.

Figure 2: The graphs C5 and K5 with proper 3- and 5-colorings, respectively

2.2 Fractional colorings and fractional cliques

2.2.1 Fractional colorings

We now generalize the idea of a proper coloring to that of a fractional coloring (or a set coloring),

which allows us to define a graph's fractional chromatic number, denoted 𝜒𝐹(G), which can assume

non-integer values.

Given a graph, integers 0 < b ≤ a, and a set of a colors, a proper a/b-coloring is a function that

assigns to each vertex a set of b distinct colors, in such a way that adjacent vertices are assigned disjoint

sets. Thus, a proper n-coloring is equivalent to a proper n/1-coloring.

The definition of a fractional coloring can also be formalized by using graph homomorphisms.

To this end, we define another family of graphs, the Kneser graphs. For each ordered pair of positive

integers (a, b) with a ≥ b, we define a graph Ka:b. As the vertex set of Ka:b, we take the set of all b-

element subsets of the set {1, . . . , a}. Two such subsets are adjacent in Ka:b if and only if they are

disjoint. Note that Ka:b is an empty graph (i.e., its edge set is empty) unless a ≥ 2b.

Just as a proper n-coloring of a graph G can be seen as a graph homomorphism from G to the

graph Kn, so a proper a/b-coloring of G can be seen as a graph homomorphism from G to Ka:b.

The fractional chromatic number of a graph, 𝜒𝐹(G), is the infimum of all rational numbers a/b

such that there exists a proper a/b-coloring of G. From this definition, it is not immediately clear that

𝜒𝐹(G) must be a rational number for an arbitrary graph. In order to show that it is, we will use a different

definition of fractional coloring, but first, we establish some bounds for 𝜒𝐹(G) based on our current

definition.

We can get an upper bound on the fractional chromatic number using the chromatic number. If we have a proper n-coloring of G, we can obtain a proper nb/b-coloring for any positive integer b by replacing each individual color with b different colors. Thus, we have 𝜒𝐹(G) ≤ 𝜒(G), or in terms of

replacing each individual color with b different colors. Thus, we have 𝜒𝐹(G) ≤ 𝜒(G), or in terms of

homomorphisms, we can simply note the existence of a homomorphism from Kn to Knb:b (namely, map

i to the set of j ≡ i (mod n)).

To obtain one lower bound on the fractional chromatic number, we note that a graph containing

an n-clique has a fractional coloring with b colors on each vertex only if we have at least n · b colors to

choose from; in other words, 𝜔(G) ≤ 𝜒𝐹(G).

Just as with proper colorings, we can obtain another lower bound from the independence number.

Since each color in a fractional coloring is assigned to an independent set of vertices (the fractional color class), we have |𝑉(𝐺)| · b ≤ 𝛼(G) · a, or |𝑉(𝐺)|/𝛼(𝐺) ≤ 𝜒𝐹(𝐺).



Another inequality, which will come in handy later, regards fractional colorings of subgraphs. A

graph H is said to be a subgraph of G if V (H) ⊆ V (G) and E(H) ⊆ E(G). Notice that if H is a subgraph

of G, then any proper a/b-coloring of G, restricted to V (H), is a proper a/b-coloring of H. This tells us

that 𝜒𝐹(H) ≤ 𝜒𝐹(G).

Figure 3: The graph C5 with a proper 5/2-coloring

2.2.2 Fractional colorings in terms of independent sets

Now we introduce an alternative definition of fractional colorings, one expressed in terms of

independent sets of vertices. This definition is somewhat more general than the previous one, and we

will see how fractional colorings, understood as homomorphisms, can be consistently reinterpreted in

terms of independent sets. This new characterization of fractional colorings will lead us to some

methods of computing a graph's fractional chromatic number.

Let I(G) denote the set of all independent sets of vertices in V (G), and for u ∈ V (G), let I(G, u) denote the set of all independent sets of vertices containing u. In this context, a fractional coloring is a mapping f : I(G) → [0, 1] with the property that, for every vertex u, ∑_{𝐽∈𝐼(𝐺,𝑢)} 𝑓(𝐽) ≥ 1. The sum of the function values over all independent sets is called the weight of the fractional coloring. The fractional chromatic number of G is the infimum of the weights of all possible fractional colorings of G.

It may not be immediately clear that this definition has anything to do with our previous

definition, in terms of graph homomorphisms. To see the connection, consider first a graph G with a

proper coloring. Each color class is an independent set belonging to I(G). We define f : I(G) → [0, 1] mapping each color class to 1 and every other independent set to 0. Since each vertex falls in one color class, we obtain

∑_{𝐽∈𝐼(𝐺,𝑢)} 𝑓(𝐽) = 1

for each vertex u. The weight of this fractional coloring is simply the number of colors.

Next, suppose we have a graph G with a proper a/b coloring as defined above, with a b-element

set of colors associated with each vertex. Again, each color determines a color class, which is an

independent set. If we define a function that sends each color class to 1/𝑏 and every other independent set to 0, then again we have, for each vertex u, ∑_{𝐽∈𝐼(𝐺,𝑢)} 𝑓(𝐽) = 1, so we have a fractional coloring by our

new definition, with weight a/b.

Finally, let us consider translating from the new definition to the old one. Suppose we have a

graph G and a function f mapping from I(G) to [0, 1] ⋂ Q. (We will see below why we are justified in

restricting our attention to rational valued functions.) Since the graph G is finite, the set I(G) is finite,

and the image of the function f is a finite set of rational numbers. This set of numbers has a lowest

common denominator, b. Now suppose we have an independent set I which is sent to the number m/b.

Thus, we can choose m different colors, and let the set I be the color class for each of them. Proceeding

in this manner, we will assign at least b different colors to each vertex, because of our condition that for

all u, ∑_{𝐽∈𝐼(𝐺,𝑢)} 𝑓(𝐽) ≥ 1. If some vertices are assigned more than b colors, we can ignore all but b of them, and we have a fractional coloring according to our old definition. If the weight of f is a/b and we do not ignore any colors completely, then we will have obtained a proper a/b-coloring. If some colors are ignored, then we actually have a proper d/b fractional coloring, for some d < a.

2.2.3 Fractional chromatic number as a linear program

The usefulness of this new definition of fractional coloring and fractional chromatic number in terms

of independent sets is that it leads us to a method of calculation using the tools of linear programming.

To this end, we will construct a matrix representation of a fractional coloring.

For a graph G, define a matrix A(G), with columns indexed by V (G) and rows indexed by I(G).

Each row is essentially the characteristic function of the corresponding independent set, with entries

equal to 1 on columns corresponding to vertices in the independent set, and 0 otherwise.

Now let f be a fractional coloring of G and let y(G) be the vector indexed by I(G) with entries given by f. With this notation, and letting 1 denote the all-1s vector, the inequality y(G)ᵀA(G) ≥ 1ᵀ expresses the condition that

∑_{𝐽∈𝐼(𝐺,𝑢)} 𝑓(𝐽) ≥ 1

for all u ∈ V (G).

In this algebraic representation of a fractional coloring, the determination of fractional chromatic

number becomes a linear programming problem. The entries of the vector y(G) are a set of variables,

one for each independent set in V (G), and our task is to minimize the sum of the variables (the weight of the fractional coloring), subject to the constraints that each entry in the vector y(G)ᵀA(G) be at least 1 and that each variable be in the interval [0, 1]. This amounts to minimizing a linear

function within a convex polyhedral region in n-dimensional space defined by a finite number of linear

inequalities, where n = |𝐼(𝐺)|. This minimum must occur at a vertex of the region. Since each

hyperplane forming a face of the region is determined by a linear equation with integer coefficients,

then each vertex has rational coordinates, so our optimal fractional coloring will indeed take on rational

values, as promised.
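For a small graph this linear program can be solved directly. The following Python sketch (ours; it assumes SciPy is available) computes the fractional chromatic number of C5, recovering the value 5/2 exhibited by the coloring in Figure 3:

from itertools import combinations
from scipy.optimize import linprog

V = list(range(5))
E = {frozenset((i, (i + 1) % 5)) for i in range(5)}  # edges of C5

# Enumerate all independent sets (the rows of the matrix A(G)).
ind_sets = [S for k in range(1, 6) for S in combinations(V, k)
            if not any(frozenset(p) in E for p in combinations(S, 2))]

# Minimize the total weight subject to: for each vertex u, the sum over sets containing u >= 1.
c = [1.0] * len(ind_sets)
A_ub = [[-1.0 if u in S else 0.0 for S in ind_sets] for u in V]  # -(sum) <= -1
b_ub = [-1.0] * len(V)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * len(ind_sets))
print(res.fun)  # 2.5, i.e. the fractional chromatic number of C5 is 5/2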

The regular, integer chromatic number, can be calculated with the same linear program by

restricting the values in the vector y(G) to 0 and 1. This is equivalent to covering the vertex set by

independent sets that may only have weights of 1 or 0. Although polynomial time algorithms exist for

calculating optimal solutions to linear programs, this is not the case for integer programs or 0-1

programs. In fact, many such problems have been shown to be NP-hard. In this respect, fractional

chromatic numbers are easier to calculate than integer chromatic numbers.

2.2.4 Fractional cliques

The linear program that calculates a graph's fractional chromatic number is the dual of another

linear program, in which we attempt to maximize the sum of elements in a vector x(G), subject to the

constraint A(G)x(G) ≤ 1. We can pose this maximization problem as follows: we want to define a

function h : V (G) → [0, 1], with the condition that, for each independent set in I(G), the sum of function

values on the vertices in that set is no greater than 1. Such a function is called a fractional clique, the

dual concept of a fractional coloring. As with fractional colorings, we define the weight of a fractional

clique to be the sum of its values over its domain. The supremum of weights of fractional cliques

defined for a graph is a parameter, 𝜔𝐹(G), the fractional clique number.

Just as we saw a fractional coloring as a relaxation of the idea of an integer coloring, we would like to understand a fractional clique as a relaxation of the concept of an integer clique to the rationals (or reals). It is fairly straightforward to understand an ordinary clique as a fractional clique: we begin

by considering a graph G, and a clique, C ⊆ V (G). We can define a function h : V (G) → [0, 1] that

takes on the value 1 for each vertex in C and 0 elsewhere. This function satisfies the condition that its

values sum to no more than 1 over each independent set, for no independent set may intersect the clique

C in more than one vertex. Thus the function is a fractional clique, whose weight is the number of

vertices in the clique.


Since an ordinary n-clique can be interpreted as a fractional clique of weight n, we can say that

for any graph G, 𝜔(G) ≤ 𝜔𝐹(G).

2.2.5 Equality of 𝜒𝐹 and 𝜔𝐹

The most important identity we will use to establish our main result is the equality of the

fractional chromatic number and the fractional clique number. Since the linear programs which

calculate these two parameters are dual to each other, we apply the Strong Duality Theorem of Linear

Programming. We state the theorem in full. The reader is referred to [4] for more information about

linear programming.

Suppose we have a primary linear program (LP) of the form:

Maximize cᵀx
subject to Ax ≤ b
and x ≥ 0

with its dual, of the form:

Minimize yᵀb
subject to yᵀA ≥ cᵀ
and y ≥ 0

If both LPs are feasible, i.e., have non-empty feasible regions, then both can be optimized, and the two

objective functions have the same optimal value.

In the case of fractional chromatic number and fractional clique number, our primary LP is that

which calculates the fractional clique of a graph G. The vector c determining the objective function is

the all 1s vector, of dimension |𝑉 (𝐺)|, and the constraint vector b is the all 1s vector, of dimension |𝐼(𝐺)|. The matrix A is the matrix described above, whose rows are the characteristic vectors of the

independent sets in I(G), defined over V (G). The vector x for which we seek to maximize the objective

function cᵀx has as its entries the values of a fractional clique at each vertex. The vector y for which we seek to minimize the objective function yᵀb has as its entries the values of a fractional coloring on each independent set.

In order to apply the Strong Duality Theorem, we need only establish that both LPs are feasible.

Fortunately, this is easy: the zero vector is in the feasible region for the primary LP, and any proper

coloring is in the feasible region for the dual. Thus, we may conclude that both objective functions have

the same optimal value; i.e., that for a graph G, we have 𝜔𝐹(G) = 𝜒𝐹(G).

This equality gives us a means of calculating these parameters. Suppose that, for a graph G, we

find a fractional clique with weight equal to r. Since the fractional clique number is the supremum of

weights of fractional cliques, we can say that r ≤ 𝜔𝐹(G). Now suppose we also find a fractional coloring

of weight r. Then, since the fractional chromatic number is the infimum of weights of fractional

colorings, we obtain 𝜒𝐹(G) ≤ r. Combining these with the equality we obtained from duality, we get

that 𝜔𝐹(G) = r = 𝜒𝐹(G). This is the method we use to prove our result.

3. Fractional Colorings and Mycielski's Graphs

We have noted that the fractional clique number of a graph G is bounded from below by the integer

clique number, and that it is equal to the fractional chromatic number, which is bounded from above by

the integer chromatic number. In other words, 𝜔(G) ≤ 𝜔𝐹(G) = 𝜒𝐹(G) ≤ 𝜒(G).

Given these relations, one natural question to ask is whether the differences 𝜔𝐹(G) − 𝜔(G) and

𝜒(G) − 𝜒𝐹(G) can be made arbitrarily large. We shall answer this question in the affirmative, by

displaying a sequence of graphs for which both differences increase without bound.


3.1 The Mycielski transformation

The sequence of graphs we will consider is obtained by starting with a single edge K2, and

repeatedly applying a graph transformation, which we now define. Suppose we have a graph G, with V

(G) = {v1, v2, . . . , vn}. The Mycielski transformation of G, denoted 𝜇(G), has for its vertex set the set

{x1, x2, . . . , xn, y1, y2, . . . , yn, z} for a total of 2n + 1 vertices. As for adjacency, we put

xi ~ xj in 𝜇(G) if and only if vi ~ vj in G,

xi ~ yj in 𝜇(G) if and only if vi ~ vj in G,

and yi ~ z in 𝜇(G) for all i ∈ {1, 2, . . . , n}. See Figure 4 below.

Figure 4: The Mycielski transformation
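The transformation is straightforward to implement; the following Python sketch (illustrative; the vertex relabelling is our own convention) builds μ(G) from an edge set. Applied to K2 it produces C5, consistent with Figure 4:

def mycielski(n, edges):
    """Mycielski transformation of a graph on vertices 0..n-1 with edge set `edges`
    (a set of frozensets). New labels: x_i -> i, y_i -> n + i, z -> 2n."""
    mu = set()
    for e in edges:
        i, j = tuple(e)
        mu.add(frozenset((i, j)))        # x_i ~ x_j  iff  v_i ~ v_j
        mu.add(frozenset((i, n + j)))    # x_i ~ y_j  iff  v_i ~ v_j
        mu.add(frozenset((j, n + i)))
    mu |= {frozenset((n + i, 2 * n)) for i in range(n)}  # y_i ~ z for all i
    return 2 * n + 1, mu

n2, E2 = mycielski(2, {frozenset((0, 1))})  # mu(K2)
print(n2, sorted(tuple(sorted(e)) for e in E2))  # 5 vertices forming a 5-cycle: C5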

The theorem that we shall prove states that this transformation, applied to a graph G with at least

one edge, results in a graph 𝜇(G) with

(a) 𝜔(𝜇(G)) = 𝜔(G),

(b) 𝜒(𝜇(G)) = 𝜒(G) + 1, and

(c) 𝜒𝐹(𝜇(G)) = 𝜒𝐹(G) + 1/𝜒𝐹(G).

3.2 Main result

We prove each of the three statements above in order:

3.2.1 Proof of part (a)

First we note that the vertices x1, x2, . . . ,xn form a subgraph of 𝜇(G) which is isomorphic to G.

Thus, any clique in G also appears as a clique in 𝜇(G), so we have that 𝜔(𝜇(G)) ≥ 𝜔(G).

To obtain the opposite inequality, consider cliques in 𝜇(G). First, any clique containing the vertex

z can contain only one other vertex, since z is only adjacent to the y vertices, none of which are adjacent

to each other. Now consider a clique {xi(1), . . . ,xi(r), yj(1), . . . , yj(s)}. From the definition of the Mycielski

transformation, we can see that the sets {i(1), . . . , i(r)} and {j(1), . . . , j(s)} are disjoint, and that the

set {vi(1), . . . ,vi(r), vj(1), . . . , vj(s)} is a clique in G. Thus, having considered cliques with and without

vertex z, we see that for every clique in 𝜇(G), there is a clique of equal size in G, or in other words,

𝜔(𝜇(G)) ≤ 𝜔(G). Combining these inequalities, we have 𝜔(𝜇(G)) = 𝜔(G), as desired.



3.2.2 Proof of part (b)

Suppose we have that 𝜒(G) = k. We must show that 𝜒(𝜇(G)) = k + 1. First, we shall construct a

proper k+1-coloring of 𝜇(G), which will show that 𝜒(𝜇(G)) ≤ k + 1. Suppose that f is a proper k-coloring

of G, understood as a mapping f : V (G) → {1, . . . , k}. We define a proper k + 1-coloring, h : V(𝜇(G)) → {1, . . . , k + 1}, as follows. We set h(xᵢ) = h(yᵢ) = f(vᵢ) for each i ∈ {1, . . . , n}. Now set h(z) = k + 1.

From the way 𝜇(G) was constructed, we can see that this is a proper coloring, so we have 𝜒(𝜇(G)) ≤ k

+ 1.

For the inequality in the opposite direction, we will show that, given any coloring of 𝜇(G), we

can obtain a coloring of G with one color fewer. Since the chromatic number of G is k, this will show

that 𝜒(𝜇(G))−1 ≥ k, or equivalently, 𝜒(𝜇(G)) ≥ k + 1. So, consider a proper coloring h of 𝜇(G). We

define a function f on the vertices of G as follows: f(vi) = h(xi) if h(xi) ≠ h(z), and f(vi) = h(yi) if h(xi) =

h(z). From the construction of 𝜇(G), it should be clear that this is a proper coloring. It does not use the

color h(z), so it uses one color fewer than h uses, and thus we have that 𝜒(𝜇(G)) ≥ k + 1.

Again, combining our two inequalities, we obtain 𝜒(𝜇(G)) = 𝜒(G) + 1, as desired.

3.2.3 Proof of part (c)

Now we will show that 𝜒𝐹(𝜇(G)) = 𝜒𝐹(G) + 1/𝜒𝐹(G); in other words, we want to show that if 𝜒𝐹(G) = a/b, then 𝜒𝐹(𝜇(G)) = (a² + b²)/ab. Our strategy will be to construct a fractional coloring and a fractional clique on 𝜇(G), each with the appropriate weight, and then invoke our strong duality result.

fractional clique on 𝜇(G), each with the appropriate weight, and then invoke our strong duality result.

Suppose we have a proper a/b-coloring of G, understood in terms of sets of colors assigned to

each vertex. Thus, each vertex in G is assigned some subset of b colors out of a set of size a. We

suppose, somewhat whimsically, that each of the a colors has a "offspring", b of them "male" and a−b

of them "female". Taking all of these offspring as distinct, we have obtained a2 offspring colors. To

these, add a set of b2 new colors, and we have a total of a2 + b2 colors with which to color the vertices

of 𝜇(G). We assign them as follows. To each of vertex xi, we assign all the offspring of all the colors

associated with vertex vi, a offspring of each of b colors for a total of ab colors. To each vertex yi, we

assign all the female offspring of all the colors associated with vi (there are b(a − b)), and all of the new

colors (there are b2). To vertex z, we assign all of the male offspring of all the original colors, which is

ab distinct colors. We see that we have assigned ab colors to each vertex, and it is easy to check that

the coloring is a proper (a2 + b2) /ab-coloring of 𝜇(G). The existence of this coloring proves that

𝜒𝐹(𝜇(G)) ≤ 𝑎2+𝑏2

𝑎𝑏 .

Our final, and most complicated, step is to construct a fractional clique of weight 𝜔𝐹(G) + 1/𝜔𝐹(G) on 𝜇(G). Suppose we have a fractional clique on G that achieves the optimal weight 𝜔𝐹(G). Recall that this fractional clique is understood as a mapping f : V (G) → [0, 1] which sums to at most 1 on each independent set, and whose values all together sum to 𝜔𝐹(G). Now we define a mapping 𝑔 : V (𝜇(G)) → [0, 1] as follows:

𝑔(𝑥ᵢ) = (1 − 1/𝜔𝐹(G)) 𝑓(𝑣ᵢ)

𝑔(𝑦ᵢ) = (1/𝜔𝐹(G)) 𝑓(𝑣ᵢ)

𝑔(𝑧) = 1/𝜔𝐹(G)

We must show that this is a fractional clique. In other words, we must establish that it maps its domain into [0, 1], and that its values sum to at most 1 on each independent set in 𝜇(G). The codomain is easy to establish: the range of f lies between 0 and 1, and since 𝜔𝐹(G) ≥ 𝜔(G) ≥ 2 > 1, we have 0 < 1/𝜔𝐹(G) < 1. Thus each expression in the definition of 𝑔 yields a number between 0 and 1. It remains to show that the values of 𝑔 are sufficiently bounded on independent sets.


We introduce a notation: for M ⊆ V (G), we let 𝑥(𝑀) = {𝑥𝑖|𝑣𝑖 ∈ 𝑀} and 𝑦(𝑀) = {𝑦𝑖|𝑣𝑖 ∈ 𝑀}. Now we will consider two types of independent sets in 𝜇(G): those containing z and those not containing

z.

Any independent set S ⊆ V (𝜇(G)) that contains z cannot contain any of the yi vertices, so it must

be of the form S = {z} ∪ x(M) for some independent set M in V (G). Summing the values of 𝑔 over all

vertices in the independent set, we obtain:

∑𝑔(𝑣) = 𝑔(𝑧)

𝑣∈𝑆

+ ∑ 𝑔(𝑣)

𝑣∈𝑥(𝑀)

=1

𝜔𝐹(𝐺)+ (1 −

1

𝜔𝐹(𝐺))∑ 𝑓(𝑣)

𝑣∈𝑀

≤1

𝜔𝐹(𝐺)+ (1 −

1

𝜔𝐹(𝐺)) = 1

Now consider an independent set S ⊆ V(𝜇(G)) with z ∉ S. We can therefore say S = x(M) ⋃ y(N)

for some subsets of V (G), M and N, and we know that M is an independent set. Since S is independent,

then no vertex in y(N) is adjacent to any vertex in x(M), so we can express N as the union of two sets A

and B, with A ⊆ M and with none of the vertices in B adjacent to any vertex in M. Now we can sum the

values of g over the vertices in S = x(M) ⋃ y(N) = x(M) ⋃ y(A) ⋃ y(B):

∑_{𝑣∈𝑆} 𝑔(𝑣) = (1 − 1/𝜔𝐹(G)) ∑_{𝑣∈𝑀} 𝑓(𝑣) + (1/𝜔𝐹(G)) ∑_{𝑣∈𝑁} 𝑓(𝑣)

= (1 − 1/𝜔𝐹(G)) ∑_{𝑣∈𝑀} 𝑓(𝑣) + (1/𝜔𝐹(G)) ∑_{𝑣∈𝐴} 𝑓(𝑣) + (1/𝜔𝐹(G)) ∑_{𝑣∈𝐵} 𝑓(𝑣)

≤ (1 − 1/𝜔𝐹(G)) ∑_{𝑣∈𝑀} 𝑓(𝑣) + (1/𝜔𝐹(G)) ∑_{𝑣∈𝑀} 𝑓(𝑣) + (1/𝜔𝐹(G)) ∑_{𝑣∈𝐵} 𝑓(𝑣)

= ∑_{𝑣∈𝑀} 𝑓(𝑣) + (1/𝜔𝐹(G)) ∑_{𝑣∈𝐵} 𝑓(𝑣)

The first two equalities above are simply partitions of the sum into sub-sums corresponding to

subsets. The inequality holds because A ⊆ M, and the final equality is just a simplification. It will now

suffice to show that the final expression obtained above is less than or equal to 1.

Let us consider H, the subgraph of G induced by B. The graph H has some fractional chromatic

number, say r/s. Suppose we have a proper r/s-coloring of H. Recall that the color classes of a fractional

coloring are independent sets, so we have r independent sets of vertices in V (H) = B; let us call them

C1, . . . , Cr. Not only is each of the sets Ci independent in H, but it is also independent in G, and also Ci

⋃ M is independent in G as well, because Ci ⊆ B.

For each i, we note that f is a fractional clique on G, and sum over the independent set Cᵢ ∪ M to obtain:

∑_{𝑣∈𝑀} 𝑓(𝑣) + ∑_{𝑣∈𝐶ᵢ} 𝑓(𝑣) ≤ 1


Summing these inequalities over each Cᵢ, we get:

𝑟 ∑_{𝑣∈𝑀} 𝑓(𝑣) + 𝑠 ∑_{𝑣∈𝐵} 𝑓(𝑣) ≤ 𝑟

The second term on the left side of the inequality results because each vertex in B belongs to s

different color classes in our proper r/s-coloring. Now we divide by r to obtain:

∑_{𝑣∈𝑀} 𝑓(𝑣) + (𝑠/𝑟) ∑_{𝑣∈𝐵} 𝑓(𝑣) ≤ 1

Since r/s is the fractional chromatic number of H, and H is a subgraph of G, we can say that 𝑟/𝑠 ≤ 𝜒𝐹(G) = 𝜔𝐹(G), or equivalently, 1/𝜔𝐹(G) ≤ 𝑠/𝑟. Thus:

∑_{𝑣∈𝑀} 𝑓(𝑣) + (1/𝜔𝐹(G)) ∑_{𝑣∈𝐵} 𝑓(𝑣) ≤ ∑_{𝑣∈𝑀} 𝑓(𝑣) + (𝑠/𝑟) ∑_{𝑣∈𝐵} 𝑓(𝑣) ≤ 1

as required. We have shown that the mapping g that we defined is indeed a fractional clique on 𝜇(G).

We now check its weight.

∑_{𝑣∈𝑉(𝜇(𝐺))} 𝑔(𝑣) = ∑_{𝑖=1}^{𝑛} 𝑔(𝑥ᵢ) + ∑_{𝑖=1}^{𝑛} 𝑔(𝑦ᵢ) + 𝑔(𝑧)

= (1 − 1/𝜔𝐹(G)) ∑_{𝑣∈𝑉(𝐺)} 𝑓(𝑣) + (1/𝜔𝐹(G)) ∑_{𝑣∈𝑉(𝐺)} 𝑓(𝑣) + 1/𝜔𝐹(G)

= ∑_{𝑣∈𝑉(𝐺)} 𝑓(𝑣) + 1/𝜔𝐹(G)

= 𝜔𝐹(G) + 1/𝜔𝐹(G) = 𝜒𝐹(G) + 1/𝜒𝐹(G)

This is the required weight, so we have constructed a fractional coloring and a fractional clique on 𝜇(G), both with weight 𝜒𝐹(G) + 1/𝜒𝐹(G). We can now write the inequality

𝜒𝐹(𝜇(𝐺)) ≤ 𝜒𝐹(𝐺) + 1/𝜒𝐹(𝐺) ≤ 𝜔𝐹(𝜇(𝐺))

and invoke strong duality to declare the terms at either end equal to each other, and thus to the middle term. ∎


4. Discussion of main result

Now that we have a theorem telling us how the Mycielski transformation affects the three parameters

of clique number, chromatic number, and fractional chromatic number, let us apply this result in a

concrete case, and iterate the Mycielski transformation to obtain a sequence of graphs {Gn}, with Gn+1

= 𝜇(Gn) for n > 2. For our starting graph G2 we take a single edge, K2, for which 𝜔(G) = 𝜒𝐹(G) = 𝜒(G)

= 2.

Applying our theorem, first to clique numbers, we see that 𝜔(Gn) = 2 for all n. Considering chromatic numbers, we have 𝜒(G2) = 2 and 𝜒(Gn+1) = 𝜒(Gn) + 1; thus 𝜒(Gn) = n for all n. Finally, the fractional chromatic number of Gn is determined by the sequence {𝑎𝑛}, n ∈ {2, 3, …}, given by the recurrence:

a₂ = 2 and aₙ₊₁ = aₙ + 1/aₙ.

This sequence has been studied (see [5] or [1]), and it is known that for all n:

√(2n) ≤ aₙ ≤ √(2n) + (1/n) ln n

Clearly, aₙ grows without bound, but less quickly than any sequence of the form nʳ for r > 1/2. Thus, the difference between the fractional clique number and the clique number grows without limit, as does the difference between the chromatic number and the fractional chromatic number.
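A short computation (ours, for illustration) makes this growth rate visible by iterating the recurrence and comparing aₙ with √(2n):

import math

a, targets = 2.0, {10, 100, 1000}
for n in range(2, 1000):
    a += 1.0 / a                      # a_{n+1} = a_n + 1/a_n
    if n + 1 in targets:
        # print a_n next to sqrt(2n); the gap stays small as n grows
        print(n + 1, round(a, 4), round(math.sqrt(2 * (n + 1)), 4))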

References

[1] D. M. Bloom. Problem e3276. Amer. Math. Monthly, 95:654, 1988.

[2] C. D. Godsil and G. Royle. Algebraic Graph Theory. Springer, New York, 2001.

[3] M. Larsen, J. Propp, and D. Ullman. The fractional chromatic number of Mycielski's graphs. J. Graph Theory, 19(3):411-416, 1995.

[4] M. Hall, Jr. Combinatorial Theory. Blaisdell Publishing Co., Waltham, MA, 1967.

[5] D. J. Newman. A Problem Seminar. Springer, New York, 1982.

[6] D. B. West. Introduction to Graph Theory, 2nd Edition. Prentice Hall, 2000.


Simulation of Factors Affecting The

Optimization of Financing Fund for Property

Damage Repair on Building Housing Caused by

The Flood Disaster (Literature Review)

Pramono SIDI a*, Ismail BIN MOHD b, Wan Muhamad AMIR WAN AHMAD c, Sudradjat SUPIAN d, Sukono e, Lusiana f

a Department of Mathematics FMIPA Universitas Terbuka, Indonesia
b,c Department of Mathematics FST Universiti Malaysia Terengganu, Malaysia
d,e,f Department of Mathematics FMIPA Universitas Padjadjaran, Indonesia

a* Email: [email protected]

Abstract: Natural disasters such as floods are one unavoidable cause of property damage to residential buildings, because one cannot know when they will happen. This creates a risk whose financing should be optimized. In this paper, optimization is performed on a combination of three methods of financing funds (insurance, credit, and savings). The optimization is made under conditions that ensure comprehensive coverage of losses from damage to property in residential buildings. A simulation is carried out to see the effect of changes in the factors that influence the optimization: the value of the property, the amount of debt, and the annuity future value. As a result, the values of the loss ratio and the cost ratio are shown numerically in tables, and in graphical form, for three different situations.

Keywords: flood insurance, credit, savings, optimization, simulation.

1. Introduction
Natural disasters are categorized into four types. Meteorological (or hydro-meteorological) disasters have climate-related causes, such as floods (events that occur when excess water flow soaks the mainland). Geological disasters are natural disasters that occur on the surface of the Earth, such as earthquakes (vibrations or shocks of the earth's surface caused by a sudden release of energy that creates seismic waves), volcanic eruptions (events that occur when magma deposited in the earth's crust is pushed out by high-pressure gas), and tsunamis (displacement of a body of water caused by a sudden vertical change of the sea surface). Disasters from space are the arrival of various celestial bodies, such as asteroids, or disturbances such as solar storms; asteroids can be a threat to countries with large populations such as China, India, the United States, Japan, and Southeast Asia. Finally, an outbreak or epidemic is an infectious disease that spreads through the human population on a large scale, e.g., across a country or around the world.

Natural disasters cause risks. Risks cannot be avoided, eliminated, or moved, but they can be minimized. One of the risks caused by natural disasters is the financing risk arising from property damage. There are various methods of financing (insurance, reserves, loans, etc.), and financial risk management can be done with one method or a combination of several. Research on this problem was done by Hanak (2010), a sensitivity analysis on the optimization of funding for property damage repair in residential buildings with a combination of two methods of financing. Based on that work, the focus of this research is a simulation analysis of the optimization of funding provision for property damage repairs in residential buildings with a combination of three methods of financing. Hanak's goal was to ensure that losses are covered comprehensively by exploiting the advantages of the insurance and credit financing methods. The results obtained by Hanak (2010) are valid for the parameters used in that case study.

This paper presents a literature-review study of the simulation of an optimization model for the provision of funding for the repair of property damage to residential buildings, developed by Hanak (2009; 2010). The purpose of this study is to determine, by simulation, the factors that influence the optimization of the property damage repair fund: the value of the property, the annuity future value, and the amount of debt. These factors are examined for three situations characterized by different input parameters and described by two indicator ratios (the loss ratio and the cost ratio).

2. Mathematical Model
This section discusses ex ante risk financing, ex post risk financing, the factors that affect the provision of funds, and the optimization model for the supply of funds. This discussion underpins the simulation study of optimizing the provision of funds to repair damage to residential buildings caused by floods.

2.1 Ex Ante Risk Financing
"Ex ante" is derived from Latin, meaning "before the fact" (Hanak, 2010), so ex ante risk financing is an attempt to cover the cost burden of losses before the incident occurs. Insurance and savings are ex ante financing methods. Property insurance provides protection against risk in building activity; the value of the property covered by the insurance is the value of the building, including the interior and all materials. Given the objects covered, property insurance makes it easier to determine the value of the premiums (Marhusor, 2004).

The total annual premium depends on the capital assured (the insured property) and the insurance rate, which can be divided into two parts, the basic insurance rate and the flood insurance rate, as well as on premium discounts and additional premiums, all of which are generally measurable variables. Denote the Total Annual Premium by TAP, the Capital Assured by CA, the total insurance rate by IR, the additional premium by ADD, and the deductions from the premium by DIS. According to Hanak (2010), the total annual premium is calculated using the following equation:

TAP = CA × (IR/100) × ((100 − DIS)/100) × ((100 + ADD)/100)   (1)
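For a numerical illustration of equation (1) (the function below and the assumed capital assured of 100 million IDR are ours):

def total_annual_premium(ca, ir, dis=0.0, add=0.0):
    """Equation (1); ir, dis and add are given in percent."""
    return ca * (ir / 100.0) * ((100.0 - dis) / 100.0) * ((100.0 + add) / 100.0)

print(total_annual_premium(100_000_000, 0.145))  # 145000.0 IDR per year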

Savings are an amount of money saved in reserve, just in case, in the short term; savings are also one way of managing risk. The advantage of savings is low cost, because, unlike credit and insurance, there are no operating costs or profit for the lender/insurance company. On the other hand, savings also have disadvantages: a large amount of money cannot be invested in long-term investments such as term deposits, and damage can occur at any moment (building a sufficient amount of savings normally takes years, but losses can occur instantaneously) (Hanak, 2009:43). In this case, for a constant amount of money we can compute the "annuity future value":

AFV = Aₛ × ((1 + i)ⁿ − 1) / i   (2)

where AFV is the annuity future value, Aₛ is one annuity amount (the amount of each payment), i is the interest rate, and n is the period (in years).
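Equation (2) can be evaluated as follows (the deposit amount and monthly rate below are assumed values, not taken from the paper):

def annuity_future_value(a_s, i, n):
    """Equation (2): future value of n equal deposits a_s at interest rate i per period."""
    return a_s * ((1.0 + i) ** n - 1.0) / i

# Twelve monthly deposits of 400,000 IDR at 1% per month:
print(round(annuity_future_value(400_000, 0.01, 12)))  # about 5,073,001 IDR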

2.2 Ex Post Risk Financing
"Ex post" is derived from Latin, meaning "after the event" (Hanak, 2010), so ex post risk financing is an attempt to cover the cost burden of losses after the event. Credit is an ex post financing method. The advantage of credit is that there is no need to create a reserve fund, so the money can be invested. On the other hand, there is no guarantee that the lender (the bank) will give the loan, and the cost is expensive because of the loan interest (Hanak, 2009:43).

"An annuity is a series of payments / receipts an amount of money equal to the period of time

there for every payment" (Frensidy, 2005:62). Calculation of interest payments on installment loans is

an example of annuity credit. Where the total installment credit depends on the number of installments:

per period, time period, and the interest rate per period. In the case of installment loans with constant

installments (credit + interest), calculated as the total installment (Hanak, 2009):


AC = D × r(1 + r)ᵖ / ((1 + r)ᵖ − 1)   (3)

or (Frensidy, 2005:62):

AC = D × r / (1 − (1 + r)⁻ᵖ)   (4)

where AC is the annuity credit (the credit installment per period), D is the debt amount, r is the credit interest rate per period, and p is the term of expiration (the number of periods).
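A quick check (ours) confirms that equations (3) and (4) are algebraically equivalent, since r(1 + r)ᵖ/((1 + r)ᵖ − 1) = r/(1 − (1 + r)⁻ᵖ):

def annuity_credit_eq3(d, r, p):
    return d * r * (1.0 + r) ** p / ((1.0 + r) ** p - 1.0)  # equation (3)

def annuity_credit_eq4(d, r, p):
    return d * r / (1.0 - (1.0 + r) ** -p)                  # equation (4)

# E.g., a debt of 8,000,000 IDR repaid over 12 periods at 1% per period:
print(round(annuity_credit_eq3(8_000_000, 0.01, 12), 2))  # approx. 710790.31
print(round(annuity_credit_eq4(8_000_000, 0.01, 12), 2))  # identical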

2.3 Factors Affecting the Provision of Funds
There are many factors that affect the efficiency of the provision of funds. These factors are divided into three groups (Hanak, 2009): common factors (duration of the examined period, property value, loss frequency, extent of particular losses, location); property loss insurance factors (extent of insurance coverage (insured hazards), flood zone, the value insured, deductions from the premium (e.g., fire-resistant materials), additional premium (e.g., insufficient security against theft), the amount of the premium (contributory), coinsurance value, upper limit of insurance benefits); savings factors (interest rate of invested savings, limited capability to create savings); and credit factors (interest rate of the credit and ability to repay the debt).

2.4 Optimization Model
This paper examines two studies conducted by Hanak (2009; 2010) related to flood disasters, ex ante risk financing methods (insurance and savings), and an ex post risk financing method (credit). A study is then conducted of the optimization model for providing funding to repair damage to property in residential buildings. Further simulations are performed on the study data to determine, for each case, two indicators used for comparison: the loss ratio and the cost ratio. The simulations use the data as presented by Hanak (2010), because complete global data relating to the provision of funding to repair property damage to residential buildings could not be obtained.

The values of the loss ratio and the cost ratio are shown in graphs for three different situations. The three situations are characterized by differences in the parameters of the factors that influence the optimization model of funding for property damage repair (Hanak, 2010), as follows:

Situation 1: the property value is not fixed, the debt amount is fixed, and the annuity future value is fixed. That is, the variable VP is treated as a variable (not fixed), while the variables D and AFV are treated as constants.

Situation 2: the property value is not fixed, the debt amount is not fixed, and the annuity future value is fixed. That is, the variable VP is treated as a variable (not fixed), the variable D is included in the optimized model over a varying interval, and the variable AFV is treated as a constant.

Situation 3: the property value is not fixed, the debt amount is not fixed, and the annuity future value is not fixed. That is, the variable VP is treated as a variable (not fixed), and the variables D and AFV are both included in the optimized model over varying intervals.

The mathematical model for optimizing the provision of property repair funding consists of an objective function, the total cost of repairing the damaged property. To model the financing fund for the repair of property damage in residential buildings, the following variables are defined:

N = years; the length of time premium payments are made, or the length of time the property is insured.
VP = value of property; the value of the insured property.
CA = capital assured; the total insured value, the total value of the property to be multiplied by the insurance rate.
PLₘ = particular loss; claims/losses that occurred m times.
DIS = discount; deductions given by the insurance company.
ADD = additional; additional charges by the insurance company.
IR = insurance rate; total insurance premium rate in a year.
BIR = base insurance rate; basic insurance premium rate in a year.
FIR = flood insurance rate; flood insurance premium rate in a year.
k = level of the flood zone (k = 1, 2, 3, 4).
UIBL = upper insurance benefit limit; maximum insurance reimbursement.
DR = deductible rate; own level of risk for losses incurred/claims filed.
D = debt amount / loan amount.
p = term of the installment loan/debt.
r = loan interest rate.
AFV = annuity future value; the future value of the savings.
i = interest rate of savings.
n = period over which savings deposits are made.

From these variables, the model for the total cost of creating the financial backing is obtained as follows. Total cost of insurance premiums:

$$TP = N \cdot CA \cdot \frac{IR}{100} \cdot \frac{100 - DIS}{100} \cdot \frac{100 + ADD}{100} \qquad (5)$$

with

$$CA = VP, \qquad IR = BIR + FIR_k \qquad (6)$$

so the total cost of premiums is:

$$TP = N \cdot VP \cdot \frac{BIR + FIR_k}{100} \cdot \frac{100 - DIS}{100} \cdot \frac{100 + ADD}{100} \qquad (7)$$

Total payments beyond the limits of the insurance coverage (insurance benefit payment limit). The calculation of insurance benefit payments over the upper limit (PIBL) is influenced by the particular loss (PL) and the upper insurance benefit limit (UIBL):

$$\text{If } PL > UIBL \text{ then } PIBL = PL - UIBL \qquad (8)$$

$$\text{If } PL \le UIBL \text{ then } PIBL = 0 \qquad (9)$$

Total own-risk payment (deductible). The own-risk calculation for the flood case is influenced only by the magnitude of the particular losses (PL), i.e., the claims filed by the insured:

$$DP = DR \sum_{j=1}^{m} PL_j \qquad (10)$$

Total installment credit (annuity credit):

$$AC = D \cdot \frac{r(1+r)^p}{(1+r)^p - 1} \qquad (11)$$


Total savings payments (one annuity amount):

$$AS = AFV \cdot \frac{i}{(1+i)^n - 1} \qquad (12)$$

So, from the five cost-component models above, the mathematical model for the total cost of funding repairs to damaged residential property is:

$$TC = TP + a \cdot PIBL + b \cdot DP + c \cdot AC + d \cdot AS, \qquad a, b, c, d \in \{0, 1\} \qquad (13)$$

or

$$TC = N \cdot VP \cdot \frac{BIR + FIR_k}{100} \cdot \frac{100 - DIS}{100} \cdot \frac{100 + ADD}{100} + PIBL + DR \sum_{j=1}^{m} PL_j + D \cdot \frac{r(1+r)^p}{(1+r)^p - 1} + AFV \cdot \frac{i}{(1+i)^n - 1} \qquad (14)$$
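As a rough illustration of how the components of TC combine, the following Python sketch (ours, not part of Hanak's studies) evaluates equation (14) for one parameter set. The credit rate r and savings rate i are taken per installment period, and the guard clauses simply switch a financing method off when it is not used:

    def total_cost(N, VP, BIR, FIR_k, DIS, ADD, PLs, UIBL, DR, D, r, p, AFV, i, n):
        """Total cost TC = TP + PIBL + DP + AC + AS, cf. equations (5)-(14)."""
        # Total insurance premium, eqs. (5)-(7), with CA = VP and IR = BIR + FIR_k (in %)
        TP = N * VP * (BIR + FIR_k) / 100 * (100 - DIS) / 100 * (100 + ADD) / 100
        # Payments above the upper insurance benefit limit, eqs. (8)-(9)
        PIBL = sum(max(PL - UIBL, 0) for PL in PLs)
        # Own-risk (deductible) payments, eq. (10)
        DP = DR * sum(PLs)
        # Annuity credit installments, eq. (11); zero when no credit is taken
        AC = D * r * (1 + r) ** p / ((1 + r) ** p - 1) if D > 0 else 0.0
        # Savings annuity payments, eq. (12); zero when no savings are created
        AS = AFV * i / ((1 + i) ** n - 1) if AFV > 0 else 0.0
        return TP + PIBL + DP + AC + AS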

3. Illustrations

3.1 Data Simulation

The data used for the illustration here are simulated data, with the parameters given in Table-1 as follows:

Table-1 Parameters of the Data Simulation

Symbol   Description                                    Value
VP       value of property                              <80,000,000; 1,000,000,000> IDR
PL       particular loss        nr.1                    6,550,500 IDR
                                nr.2                    5,973,100 IDR
                                nr.3                    9,837,500 IDR
DIS      deduction from premium / premium discount      0%
ADD      additional premium                             0%
IR       insurance rate                                 0.145%
k        flood zone ratio                               1 (flood zone)
UIBL     insurance benefit limit / maximum coverage     25,000,000 IDR
DR       deductible rate / own-risk level               10%
D        debt amount / loan amount                      <0; 8,000,000> IDR
r        credit rate / loan interest rate               12%
p        credit duration                                12 months
AFV      annuity future value                           <0; 5,000,000> IDR
i        savings rate / interest rate of savings        12%
n        savings duration                               12 months

3.2 Optimization Analysis

Optimization analysis is done by comparing two indicator ratios: the loss ratio and the expense (cost) ratio. A favorable condition is achieved when the loss ratio is greater than the cost ratio. The loss ratio is defined as follows:


$$LR = \frac{\sum_{j=1}^{m} PL_j}{VP} \qquad (15)$$

The cost ratio is defined as follows (with TC as in equation (13) or (14)):

$$CR = \frac{TC}{VP} \qquad (16)$$
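Reusing the total_cost sketch above, a hypothetical sweep (our illustration) traces LR and CR over the property-value interval of Table-1 for Situation 1. Treating the entire 0.145% insurance rate as BIR + FIR_k, and a 1% monthly rate for r and i, are our simplifying assumptions:

    PLs = [6_550_500, 5_973_100, 9_837_500]        # particular losses from Table-1, IDR
    for VP in range(80_000_000, 1_000_000_001, 115_000_000):
        LR = sum(PLs) / VP                          # eq. (15)
        TC = total_cost(N=1, VP=VP, BIR=0.145, FIR_k=0.0, DIS=0, ADD=0,
                        PLs=PLs, UIBL=25_000_000, DR=0.10,
                        D=0, r=0.01, p=12, AFV=0, i=0.01, n=12)  # Situation 1: D = AFV = 0
        CR = TC / VP                                # eq. (16)
        print(f"VP = {VP:>13,d}   LR = {LR:6.2%}   CR = {CR:6.2%}")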

The optimization analysis simulation is applied in a case study of the risk of property damage due to flooding in residential buildings. The sensitivity analysis focuses on only three factors: the value of the property, the annuity future value, and the amount of debt. All of these factors affect the model, and the three variables have discrete probability distributions. The LR and CR values are shown on charts for three different situations based on changes in the values of the selected factors (value of property, annuity future value, and amount of debt).

Situation 1: property value not fixed, debt amount fixed, and future value annuity fixed. VP is an input on the interval (80,000,000; 1,000,000,000); D is held constant at (0; 0); and AFV is held constant at (0; 0). The simulation results are shown graphically in Figure-1.

Situation 2: property value not fixed, debt amount not fixed, and future value annuity fixed. VP is an input on the interval (80,000,000; 10,000,000); D is optimized by the model on the interval (0; 8,000,000); and AFV is held constant at (0; 0). The simulation results are shown graphically in Figure-2.

Figure-1 Graph of functions LR and CR1 – Situation 1

Figure-2 Graph of functions LR and CR2 – Situation 2

Based on Figure-1, conditions are favorable if LR > 16%; they become ineffective when the LR curve is below the CR curve (LR < 16%), so a new financing method, credit, is added. Meanwhile, based on Figure-2, conditions are favorable if LR > 11%; they become ineffective when the LR curve is below the CR curve (LR < 11%), so a new financing method, savings, is added.

Situation 3: property value not fixed, debt amount not fixed, and annuity future value not fixed. VP is an input on the interval (1,000,000; 10,000,000); D is optimized by the model on the interval (0; 8,000,000); and AFV is optimized by the model on the interval (0; 5,000,000). The results are shown graphically in Figure-3.


Figure-3 Graph of functions LR and CR3 – Situation 3

Figure-4 Graph of functions LR, CR1, CR2, and CR3

Based on Figure-3, the slope of the CR curve changes, thereby reducing the inefficiency of insurance and credit in the condition where the LR curve is below the CR curve (LR < CR). Figure-4 illustrates the combined effect of the three types of financial reserve (insurance, credit, and savings) modeled mathematically: combining the three types of reserve financing makes the funding more efficient and effective.

3.3 Analysis of Results

The simulation results show the impact of three factors (the value of the property, the annuity future value, and the amount of debt) on optimizing the funding of repairs to residential property damage. The optimization analysis is processed through two indicator ratios: the loss ratio (LR) and the cost ratio (CR). The insured's aim is to pay as little as possible, while on the other hand there must be comprehensive cost recovery: all anticipated losses should be covered. Accordingly, a favorable condition occurs when LR > CR. In the simulation study this condition holds when LR > 3.68% (applicable to Situation 1, described by the LR and CR1 curves in Figure-4). If LR < 3.68%, insurance alone becomes ineffective. The first step to improve efficiency is to optimize the insurance deductible (Situation 2, described by the LR and CR2 curves in Figure-4). Optimizing the deductible value is hard because it requires complex, time-consuming calculations, and the optimization must be performed using the model above. In that case the efficiency condition is extended to LR > 3.46%. The influence of the optimized value is modest, but not negligible.

The problem is that the decrease in LR is greater than the decrease in CR. To create efficient funding for LR < 3.46%, it is necessary to add other types of financial support (the situation described by the LR and CR3 curves in Figure-4). This paper discusses only credit, but savings reserves can be used as well. Even when LR < CR, credit changes the slope of the CR curve and thereby clearly reduces the inefficiency of insurance alone.

The optimization of funds is shown in Figure-4. The graph illustrates the combined effects of the types of financial support captured by the mathematical models. The results obtained apply solely to the simulated case study analyzed; for different parameters (value of the property, the burden of loss, etc.) the results will differ. However, the principle of the model lies in changing the slope of the CR curve by adding and optimizing various types of funds. This optimization is made under conditions ensuring that costs are covered comprehensively. The impact of the optimization model becomes more specific when considering exposure to flooding in particular areas, expressed through flood zones, by type of insurance and reserve financing.

4. Conclusions

This paper has studied, through simulation, the factors that influence the optimization of the provision of funding for property damage repairs on residential buildings caused by floods. Two financing approaches are studied: ex ante risk financing methods (insurance and savings) and ex post risk financing methods (credit). Using these three instruments (insurance, savings, and credit), simulations were run to determine the characteristics of the factors that influence the optimization of the repair funding. The optimization analysis compares two indicator ratios, the loss ratio and the expense ratio. The simulation results for the combination of the three types of reserve financing (insurance, credit, and savings), modeled mathematically, show that combining them makes the funding more efficient and effective, i.e., closer to optimal.

References

Frensidy, Budi. (2005). Matematika Keuangan. Jakarta: Salemba Empat.

Alcrudo. (2003). Mathematical Modelling Techniques for Flood Propagation in Urban Areas. Working Paper. Universidad de Zaragoza, Spain.

Firdaus, Fahmi. (2011). 1598 Bencana Alam Terjadi di Indonesia. Online article. http://news.okezone.com/read/2011/12/30/337/549497/bnpb-1-598-bencana-alam-terjadi-ditahun-2011 (accessed 16 November 2012).

Friedman, D.G. (2005). Insurance and Natural Hazard. Working Paper. The Travelers Insurance Company, Hartford, Connecticut, USA.

Hanak, Tomas. (2009). Sensitivity Analysis of Selected Factors Affecting the Optimization of the Funds Financing Recovery from Property Damage on Residential Building. Nehnuteľnosti a bývanie. ISSN 1336-944X.

Hanak, Tomas. (2010). How to Ensure Sufficiency of Financial Backing to Cover Future Losses on Residential Buildings in an Efficient Way? Nehnuteľnosti a bývanie, Vol. 4, pp. 42-51. ISSN 1336-944X.

Irawan, D. & Riman. (2012). Apresiasi Kontraktor Dalam Penggunaan Asuransi Pada Pembangunan Konstruksi di Malang. Jurnal Widya Teknika, Vol. 20, No. 1, March 2012.

Jongman, B., Kreibich, H., Apel, H., Barredo, J.I., Bates, P.D., Feyen, L., Gericke, A., Neal, J., Aerts, J.C.J.H., & Ward, P.J. (2012). Comparative Flood Damage Model Assessment: Toward a European Approach. Natural Hazards and Earth System Sciences, 12, 3733-3752.

Marhusor, Hilda. (2004). Studi Perhitungan Premi Pada Asuransi Konstruksi Untuk Risiko Pada Banjir. Undergraduate thesis, Jurusan Matematika, FMIPA, ITB.

Merz, B., Kreibich, H., Schwarze, R., & Thieken, A. (2010). Assessment of Economic Flood Damage (Review Article). Natural Hazards and Earth System Sciences, 10, 1697-1724.

Sanders, R., Shaw, F., MacKay, H., Galy, H., & Foote, M. (2005). National Flood Modelling for Insurance Purposes: Using FSAR for Flood Risk Estimation in Europe. Hydrology & Earth System Sciences, 9(4), 446-456.

Shrubsole, D., Brooks, G., Halliday, R., Haque, E., Kumar, A., Lacroix, J., Rasid, H., Rousselle, J., & Simonovic, S.P. (2003). An Assessment of Flood Risk Management in Canada. Working Paper No. 28. Institute for Catastrophic Loss Reduction.


Analysis of Variables Affecting the Movement of Indeks Harga Saham Gabungan (IHSG) in the Indonesia Stock Exchange Using the Stepwise Method

Riaman, SUKONO, Mandila Ridha AGUNG

Riaman, University of Padjadjaran, Indonesia
Firman SUKONO, University of Padjadjaran, Indonesia
Mandila Ridha AGUNG, University of Padjadjaran, Indonesia

Abstract: The stock market is one of the economic drivers of a country, because it provides capital facilities and the accumulation of long-term funds, directed at increasing community participation in mobilizing funds to support the financing of the national economy. Almost all industries in the country are represented on the stock market, so fluctuations in listed stock prices are reflected through the movement of an index, better known as the Indeks Harga Saham Gabungan (IHSG). The IHSG is affected by internal and external factors. Factors from overseas include foreign exchange indices, crude oil prices, and overseas market sentiment; domestic factors usually come from foreign exchange rates. This study examines the influence of both internal and external factors on the IHSG, particularly on the Bursa Efek Indonesia (BEI).

Keywords: IHSG, internal IHSG factors, external IHSG factors

1. Background of the Problem

The global economic crisis had a significant impact on the development of the capital market in Indonesia. The impact of the world financial crisis, better known as the global economic crisis, which originated in the United States, was strongly felt in Indonesia, since a large share of Indonesian exports goes to the U.S. Among the most visible impacts of the American economic crisis were the weakening of the rupiah against the dollar, an increasingly unhealthy Indeks Harga Saham Gabungan (IHSG), and hampered exports due to reduced demand from the U.S. market itself. In addition, the closing of the Bursa Efek Indonesia (BEI) for a few days and the suspension of stock trading, the first in its history, reflect how large the impact of this global problem was.

The capital market is one of the economic movers of a country, because it is a tool for capital formation and the accumulation of long-term funds, directed at increasing public participation in the mobilization of funds to support the financing of national development. In addition, the stock market is also representative for assessing the condition of the companies in a country, because almost all industry sectors in the country are represented on the capital market. Whether capital markets are rising (bullish) or falling (bearish) is seen from the rise and fall of listed stock prices, reflected through the movement of an index, better known as the Indeks Harga Saham Gabungan (IHSG). The IHSG is the value used to measure the combined performance of all stocks (companies/issuers) listed on the Bursa Efek Indonesia (BEI).

It summarizes the simultaneous and complex effects of various influencing factors, primarily economic phenomena. The IHSG is even used today as a barometer of the economic health of a country and as a foundation for statistical analysis of current market conditions (Widoatmojo, S. 1996:189). Meanwhile, according to Ang (1997:14.6), the Indeks Harga Saham Gabungan (IHSG/stock price index) is a value used to measure the performance of stocks listed on the stock exchanges. The IHSG is issued by the respective stock exchanges and also officially issued by private institutions such as the financial media, financial institutions, and others.

This research demonstrates the variables that affect the movement of the Indeks Harga Saham Gabungan (IHSG), such as foreign exchange indices, oil prices, exchange rates, interest rates, and inflation on the Bursa Efek Indonesia (BEI), so that we can know which variables affect the movement of the IHSG the most.

Based on the background described above, the issues to be discussed can be identified as follows:

1. What is the influence of foreign stock indices, oil prices, and the exchange rate on the Indeks Harga Saham Gabungan (IHSG)?
2. Which variables are the most dominant in the movement of the Indeks Harga Saham Gabungan (IHSG)?

2. Theory and Methods

Indeks Harga Saham Gabungan

The IHSG is the earliest published index, reflecting the development of prices on the stock exchange in general. The IHSG tracks changes in stock prices, both common and preferred stock, relative to prices at the base calculation date. The unit of change in a stock price index is the point. If today's IHSG on the BEJ is 1800 points while the previous day's was 1810 points, the index is said to have fallen 10 points.

The IHSG is calculated as a market-value-weighted average (market value weighted average index). First, each share is weighted by its market value: the number of shares multiplied by the share price. This value is then compared to the overall market value to obtain the weight. The index calculation method on the BEI is as follows:

$$\text{IHSG's Change} = \frac{\text{Current Market Capitalization}}{\text{Base Time Market Capitalization}} \times 100\%$$
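For instance, a minimal computation with made-up capitalization figures (placeholders, not BEI data):

    # Illustrative numbers only: index change as the ratio of current to base
    # market capitalization, per the formula above.
    current_cap = 4_128_000e9   # current market capitalization (assumed)
    base_cap    = 2_293_000e9   # base-time market capitalization (assumed)
    ihsg = current_cap / base_cap * 100
    print(f"IHSG = {ihsg:.2f} points")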

Theory of Linear Regression Analysis

Linear regression analysis is a study involving a functional relationship between variables in the data, expressed in mathematical equations (Sudjana, 2005:310). In use, simple linear regression analysis often cannot represent more complex issues, because it involves only one independent variable; it is therefore developed into multiple linear regression analysis.

The multiple linear regression model can generally be written in matrix notation as follows:

$$Y = X\beta + \varepsilon \qquad (1)$$

where Y = dependent variable vector (n × 1), X = matrix of independent variables (n × p), β = parameter vector (p × 1), and ε = error vector (n × 1), with p = k + 1, k = number of independent variables, and n = number of data points. The coefficient vector β is a parameter that must be estimated.


Method of Least Squares

The elements of the parameter vector in the linear regression model of equation (1) are unknown, so the model must be estimated. A common method used to estimate the regression parameters is the method of least squares (LSM). Its main principle is to estimate the parameters by minimizing the residual sum of squares.

Equation (1) is the population regression model; the sample regression model is expressed as $\hat{Y} = X\hat{\beta} + e$, or elementwise:

$$Y_i = \hat{\beta}_0 + \hat{\beta}_1 X_{i1} + \hat{\beta}_2 X_{i2} + \cdots + \hat{\beta}_k X_{ik} + e_i \qquad (2)$$

with $i = 1, \ldots, n$. From equation (2) the residuals are obtained as:

$$e_i = Y_i - (\hat{\beta}_0 + \hat{\beta}_1 X_{i1} + \hat{\beta}_2 X_{i2} + \cdots + \hat{\beta}_k X_{ik}) \qquad (3)$$

The LSM principle is to minimize the following sum of squared residuals:

$$\sum_{i=1}^{n} e_i^2 = \sum_{i=1}^{n} \left( Y_i - \hat{\beta}_0 - \hat{\beta}_1 X_{i1} - \hat{\beta}_2 X_{i2} - \cdots - \hat{\beta}_k X_{ik} \right)^2 \qquad (4)$$

In compact form, the parameter estimates $\hat{\beta}_0, \hat{\beta}_1, \hat{\beta}_2, \ldots, \hat{\beta}_k$ obtained by the LSM are:

$$\hat{\beta} = (X^T X)^{-1} X^T Y$$
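A minimal numpy sketch of this estimator, using placeholder data rather than the study's series, is:

    import numpy as np

    # Minimal least-squares sketch with made-up data (not the study's data):
    # the first column of ones in X estimates the intercept beta_0.
    X = np.column_stack([np.ones(5), [1., 2., 3., 4., 5.], [2., 1., 4., 3., 5.]])
    Y = np.array([3., 5., 8., 9., 12.])

    beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)   # (X^T X)^{-1} X^T Y
    residuals = Y - X @ beta_hat
    print(beta_hat, (residuals ** 2).sum())        # estimates and SSR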

Sum of Squared Residuals

The sum of squares (SS) measures the variation around the mean value and consists of two sources: the regression sum of squares, which states the influence of the regression, and the residual sum of squares, which is the remainder that cannot be explained by the regression (Sembiring, 2003:45). The residual sum of squares can be written as:

$$SSR = \sum_{i=1}^{n} e_i^2$$

Multiple Regression Model t-Test

The t-test is used to show how far an individual independent variable explains the variation in the dependent (bound) variable. The steps to test the hypothesis with a t distribution are as follows:

1) Formulate the hypotheses:
H0: βi = 0, meaning the independent variable has no relationship with the dependent variable.
H1: βi ≠ 0 for some i, meaning the independent variable has a relationship with the dependent variable.

2) Determine the significance level or degree of certainty:
The significance level used is α = 1%, 5%, or 10%, with df = n − k, where df = degrees of freedom, n = number of samples, and k = number of regression coefficients.

3) Determine the decision regions, i.e., where the null hypothesis H0 is accepted or rejected, with the following criteria:

H0 is accepted if −t(α/2; n − k) ≤ tcount ≤ t(α/2; n − k), meaning there is no relationship between the dependent and independent variables.

H0 is rejected if tcount > t(α/2; n − k) or tcount < −t(α/2; n − k), meaning there is a relationship between the dependent and independent variables.

4) Determine the test statistic:
In this step we determine and calculate the test statistic to decide which hypothesis is accepted or rejected.

5) Take the decision:
In this step the decision is taken to accept or reject H0. We compare the tcount obtained in the previous step with the ttable value: if tcount is greater than ttable, H0 is rejected, and it can be concluded that the independent variable has a relationship with the dependent variable; if tcount is smaller than ttable, H0 is accepted, and it can be concluded that the independent variable has no relationship with the dependent variable.
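As a sketch, continuing the numpy example above (and using scipy only for the t quantile; the 5% level is an illustrative choice), the coefficient t-statistics can be computed as follows:

    from scipy import stats

    # Hedged continuation of the least-squares sketch above: coefficient t-tests.
    n_obs, n_par = X.shape
    s2 = (residuals ** 2).sum() / (n_obs - n_par)       # residual variance estimate
    cov_beta = s2 * np.linalg.inv(X.T @ X)              # estimated Var(beta_hat)
    t_count = beta_hat / np.sqrt(np.diag(cov_beta))
    t_table = stats.t.ppf(1 - 0.05 / 2, df=n_obs - n_par)   # two-sided, alpha = 5%
    print(t_count, t_table)   # reject H0: beta_i = 0 where |t_count| > t_table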

Multiple Regression Model F-Test

The F-test is conducted to determine the joint effect of the independent variables on the dependent variable. The steps to test the hypothesis are:

1) Formulate the hypotheses:
H0: β1 = β2 = β3 = ⋯ = βk = 0, meaning there is no relationship between the dependent and independent variables.
H1: at least one βj ≠ 0, meaning there is a relationship between the dependent and independent variables.

2) Determine the significance level:
The significance level used is α = 1%, 5%, or 10%.

3) Determine the decision region, i.e., where the null hypothesis is accepted or rejected:
H0 is accepted if Fcount ≤ Ftable, meaning the independent variables together have no relationship with the dependent variable.
H0 is rejected if Fcount > Ftable, meaning the independent variables together have a relationship with the dependent variable.

4) Determine the value of the F test statistic:

$$F = \frac{ASR}{ASE}$$

where ASR is the mean square regression and ASE is the mean square error. The F distribution is always positive.

5) Draw conclusions:
The decision is either acceptance or rejection of H0. The Fcount obtained in the previous step is compared with the Ftable value. If Fcount is greater than Ftable, H0 is rejected, and it can be concluded that there is a joint relationship between the independent variables and the dependent variable. If Fcount is less than or equal to Ftable, H0 is accepted, and it can be concluded that there is no joint relationship between the independent variables and the dependent variable.
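Continuing the same placeholder example, a sketch of the F-test computation is:

    # Hedged continuation: the overall F-test on the same fitted model.
    ASR = ((X @ beta_hat - Y.mean()) ** 2).sum() / (n_par - 1)   # mean square regression
    ASE = (residuals ** 2).sum() / (n_obs - n_par)               # mean square error
    F_count = ASR / ASE
    F_table = stats.f.ppf(1 - 0.05, dfn=n_par - 1, dfd=n_obs - n_par)
    print(F_count, F_table)   # reject H0 when F_count > F_table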

Stepwise Method

This method is a combination of two methods, forward selection and backward elimination, applied alternately. The stepwise method reaches a similar conclusion from a different direction, namely by entering variables one at a time into the regression equation. The order in which variables are entered is determined using the partial correlation coefficient as a measure of the importance of variables not yet in the equation.

Stepwise regression analysis for selecting a good equation is similar to forward selection, except that at each step all variables in the equation are tested against the hypothesis H0: βi = 0, and any variable whose |ti|-statistic is less than the critical value is eliminated from the equation. The next variable is then added to the equation based on the partial correlation, as in the forward method. The stepwise selection continues until the equation contains no variables whose |ti|-statistic is below the corresponding critical t value and there are no variables left to enter into the equation.

Performing stepwise regression analysis with SPSS is very easy, as tools are already provided to process data by the stepwise method. Here, however, we explain the stepwise procedure without using the tools available in SPSS.

The first step of the stepwise method is to compute the correlation coefficient of each variable used. The variable whose value approaches |R| → 1 is entered into the model, and the regression model is estimated. Observe the value of t: if |tcount| > ttable(1−α;db) or p-value < α, then the first significant variable is retained in the model.

The next step is to calculate the partial correlations of the remaining independent variables with the dependent variable, with the variable entered in the first step as the control variable. From the results, the variable whose partial correlation coefficient approaches |R| → 1 is entered into the model, and the model is estimated using the first and second variables. If |tcount| > ttable(1−α;db) or p-value < α, the procedure continues by calculating the partial correlations of the remaining independent variables with the dependent variable, with the variables obtained from the previous estimate as control variables. If |tcount| > ttable(1−α;db) is not met by the last estimated variable, that variable is eliminated and not used in the model.

These steps are repeated until no independent variables remain. The final step is to build a model from the variables obtained in the previous process: the model is formed only from the variables that were not eliminated, i.e., those satisfying |tcount| > ttable(1−α;db) or p-value < α. The model thus established is the best model of the stepwise method; a compact sketch of this procedure is given below.
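This sketch is ours (the paper's calculations use SPSS 17). It assumes `candidates` is a dict mapping variable names to equal-length numpy float arrays, and implements the |R| → 1 entry rule as the largest absolute partial correlation:

    import numpy as np
    from scipy import stats

    def partial_corr(y, x, controls):
        """Correlation of y with x after removing the effect of the controls."""
        Z = np.column_stack([np.ones_like(y)] + controls)
        res_y = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
        res_x = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
        return np.corrcoef(res_y, res_x)[0, 1]

    def stepwise(y, candidates, alpha=0.05):
        """Forward pass of the stepwise method: enter the candidate whose
        |partial r| is closest to 1, keep it only if its t-test is significant."""
        selected = {}
        while candidates:
            controls = list(selected.values())
            name = max(candidates,
                       key=lambda k: abs(partial_corr(y, candidates[k], controls)))
            x = candidates.pop(name)
            X = np.column_stack([np.ones_like(y)] + controls + [x])
            b = np.linalg.lstsq(X, y, rcond=None)[0]
            df = len(y) - X.shape[1]
            resid = y - X @ b
            se_last = np.sqrt((resid @ resid) / df * np.linalg.inv(X.T @ X)[-1, -1])
            if abs(b[-1] / se_last) > stats.t.ppf(1 - alpha / 2, df):
                selected[name] = x              # significant: keep in the model
        return selected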

3. Data Processing

This study uses monthly data from 2003 to 2012. The data used in this paper are simulation data and secondary data. Simulation data are used to see patterns that can provide conclusions regarding the methods used. Secondary data were obtained from http://finance.yahoo.com/, http://www.bi.go.id/, and http://www.esdm.go.id/.

The data used are:

1) Indeks Harga Saham Gabungan (IHSG) data for Indonesia, 2003-2012.
2) Dow Jones Industrial Average data, 2003-2012.
3) Financial Times Stock Exchange index data, 2003-2012.
4) U.S. Dollar exchange rate data, 2003-2012.
5) British Pound exchange rate data, 2003-2012.
6) Crude oil price data, 2003-2012.


This section contains the data displayed in the attachment in the form of secondary data. The secondary data obtained were processed using SPSS 17. The Dow Jones Industrial Average, the United States stock exchange index, is hereinafter referred to as DJIA; the Financial Times Stock Exchange, Great Britain's stock exchange index, as FTSE; world crude oil data as MINYAK; the rupiah's exchange rate to the United States dollar as USD; and the rupiah's exchange rate to the Great Britain pound sterling as GBP.

3.1 Calculation Results of the Coefficient of Determination

The correlation coefficient of each variable is calculated to determine which variable will be entered first into the model. The result of the calculation is a value |R| between 0 and 1.

Table 1 Calculation Results of the Correlation Coefficients (Pearson, N = 2327)

            JKSE      DJIA      FTSE      MINYAK    USD       GBP
JKSE        1         .620**    .578**    .889**    -.064**   -.323**
DJIA        .620**    1         .900**    .661**    -.339**   .210**
FTSE        .578**    .900**    1         .632**    -.200**   .280**
MINYAK      .889**    .661**    .632**    1         -.060**   -.099**
USD         -.064**   -.339**   -.200**   -.060**   1         .406**
GBP         -.323**   .210**    .280**    -.099**   .406**    1

**. Correlation is significant at the 0.01 level (2-tailed). All two-tailed significance values are ≤ .004.

Based on Table 1, the |R| value closest to 1 is that of the oil price (MINYAK), so the oil price is the first variable entered into the model.

3.2 First Stage Model Estimation Results

The oil price was the first variable entered into the model. The next step is to estimate the model with this variable; the estimation in this step uses the Enter method in the SPSS tools.

Table 2 First Stage Estimation Results

Coefficients^a
Model         B          Std. Error   Beta     t         Sig.
(Constant)    -440.386   29.351                -15.004   .000
MINYAK        36.484     .390         .889     93.564    .000
a. Dependent Variable: JKSE

Based on Table 2, |tcount| > ttable(1−α;db) (or p-value < α), so the oil price is retained in the model.

3.3 First Stage Partial Correlation Results

The next step is the partial correlation calculation. Partial correlations are calculated using as control variable the variable already in the model, namely the oil price.

Table 3 First Stage Partial Correlation Results (control variable: MINYAK; df = 2324)

            JKSE      DJIA      FTSE      USD       GBP
JKSE        1.000     .096      .044      -.022     -.515
DJIA        .096      1.000     .830      -.399     .368
FTSE        .044      .830      1.000     -.209     .445
USD         -.022     -.399     -.209     1.000     .402
GBP         -.515     .368      .445      .402      1.000

Two-tailed significance is .000 for all entries except JKSE-FTSE (.034) and JKSE-USD (.281).

Table 3 shows the partial correlations with the oil price as the control variable. The value of |r_{xi,MINYAK}| closest to 1 is that of GBP, the pound sterling exchange rate.


3.4 Second Stage Model Estimation Results

Based on the results in Table 3, GBP, the pound sterling exchange rate, is entered as the second variable. We then estimate the model again, including two variables: the oil price and the pound sterling exchange rate.

Table 4 Second Stage Estimation Results

Coefficients^a
Model         B          Std. Error   Beta     t         Sig.
(Constant)    2318.355   98.454                23.548    .000
MINYAK        35.524     .336         .866     105.759   .000
GBP           -.167      .006         -.237    -28.983   .000
a. Dependent Variable: JKSE

Based on Table 4, |tcount| > ttable(1−α;db) (or p-value < α), so the oil price and the pound sterling exchange rate are retained in the model.

3.5 Second Stage Partial Correlation Results

At this stage the partial correlations of the remaining variables are recalculated, now using two control variables obtained from the previously estimated model. Because the oil price and the pound sterling exchange rate were not eliminated from the model, these two variables become the control variables for the partial correlation calculation at this stage.

Table 5 Second Stage Partial Correlation Results (control variables: MINYAK & GBP; df = 2323)

            JKSE      DJIA      FTSE      USD
JKSE        1.000     .358      .356      .236
DJIA        .358      1.000     .800      -.643
FTSE        .356      .800      1.000     -.473
USD         .236      -.643     -.473     1.000

All two-tailed significance values are .000.

Table 5 shows the partial correlations with the oil price and the pound sterling exchange rate as control variables. The value of |r_{xi,MINYAK;GBP}| closest to 1 is that of the DJIA, the U.S. stock price index.


3.6 Third Stage Model Estimation Results

The DJIA, the U.S. stock price index, is the next variable entered into the model. Based on the partial correlation results in Table 5, the DJIA enters the estimation, and the model is estimated with the DJIA as the third variable.

Table 6 Third Stage Estimation Results

Coefficients^a
Model         B          Std. Error   Beta     t         Sig.
(Constant)    1647.564   98.850                16.667    .000
MINYAK        29.830     .440         .727     67.848    .000
GBP           -.207      .006         -.293    -35.659   .000
DJIA          .155       .008         .202     18.482    .000
a. Dependent Variable: JKSE

Based on Table 6, |tcount| > ttable(1−α;db) (or p-value < α), so the oil price, the pound sterling exchange rate, and the U.S. stock index are maintained in the model.

3.7 Third Stage Partial Correlation Results

The next stage recalculates the partial correlations, this time using three control variables, since the DJIA was retained in the model.

Table 7 Third Stage Partial Correlation Results (control variables: MINYAK & GBP & DJIA; df = 2322)

            JKSE      FTSE      USD
JKSE        1.000     .124      .652
FTSE        .124      1.000     .090
USD         .652      .090      1.000

All two-tailed significance values are .000.

Table 7 shows the partial correlations with the oil price, the pound sterling exchange rate, and the U.S. stock price index as control variables. The value of |r_{xi,MINYAK;GBP;DJIA}| closest to 1 is that of USD, the U.S. dollar exchange rate. This variable is subsequently incorporated into the model to be estimated.

3.8 Fourth Stage Model Estimation Results

USD, the U.S. dollar exchange rate, is the fourth variable entered. The model is then estimated jointly with the three variables entered previously.


Table 8 Fourth Stage Estimation Results

Coefficients^a
Model         B           Std. Error   Beta     t         Sig.
(Constant)    -3381.638   142.699               -23.698   .000
MINYAK        21.988      .383         .536     57.336    .000
GBP           -.360       .006         -.511    -62.643   .000
DJIA          .375        .008         .489     45.300    .000
USD           .602        .015         .342     41.422    .000
a. Dependent Variable: JKSE

Based on Table 8, |tcount| > ttable(1−α;db) (or p-value < α), so no variable is eliminated. Thus the oil price, the pound sterling exchange rate, the U.S. stock price index, and the U.S. dollar exchange rate all enter the model.

3.9 Last Stage Model Estimation Results

At this stage the model is estimated directly by entering the last variable, the FTSE, the UK stock price index. The partial correlation calculation is no longer needed, because this is the last variable not yet entered into the model. Therefore, at this stage the model is estimated with all variables.

Table 9 Last Stage Estimation Results

Coefficients^a
Model         B           Std. Error   Beta     t         Sig.
(Constant)    -3319.035   142.982               -23.213   .000
MINYAK        21.777      .385         .531     56.497    .000
GBP           -.364       .006         -.516    -62.761   .000
DJIA          .338        .012         .441     27.783    .000
USD           .596        .015         .338     41.025    .000
FTSE          .090        .021         .058     4.181     .000
a. Dependent Variable: JKSE

Based on Table 9, |tcount| > ttable(1−α;db) (or p-value < α). Because these conditions are met, no variable is eliminated in this final estimation. From the estimation results, the regression equation of the model is:

JKSE = −3319.035 + 21.777 · OIL − 0.364 · GBP + 0.338 · DJIA + 0.596 · USD + 0.090 · FTSE

From this regression model, if all five independent variables were zero, the IHSG would equal the intercept, −3319.035. In this model, the price of oil greatly affects the movement of the composite stock price index (IHSG), as the coefficient of the oil price is quite large.
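As an illustration only, the fitted equation can be evaluated for made-up input values (placeholders, not observations from the study period):

    # Evaluate the fitted regression equation from Table 9 at hypothetical inputs.
    def jkse_hat(oil, gbp, djia, usd, ftse):
        return (-3319.035 + 21.777 * oil - 0.364 * gbp
                + 0.338 * djia + 0.596 * usd + 0.090 * ftse)

    print(jkse_hat(oil=100.0, gbp=14_500, djia=13_000, usd=9_500, ftse=5_800))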


4. CONCLUSION

Based on the model, the coefficient of OIL has the greatest value, meaning the oil price affects the movement of the index the most compared with the other variables. The variable that affects the IHSG the least is the DJIA, the U.S. stock price index, since its coefficient value is small.

References

Abimanyu, Y. (2004). Memahami Kurs Valuta Asing. Jakarta: Lembaga Penerbit Fakultas Ekonomi Universitas Indonesia.

Ang, R. (1997). Buku Pintar Pasar Modal Indonesia. Jakarta: Mediasoft Indonesia.

Fahmi, I. (2012). Pengantar Pasar Modal. Bandung: Alfabeta.

Fakhruddin, H. M. and Tjipto, D. (2001). Pasar Modal di Indonesia. Jakarta: Salemba Empat.

Moechdie, Abi H. and Ramelan, H. (2012). Gerbang Pintar Pasar Modal. Jakarta: PT. Jembatan Kapital Media.

Morrison, Donald F. (1983). Applied Linear Statistical Methods. New Jersey: Prentice-Hall.

Puspopranoto, S. (2004). Keuangan Perbankan dan Pasar Keuangan. Jakarta: Pustaka LP3ES.

Sembiring, R.K. (2003). Analisis Regresi, Edisi 2. Bandung: ITB.

Sudjana. (2005). Metoda Statistika, Edisi 6. Bandung: Tarsito.

Sugiyono. (2007). Metode Penelitian Kuantitatif Kualitatif dan R&D. Bandung: Alfabeta.

Widoatmojo, S. (1996). Cara Sehat Investasi di Pasar Modal. Jakarta: PT. Jurnalindo Aksara Grafika.

http://www.esdm.go.id/
http://finance.yahoo.com/
http://www.bi.go.id/


Learning Geometry Through Proofs

Stanley DEWANTO

University of Padjadjaran, Bandung, Indonesia

[email protected]

Abstract: Proofs in geometry are considered difficult and boring in school mathematics, so many students, and teachers as well, try to avoid them. Indeed, solving proofs requires comprehensive basic knowledge of geometry. This is preliminary research on how to learn geometry through proofs; the instruction consists of several stages. In three classes totaling 60 graduate preservice students over one semester, the students enjoyed and had fun learning proofs, deepened their knowledge of geometry, and increased their curiosity for learning geometry further.

Keywords: proofs, geometry, class instruction.

1. Introduction

Stephen S. Willoughby (1990) once stated:

“Change in the teaching of mathematics is needed, not because it has recently

deteriorated, nor even because it has always been bad (though it probably has been),

but, rather because the world is changing. The people who are going to solve the

problems of the present and future—or even understand and evaluate those problems

and solutions—must have a far better grasp of mathematics than most people have at

present, or have ever had in the past” (p. 4).

Conjecturing and demonstrating the logical validity of conjectures are the essence of

the creative act of doing mathematics (NCTM Standards, 2000).

It is an old story that geometry is a difficult and boring subject in learning mathematics. As we know, geometry is a subject avoided by most high school teachers (in Indonesia), since they perceive geometry as a difficult subject that needs a lot of spatial thinking (imagination) and deductive reasoning. As a matter of fact, geometry should be more interesting and easier to teach than algebra: look anywhere in or outside the class, and you will see a lot of geometric shapes! Furthermore, evidence in the mathematics curriculum shows that algebra has far more material than geometry, so teachers and students consequently spend much more time studying algebra than geometry.

Proof is one important subject in geometry, yet little of it is taught in high school or at the university (Burger & Shaughnessy, 1986; Usiskin, 1982). Solving proofs requires comprehensive and continuous concepts from start to finish. Numerous attempts have been made to improve students' proof skills by teaching formal proof, albeit largely unsuccessful ones (Harbeck, 1973; Ireland, 1974; Martin, 1971; Summa, 1982; Van Akin, 1972). Moreover, Senk (1985) examined over 1500 students and found that only about 30 percent of students enrolled in full-year geometry courses achieved a 75 percent mastery level in proof writing.

In class, geometry is generally taught in a teacher-centered approach, where the teacher is the center of attention and students are considered as a whole group. In certain sessions, problems are assigned, corrected, and handed back with little feedback. This approach might work for some students, but the problem is how to make everyone a better life-long problem solver. The purpose of this preliminary research is to describe an approach that is suitable for teaching geometry through proofs.


2. Theoretical Background

In the development of geometrical thinking specifically, where proofs are the core of geometry, students must pass through several stages. The initial stage is to identify the problem, that is, what is given, its picture, and what is asked; the middle stage is to retrieve all the knowledge they have around what is given, such as definitions and properties, and to approach the solution using their intuitive perceptions; and the final stage is to write down the proof rigorously. The aim is discovery or deduction: students make conjectures based on the picture or pattern. Proofs may be considered open-ended problems, in that there are many ways to construct a proof, so students can draw not only on their mathematical knowledge but also on their imagination and creativity. As Silver et al. (2005) stated, "You can learn more from solving one problem in many different ways than you can from solving many different problems."

Moore (1994) stated that there are seven major sources of students' difficulties in solving proofs, including: inability to state definitions, inadequate concept images, inability to use definitions to structure a proof, inability or unwillingness to generate examples, and difficulties with mathematical language and notation. Accordingly, in most geometry classes, teachers seldom discuss alternative solutions.

Solving proofs can also be considered a problem-solving procedure, but many mathematics teachers teach by having students copy standard solution methods, and not surprisingly students find it difficult when facing new problems (Harskamp and Suhre, 2006). More importantly, proofs are supposed to be a meaningful tool for learning mathematics, not a formal and boring exercise for students and teachers. Hanna (1995) argues that "the most important challenge to mathematics educators in the context of proof is to enhance its role in the classroom by finding more effective ways of using it as a vehicle to promote mathematical understanding."

3. Methodology

Three classes of preservice teachers in a master's program, comprising a total of 60 graduate students of mathematics education, participated in the Geometry course. The topic was 'how to prove Euclidean geometry problems', ranging from easy to hard. At the first and last sessions of the semester, students filled in a questionnaire about their perception of geometry.

The instructional design is as follows:

- The teacher reviewed basic properties of several two-dimensional shapes, including their definitions.

- The students were given 30 proof problems. Starting with the first 10, students tried to prove them individually or by discussion. For each problem, there might be different types of proof. The teacher circulated, providing small hints to students who asked for help. If students could not finish a problem, they completed it as an assignment for the next session.

- Next, the teacher checked all the students' assignments and carefully chose a particular problem (or asked the students whether any problem should be discussed) and three students who had solved it in different ways. These three students wrote their complete proofs on the board.

- The teacher, together with the students, analyzed these proofs, pointing out mistakes if they existed. As the last step, the teacher asked the students which way of proving was the most effective, the most comprehensible, the 'easiest', etc.

- This process continued until all problems had been proven.

- Throughout the course, students could use the dynamic power of GSP (Geometer's Sketchpad) to explore and arrive at conjectures.


4. Result and Discussion

Initially, students felt scared and not confident in learning geometry, and it appeared that lack of geometry concepts was the problem. Slowly, through proving problems, they came to comprehend the concepts and gained confidence. At the end of the semester, there was a huge difference in their perception of geometry. They concluded that mastering the basic concepts is the most fundamental part of learning geometry. In facing problems, they should know what is given and what is asked, use their intuition or reasoning to choose the appropriate rules or properties and apply them toward the goal, and use their skills to write down an accurate and rigorous proof. We believe that by writing proofs accurately and rigorously, students understand the underlying concepts and ideas.

Some benefits of this instruction:

- The dynamic geometry software helps students sharpen their intuition and reasoning.

- Proofs are open-ended problems: there are many ways to prove.

- Proofs can be considered a problem-solving procedure, so they are good exercise in approaching problems and making use of proper mathematical tools.

- Solving proofs rigorously requires comprehensive concepts, which means mastering the concepts.

- Students improve at writing proofs accurately by thinking logically.

- This approach is student-centered and process-oriented, with students communicating with each other scientifically.

Finally, the students found that learning geometry was fun, exciting, and challenging, even for the less successful students, and they felt ecstatic once a problem was proven. What is important for students is that they learn with pleasure, and that will enhance their curiosity to learn geometry!

References

Burger, W. F. and Shaughnessy, J.M. (1986). Characterizing the van Hiele Levels of Development in

Geometry. Journal for Research in Mathematics Education 17 :31-48.

Hanna, G. (1995). Some pedagogical Aspects of Proof. Interchange 21: 6-13.

Harbeck, S.C.A. (1973). Experimental Study of the Effect of Two Proof Formats in High School

geometry on Critical Thinking and Selected Student Attitudes. Ph.D diss. Dissertation Abstracts

International 33 :4243A.

Harskamp, E. and Suhre, C. (2006). Improving Mathematical Problem Solving: A Computerized

Approach. Computers in Human Behavior, 22, 801-815.

Ireland, S.H. (1974). The Effects of a One-Semester Geometry Course, Which Emphasizes the Nature

of Proof on Student Comprehension of Deductive Processes. Ph.D diss. Dissertation Abstracts

International 35 :102A-103A.

Martin, R.C. (1971). A Study of Methods of Structuring a Proof as an Aid to the Development of Critical

Thinking Skills in High School Geometry. Ph.D diss. Dissertation Abstracts International 31

:5875A.

Moore, R.C. (1994). Making the Transition to Formal Proof. Educational Studies in Mathematics, 27,

249-266.

NCTM. (2000). Principles and Standards for School Mathematics. Reston, VA: The National Council

of Teachers of Mathematics, Inc.

Senk, S. L. (1985). How Well Do Students Write Geometry Proofs? Mathematics Teacher 78

(September 1985):448-56.

Silver, E.A., et al. (2005). Moving from Rhetoric to Praxis: Issues Faced by Teachers in Having

Students Consider Multiple Solutions for Problems in the Mathematics Classroom. Journal of

Mathematical Behavior, 24, 287-301.

Summa, D.F. (1982). The Effects of Proof Format, Problem Structure, and the Type of Given

Information on Achievement and Efficiency in Geometric Proof. Ph.D diss. Dissertation Abstracts

International 42 :3084A.


Usiskin, Z. (1982) Van Hiele Levels and Achievement in Secondary School Geometry. Final report of

the Cognitive Development and Achievement in Secondary School Geometry Project. Chicago:

University of Chicago, Department of Education.

Van Akin, E. F. (1972). An Experimental Evaluation of Structure in Proof in High School Geometry.

Ph.D diss. Dissertation Abstracts International 33 (1972):1425A.

Willoughby, S. (1990). Mathematics education for a changing world. ASCD: Alexandria, VA.

Ziemek, T.R (2010). Evaluating the Effectiveness of Orientation Indicators with an Awareness of

Individual Differences. Ph.D dissertation. Univ of Utah.


Mathematical Modeling in Inflammatory Dental Pulp on Periapical Radiographs

Supian, S.1, Nurfuadah, P.2, Sitam, S.3, Oscanda, F.4, Rukmo, M.5, and Sholahuddin, A.6

1,6 Department of Mathematics, Faculty of Mathematics and Natural Sciences, University of Padjadjaran, Bandung, West Java, INDONESIA; [email protected], [email protected], [email protected]
2,3,4 Department of Dentomaxillofacial Radiology, Faculty of Dentistry, University of Padjadjaran, Bandung, West Java, INDONESIA; [email protected], [email protected], [email protected]
5 Department of Conservative Dentistry, Faculty of Dentistry, University of Airlangga, Surabaya, East Java, INDONESIA; [email protected]

Abstract: This research describes a mathematical model of dental pulp inflammation. The reaction can be either reversible or irreversible pulpitis, depending on the intensity of the stimulation, the severity of the damaged tissue, and the host response. It causes pain ranging from the lightest, namely sensitive-tooth complaints, to the most severe, spontaneous pain that is difficult to localize. The aim of this research is to obtain the characteristic function of the level of dental pulp inflammation (reversible and irreversible pulpitis) based on histogram analysis of periapical radiographs.

Keywords: model, digital image processing, pulpitis, histogram, periapical, radiographs

1. Introduction

A mathematical model is a description of a system using mathematical concepts and language. The process of developing a mathematical model is termed mathematical modelling. Mathematical models are used not only in the natural sciences (such as physics, biology, earth science, meteorology) and engineering disciplines (e.g., computer science, artificial intelligence), but also in the social sciences (such as economics, psychology, sociology and political science); physicists, engineers, statisticians, operations research analysts, economists and medical practitioners (dentists) use mathematical models most extensively. A model may help to explain a system, to study the effects of its different components, and to make predictions about behaviour.

Mathematical models can take many forms, including but not limited to dynamical systems, statistical models, differential equations, or game-theoretic models. These and other types of models can overlap, with a given model involving a variety of abstract structures. In general, mathematical models may include logical models, as far as logic is taken as a part of mathematics. In many cases, the quality of a scientific field depends on how well the mathematical models developed on the theoretical side agree with the results of repeatable experiments. Lack of agreement between theoretical mathematical models and experimental measurements often leads to important advances as better theories are developed. (Sudradjat, 2013)

The main limitation of conventional intraoral radiographs for dentoalveolar disease imaging is that they represent a 3D structure in a 2D image. This limitation also applies to caries, the pulp, and the periodontium (Tyndall and Rathore, 2008). Technological development, especially in computer and information technology, has affected dental radiology (White and Goaz, 2004). Radiological imaging, one of the benefits of this development, is able to detect 70% of lesions. In digital imaging there are two basic forms: indirect digital imaging and direct digital imaging (Langlais, 2004). Indirect digital imaging began to be used after reports that there were no differences between indirect and direct digital imaging in measuring demineralization below the enamel surface. The same procedure was also reported as successful by Eberhard et al. in monitoring in vitro dental demineralization, and by Ortman et al. in detecting alveolar bone defect changes with bone loss of about 1-5%.

This paper presents a mathematical model of dental pulp inflammation. The reaction can be either reversible or irreversible pulpitis, depending on the intensity of the stimulation, the severity of the damaged tissue, and the host response. It causes pain ranging from the lightest, namely sensitive-tooth complaints, to the most severe, spontaneous pain that is difficult to localize. The aim of this research is to obtain the characteristic function of the level of dental pulp inflammation (necrosis, pulpitis, and normal) based on histogram analysis of periapical radiographs.

2. Materials and Methods

2.1 Digital Image

Digital image is a continous image ( , )f x y that have been mapped into a discrete image including its

properties (i.e., spasial coordinate and brightness level). Digital image is a M x N matrix which its rows

and colums value correspond to pixel value as shown in equation (1). (Munir, R., 2004).

$$X = f(x,y) = \begin{bmatrix} f(0,0) & f(0,1) & \cdots & f(0,N-1) \\ f(1,0) & f(1,1) & \cdots & f(1,N-1) \\ \vdots & \vdots & \ddots & \vdots \\ f(M-1,0) & f(M-1,1) & \cdots & f(M-1,N-1) \end{bmatrix} \qquad (1)$$

It is usual to digitize the values of the image function f(x, y), in addition to its spatial coordinates. This process of quantisation involves replacing a continuously varying f(x, y) with a discrete set of quantisation levels. The accuracy with which variations in f(x, y) are represented is determined by the number of quantisation levels that we use; the more levels we use, the better the approximation.

Conventionally, a set of n quantisation levels comprises the integers 0, 1, 2, ..., n - 1; 0 and n - 1 are usually displayed or printed as black and white, respectively, with intermediate levels rendered in various shades of grey. Quantisation levels are therefore commonly referred to as grey levels. The collective term for all the grey levels, ranging from black to white, is a grayscale.

For convenient and efficient processing by a computer, the number of grey levels n is usually an integral power of two. We may write

$$n = 2^K \qquad (2)$$

where K is the number of bits used for quantisation. K is typically 8, giving us images with 256 possible grey levels ranging from 0 (black) to 255 (white) (Phillips, 1994).
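As a concrete illustration (ours, not from the source; the toy array and the function name quantise are invented for the example), mapping a continuous-valued image onto n = 2^K grey levels can be sketched as follows:

# Sketch: quantising image intensities into n = 2**K grey levels with NumPy.
import numpy as np

def quantise(image, K=8):
    """Map intensities in [0.0, 1.0] onto integer grey levels 0 .. 2**K - 1."""
    n = 2 ** K                                   # number of quantisation levels, eq. (2)
    levels = np.clip(np.round(image * (n - 1)), 0, n - 1)
    return levels.astype(np.uint8 if K <= 8 else np.uint16)

f = np.random.default_rng(0).random((4, 4))      # a toy continuous image f(x, y)
print(quantise(f, K=8))                          # grey levels from 0 (black) to 255 (white)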

Quantitative analysis shows that the average grayscale pixel value can be used to observe remineralization of dental caries and lesion status. Quantitative values correlate with lesion status in subjective analysis: higher values indicate remineralization of the lesion, lower values indicate demineralization, and grayscale values close to 128 indicate a stable lesion. Quantitative grading of dental caries lesions does not depend on the observer, because the operator's job is limited to selecting the ROI, and the software automatically shows the grayscale pixel values (Carneiro LS, et al., 2009).

We can get the ROI (Region of Interest) value for grading demineralization below the enamel surface by setting the caries region as the ROI. In some cases, lesion boundaries become harder to set because the radiolucency from the caries lesion is not well defined, especially when there is overlapping. These issues cannot be fully avoided in daily clinical routine, but they are important in research design, to assure operator precision in measurement so that the ROI does not drift because the operator chooses a bigger or smaller lesion (Wenzel A, 2002). We then continue by measuring grayscale pixel values using a


histogram of the selected area. The average and standard deviation of the grayscale pixel values can also be read from the software (Carneiro LS, et al., 2009).
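The ROI measurement just described can be sketched in code as follows (our illustration; the image and the ROI coordinates are arbitrary placeholders, not data from the study):

# Sketch: mean, standard deviation, and histogram of grey values inside an ROI.
import numpy as np

def roi_statistics(image, top, left, height, width):
    """Return (mean, std, histogram) of the grey levels in a rectangular ROI."""
    roi = image[top:top + height, left:left + width]
    hist, _ = np.histogram(roi, bins=256, range=(0, 256))
    return roi.mean(), roi.std(), hist

# Toy 8-bit image; in practice this would be the digitised periapical radiograph.
img = np.random.default_rng(1).integers(0, 256, size=(64, 64), dtype=np.uint8)
mean, std, hist = roi_statistics(img, top=10, left=10, height=20, width=20)
print(f"ROI mean grey value: {mean:.2f}, std: {std:.2f}")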

2.2 Mathematical model

Mathematical models can take many forms, including but not limited to dynamical systems, statistical

models, differential equations, or game theoretic models. These and other types of models can overlap,

with a given model involving a variety of abstract structures. In general, mathematical models may

include logical models, as far as logic is taken as a part of mathematics. In many cases, the quality of a

scientific field depends on how well the mathematical models developed on the theoretical side agree

with results of repeatable experiments. Lack of agreement between theoretical mathematical models

and experimental measurements often leads to important advances as better theories are developed. The basic modelling process is shown in Figure 1.

Figure 1 Process of mathematical modelling

This section describes the development of a mathematical model that can provide the basis for a decision support system to aid dentists (or patients) in making decisions about how often to perform (or receive) intraoral periapical radiographs. The model, which describes the initiation and progression of approximal dental pulp disease, uses a simple descriptive method, namely forming a characteristic function whose three levels are necrosis, pulpitis, and normal (Walton and Torabinejad, 2008).

As proposed by Walton and Torabinejad (2008), if x is the mean grayscale value, then we have the following characteristic function:

$$f(x) = \begin{cases} \text{necrosis}, & 0 \le x < 64, \\ \text{pulpitis}, & 64 \le x < 128, \\ \text{normal}, & x \ge 128. \end{cases} \qquad (3)$$

The study was conducted with a simple descriptive method, with sampling using accidental sampling. Data were obtained from the results of clinical examination; periapical radiographs with a reversible or irreversible pulpitis diagnosis were then digitized using Matlab V.7.0.4, from whose histogram graphs the characteristic function of dental pulp inflammation can be determined. For validation of model (3) we gathered a sample of 100 intraoral periapical radiographs of pathological pulp cases in the Department of Dentomaxillofacial Radiology, Dental Hospital, Padjadjaran University, Bandung, West Java, Indonesia, from September to December 2012, with accidental sampling.
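Characteristic function (3) translates directly into code; a minimal sketch (ours, with the thresholds taken from (3)) is:

# Sketch: classifying pulp condition from the mean ROI grey value, per equation (3).
def classify_pulp(mean_grey):
    """Apply the characteristic function: necrosis, pulpitis, or normal."""
    if 0 <= mean_grey < 64:
        return "necrosis"
    elif mean_grey < 128:
        return "pulpitis"
    return "normal"

for x in (48.68, 96.96, 172.80):      # the category means later reported in Table 4
    print(x, "->", classify_pulp(x))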


From the study of 100 samples, 29 samples were obtained in the normal category, 30 samples as pulpitis, 30 as necrosis, and 1 was unclear; the results are presented in Tables 1-3.

Table 1 Degree of grayscale: normal

No   Minimum   Maximum   Average

1 160 188 172.881

2 146 178 160.995

3 137 197 162.061

4 132 161 145.422

5 169 201 186.666

6 175 221 199.015

7 146 175 160.814

8 128 170 148.621

9 165 195 148.621

10 157 205 179.076

11 138 172 153.621

12 149 187 168.775

13 133 189 156.565

14 164 200 179.154

15 176 205 188.531

16 180 225 207.8

17 144 185 164.19

18 182 217 198.685

19 151 195 171.483

20 142 191 165.715

21 146 177 160.541

22 199 233 215.841

23 160 217 182.061

24 128 158 142.331

25 152 184 169.766

26 162 217 185.458

27 169 212 189.833

28 157 190 173.211

29 143 191 173.522


Table 2 Degree of grayscale: pulpitis

No   ROI A Min   ROI A Max   ROI B Min   ROI B Max   ROI A Average   ROI B Average

1 118 128 113 127 124.408 118.278

2 70 106 93 116 87.56 104.707

3 71 93 94 125 83.0608 110.235

4 81 104 78 127 93.1024 102.523

5 76 99 74 96 88.5269 81.512

6 74 96 79 99 84.0384 88.2896

7 105 126 104 125 113 113.883

8 107 124 103 119 115.874 110.925

9 95 107 96 108 101.363 103.014

10 114 128 98 113 121.158 106.72

11 90 112 83 102 99.5984 92.6736

12 101 128 101 128 114.117 115.632

13 80 107 80 113 95.5264 96.9056

14 70 91 80 100 80.6736 90.4096

15 81 104 74 101 93.1024 88.608

16 78 95 78 98 88.288 85.1216

17 92 120 97 126 105.086 110.798

18 80 115 106 128 94.856 116.68

19 74 91 93 110 83.2704 100.48

20 66 83 114 128 73.9312 122.322

21 64 80 68 88 69.8752 78.8352

22 87 112 100 120 99.2336 110.736

23 105 127 102 120 114.112 112.382

24 95 125 90 119 111.584 103.362

25 74 96 64 80 83.8992 70.56

26 77 110 75 93 88.1712 82.696

27 94 117 83 107 103.347 94.6736

28 63 79 54 77 70.9104 67.2384

29 74 126 65 99 95.5296 82.904

30 83 123 87 91 106.421 71.48


Table 3 Degree of grayscale: necrosis

No   ROI A Min   ROI A Max   ROI B Min   ROI B Max   ROI A Average   ROI B Average

1 40 75 50 73 57.435 63.574

2 22 49 36 67 36.241 50.998

3 8 55 23 53 27.565 35.895

4 27 59 38 71 40.213 52.787

5 14 68 29 66 33.745 43.064

6 10 58 27 68 29.328 40.533

7 19 70 35 77 51.694 52.871

8 7 60 20 50 31.952 33.483

9 20 51 27 65 36.003 48.305

10 31 51 37 59 39.516 47.684

11 31 64 38 74 43.294 52.843

12 24 64 30 61 39.855 45.554

13 41 70 44 73 55.918 56.561

14 41 69 51 78 55.288 63.818

15 47 80 46 79 60.427 61.065

16 37 77 54 77 51.997 64.153

17 41 71 49 76 50.33 60.934

18 40 65 37 72 53.084 57.535

19 37 57 53 72 47.652 60.868

20 46 68 53 74 55.887 62.944

21 42 68 54 81 53.945 63.802

22 27 45 39 61 37.434 51.333

23 38 68 45 67 37.798 47.98

24 7 35 10 31 51.022 56.026

25 21 38 45 68 29.075 55.76

26 15 36 46 75 26.284 58.647

27 30 54 52 72 42.914 62.598

28 21 43 34 67 30.872 48.195

29 43 63 54 70 51.483 60.385

30 29 48 57 76 38.282 64.684

Table 4 Means and standard deviations of damage levels

Characteristic   Minimum value   Maximum value   Mean     Standard deviation
Normal           142.33          215.84          172.80   18.58
Pulpitis          85.58          108.91           96.96    3.04
Necrosis          34.48           63.86           48.68    9.44


Figure 3 Histograms: (a) normal, (b) pulpitis, (c) necrosis

3. Conclusion and discussion

The conclusion of this research is that dental pulp inflammation can be described based on analysis of the histogram graph of the periapical radiograph: the pulpitis image is more radiolucent than normal pulp and more radiopaque than necrotic pulp. The results showed that the levels of dental pulp inflammation based on the mean grayscale value x, per characteristic function (3), are normal for x ≥ 128, pulpitis for 64 ≤ x < 128, and necrosis for 0 ≤ x < 64.

References

Carneiro LS, Nunes CA, Silva MA, Leles CR, and Mendonca EF. 2009. In vivo study of pixel grey measurement in digital subtraction radiography for monitoring caries remineralization. Dentomaxillofacial Radiology Journal, 38, 73-78. Available online at http://dmfr.birjournals.org (accessed 17 April 2012).

Langlais, R. P. 2004. Exercises in Oral Radiology and Interpretation, 4th edition. Missouri: W.B. Saunders Company. Pp. 67-68.

Michael Shwartz, Joseph S. Pliskin, Hans-Göran Gröndahl and Joseph Boffa, 1987, A Mathematical Model of Dental Caries Used to Evaluate the Benefits of Alternative Frequencies of Bitewing Radiographs. Management Science 33:771-783; doi:10.1287/mnsc.33.6.771.

Munir, R. 2004. Pengolahan Citra Digital dengan Pendekatan Algoritmik. Bandung : Penerbit Informatika.

Phillips, D. 1994. Image Processing in C. Kansas : R&D Publications, Inc.

Sudradjat, 2013, Model and Simulation, Department of Mathematics, Faculty of Mathematics and Natural Sciences, Universitas Padjadjaran.

Tyndall and Rathore, 2008. Cone-beam CT diagnostic applications: caries, periodontal bone assessment, and endodontic applications. Dent Clin North Am. 2008 Oct;52(4):825-41, vii. doi: 10.1016/j.cden.2008.05.002.

Walton, Richard E. and Torabinejad, Mahmoud. 2008. Prinsip dan Praktik Ilmu Endodonsia, 3rd ed. Translated by Narlan Sumawinata; Indonesian-language editor: Lilian Juwono. Jakarta: Penerbit Buku Kedokteran EGC.

Wenzel A. 2002. Computer-automated caries detection in digital bitewings: consistency of a program and its

influence on observer agreement. J Dent Res, 81, 590-593. Available online at

http://www.medicinaoral.com.

White and Goaz, 1994, Oral Radiology: Principles and Interpretation. Missouri: Mosby. Pp. 1-5.

Whaites, Eric. 2007. Essentials of Dental Radiology Principle and Interpretation. St. Louis : Mosby Inc.

Wilhelm Burger and Mark J. Burge, 2009, Principles of Digital Image Processing. Springer-Verlag London Limited.


The Selection Study of International Flight

Route with Lowest Operational Costs

Warsito a,*, Sudradjat SUPIAN b, Sukono c
a Department of Mathematics, FMIPA, Universitas Terbuka, Indonesia
b,c Department of Mathematics, FMIPA, Universitas Padjadjaran, Indonesia
* Email: [email protected]

Abstract: A mode of transportation is a very important requirement for human life, because humans in everyday life often move from one place to another, over both short and long distances. Aircraft are today the prime choice for long-distance travel, especially cross-country (international) travel. This has prompted many airline companies to offer international routes. Ticket price competition has also arisen among airline companies, so every airline should be able to control its aircraft operational cost management. The route of an aircraft from a national airport to the destination country's international airport needs to be selected at the lowest cost. In this paper, the selection of an aircraft route with the lowest operating cost is done using network analysis. The results of the analysis are expected to provide an overview of one method by which aircraft route selection with the lowest operating costs is carried out.

Keywords: Modes of transportation, international flights, lowest operating costs, network

analysis.

1. Introduction

Transportation is an essential requirement for human life today. Rapid technological advances have driven the need for people to become more mobile (Azizah et al., 2012). Humans are always looking for ways to shorten the journey from one place to another. Existing transportation facilities have become more advanced; since people were not fully satisfied with land and sea transportation (Lee, 2000; Bazargan, 2004), they turned to air transport. With aircraft, people can go from one place to another much faster than by using land or sea transportation (Forbes & Lederman, 2005). With the existence of air transport, a variety of flight service companies began to emerge. The various airlines offer not only many destinations, but also varied competitive fares (Gustia, 2010).

Airlines are currently given freedom in their management, so they can freely manage their service systems and flights, and even set their own flight fares (Neven et al., 2005). The freedom to manage service systems, flight routes, and fares has stimulated the rapid growth of new airline companies. Competition in the airline market is increasingly tight (Sarndal & Statton, 2011). This means every aviation service company must be increasingly careful and cautious in cost control, by implementing good management (Prihananto, 2007). One element of good management is the selection of aircraft routes. Route selection strongly influences the effectiveness and efficiency of flight planning, where each route the company runs must be feasible and meet the flight requirements. The flight requirements include, among others, that each required flight leg must be the optimal combination of the aircraft route, the number of aircraft planned to fly, as well as the required turn-around time for each aircraft (Yan & Young, 1996). The fulfillment of these requirements is reflected in the total operating cost of each flight of the aircraft (Bower & Kroo, 2008).

Operational costs affect the financial ability of the airline company. The higher the operating costs, the lower the profits; conversely, the lower the operating costs, the higher the profits (Bae et al., 2010). Analysis is therefore needed to determine the selected route (Renaud & Boctor, 2002). Based on the above, this paper aims to analyze the selection of international routes that provide the lowest operational costs. The selection of international routes with the lowest cost is analyzed using network analysis. As a numerical illustration, the selection of the flight route with the lowest operational cost is analyzed, as discussed in Section 3 of this paper.


2. Theoretical

In this section we explore the theoretical basis, which includes: the selection of air transport modes, international flights, aircraft operating costs, and the lowest-cost flight route problem.

2.1 Selection of Air Transport Modes

The behavior of public transport service users in selecting transport modes is generally determined by three factors, namely: trip characteristics, traveler characteristics, and the characteristics of the transportation system. Trip characteristics include distance and travel purpose; traveler characteristics include, among others, income level, ownership of a mode of transportation, and employment. Characteristics of the transport system include relative travel time, relative travel expenses, and relative service levels. Viewed from the side of the transport providers, mode-selection behavior can be influenced by a change in the characteristics of the transportation system (Prihananto, 2007).

Ortuzar (1994) states that the selection of the mode of transport is a very important part of transportation planning models. This is because mode selection plays a key role for the policy makers of transportation providers, especially air transport. The main factors affecting public transport services according to Morlok (1998) relate to travel time or travel speed, while the other quality factors can be neglected. Based on this theory of transport choice behavior, airline companies should be able to select aircraft routes considering these three characteristics.

2.2 International Flights

In the aviation world, a distinction is made between civil aircraft and state aircraft; the difference between the two is regulated under the 1919 Paris Convention, the 1928 Havana Convention, the 1944 Chicago Convention, the 1958 Geneva Convention, and the UN Convention UNCLOS. Various national laws, such as those of the United States, Australia, the Netherlands, Britain, and Indonesia, also distinguish between civil aircraft and state aircraft (Prihananto, 2007).

The differences between civil aircraft on the one hand and state aircraft on the other are based on the authority under which each type of aircraft is used by each agency. The distinction is important because, according to international law, the treatment of civil aircraft differs from the treatment of state aircraft. State aircraft have certain immunity rights not held by civil aircraft. This treatment is in line with the 1919 Paris Convention, the 1944 Chicago Convention, the 1958 Geneva Conventions, and the 1982 UNCLOS UN Convention mentioned above (Prihananto, 2007).

Thus, from the above description, international civil aviation is flight conducted by civil aircraft that carry nationality and registration marks and that, in times of peace, may cross the airspace of other member states of the international civil aviation organization (Prihananto, 2007).

2.3 Costs of Aircraft

Aircraft financing can basically be categorized into two kinds: non-operating expenses and operating costs. Non-operating costs are costs that have nothing to do with the operation of the aircraft, while operating costs are the costs of operating the aircraft (Prihananto, 2007).

Operating costs consist of direct operating costs (DOC) and indirect operating costs (IOC). The DOC is directly related to the cost of flying a plane, while the supporting IOC is heavily influenced by the airline company's management policy, although the IOC requirement can be estimated. Both types of operating costs (DOC and IOC) are a factor in considering the type of aircraft to be operated on a route that has been selected (Bazargan, 2004).


2.4 Direct Operating Costs

These are all expenses directly related to and dependent on the type of aircraft operated; they change with a different type of aircraft. Direct operating costs can be grouped into (Bazargan, 2004; Prihananto, 2007):

Flight operating cost: the cost that must be incurred in connection with the operation of the aircraft. This cost component includes several elements, namely crew costs, fuel costs, leasing costs, and insurance costs.

Maintenance cost: costs that must be incurred as a result of aircraft maintenance, consisting of labor costs and material costs.

Depreciation and amortization costs: depreciation is an expense due to the declining nominal value or price of the aircraft over time since the product came out, while amortization is a periodic expense allowance for costs such as cabin crew training and pre-development costs associated with the operation or use of the development of new aircraft.

2.5 Indirect Operating Costs

These are all fixed costs that are not affected by changes in aircraft type, because they do not depend directly on the operation of the aircraft. These costs consist of station and ground costs (the cost of handling and servicing aircraft on the ground), passenger service costs (passenger service charges), ticketing, sales and promotion costs, and administrative costs (Prihananto, 2007).

2.6 The Problem of the Lowest-Cost Flight Route

The lowest-cost problem is the problem of searching for the path or route with the lowest cost from the location/airport of origin to the location/destination airport. Such problems usually arise in air transport networks, both national and international (Haouari et al., 2009; Clarke et al., 1997). Since the problem of determining the lowest cost has many applications, the discussion starts from the model in general. Suppose there is a network of flight costs between international airports, from the origin airport up to the destination airport. The network of flight costs between these airports is generally as shown in Figure-1:

Figure-1 Network of Flight Costs Between International Airports

Suppose the cost of a flight from airport i to airport j is expressed as d_{ij}; the problem is to find the lowest total cost of flights from airport i to airport j, where the value of d_{ij} is fixed for each pair of airports i ≠ j. If an airport is not connected directly with another airport, that connection is given the value d_{ij} = ∞. The problem is to determine temporary cost values for the airports, and then to find the permanent (fixed) cost values. Once the fixed cost for the destination airport has been obtained, the lowest cost from the origin airport to the destination airport is obtained (Wu and Coppins, 1981).

The procedure to determine the lowest cost can be carried out in the following stages:


Step 1: Assign the destination airport a cost y_i of 0 (zero).

Step 2: Calculate the temporary costs of the other airports relative to the destination airport; the temporary cost is the cost charged for flying directly toward the destination airport. If an airport cannot reach the next airport directly, it is given the value ∞, a very large value.

Step 3: From the temporary costs calculated, select the lowest cost and declare it the permanent cost to the destination airport. If two temporary costs have the same value, select one of them as the lowest cost to the destination airport.

Step 4: Repeat the temporary cost calculations back toward the origin airport. From these calculations select min_j (d_{ij} + y_j), which gives the cost of the route with the lowest cost.

The calculation of the lowest cost with the above procedure can also be formulated as a linear program. The problem of determining the route with the lowest cost has the same mathematical form as the assignment or transshipment problem. If the origin airport is seen as the source and the destination airport as the demand, then the other airports can be viewed as transshipment points or locations (Wu & Coppins, 1981).

If it is assumed that each leg on the route to be traversed has a coefficient of 1 (one), then the problem can be formulated as a special assignment problem, so that the formulation of the lowest-cost route model is (Wu and Coppins, 1981):

$$\text{Minimize } \sum_{i=1}^{I} \sum_{j=1}^{J} d_{ij}\, x_{ij}, \quad i \ne j \qquad (1)$$

Subject to

$$\sum_{j=1}^{N} x_{ij} - \sum_{j=1}^{N} x_{ji} = v_i = \begin{cases} +1, & \text{for the origin airport}, \\ 0, & \text{for transiting airports}, \\ -1, & \text{for the final destination airport}, \end{cases} \qquad (2)$$

$$x_{ij} \ge 0 \text{ for all } i \text{ and } j, \text{ with } i \ne j.$$

The interpretation of this model is that the airline company wants to determine the lowest cost from the origin airport to the destination airport. The constraints specified in the model show that only one route is to be taken out of the airport of origin, and only one into the destination airport. Similarly, each route between the other airports is also taken at most once (Wu and Coppins, 1981). As an illustration of the lowest-cost route selection model, a numerical example is analyzed as follows.

3. Illustrations

Section 3 covers: a description of the problem, problem solving using network analysis, and the formulation as a linear program, as follows.

3.1 Description of Problem

Suppose an airline company wants to develop a flight that originates in Bandung and ends in Saint Petersburg. Along the way, transits are planned in several countries. Studies of the alternatives and the operating cost of each route have been conducted; example costs are given in Table-1 below.


Table-1 Flight Operational Costs of Each Alternative Route

No   City/Country     Costs to destination airport (by No)
1    Bandung          2: $70;  3: $65;  4: $75
2    Jakarta          4: $50;  5: $90
3    Batam            4: $35;  8: $85
4    Singapore        6: $60;  7: $65
5    Bangladesh       6: $40;  11: $115
6    Kuala Lumpur     5: $40;  7: $20;  9: $55
7    Thailand         6: $20;  8: $45;  9: $65
8    Brunei D.        7: $45;  10: $60
9    Moscow           11: $50
10   Japan            11: $70
11   S. Petersburg    (destination)

3.2 Problem Solving Using Network Analysis

Based on the operational cost data given in Table-1, a network model of the operational costs of each route between the cities/countries can be composed, as given in Figure-2 below.

Figure-2 Network of Costs for Each Alternative Route

Applying the procedure discussed in section 2.6 to determine the cost of the route between the airports gives the following:

Step 1: y_{11} = 0

Step 2: y_9 = d_{9,11} + y_{11} = $50 + 0 = $50, so y_9 = $50
y_{10} = d_{10,11} + y_{11} = $70 + 0 = $70, so y_{10} = $70
y_5 = d_{5,11} + y_{11} = $115 + 0 = $115, so y_5 = $115
y_6 = d_{6,9} + y_9 = $55 + $50 = $105, so y_6 = $105
y_7 = d_{7,9} + y_9 = $65 + $50 = $115, so y_7 = $115
y_8 = d_{8,10} + y_{10} = $60 + $70 = $130, so y_8 = $130
y_8 = d_{8,7} + y_7 = $45 + $115 = $160 ≥ $130, so y_8 = $130 is selected
y_7 = d_{7,8} + y_8 = $45 + $130 = $175 ≥ $115, so y_7 = $115 is selected
y_6 = d_{6,5} + y_5 = $40 + $115 = $155 ≥ $105, so y_6 = $105 is selected
y_5 = d_{5,6} + y_6 = $40 + $105 = $145 ≥ $115, so y_5 = $115 is selected


Step 3: The operating cost from airport 5 straight to airport 11 ($115) is lower than the cost through airports 6 and 9 ($145); therefore the direct flight is chosen, i.e. y_5 = $115. The operating cost from airport 6 to airport 11 through airport 9 is smaller than the operating cost from airport 6 to airport 11 through airport 5; therefore the route through airport 9 is chosen as the lowest-cost route, namely $105, i.e. y_6 = $105. Similarly, and for the same reason, the lowest operational cost from airport 7 to airport 11 runs through airport 9, i.e. y_7 = $115, and that from airport 8 to airport 11 runs through airport 10, i.e. y_8 = $130.

Step 4: y_4 = d_{4,6} + y_6 = $60 + $105 = $165, so y_4 = $165
y_4 = d_{4,7} + y_7 = $65 + $115 = $180 ≥ $165, so y_4 = $165 is selected
y_3 = d_{3,4} + y_4 = $35 + $165 = $200, so y_3 = $200
y_3 = d_{3,8} + y_8 = $85 + $130 = $215 ≥ $200, so y_3 = $200 is selected
y_2 = d_{2,4} + y_4 = $50 + $165 = $215, so y_2 = $215
y_2 = d_{2,5} + y_5 = $90 + $115 = $205 ≤ $215, so y_2 = $205 is selected
y_1 = d_{1,2} + y_2 = $70 + $205 = $275, so y_1 = $275
y_1 = d_{1,3} + y_3 = $65 + $200 = $265 ≤ $275, so y_1 = $265 is selected
y_1 = d_{1,4} + y_4 = $75 + $165 = $240 ≤ $265, so y_1 = $240 is selected

Based on these last calculations, the lowest operational cost is $240. The aircraft route with the lowest operating cost is therefore obtained through the cities/states y_1 → y_4 → y_6 → y_9 → y_{11}, with operating costs of $75 + $60 + $55 + $50 = $240. Route selection here is based solely on operating costs, so factors other than operational costs are not considered; they will be studied further in subsequent research.
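The backward labelling above is essentially a shortest-path (Dijkstra-type) computation from the destination. A minimal sketch (ours; the dictionary of arcs is transcribed from Table-1, and the function name is illustrative) that reproduces the $240 result:

# Sketch: verifying the $240 route with a Dijkstra-style search (heapq is standard library).
import heapq

# Directed arcs and costs from Table-1; nodes numbered 1 (Bandung) .. 11 (S. Petersburg).
cost = {(1,2):70, (1,3):65, (1,4):75, (2,4):50, (2,5):90, (3,4):35, (3,8):85,
        (4,6):60, (4,7):65, (5,6):40, (5,11):115, (6,5):40, (6,7):20, (6,9):55,
        (7,6):20, (7,8):45, (7,9):65, (8,7):45, (8,10):60, (9,11):50, (10,11):70}

def cheapest_route(origin, destination):
    """Return (total cost, route) with the lowest total cost."""
    heap = [(0, origin, [origin])]
    visited = set()
    while heap:
        total, node, route = heapq.heappop(heap)
        if node == destination:
            return total, route
        if node in visited:
            continue
        visited.add(node)
        for (i, j), d in cost.items():
            if i == node and j not in visited:
                heapq.heappush(heap, (total + d, j, route + [j]))
    return float("inf"), []

print(cheapest_route(1, 11))   # expected output: (240, [1, 4, 6, 9, 11])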

3.3 Formulation to Linear Programming

The lowest-operational-cost route selection problem described in section 3.1 can be formulated as a linear programming model. The linear programming model is formulated following the theoretical discussion in section 2.6 above. The objective function is formulated based on the operational cost data given in Table-1, or on the network given in Figure-2, while the constraint functions are formulated based on the directions of the network arcs shown in Figure-2, whose coefficients are described in Table-2 below.

Table-2 Coefficients of x_{ij} in the Constraint Functions

i \ j   1   2   3   4   5   6   7   8   9   10   11   v_i

1 0 +1 +1 +1 0 0 0 0 0 0 0 +1

2 -1 0 0 +1 +1 0 0 0 0 0 0 0

3 -1 0 0 +1 0 0 0 +1 0 0 0 0

4 -1 -1 -1 0 0 +1 +1 0 0 0 0 0

5 0 -1 0 0 0 +1 0 0 0 0 +1 0

6 0 0 0 -1 +1 0 +1 0 +1 0 0 0

7 0 0 0 -1 0 +1 0 +1 +1 0 0 0

8 0 0 -1 0 0 0 +1 0 0 +1 0 0

9 0 0 0 0 0 -1 -1 0 0 0 +1 0

10 0 0 0 0 0 0 0 -1 0 0 +1 0

11 0 0 0 0 -1 0 0 0 -1 -1 0 -1


The lowest-cost aircraft route selection problem can be expressed as a transshipment problem, so a linear programming formulation can be created using variables x_{ij}, where x_{ij} denotes the aircraft route leg from airport i to airport j. The objective function of the linear programming model is the minimization of operational costs from airport i to airport j. Because the flight flow out of a transit airport equals the flow entering it, the right-hand side of its constraint function equals zero; the constraint coefficient +1 expresses a flight flow out of an airport, and -1 expresses a flight flow into an airport.

Based on the operational cost data described in Table-1, or the network of flight alternatives given in Figure-2, the objective function of the linear programming model can be formulated as follows (costs in dollars):

Minimize
z = 70x_{1,2} + 65x_{1,3} + 75x_{1,4} + 50x_{2,4} + 90x_{2,5} + 35x_{3,4} + 85x_{3,8}
  + 60x_{4,6} + 65x_{4,7} + 40x_{5,6} + 115x_{5,11} + 40x_{6,5} + 20x_{6,7} + 55x_{6,9}
  + 20x_{7,6} + 45x_{7,8} + 65x_{7,9} + 45x_{8,7} + 60x_{8,10} + 50x_{9,11} + 70x_{10,11}

Subject to:
x_{1,2} + x_{1,3} + x_{1,4} = 1;
-x_{1,2} + x_{2,4} + x_{2,5} = 0;
-x_{1,3} + x_{3,4} + x_{3,8} = 0;
-x_{1,4} - x_{2,4} - x_{3,4} + x_{4,6} + x_{4,7} = 0;
-x_{2,5} + x_{5,6} + x_{5,11} = 0;
-x_{4,6} + x_{6,5} + x_{6,7} + x_{6,9} = 0;
-x_{4,7} + x_{7,6} + x_{7,8} + x_{7,9} = 0;
-x_{3,8} + x_{8,7} + x_{8,10} = 0;
-x_{6,9} - x_{7,9} + x_{9,11} = 0;
-x_{8,10} + x_{10,11} = 0;
x_{5,11} + x_{9,11} + x_{10,11} = 1;
x_{ij} ≥ 0; i, j = 1, ..., 11.

Using the coefficients described in Table-2, the constraint functions are formulated as above. Solving this linear programming model produces a minimum total operating cost of $240.
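As a cross-check, the same transshipment LP can be handed to an off-the-shelf solver. The sketch below is ours, not part of the original study; it assumes SciPy is available and applies full flow conservation on every arc, a standard variant of the coefficient scheme in Table-2:

# Sketch: solving the lowest-cost route LP with scipy.optimize.linprog.
import numpy as np
from scipy.optimize import linprog

# Arcs (i, j, cost in dollars) transcribed from Table-1; nodes numbered 1..11.
arcs = [(1,2,70), (1,3,65), (1,4,75), (2,4,50), (2,5,90), (3,4,35), (3,8,85),
        (4,6,60), (4,7,65), (5,6,40), (5,11,115), (6,5,40), (6,7,20), (6,9,55),
        (7,6,20), (7,8,45), (7,9,65), (8,7,45), (8,10,60), (9,11,50), (10,11,70)]

n_nodes = 11
c = np.array([cost for _, _, cost in arcs], dtype=float)

# Node-arc incidence matrix: +1 where an arc leaves a node, -1 where it enters.
A_eq = np.zeros((n_nodes, len(arcs)))
for k, (i, j, _) in enumerate(arcs):
    A_eq[i - 1, k] = 1.0
    A_eq[j - 1, k] = -1.0

# One unit of flow leaves Bandung (node 1) and arrives at S. Petersburg (node 11).
b_eq = np.zeros(n_nodes)
b_eq[0], b_eq[10] = 1.0, -1.0

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * len(arcs))
chosen = [arcs[k][:2] for k in range(len(arcs)) if res.x[k] > 0.5]
print(res.fun, chosen)   # expected: 240.0 and [(1, 4), (4, 6), (6, 9), (9, 11)]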

4. Conclusions

This paper has discussed the selection of international routes with the lowest operating costs using network analysis. Based on the problem description given, the alternative aircraft route network is as given in Figure-2. Solving the network in Figure-2 selects the aircraft route from Bandung airport to Saint Petersburg airport through the airports of Singapore, Kuala Lumpur, and Moscow. The route has the lowest total cost of $240. The lowest-operating-cost flight route selection problem can also be viewed as a transshipment problem and formulated as a linear programming model; solving the linear programming model likewise obtains the lowest total operating cost of $240.


References

Azizah, F., Rusdiansyah, A., & Indah, N.A. (2012). Perancangan Alat Bantu Pengambilan Keputusan Untuk

Penentuan Jumlah dan Rute Armada Pesawat Terbang. Kertas Kerja. Jurusan Teknik Industri, Institut

Teknologi Sepuluh Nopember, Surabaya.

Bae, K.H., Sherali, H.D., Kolling, C.P., Sarin, S.C., Trani, A.A. (2010). Integrated Airline Operations: Schedule

Design, Fleet Assignment, Aircraft Routing, and Crew Scheduling. Dissertation. Doctor of Philosophy in

Industrial and System Engineering. Faculty of the Virginia Polytechnic Institute and State University.

Blacksburg, Virginia.

Bower, G.C. & Kroo, I.M. (2008). Multi-Objective Aircraft Optimization for Minimum Cost and Emissions Over

Specific Route Networks. 26th International Congress of the Aeronautical Sciences. ICAS 2008.

Bazargan, M., (2004). Airline Operations and Scheduling. Hampshire: Ashgate Publishing Limited,

Burlington,USA.

Clarke, L., Johnson, E., Nemhauser, G., and Zhu, Z. (1997). The Aircraft Rotation Problem. Annals of Operations

Research 69(1997)33-46.

Forbes, S.J. & Lederman, M. (2005). The Role Regional Airlines in the U.S. Airline Industry. Working Paper.

Department of Economics, 9500 Gilman Drive, La Jolla, CA 92093-0508, USA.

Gustia, R.S. (2010). Penerapan Dynamic Programing Dalam Menentukan Rute Penerbangan International dari

Indonesia. Kertas Kerja. Program Studi Teknik Informatika, Institut Teknologi Bandung.

Haouari, M., Aissaoui, N. & Mansour, F.Z., (2009). Network flow-based approaches for integrated aircraft

fleeting and routing. European Journal of Operational Research, pp.591-99.

Lee, J.J. (2000). Historical and Future Trends in Aircraft Performance, Cost, and Emissions. Thesis. Master of

Science in Aeronautics and Astronautics and the Engineering System Division. Massachusetts Institute of

Technology.

Morlok, E.K. (1998). Introduction to Transportation Engineering and Planning. New York: McGraw-Hill Ltd.

Neven, D.J., Roller, L.H., & Zhang, Z. (2005). Endogenous Costs and Price-Cost Margin: An Application to the European Airline Industry. Working Paper. Graduate Institute of International Studies, University of Geneva.

Ortuzar, J.D. & Willumsen, L.G. (1994). Modelling Transport. Second Edition, New York: John Wiley & Sons.

Prihananto, D. (2007). Pemilihan Tipe Pesawat Terbang Untuk Rute Yogyakarta- Jakarta Berdasarkan Perkiraan

Biaya Operasional. Seminar Nasional Teknologi 2007 (SNT 2007). Yogyakarta 24 November 2007. ISSN:

1978-9777.

Renaud, J. & Boctor, F.F. (2002). A sweep-based algorithm for the fleet size and mix vehicle routing problem. European Journal of Operational Research, pp. 618-28.

Sarndal, C.E. & Statton, W.B. (2011). Factors Influencing Operating Cost in the Airline Industry. Working Paper. Centre for Transportation Studies, The University of British Columbia.

Wu, N. & Coppins, R. (1981). Linear Programming and Extensions. New York: McGraw-Hill Book Company.

Yan, S. & Young, H.F. (1996). A Decision Support Framework for Multi-Fleet Routing and Multi-Stop Flight Scheduling. Transpn Res.-A, Vol. 30, No. 5, pp. 379-398, 1996.


Scientific Debate Instructional to Enhance

Students Mathematical Communication,

Reasoning, and Connection Ability in the

Concept of Integral.

Yani RAMDANI

Bandung Islamic University

[email protected]

Abstract: This study examines the effect of scientific debate instructional strategy on the

enhancement students’ mathematical communication, reasoning, and connections ability in the

concept of Integral. This study is quasi-experimental with a static group comparison design

involves 96 students from Department of Mathematics Education. Research instruments

include student’ prior knowledge of mathematics (PAM), a test of mathematical

communication, reasoning and connection ability as well as teaching materials. The data are

analyzed by using Mann-Whitney U test, two-path ANOVA and Kruskal Wallis Test. The study

finds that the enhancement in mathematical communication and reasoning abilities in students

with the scientific debate is not significantly different from the conventional instruction.

Students’ mathematical connection ability that follows instruction with scientific debate

strategy is better than that of students who follow conventional instruction. There was no difference in the average rate of increase of students' mathematical communication and connection skills across the interactions of PAM with learning approach: there is no interaction between instructional factors and PAM factors on the increase in mathematical communication and connection skills. The enhancement of students' mathematical reasoning abilities with scientific debate, broken down by PAM, is not markedly different; on the other hand, the enhancement of students' mathematical reasoning abilities with conventional instruction, broken down by PAM, was considerably different. Under scientific debate, differences in students' educational background have no major effect on the enhancement of mathematical communication, reasoning, and connection ability, whereas under conventional instruction the effect is stronger: students with a Senior High School background enhanced their mathematical communication, reasoning, and connection abilities better than students from Islamic Senior High Schools.

Keywords: Scientific Debate, Communication, Reasoning, and Mathematical Connections.

1. Introduction

Integration and differentiation are the two important concepts in mathematics; integrals and derivatives are the two main operations in calculus. The principles of integration were formulated by Isaac Newton and Gottfried Leibniz in the 17th century by leveraging the close relationship that exists between the anti-derivative and the definite integral, a relationship that allows the actual value of an integral to be calculated more easily without needing to use a Riemann sum. This relationship is called the fundamental theorem of calculus. Through the fundamental theorem of calculus, which they independently developed, integration is connected with differentiation, so that the integral can be defined as an anti-derivative.

In France, the concept of the integral is introduced to secondary education students (17-18 years), presented in the traditional form of the definition via primitive functions. In 1972, integral calculus was introduced covering the definition of the Riemann sum for numerical functions of a real variable on a bounded interval, and the integrability theorem for continuous functions and monotone functions. After the 1982 reforms, the curriculum returned to viewing the integral as a primitive function and as the area under a positive function, and introduced approximating integral values with a variety of numerical methods.


In Indonesia, the concept of the integral is given to senior high school (SMU) students and, in higher education, in the Calculus 2 course. The integral abilities tested at senior high school and equivalent levels are: (1) calculating indefinite integrals, (2) calculating definite integrals of algebraic and trigonometric functions, (3) measuring the area under curves, and (4) calculating volumes of revolution. The abilities tested revolve around understanding of concepts and fall into the low-level category on the scale of high-order mathematical thinking. They are characterized by problems of the form: recall, apply a formula routinely, calculate, and apply simple formulas or concepts in simple or similar cases. In contrast, the competency standards (SKL) that must be achieved are to understand the integral concept for algebraic and trigonometric functions and to be able to apply it in solving problems.

Although the abilities tested fall into the low-level category and do not accord with the SKL, some studies show that even at this low level, student learning outcomes for the integral concept fall into the low category compared to other mathematical material. The low ability of students to understand the integral concept was reported by Orton (1983): integral material had the lowest average evaluation scores, namely 1.895 at the school level and 1.685 at the college level on a scale of 0 to 4, compared with other calculus material such as lines, limits, and derivatives. Sabella and Redish (2011) state that most college students in conventional classes have a superficial understanding and incomplete information about the basic concepts in calculus. Romberg and Tufte (1987) state that students view mathematics as a collection of static concepts and techniques to be solved step by step. In mathematics learning, students are only required to complete, to describe in graphic form, to locate, to evaluate, to define, and to quantify within an obvious model. They are rarely challenged to solve problems requiring high-order mathematical thinking (Ferrini-Mundy 627).

The results of the 2010 national examination (UN) trial given to 879 senior high school students in the city of Bandung showed that only 30.22% of students were able to answer the integral concept questions correctly. This condition certainly does not achieve group mastery. The 2011 UN trial results, which involved 1578 students in the city of Bandung, also demonstrated that students' ability in the integral concept is still low: only 6.7% of students answered correctly, compared with other calculus concepts such as limits (42.3%) and derivatives (11.5%). Students' failure to master integral material will have an impact on those who continue their education in Mathematics or Mathematics Education departments. One cause of students' low ability in the integral concept is that mathematics learning is presented as basic concepts, explanation of concepts through examples, and exercises on their solution. The learning process generally follows the pattern of presentation available in reference books. Such a learning process tends to encourage reproductive thinking, so the reasoning process that develops is more imitative. This situation gives little room to enhance high-order mathematical thinking and students' critical and creative thinking. Students tend to solve integral problems by looking at existing examples, so that when given a non-routine problem, students experience difficulties.

The development of high-order mathematical thinking ability is very important, because all disciplines and the world of work require a person to be able to: (1) present ideas through speech, writing, demonstration, and visual depiction in a variety of ways; (2) understand, interpret, and evaluate ideas presented orally, in writing, or in visual form; (3) construct, interpret, and connect different representations of ideas and relations; (4) make investigations and conjectures, formulate questions, draw conclusions, and evaluate information; and (5) produce and present convincing arguments (Secretary's Commission on Achieving Necessary Skills, 1991). These abilities are closely related to mathematical communication, reasoning, and connection ability.

In mathematics education, communication, reasoning, and connection abilities are high-order thinking abilities needed to solve mathematical problems and life issues in any situation, alongside critical, logical, and systematic thinking. This is consistent with the characteristics of mathematics as a discipline, reflected in the role of mathematics as a symbolic language and powerful communication tool that is short, solid, accurate, precise, and free of double meanings (Wahyudin, 2003). This statement indicates that mathematics has a very important role in the development of a person's thought patterns, as a representation of the understanding of mathematical


concepts, as a communication tool, and as a tool serving other fields of science. Through mathematical communication ability, students can exchange knowledge and clarify understanding. The communication process helps students to construct meaning, to complete their ideas, and to avoid misconceptions. The communication aspect also helps students to convey ideas both verbally and in writing. When a student is challenged and asked to argue, communicating the results of his or her thinking to others both orally and in writing, the student learns to explain and convince others, to listen to or explain the ideas of others, and gains the opportunity to develop his or her experience. Besides communication ability, another ability that should be developed is reasoning ability. A person's reasoning ability can be seen from the ability to overcome life issues: a person with high reasoning ability will always be able to quickly make decisions in solving the various problems of life. This capability is supported by the strength of reasoning needed to connect facts and evidence to arrive at a proper conclusion. Mathematical reasoning ability is not only necessary to resolve problems in the field of mathematics, but is also necessary to solve problems faced in life. Mathematical reasoning is needed when one is confronted with an issue in which arguments must be evaluated and a few feasible solutions selected. This condition implies that when one faces a number of statements or arguments related to the issue at hand, mathematical reasoning ability is required to judge or evaluate the statements before making a decision. Thus, a person's mathematical ability is used not only for calculation but also to provide arguments or claims that require logical presentation to ensure that the way of thinking is right. The development of reasoning ability is therefore essential for every student, as preparation for analysis before making a decision and for constructing arguments in one's defense.

Another important ability to be developed by students is mathematical connection ability. This connection ability also appears in students' ability to communicate and to reason. Mathematical connection ability is closely related to relational understanding: relational understanding requires people to understand more than one concept and to relate these concepts, while mathematical connection ability is the ability to connect a wide range of ideas within mathematics, with other areas, and with the real world.

The mathematical abilities developed above accord with the mathematical competencies proposed by Niss (in Kusumah, 2012:3), namely: (1) mathematical thinking and reasoning, (2) mathematical argumentation, (3) mathematical communication, (4) modelling, (5) problem posing and problem solving, (6) representation, (7) symbols, and (8) tools and technology. The NCTM (2000) has identified that communication, reasoning, and problem-solving abilities are important processes in mathematics instruction in the effort to solve mathematical problems.

Communication, reasoning, and mathematical connection abilities can only be achieved through instruction that enhances these capabilities in the cognitive, affective, and psychomotor domains. In Suryadi's study (2005) on the development of high-order mathematical thinking through an indirect approach, two fundamental things were identified as needing further study and research: the in-depth student-material relationship and the student-teacher relationship. That study found that, to encourage students' mental action, the instructional process must be preceded by a presentation containing problems that challenge students to think. The instructional process should also facilitate students in constructing knowledge or concepts themselves, so that students are able to rediscover knowledge (reinvention).

One learning model that can meet these demands is scientific debate. This is supported by the research results of Legrand et al. (1986), which revealed that applying scientific debate in learning can improve students' understanding of the integral concept on the final exam. Another result, indicated by Alibert et al. (1987), is that with scientific debate the majority of students attained mastery in understanding the integral concept; in addition, students could explore their knowledge where the solution was not an implemented algorithm.

In the application of scientific debate, students are trained to communicate knowledge and to sustain their arguments according to the truth of mathematical concepts through debate. This ability to argue spurs the development of mathematical reasoning and connection abilities, because students must be able to think logically and systematically, and be able to relate various concepts to sustain their arguments. This is consistent with the theory of constructivism, which states that students construct their own knowledge through interaction, conflict, and re-equilibration involving mathematical knowledge, other students, and a variety of issues. The interaction is governed by a lecturer through the choice of fundamental issues. Based on the above conditions, the researcher is interested in reviewing and analyzing the application of scientific debate, under the research title: "Scientific Debate Instructional to Enhance Students' Mathematical Communication, Reasoning, and Connections Ability in the Concept of Integral".

2. Problem Formulation

Based on the background above, the research problems can be formulated as follows:

a. Are there differences in the enhancement of students' mathematical communication, reasoning, and connection abilities between students who follow scientific debate instruction and students who follow conventional instruction?
b. Is there an interaction between instructional factors and students' prior mathematical ability (PAM) on the enhancement of students' mathematical communication and connection abilities?
c. Are there differences in the enhancement of students' mathematical reasoning ability between students who follow scientific debate instruction and students who follow conventional instruction, based on PAM?
d. Are there differences in the enhancement of students' mathematical communication, reasoning, and connection abilities between students who follow scientific debate instruction and students who follow conventional instruction, based on educational background?

3. Research Objectives

a. To analyze differences in the enhancement of students' mathematical communication, reasoning, and connection abilities between students who follow scientific debate instruction and students who follow conventional instruction.
b. To analyze the interaction between instructional factors and students' prior knowledge of mathematics (PAM) on the enhancement of students' mathematical communication and connection abilities.
c. To analyze differences in the enhancement of students' mathematical reasoning ability between students who follow scientific debate instruction and students who follow conventional instruction, based on PAM.
d. To analyze differences in the enhancement of students' mathematical communication, reasoning, and connection abilities between students who follow scientific debate instruction and students who follow conventional instruction, based on educational background.

4. Research Methods

This quasi-experimental study with a static group comparison design involves 96 students who take the Calculus 2 lecture. For objective 1, the data are analyzed using the normalized gain, with the formula:

Normalized gain (g) = (posttest score - pretest score) / (ideal score - pretest score).

To test whether there are differences in the enhancement of students' mathematical communication, reasoning, and connection abilities between students who follow scientific debate instruction and students who follow conventional instruction, the Mann-Whitney (U) test was used. For objective 2, two-way ANOVA was used; for objective 3, the Kruskal-Wallis test; and for objective 4, the Mann-Whitney (U) test.
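For illustration, the gain computation and the group comparison can be sketched as follows (our sketch; the score arrays are invented placeholders, not the study's data, and SciPy is assumed available):

# Sketch: normalized gain followed by a Mann-Whitney U test.
import numpy as np
from scipy.stats import mannwhitneyu

def normalized_gain(pretest, posttest, ideal=100.0):
    """g = (posttest - pretest) / (ideal - pretest), per the formula above."""
    pretest = np.asarray(pretest, dtype=float)
    posttest = np.asarray(posttest, dtype=float)
    return (posttest - pretest) / (ideal - pretest)

# Hypothetical pretest/posttest scores for the two groups.
g_debate = normalized_gain([40, 35, 50, 45], [80, 70, 85, 75])
g_conventional = normalized_gain([42, 38, 48, 44], [70, 65, 72, 68])

u_stat, p_value = mannwhitneyu(g_debate, g_conventional, alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.3f}")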


5. Research Result

a. Comparison of the Enhancement of Students' Mathematical Communication, Reasoning, and Connection Abilities by Learning Model

The subjects of the study were 94 students: 60 students from senior high schools and 34 students from Islamic senior high schools. The enhancements of students' mathematical communication, reasoning, and connection abilities are at the middle criterion, with average normalized gain scores of 0.660, 0.522, and 0.635 respectively. Statistical calculation with the Mann-Whitney (U) test of the hypothesis that the enhancement of students' mathematical communication, reasoning, and connection abilities under scientific debate instruction is better than under conventional instruction shows:

1) There is no difference in the enhancement of students' mathematical communication ability between students who follow scientific debate instruction and students who follow conventional instruction.
2) There is no difference in the enhancement of students' mathematical reasoning ability between students who follow scientific debate instruction and students who follow conventional instruction.
3) There are differences in the enhancement of students' mathematical connection ability between students who follow scientific debate instruction and students who follow conventional instruction. In other words, the enhancement of the mathematical connection ability of students who follow scientific debate instruction is better than that of students who follow conventional instruction.

b. Comparison of Communication, Reasoning, and Mathematical Connection Abilities According to the Interaction between Instructional Model and PAM

Since the data on the enhancement of mathematical communication and connections showed a normal distribution and homogeneous variance, the interaction between instructional model and PAM was analyzed using ANOVA. The calculation results are presented in Table 2 as follows:

Table 2 ANOVA of Enhancement Scores for Mathematical Communication and Connections Based on PAM and Instructional Model

Communication
Source       SS           df   MS          F        Sig.    H0
PAM (A)      5586.652     2    2793.326    4.042    0.021   Reject
Model (B)    20.163       1    20.163      0.029    0.865   Accept
A x B        115.535      2    57.767      0.084    0.920   Accept
Total        374184.375   94

Connection
Source       SS           df   MS          F        Sig.    H0
PAM (A)      1167.075     2    583.537     1.633    0.201   Accept
Model (B)    15832.501    1    15832.501   44.310   0.000   Reject
A x B        887.312      2    443.656     1.242    0.294   Accept
Total        194942.003   94

From Table 2, it can be concluded that:

1) There are differences in the enhancement of students' mathematical communication ability based on PAM level.
2) The difference in instructional model does not provide a significant effect on the enhancement of students' mathematical communication ability.
3) There are no differences in the enhancement of students' mathematical connection ability based on PAM level.
4) The difference in instructional model provides a significant effect on the enhancement of students' mathematical connection ability.
5) There is no interaction between instructional factors and PAM factors on the increase in mathematical communication and connection abilities.

Graphically, the interaction between instructional model and PAM on the enhancement of students' mathematical communication and connections is shown in Figure 1.

Figure 1 The Enhancement of Mathematical Communication and Connections

From the graphical representation, it appears that the groups of students with middle and low PAM who learned with the scientific debate model produced a larger average improvement in mathematical communication ability than with conventional learning, while the group of students with high PAM yielded a smaller average improvement in mathematical communication ability than with conventional learning. The average increase in mathematical connection ability for each PAM group among students learning with scientific debate is always better than in the conventional learning group. This indicates that applying the scientific debate learning model produces a better impact than conventional learning on increasing students' mathematical connection ability. The average improvement in mathematical reasoning ability among students under the scientific debate model does not differ significantly across the high, middle, and low levels, whereas under the conventional learning model the average increase in mathematical reasoning ability differs significantly across the high, middle, and low levels.

c. Comparison of Communication, Reasoning, and Mathematical Connection Ability According to Instructional Model and Student Educational Background

The statistical test used to examine differences in the enhancement of students' mathematical communication, reasoning, and connection ability based on instructional model and student educational background is the Mann-Whitney test (U).

Table 3. Enhancement of Mathematical Communication, Reasoning, and Connection Ability Based on Instructional Model and Student Educational Background

                          Experiment Group                           Control Group
Statistical Test          Communication  Reasoning  Connection      Communication  Reasoning  Connection
Mann-Whitney U            170.500        213.500    196.000         157.000        167.500    137.500
Wilcoxon W                275.500        318.500    791.000         367.000        377.500    347.500
Z                         -1.534         -0.557     -0.954          -2.283         -2.050     -2.717
Asymp. Sig. (2-tailed)    .125           .577       .340            .022           .040       .007
a. Grouping Variable: Student Educational Background

From Table 3, it appears that in the group with scientific debate instruction, differences in students' educational background did not have a significant impact on the increase of students' mathematical communication, reasoning, and connection ability. For the group with conventional instruction, differences in educational background had a strong influence on the increase of students' mathematical communication, reasoning, and connection ability: the enhancement of the mathematical communication, reasoning, and connection ability of students who came from senior high school was better than that of students from Islamic senior high school.
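A brief sketch of such a Mann-Whitney comparison with scipy is shown below; the gain scores are toy values, not the study's data.

```python
# Hypothetical sketch of the Mann-Whitney U comparison behind Table 3,
# grouping gain scores by educational background; arrays are toy data.
from scipy.stats import mannwhitneyu

smu_gains = [55, 61, 48, 70, 66, 59]  # students from Senior High School (SMU)
ma_gains = [44, 52, 47, 58, 50, 41]   # students from Islamic Senior HS (MA)

u, p = mannwhitneyu(smu_gains, ma_gains, alternative="two-sided")
print(f"U = {u:.1f}, p = {p:.3f}")    # a significant difference when p < 0.05
```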

6. Discussion

The factors discussed in this study are the instructional model (conventional and scientific debate), prior knowledge of mathematics (high, middle, and low), and the educational background of students (Senior High School and Islamic Senior High School).

a. Instructional Model

The characteristics of an instructional model give a picture of how teaching and learning occur in the classroom. From the literature study on the scientific debate and conventional instructional models, the characteristics of each model were identified as shown in Table 4 below.

Table 4. Instructional Model Characteristics

Scientific Debate
1. Instructional materials are packaged as presentations of concepts, application examples, and problems; the understanding of the concepts, principles, and procedures involved in problem solving is supposed to be acquired independently through the instructional materials. A correct and deep understanding of the concepts is consolidated by solving application problems during the debate. This activity is expected to give students the opportunity to construct and discover knowledge independently.
2. Mathematics is an activity involving a variety of thinking, so instruction should emphasize the process of abstraction from actual daily-life experience into the world of mathematics and vice versa; in other words, there is a correlation between mathematics and other scientific disciplines. To stimulate debate among students, or between students and the lecturer, the lesson begins with an application problem.
3. The lecturer is a facilitator. To start a debate, the lecturer initiates and organizes the students' answers. These results are written on the chalkboard without evaluating their truth. The statements are given back to the students to be considered: they must restate each statement and support it in some way, give arguments for it, prove it, or show that it is wrong with counter-examples. Statements are justified by demonstrating theorems or rules, while statements built as untrue are presented as "false statements" with related counter-examples.
4. The instructional interaction model developed is multidirectional.

Conventional
1. Teaching materials are provided as references contained in a book. Concepts, principles, and procedures are presented directly by the lecturer through lecturing with examples, after which students are given problems to solve. The solution is completed by pointing to one or more students and then discussed collectively to obtain the correct answer.
2. The lecturer has the major role in learning; students mostly receive what the lecturer conveys.
3. The instructional interaction model developed is one-way or two-way.

From the characteristics of the scientific debate instructional model, it appears that each student is required to construct and understand knowledge independently. Thus, students have a very big role in understanding concepts, developing procedures, discovering principles, and implementing concepts, procedures, and principles to solve a given problem. The lecturer's main role is that of a facilitator who supports whatever development happens to students during the learning process.

To implement the scientific debate instructional model, the lecturer prepared teaching materials and Student Worksheets (LKS) for the experimental class. The teaching materials developed in this study were designed so that students were able to find concepts, procedures, and principles and apply them to solve a given problem. They were developed in such a way that students could achieve the mathematical competence relevant to the material being studied. In addition, the development of the teaching materials was directed at developing students' mathematical communication, reasoning, and connection ability optimally. The LKS were used to equip students as a personal reference in the debate and in problem solving. They contain not only problems to be solved by the students but also concepts, procedures, and principles, as well as application examples that students can study while preparing for the debate. In the scientific debate instructional model, students are required to solve problems in written form and must also account for their answers in the debate. This is consistent with Pugalee's (2001) statement that students must be trained to communicate their mathematical knowledge, provide arguments for each answer, and respond to the answers given by others, so that what is learned becomes more meaningful to them. In addition, Huggins (1999) stated that in order to enhance conceptual understanding of mathematics, students can express mathematical ideas to others.

The quantitative calculations showed no difference in the enhancement of students' mathematical communication and reasoning ability between students who followed scientific debate instruction and those who followed conventional instruction. Although the calculations did not indicate a difference, the scientific debate instructional model must still be weighed against conventional instruction: the scientific debate model challenges students to learn actively, which raises the question of whether active learning is better than passive learning. If students' learning results are judged only by the ability to answer questions, which are generally cognitive, the results are not much different; high learning activity is positively, but only weakly, correlated with academic achievement. From another point of view, however, everyday life demands that one study and work actively, active learning is more enjoyable and broadens one's horizons, and, at a minimum, it helps form the active human beings who will be needed in the days to come; in this sense the present study is very important (Ruseffendi, 1991).

There is a difference in the enhancement of students' mathematical connection ability between students who followed scientific debate instruction and those who followed conventional instruction: students who received scientific debate instruction improved their mathematical connection ability more than those who received conventional instruction. The rationale is that, because students have to solve applied mathematics problems, their knowledge gains more insight, so that they interpret mathematics not only within mathematics itself but also through its role in other fields. This is consistent with Harnisch's research (in Qohar, 2011), which found that students can get an overview of the concepts and big ideas about the relations between mathematics and science, and gain more experience. Additionally, NCTM (2000) states that mathematics is not a collection of separate topics but a network of closely related ideas. The problems solved by students can lead them to improve their mathematical connections. Thus, students are trained to perform well in mathematics as well as in its linkages to other areas, so that they recognize the importance of mathematical connection ability.

This is consistent with Piaget's statement that "knowledge is actively constructed by the learner, not passively received from the environment" (Dougiamas, 1998), which means that students do not passively receive knowledge from the environment but must actively rediscover knowledge on their own, while the teacher's role is only to steer students toward seeking knowledge and understanding it significantly. This is also consistent with Geoghegan's statement (2005) that learning and teaching become a reflective phenomenon based on the inter-connections between teachers and students acting together as co-instructors in the search for meaning and understanding. Salomon and Perkins (in Dewanto, 2007) suggested that 'acquisition' and 'participation' in learning relate to and interact with each other in a synergic way.


In the implementation of the scientific debate instructional model, students are challenged to express and reflect on their knowledge and to relate it to the material being studied. Students must also be able to ask about things they do not know or are still hesitant about, both of other students and of the lecturer. Huggins (1999) states that one form of mathematical communication is speaking, which is identical to debate or discussion. Baroody (1993) outlines several advantages of discussion, including: (a) it accelerates understanding of learning materials and proficiency in using strategies; (b) it helps students construct mathematical understanding; (c) it informs students that mathematicians typically do not solve problems on their own but build ideas with other experts in a team; and (d) it helps students analyze and solve problems wisely.

b. Mathematical Prior Ability

Students' intelligence can be measured by the cumulative grade point average (GPA); although it sometimes misses, the GPA can be a guarantee that a student will be able to attend a particular school. In this study, students' prior mathematical ability (PAM) is their ability in the Calculus 1 material. PAM was used to gauge students' readiness to receive course material and to relate it to other subject matter already received. Students' success in their lessons depends on this readiness. PAM was classified into three groups: high, middle, and low.

From the quantitative analysis, PAM has a significant effect on the enhancement of students' mathematical communication ability. This is consistent with Arends' opinion (2008a: 268) that a student's ability to learn new ideas relies on their prior knowledge and existing cognitive structures. Meanwhile, according to Ruseffendi (1991), students' success in learning is almost entirely influenced by their intelligence, readiness, and talent. The analysis of PAM across the two instructional models (scientific debate and conventional) indicates that the difference in instructional model does not have a significant effect on the enhancement of students' mathematical communication ability.

The quantitative analysis also showed that the difference in PAM level did not have a significant effect on the enhancement of students' mathematical connection ability, whereas the difference in instructional model did. This means that the average enhancement of the mathematical connections of students under the scientific debate model was always better than that of students under conventional instruction, indicating that the scientific debate instructional model produces a better effect on students' mathematical connection ability than conventional instruction.

There was no interaction between instructional model and PAM on the enhancement of students' mathematical communication and connection ability. This result is consistent with Nana (2009), who concluded that there is no interaction between learning approach and students' PAM in mathematical problem-solving ability. Thus, the instructional model factor does not exert a strong influence on the enhancement of students' mathematical communication and connection ability. The scientific debate instructional model can enhance students' mathematical reasoning ability evenly across the various PAM levels. This is possible because, in the scientific debate model, each student is challenged to put forward ideas and to reflect on their answers in the debate. This condition may spur increased mathematical reasoning and in-depth understanding. Huggins (1999) stated that in order to enhance conceptual understanding of mathematics, students can express mathematical ideas to others. Brenner (1998: 108) and Kadir (2008: 346) noted that, through discussions with teachers and activity partners, students are expected to gain a better understanding of the basic concepts of mathematics and to become better problem solvers. Increasing students' conceptual understanding of mathematics will eventually increase their mathematical reasoning ability. Goldberg and Larson (2006: 97) state that discussion can improve reasoning ability, human relations skills, and communication ability.

The average enhancement of students' mathematical reasoning ability under conventional instruction differed significantly by PAM level. This means that, in conventional instruction, students' PAM level has a strong influence on the increase of their reasoning ability. This is plausible because, in conventional instruction, students receive teaching materials passively, so the enhancement of reasoning relies on their prior knowledge; in addition, students rely on the textbook to deepen their knowledge. Yet according to Baroody (1993: 99), the words and symbols of teachers and textbooks often have little meaning for children. Furthermore, Baroody noted that students are rarely asked to explain their ideas in any form. These conditions develop students' reasoning ability less than optimally, because students are not challenged to discover and construct knowledge independently.

c. Educational Background of Students

Students' educational background in this study groups students into those who come from Senior High School (SMU) and those from Islamic Senior High School (MA). In the scientific debate group, differences in students' educational background do not have a strong influence on the enhancement of students' mathematical communication, reasoning, and connection ability. The absence of this influence reflects the great adaptability of each student. This adaptability is fostered by the scientific debate model, in which the instructional process confronts students with application problems. These applied mathematical problems are the focus and stimulus of instruction and a vehicle for developing problem-solving ability. Instruction is thus student-centered, the lecturer is a facilitator or coach, and new information and concepts are acquired through self-directed learning. The importance of confronting students with application problems accords with Walsh's opinion (2008) that doing so can form a positive attitude, build creativity, enhance deep understanding, and develop problem-solving or investigative ability that can be applied in various fields of life.

Scientific debate instruction emphasizes students' learning activities. Students work collaboratively to identify what they need in order to develop solutions, find relevant sources, share and synthesize findings, and ask questions that lead to further learning. In this case, the teacher acts as a facilitator who supports students in learning. According to the CIDR (2004), as a facilitator the teacher can ask students questions to sharpen or deepen their understanding of the relationships between the concepts they have built. The lecturer seeks a balance between providing direct guidance and encouraging students toward self-directed learning. These conditions trigger the enhancement of students' mathematical communication, reasoning, and connection ability equally, because scientific debate instruction challenges students to have greater adaptability.

In the group of students with conventional instruction, the results showed that differences in students' educational background do influence the enhancement of their mathematical communication, reasoning, and connection ability. This is logical because in conventional instruction the lecturer actively explains the subject matter and gives examples and exercises, while students act like machines: they listen, take notes, and do the exercises given by the lecturer. In these circumstances, students are not given much time to discover knowledge on their own because the learning is dominated by the lecturer. Discussions are seldom held, so interaction and communication between students and between students and the lecturer do not develop. As a result, students' mathematical communication, reasoning, and connection ability develops less than optimally. Students have little sense of mathematical applications in social life, whereas mathematical literacy is very important in today's information era. Consequently, under conventional instruction, students' educational background does not allow mathematical communication, reasoning, and connections to improve evenly, because each student will learn and receive course materials according to his or her own ability.

7. Conclusion

Based on the formulation of the problem, the results, and the discussion presented above, the following conclusions can be drawn:

a. The enhancement of students' mathematical communication, reasoning, and connection ability under scientific debate instruction falls in the middle category. The enhancement of students' mathematical communication and reasoning ability under scientific debate instruction is not significantly different from that of the group with conventional instruction, while the enhancement of students' mathematical connection ability under scientific debate instruction is better than under conventional instruction.

b. There is no interaction between the PAM factor (high, middle, low) and the instructional model factor (scientific debate and conventional) on the increase of students' mathematical communication and connection ability.

c. The average enhancement of mathematical reasoning ability in the group with scientific debate instruction did not differ significantly across PAM levels, whereas in the group with conventional instruction it differed significantly across PAM levels.

d. Differences in students' educational background have no influence on the enhancement of mathematical communication, reasoning, and connection ability for students who follow scientific debate instruction. In the group with conventional instruction, differences in educational background have a significant effect on the increase of mathematical communication, reasoning, and connection ability.

8. References

Alibert, D., Legrand, M. & Richard, F. (1987). Alteration of didactic contract in codidactic situation. Proceedings of PME 11, Montreal, 379-386.
Alibert, D. (1988). Towards new customs in the classroom. For the Learning of Mathematics, 8(2), 31-35.
Arends, R. I. (2008). Learning to Teach. New York: McGraw-Hill Companies, Inc.
Brenner, M. E. (1998). Development of mathematical communication in problem solving groups by language minority students. Bilingual Research Journal, 22(2-4), Spring, Summer & Fall 1998.
Ferrini-Mundy, J. & Graham, K. G. (1991). An overview of the calculus curriculum reform effort: issues for learning, teaching, and curriculum development. American Mathematical Monthly, 98(7), 627-635.
Huggins, B. & Maiste, T. (1999). Communication in Mathematics. Master's Action Research Project, St. Xavier University & IRI/Skylight.
http://blog.elearning.unesa.ac.id/m-saikhul-arif/makalah-pembelajaran-dengan-pendekatan-teori-konstruktivistik.
http://www.distrodocs.com/16186-cara-belajar-efektif-belajar-akuntasi.
Kusumah, Y. S. (2012). Current trends in mathematics and mathematics education: teacher professional development in the enhancement of students' mathematical literacy and competency. Makalah Seminar Nasional: Universitas Pendidikan Indonesia.
Legrand, M., et al. (1986). Le debat scientifique. Actes du Colloque franco-allemand de Marseille, 53-66.
NCTM (2000). Principles and Standards for School Mathematics. Reston, Virginia.
Niss, G. (1996). Goals of mathematics teaching. In A. J. Bishop, K. Clements, C. Keitel, J. Kilpatrick & C. Laborde (Eds.), International Handbook of Mathematical Education. Netherlands: Kluwer Academic Publishers.
Orton, A. (1983). Students' understanding of integration. Educational Studies in Mathematics, 14, 1-18.
Polya, G. (1973). How to Solve It: A New Aspect of Mathematical Method. New Jersey: Princeton University Press.
Pugalee, D. A. (2001). Using communication to develop students' literacy. Journal of Research in Mathematics Education, 6(5), 296-299.
Romberg, T. A. & Tufte, F. W. (1987). Kurikulum Matematika Rekayasa: Beberapa Saran dari Cognitive Science. Monitoring Matematika Sekolah: Latar Belakang Makalah.
Ruseffendi, E. T. (1991). Pengantar kepada Membantu Guru Mengembangkan Kompetensinya dalam Pengajaran Matematika untuk Meningkatkan CBSA. Bandung: Tarsito.
Sabella, M. S. & Redish, E. F. Student understanding of topics in calculus. University of Maryland Physics Education Research Group. http://www.physics.umd.edu/perg/plinks/calc.htm.
Suryadi, D. (2005). Penggunaan Pendekatan Pembelajaran Tidak Langsung serta Pendekatan Gabungan Langsung dan Tidak Langsung dalam Rangka Meningkatkan Kemampuan Berfikir Matematik Tingkat Tinggi Siswa SLTP. Disertasi pada PPS UPI: tidak diterbitkan.
Suryadi, D. (2010). Model Antisipasi dan Situasi Didaktis dalam Pembelajaran Matematika Kombinatorik Berbasis Pendekatan Tidak Langsung. Bandung: UPI.
Wahyudin (2003). Kemampuan Guru Matematika, Calon Guru Matematika, dan Siswa dalam Mata Pelajaran Matematika. Disertasi pada PPS UPI: tidak diterbitkan.


Rice Supply Prediction Integrated Part Of

Framework For Forecasting Rice Crises Time

In Bandung-Indonesia

Yuyun HIDAYATa, Ismail BIN MOHDb, Mustafa MAMATc, SUKONOd
a Department of Statistics FMIPA, Universitas Padjadjaran, Indonesia
b Department of Mathematics FST, Universiti Malaysia Terengganu, Malaysia
c Department of Mathematics FST, Universiti Malaysia Terengganu, Malaysia
d Department of Mathematics FMIPA, Universitas Padjadjaran, Indonesia
[email protected]

Abstract: In an effort to give policymakers food crisis alerts, IFPRI has developed a new early-warning tool for identifying extreme price variability of agricultural products. This tool measures excessive food price variability. By providing an early warning system that alerts the world to price abnormalities for key commodities in global agricultural markets, policymakers can make better-informed plans and decisions, including when to release stocks from emergency grain reserves. The World Bank is currently developing a framework for monitoring the food price crisis at the global and national levels. This framework focuses on price and does not examine other factors that govern a food crisis; its development is directed not at defining the time of a food crisis but at operational indicators that can monitor how close food prices are to a level categorized as a crisis. The indicators were used to predict the global food price spikes that occurred in June 2008 and February 2011, and analyses were conducted to select the indicators that sounded an alert about the crisis; the best one is then used to monitor where current global prices stand with respect to the selected crisis threshold. The weakness of the monitoring frameworks of the World Bank and IFPRI above is that we may never see a rice price hike yet still see a rice crisis, and we may have no natural disaster yet still have a rice crisis caused by mismanagement. To overcome the weaknesses of the models studied in ten papers, the authors propose a different approach called Quasi Static Rice Crises with a Quasi Static Prediction Model and Justifiable Action.

Keywords: crisis, quasi static, interval forecasting, supply forecasting

1. Introduction

Extensive shrinkage of rice fields in Indonesian areas such as Tangerang, Serang, Bekasi, Karawang, Purwakarta, Bandung, and Bogor, the regional centers of national rice production, is much higher than in other regions. Thousands of industries have been established in these areas. In Bekasi, thousands of hectares of rice fields, even fields with technical irrigation, have been converted to industry. Central Bureau of Statistics data indicate that paddy-field conversion has reached 110 thousand hectares per year since 2000, while only about 30-52 thousand hectares of new rice fields are created per year. In Kabupaten Bandung, rice fields shrink by up to 30 hectares per year, converted into residential or industrial complexes (http://oc.its.ac.id). According to the Bandung mayor's reports, in 2012 the city of Bandung had 14,725 hectares of land specifically planted with rice. Rice production in the city of Bandung stood at 4.5% of the total requirement (Hidayat Yuyun, 2012). The rice durability of Bandung lasts at most 7 years from 2011; in the 7th year the supply of rice is still considered sufficient to prevent social unrest due to rice shortage, but in the 8th year the city of Bandung is predicted to be in a rice crisis, with public turmoil associated with rice shortage. It must be realized that the 8th-year rice crisis occurs even when the extensification program to gardens and dry fields and the intensification of all land were carried out in 2011; if the program was not carried out in 2011, the predicted figures will be even worse and the rice crisis will occur sooner. These predictions assume that the conditions of rice supply, the rate of production decrease (due to decreased land and/or reduced productivity), and rice prices remain constant as they are today (Hidayat Yuyun, 2011). Sooner or later the rice crisis will happen, so accurate information about the predicted crisis time in Bandung is very pressing.

To overcome these issues, the aim of this research is to identify the rice crisis time in Bandung, Indonesia. To support this aim, the authors outlined five specific objectives: (1) to develop rice crisis criteria; (2) to forecast rice demand in Bandung; (3) to forecast rice supply in Bandung; (4) to determine a simultaneous demand-supply-price model of rice in Bandung; and (5) to forecast the rice crisis time in Bandung. In this paper, however, the discussion is limited to objective 1 and the methods used to achieve it. The expected results of this study will provide wide and deep insight into the rice scarcity situation in Bandung, Indonesia, and are expected to give these benefits: (1) preventive policy to slow down or delay a rice crisis in the city of Bandung; and (2) a clear, rice-resilience-oriented agricultural strategy for the Bandung city government.

The urgency of this study is that the food crisis is not a matter of existence; the matter is when. This thought is inspired by the theory of Malthus: demand follows a geometric progression, while supply behaves arithmetically, a consequence of the idea that the human population grows faster than food production. Malthusian theory clearly emphasizes the importance of balancing a geometrically growing population against an arithmetically growing food supply, and it has in effect questioned the carrying capacity of the environment. Land, as a component of the natural environment, is not able to provide enough agriculture to meet the needs of a growing population; its carrying capacity declines as the human burden grows. In line with Malthusian theory, a crisis will occur sooner or later, so there is a strong basis for research on the great question: when will the rice crisis happen? Perhaps the crisis could happen tomorrow or in the near future. The thought that a rice crisis could occur as early as 2012 is also supported by data on Bandung's low internal production, at 4.5% of residents' needs. Other data that strengthen this idea show that the paddy-field crisis occurred because rice farming is not financially attractive: the local minimum wage in Bandung is IDR 1,271,625 per worker per month, while the wage of Bandung farm laborers is IDR 395,390 per person per month, far below the minimum wage. With farm wages so much smaller than the minimum wage, how can we expect the supply of rice in the city of Bandung to increase? The trend is a decline, and this is where the Department of Agriculture and Food Security of Bandung must act so that rice resilience rises. Of course, if a rice crisis will occur in the near future, effective action must be taken quickly, which demands knowing when the rice crisis will occur.

The prediction that the city of Bandung will face a rice crisis in 2018, as reported in (Hidayat Yuyun, 2011), functions as a trigger for more profound research, remembering the preliminary nature of that study, which nonetheless shows that it is possible to predict the rice crisis time. The previous research has the drawback of assuming that the conditions of rice supply, the rate of production decline (due to decreased land and/or reduced productivity), and rice prices remain constant as they are now. Of course this is unrealistic.

The limitations of the study are: (1) it is geographically limited to the city of Bandung in the Republic of Indonesia; (2) it assumes that the rice demand behavior of people in Bandung is "coded in their DNA", so modified demand behavior is impossible or has small probability and is not challenging; the rice crisis time is predicted for the case in which no food substitute is available; and (3) data were collected from a number of secondary sources, except for the model of price and number of rioters; most of the data for the analysis were obtained from various publications of the Indonesian Central Bureau of Statistics.

To show the originality of the research, the authors conducted an efficient and effective literature study of the following papers: (IRRI - International Rice Research Institute, 2008), (David Dawe & Tom Slayton, 2010), (Saifullah A., 2008), (Won W. Koo, M. H. K. & Gordon W. Erlandson, 1985), (Dawe D., 2010), (Parhusip U., 2012), (Cuesta J., 2012), (Ateneo Economics Association, 2008), (Berthelsen J., 2011), and (Maximo Torero, 2012).


2. Methodology

2.1 Modeling of Rice Crises Forecasting

This research has limitations and does not set out to answer everything. In particular, the poor management that may cause rice riots is not our concern: when people come to the streets, the question is what the overwhelming problem is, and only if the overwhelming problem is rice does it become our responsibility.

Step 1. Crisis Definition

What is a crisis anyway? There is no standard definition. In this paper, a crisis is defined as the situation in which the police cannot control a riot caused totally or partially by rice scarcity. Riots caused by factors other than rice scarcity are beyond our concern. We regard this as the best and most robust definition.

Step 2. Riot Control Model

The next question is: when will it happen? The immediate answer must come from a riot control model, because what we worry about is an uncontrollable riot causing the crisis. We must therefore know the riot control model and its validity period. This model is not for the present moment: its validity period must cover the time when the crisis happens, so it involves a time variable.

Step 3. People Stress Model

Assuming we have a riot control model, what next? Riots are somehow caused by people's stress. The crucial questions in developing a people stress model are how many people are rioting and under what condition, or because of what condition; the questions thus concern magnitude and duration. Why are people stressed? In this research the authors assume that the rice price causes the people's stress. As a consequence, we must develop a price model.

Step 4. Price Econometric Model

We need a price model, but we do not want to model price with a time series approach, because the forecast is sometimes too late and sometimes contaminated. Another reason is that price does not indicate scarcity or abundance: do cheap rice prices show that there is a lot of rice? Consider a dollar exchange rate held at IDR 2,000 that one day jumps from IDR 2,000 to IDR 9,000: the time series of price data loses the dynamics of the dollar and becomes contaminated; price dynamics cannot be seen through the time series. The most important realization is that price is an effect, not a cause, so time series modeling of prices yields forecasts that come too late. That is why an econometric model is used instead of a time series model to forecast the price dynamics. There are thus two reasons for not forecasting price with the time series method: first, contamination, and second, lateness. It is too late because price is just an effect, and contaminated because changes in state-regulated (administered) prices mean the observed price is not the true price. Economically speaking, price is determined by the interaction of demand and supply; the rice price is controlled by a dynamic system with numerous factors, and time series analysis is not the appropriate tool. Prices are controlled by at least the demand and supply variables. As is well known, the price of a commodity and the quantity sold are determined by the intersection of the demand and supply curves for that commodity. We must therefore develop a simultaneous-equation model involving demand, supply, and price.
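For concreteness, a generic textbook form of such a simultaneous system is sketched below; the exogenous shifters $Y_t$ and $Z_t$ are illustrative assumptions, not variables specified in this paper:

$$Q_t^d = \alpha_0 + \alpha_1 P_t + \alpha_2 Y_t + u_t \quad \text{(demand)}$$

$$Q_t^s = \beta_0 + \beta_1 P_t + \beta_2 Z_t + v_t \quad \text{(supply)}$$

$$Q_t^d = Q_t^s \quad \text{(market equilibrium)}$$

Here $P_t$ is the rice price, $Q_t^d$ and $Q_t^s$ are the quantities demanded and supplied, and $u_t$, $v_t$ are disturbances. Because $P_t$ and the quantities are determined jointly, such a system is estimated with simultaneous-equation methods rather than equation-by-equation least squares.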

Step 5. Time Series Forecasting Models for Demand and Supply

As a consequence of the econometric model in Step 4, the authors must seek the best time series forecasting models for the demand and the supply of rice separately. The demand and supply forecasts produced in Step 5 become the input to Steps 4, 3, and 2, ending with the activity of determining when the rice crisis happens.


2.2 Forecasting of Rice Supply in Bandung City, Indonesia [1]

As an integral part of modeling rice crisis forecasting, this paper presents a genuine approach to rice supply forecasting for Bandung city. Here is the process.

2.2.1 Data

The data used are the rice supply to Bandung city (internal and external) from 2004 to 2012, compiled from three sources: the Bandung Central Bureau of Statistics, the Logistics Agency Sub Division West Java, and the Department of Agriculture and Food Security in Bandung (Indonesia).

2.2.2 Nine Forecasting Models

In this study the authors have identified nine forecasting models.

a. Naïve Model
The equation of the naïve model is [20]:

$$F_{t+1} = X_t \qquad (2.1)$$

b. Naïve Rate of Change Model
The equation of the naïve rate of change model is [20]:

$$F_{t+1} = X_t \, \frac{X_t}{X_{t-1}} \qquad (2.2)$$

c. Simple Linear Regression Model
The equation is:

$$F_t = a + b\,t \qquad (2.3)$$

d. Double Moving Averages Model
The equations used are:

$$S'_t = \frac{X_t + X_{t-1} + X_{t-2} + \cdots + X_{t-N+1}}{N} \qquad (2.4)$$

$$S''_t = \frac{S'_t + S'_{t-1} + S'_{t-2} + \cdots + S'_{t-N+1}}{N} \qquad (2.5)$$

$$a_t = S'_t + (S'_t - S''_t) = 2S'_t - S''_t \qquad (2.6)$$

$$b_t = \frac{2}{N-1}\,(S'_t - S''_t) \qquad (2.7)$$

$$F_{t+m} = a_t + b_t\,m \qquad (2.8)$$

e. Single Exponential Smoothing
The equations of single exponential smoothing are written as follows [21]:

$$F_{t+1} = \alpha X_t + (1-\alpha)F_t \qquad (2.9)$$

$$e_t = X_t - F_t \qquad (2.10)$$

f. Double Exponential Smoothing: Brown's One-Parameter Linear Method
The equations used to implement the method are [21]:

$$S'_t = \alpha X_t + (1-\alpha)S'_{t-1} \qquad (2.11)$$

$$S''_t = \alpha S'_t + (1-\alpha)S''_{t-1} \qquad (2.12)$$

$$a_t = S'_t + (S'_t - S''_t) = 2S'_t - S''_t \qquad (2.13)$$

$$b_t = \frac{\alpha}{1-\alpha}\,(S'_t - S''_t) \qquad (2.14)$$

$$F_{t+m} = a_t + b_t\,m \qquad (2.15)$$

g. Double Exponential Smoothing: Holt's Two-Parameter Method
The forecasts are obtained using the following equations [21]:

$$S_t = \alpha X_t + (1-\alpha)(S_{t-1} + b_{t-1}) \qquad (2.16)$$

$$b_t = \gamma(S_t - S_{t-1}) + (1-\gamma)b_{t-1} \qquad (2.17)$$

$$F_{t+m} = S_t + b_t\,m \qquad (2.18)$$

h. Triple Exponential Smoothing: Brown's One-Parameter Quadratic Method
The equations used are [21]:

$$S'_t = \alpha X_t + (1-\alpha)S'_{t-1} \qquad (2.19)$$

$$S''_t = \alpha S'_t + (1-\alpha)S''_{t-1} \qquad (2.20)$$

$$S'''_t = \alpha S''_t + (1-\alpha)S'''_{t-1} \qquad (2.21)$$

$$a_t = 3S'_t - 3S''_t + S'''_t \qquad (2.22)$$

$$b_t = \frac{\alpha}{2(1-\alpha)^2}\left[(6-5\alpha)S'_t - (10-8\alpha)S''_t + (4-3\alpha)S'''_t\right] \qquad (2.23)$$

$$c_t = \frac{\alpha^2}{(1-\alpha)^2}\,(S'_t - 2S''_t + S'''_t) \qquad (2.24)$$

$$F_{t+m} = a_t + b_t\,m + \tfrac{1}{2}c_t\,m^2 \qquad (2.25)$$

i. Chow Adaptive Control Method
The Chow method has the additional feature that it can be used for non-stationary data. The equations of the Chow adaptive control method are [21]:

$$S_t = \alpha X_t + (1-\alpha)S_{t-1} \qquad (2.26)$$

$$b_t = \alpha(S_t - S_{t-1}) + (1-\alpha)b_{t-1} \qquad (2.27)$$

$$F_{t+1} = S_t + \frac{1-\alpha}{\alpha}\,b_t \qquad (2.28)$$
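As an illustration, a minimal Python sketch of the recursion in Eqs. (2.26)-(2.28) follows. The fixed smoothing constant and the initial values S_1 = X_1, b_1 = X_2 - X_1 are common conventions assumed here, not choices prescribed by the paper, and the adaptive adjustment of the smoothing constant that gives Chow's method its name is omitted for brevity.

```python
# Sketch of the fixed-alpha recursion in Eqs. (2.26)-(2.28); the initial
# values are an assumed convention, and alpha is held fixed throughout.
def chow_forecast(x, alpha):
    s, b = x[0], x[1] - x[0]
    for t in range(1, len(x)):
        s_prev = s
        s = alpha * x[t] + (1 - alpha) * s           # Eq. (2.26)
        b = alpha * (s - s_prev) + (1 - alpha) * b   # Eq. (2.27)
    return s + ((1 - alpha) / alpha) * b             # Eq. (2.28): F_{t+1}

supply = [171858, 206383, 221874, 219238, 216604]    # 2004-2008 data, Table 1
print(chow_forecast(supply, alpha=0.3))
```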

3. Forecasting Quality

Forecasting quality is determined by three critical parameters: accuracy, precision, and visibility. These three parameters are the measures for selecting the best forecasting methods. Visibility is the ability of a model to predict the future; measures of the accuracy and precision of a forecasting method will vary depending on how far into the future the method can predict. The visibility of the forecasting models in this study is 3 years, so for the 4th year and beyond the models need to be re-examined.

Accuracy is the degree of closeness to the actual value. Accuracy is a must; it exists in every forecasting activity. Forecasting accuracy is represented by the forecast error, the difference between the actual value and the forecast value of the time series, which we call the error measure. The forecast error is simply $e_t = Y_t - F_t$, regardless of how the forecast was produced, where $Y_t$ is the actual value at period $t$ and $F_t$ is the forecast for period $t$. The most commonly used metric is the Mean Absolute Percentage Error, MAPE $= \text{mean}(|p_t|)$, where the percentage error is given by $p_t = 100\,e_t / Y_t$ [12].
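A small sketch of these error measures is given below; the forecast values are illustrative toy numbers, not results from the paper.

```python
# Sketch of the measures above: e_t = Y_t - F_t, p_t = 100*e_t/Y_t,
# and MAPE = mean(|p_t|). The forecasts here are illustrative only.
def mape(actual, forecast):
    return sum(abs(100 * (y - f) / y)
               for y, f in zip(actual, forecast)) / len(actual)

actual = [242086, 221938, 208955, 217713]     # hold-out supply, Table 1
forecast = [233000, 226500, 215800, 212300]   # toy forecasts for illustration
print(f"MAPE = {mape(actual, forecast):.2f}%")
```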

Percentage errors have the advantage of being scale independent, so they are frequently used to compare forecast performance between different data series. A measurement system can be accurate but not precise, precise but not accurate, neither, or both; accuracy without precision is not meaningful. Forecast precision is associated with the width of the interval of the forecast results: a very wide forecasting interval indicates low precision, so the narrower the forecast interval, the better. This study therefore forecasts the supply variable in interval-forecast format so that the precision of the forecasting results can be assessed.


4. Rice Supply Forecasting

In this paper, the focus has been on fitting a forecasting model as closely as possible to the time series data. The ultimate test is of course to see how the forecasts fit the future, but such tests can only be done retrospectively. Therefore it is common practice to do "blind tests" by splitting the data into two series: the first part is used to fit the model to the data, and the recent part of the time series is used as a hold-out test, to see how accurately the model forecasts the "unknown" future. The model that most accurately describes the hold-out series is then selected for making the actual forecast. These previous examples may be regarded as this last step.

The rationale behind this practice is the belief that the model that most accurately prolonged its fitted pattern in the initial series into the hold-out series will also be most accurate when prolonging the pattern from the complete data series into the real future. Just selecting the model that best fits the complete time series may result in selecting a model that "over-fits" the data, incorporating random fluctuations that do not repeat themselves [11]. Our stand is that we do not believe in accuracy at the model-building phase; we believe in accuracy at the testing phase. Whatever the performance measure in the model-building phase, to us it means nothing, because models should be tested in the way they will be used. How will we test the models? We use the hold-out set as the testing ground, and this study makes sure the model is tested in the same way.

4.1. Results and Discussion

4.1.1. Description of the Study

The data used were the nine-year time series from the three sources listed below. The forecasting methodology developed involves three phases: model building, model testing, and forecasting. In line with this, for the purposes of Bandung rice supply forecasting, the data are divided into two sections with the following allocation (a short sketch of this split follows Table 1):

(a) Model building: data from 2004 to 2008

(b) Model testing: data from 2009 to 2012

Table 1. Rice supply quantity to Bandung

Sources: Regional Planning Development Board of Bandung in co-operation with the Central Bureau of Statistics of Bandung, the Bandung logistics agency, and the Department of Agriculture and Food Security of Bandung.

Data Set          Year   Rice Supply (Ton)
Model Building    2004   171,858
                  2005   206,383
                  2006   221,874
                  2007   219,238
                  2008   216,604
Model Testing     2009   242,086
                  2010   221,938
                  2011   208,955
                  2012   217,713
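The allocation above can be sketched directly from the Table 1 figures:

```python
# Sketch of the model-building / model-testing split described above,
# using the rice supply figures from Table 1 (tons).
years = list(range(2004, 2013))
supply = [171858, 206383, 221874, 219238, 216604,   # 2004-2008: model building
          242086, 221938, 208955, 217713]           # 2009-2012: model testing
build, holdout = supply[:5], supply[5:]
print(len(build), "building points,", len(holdout), "hold-out points")
```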


4.2. Pattern Identification and Forecasting Methods

Consider the data plot in Figure 1. It can be seen that the time series data show an upward trend pattern. Test results also indicate the stationarity of the time series data.

Figure 1. Rice supply to Bandung

4.3. Selecting the Best Forecasting Model Based on the Model-Building Data Set (First Stage)

Applying the forecasting methods to the first data set (model building) yields the MAPE values. Table 2 recapitulates the MAPE values of the nine models based on the annual rice supply data. The MAPE has received some criticism (see Rob J. Hyndman, 2006).

Table 2. MAPE Comparison of the Nine Forecasting Models

No.  Model                                                          MAPE
1    Naive                                                          6.53
2    Naive rate of change                                           6.84
3    Simple Linear Regression                                       5.15
4    Double Moving Averages                                         10.53
5    Single Exponential Smoothing                                   18.99
6    Double Exponential Smoothing: Brown One-Parameter Linear       17.94
7    Double Exponential Smoothing: Holt Two-Parameter               24.98
8    Triple Exponential Smoothing: Brown One-Parameter Quadratic    16.20
9    Chow Adaptive Control Method                                   17.71
     Average                                                        13.87
     Standard Deviation                                             6.86

Table 2 shows that the largest MAPE is produced by the Holt two-parameter double exponential smoothing model, and the smallest MAPE, 5.15, by the simple linear regression model. The model that most accurately describes the hold-out series is then selected for making the actual forecasts [11]. We have nine candidates; how do we choose the best model among them? The chosen model will be used to predict 3 years ahead, where 3 is the visibility number. We do not need a tracking signal to measure visibility: because the test results show that we can predict 3 years ahead accurately, we claim 3 as the visibility number. Why not simply use a visibility of 3 years and look for the model with the smallest MAPE? Because a smaller MAPE is not necessarily better. We must be careful with the hold-out ground: we need a factual basis for treating the future as a copy of the hold-out period, and there is still a threat of over-fitting. The consideration used is that we should not choose a model whose error is too small or too big; in other words, not too fit and not too loose. This is the ultimate problem in forecasting. To overcome this difficulty we used the control chart concept from statistical quality control: a control chart can be used to screen out models with extreme values of forecast error. Following that mechanism we get the results shown in Figure 2, where a control chart is created from the nine MAPE values above to remove extreme MAPE values.



Figure 2. Control chart for the nine MAPE values

The control chart is formed by the upper control limit (UCL), the center line (CL), and the lower control limit (LCL). The control limits are created using the following formulas:

$$\text{UCL} = \bar{X}_{\text{MAPE}} + \sigma_{\text{MAPE}} = 13.87 + 6.86 = 20.74$$

$$\text{LCL} = \bar{X}_{\text{MAPE}} - \sigma_{\text{MAPE}} = 13.87 - 6.86 = 7.01$$
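A compact sketch of this screening rule applied to the Table 2 MAPE values:

```python
# Sketch of the MAPE screening above: models whose MAPE falls outside
# mean +/- one standard deviation (LCL = 7.01, UCL = 20.74) are discarded.
import statistics

mapes = {
    "Naive": 6.53, "Naive rate of change": 6.84,
    "Simple Linear Regression": 5.15, "Double Moving Averages": 10.53,
    "Single Exponential Smoothing": 18.99, "Brown Linear": 17.94,
    "Holt Two-Parameter": 24.98, "Brown Quadratic": 16.20,
    "Chow Adaptive Control": 17.71,
}
mean = statistics.mean(mapes.values())
sd = statistics.stdev(mapes.values())
kept = {m: v for m, v in mapes.items() if mean - sd <= v <= mean + sd}
print(sorted(kept))  # the five models retained for the precision stage
```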

The control chart shows four models whose MAPE values lie outside the control limits: the Naïve model, Naïve rate of change, Simple Linear Regression, and the Holt two-parameter double exponential smoothing model. These four models are not used in the next stage, while the other models (Double Moving Averages, Single Exponential Smoothing, Brown's one-parameter linear double exponential smoothing, Brown's one-parameter quadratic triple exponential smoothing, and the Chow Adaptive Control Method) are used in the next step.

4.4. Forecasting Model with Good Accuracy and Precision (Hidayat Yuyun, 2013)

The models that pass selection phase 1 are then tested twice using the hold-out data set (the darkened data). Precision assessment is then performed on the 5 models that are within the control limits. As already stated, forecast precision is associated with the bandwidth of the forecast results: a wide forecasting interval indicates low precision, so the narrower the forecasting interval, the better the precision; we therefore select the best model using the smallest-bandwidth criterion. Precision calculations using the interval forecasting concept are presented in Table 3.

Table 3. Calculation of Bandwidth for the 5 Models

Model                                     Bandwidth
Double Moving Averages                    138,407.63
Single Exponential Smoothing              172,924.41
Brown One-Parameter Linear Method         142,740.73
Brown One-Parameter Quadratic Method      80,636.72
Chow Adaptive Control Method              137,183.02

Having obtained the bandwidth value of each model, the models are then selected based on bandwidth. For this purpose, a control chart is again used to dispose of forecasting models with extreme bandwidth values. The control chart is shown in Figure 3.



Figure 3. Bandwidth control chart for the 5 forecasting models

Based on the control chart above, two models have bandwidths that are out of control, namely the Single Exponential Smoothing model and Brown's one-parameter quadratic triple exponential smoothing; these models are not used in the later stages. Three models remain that can be used to predict the rice supply to the city of Bandung: Double Moving Averages, Brown's one-parameter linear double exponential smoothing, and the Chow Adaptive Control Method. Chow Adaptive Control is the best model because it has the smallest bandwidth.

4.5. The Best Forecasting Model and its Results

Using the Chow Adaptive Control Method, the forecast results are presented below in interval form.

Table 4. Interval Forecast Using the Chow Adaptive Control Method

Year   Point Forecast   Interval Forecast
2013   253,858.62       185,267.1 - 322,450.13
2014   253,858.62       185,267.1 - 322,450.13
2015   253,858.62       185,267.1 - 322,450.13
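Numerically, the interval in Table 4 is consistent with the point forecast plus or minus half of the Chow bandwidth from Table 3; the following sketch records that observation (it is not a construction rule stated explicitly by the paper).

```python
# The Table 4 interval matches point forecast +/- bandwidth/2 (Table 3);
# shown here as a numerical observation, not a rule stated by the paper.
point, bandwidth = 253858.62, 137183.02
lo, hi = point - bandwidth / 2, point + bandwidth / 2
print(f"{lo:.2f} - {hi:.2f}")  # 185267.11 - 322450.13
```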

4.6. Conclusions

What is a crisis anyway? There is no standard definition. In this paper, a crisis is defined as the situation in which the police cannot control a riot caused totally or partially by rice scarcity; riots caused by factors other than rice scarcity are outside our concern.

The weakness of the existing forecasting framework is that we may never see a price hike yet still see a crisis, and we may have no natural disaster yet still have a crisis caused by mismanagement.

This paper outlines a different approach whereby good forecasts are found by screening out models using control charts. The advantage is that it avoids extreme error values in the test data and thereby mitigates the over-fitting problem. The paper also advises making forecasts in interval format so as to obtain the precision of the forecasting models.

We found that the Chow Adaptive Control Method shows the best results for both accuracy and precision.


Acknowledgements

We would like to thank all the people who prepared and revised previous versions of this document.

References

Abraham, B. & Ledolter, J. (1983). Statistical Methods for Forecasting. John Wiley & Sons, Inc., Canada.
Ateneo Economics Association (AEA). (2008). Analyzing the Rice Crisis in the Philippines, 31 May 2008.
Berthelsen, J. (2011). Anatomy of a rice crisis. Global Asia, 3(2), 6.
Cuesta, J. (2012). Global Price Trends: Toward a New Crisis? April 2012.
Dawe, D. & Slayton, T. (2010). The Rice Crisis: Markets, Policies and Food Security. The world rice market crisis of 2007-2008. FAO, 368.
Dawe, D. (2010). The Rice Crisis: Markets, Policies and Food Security. Can the next rice crisis be prevented? The Food and Agriculture Organization of the United Nations and Earthscan, 393.
Hidayat, Yuyun. (2011). Data Base Produksi Pangan Kota Bandung 2011.
Hidayat, Yuyun. (2012). Data Base Produksi Pangan Kota Bandung 2012.
Hidayat, Yuyun. (2013). Rice demand forecasting with sMAPE error measures, integrated part of the framework for forecasting rice crises time in Bandung-Indonesia. Proceedings of the International Conference on Applied Statistics 2013, Department of Statistics, Universitas Padjadjaran, Indonesia.
http://oc.its.ac.id, Sawah Digusur Petani Menganggur.
http://international.cgdev.org, Asian Rice Crisis Puts 10 Million or More at Risk: Q&A with Peter Timmer, April 21, 2008.
Hanke, J. E. & Wichern, D. W. (2005). Business Forecasting, Eighth Edition. Pearson Education, Inc., New Jersey.
IRRI - International Rice Research Institute. (2008). Responding to the Rice Crisis. IRRI, 20.
Martumpal, C. P. S. & Hidayat, Yuyun. (2013). Menentukan Model Peramalan Terbaik untuk Meramal Suplai Beras Kota Bandung.
Makridakis, S., Wheelwright, S. C. & McGee, V. E. (1999). Metode dan Aplikasi Peramalan, Edisi Kedua. Jakarta: Binarupa Aksara.
Torero, M. (2012). International Food Policy Research Institute (IFPRI), Food Security Portal: Rice Excessive Food Price Variability Early Warning System.
Parhusip, U. (2012). Supply and demand analysis of rice in Indonesia (1950-1972). Department of Agricultural Economics, Michigan State University.
Rasmussen, R. (2004). On time series data and optimal parameters. Omega, 32, 111-120.
Hyndman, R. J. (2006). Another look at forecast-accuracy metrics for intermittent demand. FORESIGHT, Issue 4, June 2006.
Saifullah, A. (2008). The Rice Crisis: Markets, Policies and Food Security. Indonesia's rice policy and price stabilization programme: managing domestic prices during the 2008 crisis. The Food and Agriculture Organization of the United Nations and Earthscan.
Won W. Koo, M. H. K. & Gordon W. Erlandson. (1985). Analysis of Demand and Supply of Rice in Indonesia. Department of Agricultural Economics, North Dakota Agricultural Experiment Station, North Dakota State University, 24.


Global Optimization on Mean-VaR Portfolio

Investment Under Asset Liability by Using

Genetic Algorithm

Sukonoa*, Sudradjat SUPIANb, Dwi SUSANTIc
a,b,c Department of Mathematics FMIPA, Universitas Padjadjaran, Indonesia
*Email: [email protected]

Abstract: The management of assets and liabilities is very important for any bank or other financial institution. Accordingly, banks and other financial institutions should have a system that can formulate a function connecting entrepreneurs or owners of capital with the real business sector. This paper studies and formulates a model that deals with the problem of global optimization of a portfolio under asset-liability models. The formulation covers the modeling of the distribution of asset returns, the liability equation, the Value-at-Risk risk measure, and the local and global optimization equations. Local and global optimum solutions are then found using a genetic algorithm. The result expected from this research is a model that can be used effectively in the management of assets and liabilities.

Keywords: global optimization, asset liability, Value-at-Risk, portfolio investment

1. Introduction

In general, financial institutions such as banks, insurance companies, mutual fund companies, pension fund management companies, and pawnshops are essentially intermediary institutions, directly or indirectly, between depositors and owners of capital (shareholders) on the one hand and employers or the real sector on the other. A financial institution receives deposits and/or capital from shareholders and then distributes them in the form of loans, investments, or other products that can turn a profit. The profit gained is then partially distributed to depositors and/or shareholders in the form of interest and/or dividends. Depositors and shareholders are willing to give up their money because they believe that the financial institution can professionally choose investment alternatives that generate quite attractive profits.

The investment selection process itself should be done carefully, because an error in the selection of investments may leave the financial institution unable to meet its obligations to pay interest and dividends to depositors and/or shareholders. A common investment made by an asset-liability management committee is shares in the capital market. In investing, the asset-liability management committee will usually be dealing with investment risk. To anticipate the movement of investment risk, the risk can be measured with a quantile approach, better known as Value-at-Risk (VaR). Investment risk basically cannot be eliminated, but it can be minimized. A strategy often used to minimize investment risk is to form portfolios. The essence of forming a portfolio is to allocate funds to a variety of investment alternatives, i.e. investment diversification, so that the overall investment risk can be minimized.

Investment diversification will effectively reduce risk if the investor is able to form efficient portfolios. A portfolio is categorized as efficient when it lies on the efficient surface. Efficient portfolio selection is influenced by the risk preference of each investor, so along the efficient surface there will be many locally optimal (individual) portfolios. Among these local optimum portfolios there will be one global optimum. Therefore, with risk measured using Value-at-Risk (VaR), this paper formulates a portfolio that can maximize return and minimize the level of risk, both locally and globally, under asset-liability characteristics, better


known as Mean-VaR portfolio optimization under an asset-liability model. The portfolio model is solved by using a genetic algorithm. As a numerical illustration, the model is used to analyze several stocks traded on the capital market in Indonesia. The goal is to obtain the proportion of funds allocated to each analyzed stock.

2. Methodology

This section discusses several mathematical models which are useful for formulating the Mean-VaR portfolio model under asset-liability characteristics and for finding its solution. The models discussed include: the calculation of stock returns, the stock return distribution model, the asset-liability model, the Mean-VaR portfolio optimization model, and genetic algorithms.

2.1 Calculation Model of Stock Return

Suppose $P_{it}$ is the price of stock $i$ ($i = 1,\dots,N$, with $N$ the number of stocks analyzed) at time $t$ ($t = 1,\dots,T$, with $T$ the number of observed stock prices). The return $r_{it}$ of stock $i$ at time $t$ can then be calculated using the following equation:

$$ r_{it} = \ln P_{it} - \ln P_{i,t-1}. \qquad (1) $$

The stock return data will then be analyzed, the distribution model estimated, and the expected value and variance of each obtained, as follows.
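As a quick illustration of equation (1), a minimal Matlab sketch (the price series is made up for illustration, not data from the paper):

% Daily log-returns from a closing-price series, following equation (1).
P = [4950; 5000; 5100; 5050];      % closing prices P_t, t = 1..4 (illustrative)
r = diff(log(P));                  % r_t = ln P_t - ln P_(t-1)
disp(r)                            % three returns for four prices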

2.2 Distribution Model

Suppose $R_{it}$ is the random variable of the return of stock $i$ ($i = 1,\dots,N$, with $N$ the number of stocks analyzed) at time $t$ ($t = 1,\dots,T$, with $T$ the number of observed stock prices), which has a certain continuous distribution. The function $f(r_{it})$ is the probability density of the continuous random variable $R_{it}$, defined over the set of all real numbers $\mathbb{R}$, when: 1) $f(r_{it}) \ge 0$ for all $r_{it} \in \mathbb{R}$; 2) $\int_{-\infty}^{\infty} f(r_{it})\,dr_{it} = 1$; and 3) $P(a < R_{it} < b) = \int_a^b f(r_{it})\,dr_{it}$.

The expectation or mean value of $R_{it}$ is $\mu_{it} = E[R_{it}] = \int_{-\infty}^{\infty} r_{it}\,f(r_{it})\,dr_{it}$, and the variance of $R_{it}$ is $\sigma_{it}^2 = Var[R_{it}] = E[(R_{it}-\mu_{it})^2] = \int_{-\infty}^{\infty} (r_{it}-\mu_{it})^2 f(r_{it})\,dr_{it}$. Expectation and variance have several properties that are important in this study: i) $E[pR_{it} + qR_{jt}] = p\,E[R_{it}] + q\,E[R_{jt}]$; and ii) when $R_{it}$ and $R_{jt}$ are random variables with joint probability distribution $f(r_{it}, r_{jt})$, then $Var[pR_{it} + qR_{jt}] = p^2 Var[R_{it}] + q^2 Var[R_{jt}] + 2pq\,Cov(R_{it}, R_{jt})$, where $Cov(R_{it}, R_{jt}) = E[(R_{it}-\mu_{it})(R_{jt}-\mu_{jt})]$.
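Property (ii) can be checked numerically; a minimal Matlab sketch (the return series are simulated, any two series would do):

% Check: Var(p*Ri + q*Rj) = p^2 Var(Ri) + q^2 Var(Rj) + 2 p q Cov(Ri,Rj).
rng(1); Ri = 0.01*randn(1000,1); Rj = 0.5*Ri + 0.01*randn(1000,1);
p = 0.6; q = 0.4;
C = cov(Ri, Rj);                               % 2x2 sample covariance matrix
lhs = var(p*Ri + q*Rj);
rhs = p^2*C(1,1) + q^2*C(2,2) + 2*p*q*C(1,2);
fprintf('lhs = %.6g, rhs = %.6g\n', lhs, rhs)  % agree up to rounding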

2.3 Asset-Liability Model

Suppose $L_0$ is the initial liability and $L_1$ the liability value after one period. The growth of the liabilities is given by the random variable $R_L = (L_1 - L_0)/L_0$. Generally, $R_L$ depends on changes in the structure of interest rates, inflation, and real wages. Suppose also that $A_0$ is the initial value of the assets backing the liabilities and $A_1$ their value after one period. Assume that all investment opportunities $i = 1,\dots,N$ are risky, and that the investment strategy for the funds (deposits) is conducted by forming a portfolio $\mathbf{x}$. The value of the assets after one period is then given by $A_1 = A_0[1 + R_A(\mathbf{x})]$, where

$$ R_A(\mathbf{x}) = \sum_{i=1}^{N} x_i R_i $$

with $x_i$ the weight of asset $i$ relative to the liabilities (Gerstner et al. [4]; Keel & Muller [8]; Panjer et al. [10]).

The initial surplus $S_0$ of a fund (deposit) is given by $S_0 = A_0 - L_0$. The surplus after one period is calculated by the equation $S_1 = A_1 - L_1 = A_0[1 + R_A(\mathbf{x})] - L_0[1 + R_{L,t}]$, so the increase in surplus is $S_1 - S_0 = A_0 R_A(\mathbf{x}) - L_0 R_{L,t}$. Furthermore, following Sharpe and Tint (1990), a normalization is carried out, so that the surplus return is obtained as

$$ R_S = \frac{S_1 - S_0}{A_0} = R_A(\mathbf{x}) - \frac{1}{f_0}\,R_{L,t}, \qquad (2) $$

where $f_0 = A_0/L_0$ indicates the initial funding ratio. Based on equation (2), the mean surplus return is

$$ \mu_S = E[R_S] = \sum_{i=1}^{N} x_i \mu_i - \frac{1}{f_0}\,\mu_{L,t}, \qquad (3) $$

where $\mu_i$ is the mean return of asset $i$ and $\mu_{L,t}$ the mean growth of the liability at time $t$. The variance of the surplus return is

$$ \sigma_S^2 = E[(R_S - \mu_S)^2] = \sum_{i=1}^{N}\sum_{j=1}^{N} x_i x_j \sigma_{ij} - \frac{2}{f_0}\,\sigma_{AL,t} + \frac{1}{f_0^2}\,\sigma_{L,t}^2, \qquad (4) $$

with

$$ \sigma_{AL,t} = Cov(R_A, R_L) = \sum_{i=1}^{N} x_i \gamma_i, \qquad (5) $$

where $\gamma_i = Cov(R_i, R_L)$ (Gerstner et al. [4]; Keel & Muller [8]; Panjer et al. [10]).

2.4 Mean-VaR Portfolio Optimization Model

It is assumed that the vector of mean asset returns is $\boldsymbol{\mu} = (\mu_1, \dots, \mu_N)^T$, with $\mu_i = E(R_i)$, $i = 1,\dots,N$. The covariance matrix of the assets is $\Sigma = (\sigma_{ij})$, $i, j = 1,\dots,N$, with $\sigma_{ij} = Cov(R_i, R_j)$. The asset-liability covariance vector is $\boldsymbol{\gamma} = (\gamma_1, \dots, \gamma_N)^T$, with $\gamma_i$ the covariance between stock $i$ and the liability. It is assumed that $f_0 = 1$. The portfolio weight vector is $\mathbf{x} = (x_1, \dots, x_N)^T$ with $\sum_{i=1}^{N} x_i = 1$, or $\mathbf{x}^T\mathbf{e} = 1$, where $\mathbf{e} = (1, \dots, 1)^T$ is the unit vector. Under these assumptions, equation (3) can be expressed as

$$ \mu_S = \mathbf{x}^T\boldsymbol{\mu} - \mu_L, \qquad (6) $$

and equation (4) as

$$ \sigma_S^2 = \mathbf{x}^T\Sigma\mathbf{x} - 2\,\mathbf{x}^T\boldsymbol{\gamma} + \sigma_L^2. \qquad (7) $$

The Value-at-Risk ($VaR$) of the portfolio surplus can then be formulated as

$$ VaR_S = z_\alpha \sigma_S - \mu_S = z_\alpha\left(\mathbf{x}^T\Sigma\mathbf{x} - 2\,\mathbf{x}^T\boldsymbol{\gamma} + \sigma_L^2\right)^{1/2} - \left(\mathbf{x}^T\boldsymbol{\mu} - \mu_L\right), \qquad (8) $$

where $z_\alpha$ is the percentile of the standard normal distribution for the significance level $\alpha$ (Khindanova & Rachev [9]; Tsay [13]).

A portfolio surplus $S^*$ is said to be (Mean-VaR) efficient if there is no surplus portfolio $S$ with $\mu_S \ge \mu_{S^*}$ and $VaR_S \le VaR_{S^*}$, with at least one inequality strict (Panjer et al. [10]; Sukono et al. [11]). The selection of the efficient portfolio surplus can be performed using the objective function $\max\{2\tau\mu_S - VaR_S\}$ with $\sum_{i=1}^{N} x_i = 1$, i.e.

$$ \text{Maximize } \; 2\tau\left(\mathbf{x}^T\boldsymbol{\mu} - \mu_L\right) - \left[z_\alpha\left(\mathbf{x}^T\Sigma\mathbf{x} - 2\,\mathbf{x}^T\boldsymbol{\gamma} + \sigma_L^2\right)^{1/2} - \left(\mathbf{x}^T\boldsymbol{\mu} - \mu_L\right)\right] \;\; \text{subject to } \mathbf{e}^T\mathbf{x} = 1, \qquad (9) $$

where $\tau \ge 0$ is the risk tolerance factor (or risk aversion factor).


2.5 Genetic Algorithm

According to Hermanto (2007), the genetic algorithm, developed by Goldberg in 1989, is a computational algorithm inspired by Darwin's theory of evolution. In Darwin's theory, the survival of an organism is maintained through the processes of reproduction, crossover, and mutation, following the rule that the strong win. Darwin's theory of evolution is adopted by the computational algorithm to find the solution of a problem in a more "natural" way.

A solution generated by the genetic algorithm is named a chromosome, and a collection of chromosomes is called a population. A chromosome consists of components called genes, whose values can be numeric, binary, symbols, or characters, depending on the problem to be solved. The chromosomes evolve continuously over iterations called generations. In each generation the chromosomes are evaluated for how well they solve the problem at hand (the objective function), using a measure called fitness. To select the chromosomes to be retained for the next generation, a selection process is carried out.

New chromosomes, called offspring, are formed by mating chromosomes of a generation in a process called crossover. The number of chromosomes in the population that undergo crossover is determined by a parameter called the crossover_rate. The mechanism of change in the constituent elements of living things as a result of natural factors is called mutation; mutation is represented as the change of the value of one or more genes of a chromosome to a random value. The number of genes that mutate in the population is determined by a parameter called the mutation_rate. After a number of generations, the gene values of the chromosomes will converge to a certain value, which is the optimal solution generated by the genetic algorithm for the problem (objective function) to be solved.

In this research, the genetic algorithm is used to estimate the portfolio weights, i.e. to maximize the portfolio objective function expressed as equation (9). To find the maximum solution of equation (9) using a genetic algorithm, the following steps can generally be arranged:

1) Formation of the chromosome. Since the sought values are the weights $x_i$ ($i = 1,\dots,N$), the weights $x_i$ are used as the genes that form the chromosome. The weights $x_i$ are bounded real numbers.

2) Initialization. Initialization is done by giving the genes initial random values that respect the restrictions set.

3) Evaluation of the chromosome. The problem to be solved is to determine the values $x_i$ in equation (9); therefore equation (9) is used as the objective function with which a chromosome is evaluated.

4) Selection of chromosomes. The selection process is arranged so that a chromosome with a small objective function value has a large chance, i.e. a high probability, of being chosen. In this case the fitness function fitness = 1/(1 + objective_function) can be used; the 1 added to the divisor is necessary to avoid the possibility of division by zero. The selection probability can be computed with the formula P[i] = fitness[i]/total_fitness. For the selection process a roulette wheel (random number generator) can be used, for which the cumulative probabilities C[i] must first be computed. After the cumulative probabilities are calculated, a random number R in the range [0, 1] is generated. If R <= C[1], chromosome 1 is selected as a parent; otherwise chromosome i is selected as a parent, provided that C[i-1] < R <= C[i]. This is done as many times as the size of the population.

5) Crossover. After the selection process, the next process is crossover. One of the methods that can be used is one-cut-point crossover, which randomly selects one position in the parent chromosomes and then exchanges the genes after it. The chromosomes used as parents are chosen at random, and the number of chromosomes that undergo crossover is governed by the crossover_rate parameter (ρc). For example, with a specified crossover probability of 25%, about 25% of the chromosomes of a generation are expected to undergo crossover.

6) Mutation. The number of chromosomes that mutate in a population is determined by the mutation_rate parameter (ρm). The mutation process is done by replacing randomly selected genes with new values obtained at random. The process is as follows: first, the total number of genes in the population is computed; in this case total_gen = (number of genes


in a chromosome) × (population size). To select the positions of the mutated genes, random numbers between 1 and the integer total_gen are generated; if the random number generated is less than the mutation_rate (ρm), that position is selected as a gene to mutate. After the mutation process, one iteration of the genetic algorithm, also called a generation, has been completed. This process is repeated until a predetermined number of generations, and ultimately the chromosome that optimizes the objective function is obtained.

The pseudocode of the genetic algorithm in general is given in Figure-1 below.

Initialize the population of K chromosomes
Loop (one iteration = one generation)
    Loop over the K chromosomes
        Decode the chromosome
        Evaluate the chromosome
    End
    Create one or two copies of the best chromosome (elitism)
    Loop until K new chromosomes have been obtained
        Select two chromosomes
        Crossover
        Mutation
    End
End

Figure-1 Pseudocode of the Genetic Algorithm
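A minimal runnable sketch of this loop, as one possible Matlab implementation (the population size, generation count, crossover and mutation rates are illustrative assumptions, and selection here builds roulette probabilities from shifted objective values rather than the 1/(1 + objective) transform described above):

function xBest = gaSketch(f, N)
  % Sketch of the GA loop of Figure-1 for a weight vector of length N
  % summing to 1; f is a handle to the objective to be maximized.
  K = 50; nGen = 200; pc = 0.25; pm = 0.02;   % illustrative parameters
  pop = rand(K, N); pop = pop ./ repmat(sum(pop,2), 1, N);  % random weights
  for gen = 1:nGen
    fit = zeros(K,1);
    for k = 1:K, fit(k) = f(pop(k,:)'); end
    [~, iBest] = max(fit);  elite = pop(iBest,:);       % elitism: keep best
    prob = fit - min(fit) + eps;  prob = prob/sum(prob);% roulette wheel
    cumP = cumsum(prob);  newPop = zeros(K, N);
    for k = 1:K
      i1 = find(rand <= cumP, 1); i2 = find(rand <= cumP, 1);
      child = pop(i1,:);
      if rand < pc                                      % one-cut-point crossover
        cut = randi(N-1); child(cut+1:end) = pop(i2, cut+1:end);
      end
      mask = rand(1,N) < pm;                            % mutation
      child(mask) = rand(1, sum(mask));
      newPop(k,:) = child / sum(child);                 % re-normalize weights
    end
    newPop(1,:) = elite;  pop = newPop;
  end
  for k = 1:K, fit(k) = f(pop(k,:)'); end
  [~, iBest] = max(fit); xBest = pop(iBest,:)';
end
% Example (toy objective): xBest = gaSketch(@(x) -sum((x - 1/6).^2), 6);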

3. Illustrations

3.1 Price and Asset Return

The asset data analyzed in this paper are accessible through the website: http://www.finance.go.id//. The data comprise six (6) stocks and the rupiah exchange rate against the US dollar, for the period January 2, 2010 until June 4, 2013. The stock data cover the stocks INDF, DEWA, AALI, LSIP, ASII, and TURB, respectively given the symbols $S_1$, $S_2$, $S_3$, $S_4$, $S_5$, and $S_6$, while the rupiah exchange rate against the US dollar is given the symbol $L_D$. The stock prices include the opening price, highest price, lowest price and closing price, but only the closing prices are analyzed. In general, the analyzed stock prices and the rupiah exchange rate fluctuate up and down; in fact, sometimes they rise sharply and then fall again, and sometimes they drop sharply and then rise again. For the stock price data as well as the exchange rate, the return values are then determined using equation (1). Subsequently, the distribution of each stock return series and of the exchange-rate return series is estimated, as discussed in section 3.2 below.

3.2 Modeling of Asset Return Distributions

In this section, the distributions of the return data of the stocks $S_1$, $S_2$, $S_3$, $S_4$, $S_5$, and $S_6$, and of the rupiah exchange rate against the US dollar $L_D$, were estimated. Estimation was performed using the Maximum Likelihood Estimator method, and the fit of each distribution model was tested using the Anderson-Darling (AD) statistic. Based on the estimated form of the distribution, the mean estimator $\mu_i$ and variance estimator $\sigma_i^2$ ($i = 1,\dots,6$ and $L$) can be estimated. The estimation of the distributions, the distribution fit tests, and the estimation of the parameter values $\mu_i$ and $\sigma_i^2$ ($i = 1,\dots,6$ and $L$) were done with the



assistance of the software Minitab 14. The results of the estimation process and the fit tests are given in Table-1 below.

Table-1 Distributions, Mean and Variance Estimators

Asset  Distribution    Statistic (AD)  Fitness  Mean (μ_i)  Variance (σ_i²)
S1     Normal          0.570           0.00000  0.004501    0.000215
S2     Skew-Student-t  8.280           0.00101  0.002873    0.000421
S3     Logistic        8.199           0.00020  0.001580    0.000135
S4     Logistic        2.708           0.00000  0.002693    0.000135
S5     Logistic        3.597           0.00000  0.009728    0.000118
S6     Skew-Student-t  2.271           0.00110  0.001510    0.000303
LD     Logistic        1.764           0.00000  0.009607    0.000111

The estimated parameter values of the means $\mu_i$ and variances $\sigma_i^2$ will then be used to form the mean vector and covariance matrix for the portfolio optimization modeling discussed in Section 3.3 below.

3.3 Mean-VaR Portfolio Optimization

In this section the Mean-VaR portfolio optimization is conducted using genetic algorithms. The portfolio optimization model whose solution is determined refers to equation (9). First, using the mean column $\mu_i$ of Table-1, the vector of mean stock returns is formed as $\boldsymbol{\mu}^T = (0.004501\;\; 0.002873\;\; 0.001580\;\; 0.002693\;\; 0.009728\;\; 0.001510)$. Second, also using the variance column $\sigma_i^2$ of Table-1, together with the values of the covariances between the stocks, the covariance matrix is formed as follows:

$$ \Sigma = \begin{pmatrix}
0.0002150 & 0.0000053 & 0.0000027 & 0.0000019 & 0.0000042 & 0.0000081 \\
0.0000053 & 0.0004210 & 0.0000044 & 0.0000032 & 0.0000063 & 0.0000023 \\
0.0000027 & 0.0000044 & 0.0001350 & 0.0000009 & 0.0000039 & 0.0000025 \\
0.0000019 & 0.0000032 & 0.0000009 & 0.0001350 & 0.0000016 & 0.0000058 \\
0.0000042 & 0.0000063 & 0.0000039 & 0.0000016 & 0.0001180 & 0.0000073 \\
0.0000081 & 0.0000023 & 0.0000025 & 0.0000058 & 0.0000073 & 0.0003030
\end{pmatrix} $$

Third, because six stocks were analyzed, the unit vector is defined as $\mathbf{e}^T = (1\;\;1\;\;1\;\;1\;\;1\;\;1)$. Fourth, based on the calculation of the covariances between the stock returns and the return of the rupiah exchange rate against the US dollar, the asset-liability covariance vector is formed as $\boldsymbol{\gamma}^T = (0.000542\;\; 0.000371\;\; 0.000447\;\; 0.000626\;\; 0.000812\;\; 0.000724)$.

Furthermore, substituting the estimated values $\mu_L$ and $\sigma_L^2$, as well as the vectors $\boldsymbol{\mu}^T$, $\mathbf{e}^T$, $\boldsymbol{\gamma}^T$ and the matrix $\Sigma$, into equation (9), the composition of the weight vector $\mathbf{x}^T = (x_1, \dots, x_6)$ that maximizes the objective function (9) can be calculated.
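A minimal Matlab sketch of evaluating the portfolio mean and Value-at-Risk for one candidate weight vector, using the estimates above. Two assumptions are made for illustration: z = 1.645 (the 95% standard-normal percentile; the paper does not state its α), and the liability terms of equations (7)-(8) are omitted here, which appears to reproduce the μp, VaRp and μp/VaRp entries of the τ = 12.8 row of Table-2 below to about 10^-4:

% Portfolio mean and VaR for the tau = 12.8 weights of Table-2.
mu = [0.004501 0.002873 0.001580 0.002693 0.009728 0.001510]';
Sigma = 1e-7 * ...
   [2150   53   27   19   42   81;
      53 4210   44   32   63   23;
      27   44 1350    9   39   25;
      19   32    9 1350   16   58;
      42   63   39   16 1180   73;
      81   23   25   58   73 3030];
x = [0.0123 0.0395 0.3524 0.3118 0.0044 0.2796]';  % Table-2, tau = 12.8
z = 1.645;                                         % assumed 95% percentile
mup   = x'*mu;                                     % approx 0.0020
VaRp  = z*sqrt(x'*Sigma*x) - mup;                  % approx 0.0103
ratio = mup/VaRp;                                  % approx 0.197, cf. Table-2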

3.4 Determination of the Optimization Solution Using the Genetic Algorithm

The optimization solution was determined using a genetic algorithm, and the results are given in Table-2 below.


Table-2 Optimization Solution Using a Genetic Algorithm

τ    x1     x2     x3     x4     x5     x6     Σxi    μp     VaRp   Max    μp/VaRp
0.0  0.1368 0.0167 0.1611 0.0767 0.0023 0.6064 1.0000 0.0021 0.0161 0.0327 0.1277
0.2  0.0447 0.6442 0.0174 0.2528 0.0061 0.0348 1.0000 0.0029 0.0196 0.0378 0.1468
0.4  0.1473 0.6414 0.0798 0.0482 0.0757 0.0076 1.0000 0.0035 0.0187 0.0391 0.1877
0.6  0.0529 0.6848 0.0907 0.1085 0.0385 0.0247 1.0000 0.0031 0.0204 0.0436 0.1497
0.8  0.0246 0.7238 0.0546 0.0192 0.1185 0.0592 1.0000 0.0036 0.0212 0.0459 0.1687
1.0  0.0520 0.1091 0.0767 0.0034 0.0119 0.7468 1.0000 0.0019 0.0200 0.0510 0.0963
1.2  0.0805 0.6605 0.1291 0.0251 0.0159 0.0889 1.0000 0.0028 0.0200 0.0517 0.1412
1.4  0.0009 0.6797 0.0523 0.0682 0.0028 0.1961 1.0000 0.0025 0.0212 0.0562 0.1200
1.6  0.0332 0.1358 0.2342 0.0460 0.0246 0.5262 1.0000 0.0021 0.0145 0.0557 0.1422
1.8  0.0873 0.6510 0.1745 0.0027 0.0429 0.0416 1.0000 0.0030 0.0195 0.0587 0.1553
2.0  0.0470 0.3774 0.1607 0.0155 0.0018 0.3976 1.0000 0.0022 0.0154 0.0617 0.1438
2.2  0.1747 0.0166 0.2148 0.0166 0.0132 0.5641 1.0000 0.0022 0.0152 0.0646 0.1444
2.4  0.0473 0.0432 0.2219 0.1089 0.0029 0.5758 1.0000 0.0019 0.0155 0.0694 0.1208
2.6  0.0998 0.1180 0.1838 0.0123 0.0288 0.5573 1.0000 0.0022 0.0150 0.0701 0.1492
2.8  0.1743 0.0197 0.1928 0.0079 0.0158 0.5895 1.0000 0.0022 0.0158 0.0738 0.1400
3.0  0.0719 0.0170 0.1689 0.0237 0.0133 0.7052 1.0000 0.0019 0.0188 0.0809 0.1011
3.2  0.0372 0.2926 0.0997 0.0401 0.0218 0.5086 1.0000 0.0023 0.0157 0.0793 0.1439
3.4  0.0727 0.1598 0.0965 0.0655 0.0281 0.5774 1.0000 0.0023 0.0156 0.0822 0.1452
3.6  0.0285 0.0038 0.2392 0.1792 0.0138 0.5354 1.0000 0.0019 0.0146 0.0868 0.1329
3.8  0.0254 0.0461 0.4194 0.1711 0.0043 0.3336 1.0000 0.0019 0.0113 0.0879 0.1696
4.0  0.0956 0.1082 0.1399 0.1042 0.0138 0.5383 1.0000 0.0022 0.0144 0.0907 0.1522
4.2  0.0163 0.1810 0.1904 0.0008 0.0218 0.5897 1.0000 0.0020 0.0165 0.0968 0.1214
4.4  0.0249 0.1400 0.3152 0.0010 0.0181 0.5008 1.0000 0.0019 0.0145 0.0989 0.1344
4.6  0.0202 0.0842 0.4471 0.1462 0.0024 0.3000 1.0000 0.0019 0.0111 0.1002 0.1725
4.8  0.0013 0.1846 0.1144 0.0430 0.0253 0.6315 1.0000 0.0020 0.0174 0.1063 0.1170
5.0  0.0073 0.1233 0.3229 0.0350 0.0167 0.4948 1.0000 0.0019 0.0143 0.1085 0.1331
5.2  0.0602 0.0607 0.4351 0.0049 0.0067 0.4324 1.0000 0.0019 0.0134 0.1114 0.1387
5.4  0.0142 0.0588 0.3955 0.0312 0.0050 0.4954 1.0000 0.0017 0.0146 0.1167 0.1189
5.6  0.0220 0.1505 0.4103 0.0041 0.0005 0.4126 1.0000 0.0018 0.0134 0.1181 0.1354
5.8  0.0612 0.0555 0.4074 0.0057 0.0011 0.4691 1.0000 0.0018 0.0141 0.1217 0.1289
6.0  0.0447 0.1214 0.2790 0.0567 0.0015 0.4967 1.0000 0.0019 0.0141 0.1237 0.1355
6.2  0.0060 0.1098 0.1913 0.0266 0.0420 0.6244 1.0000 0.0021 0.0167 0.1266 0.1237
6.4  0.0128 0.0106 0.1511 0.1284 0.0174 0.6798 1.0000 0.0019 0.0181 0.1332 0.1032
6.6  0.0547 0.0099 0.4747 0.0447 0.0108 0.4051 1.0000 0.0019 0.0131 0.1329 0.1418
6.8  0.0425 0.0494 0.4145 0.1672 0.0005 0.3259 1.0000 0.0019 0.0111 0.1337 0.1745
7.0  0.1105 0.0344 0.2138 0.0172 0.0072 0.6169 1.0000 0.0020 0.0166 0.1398 0.1197
7.2  0.0369 0.1262 0.2234 0.0050 0.0007 0.6078 1.0000 0.0018 0.0168 0.1453 0.1086
7.4  0.0101 0.3316 0.2454 0.0388 0.0037 0.3704 1.0000 0.0021 0.0142 0.1426 0.1466
7.6  0.0509 0.0967 0.3855 0.0830 0.0034 0.3804 1.0000 0.0019 0.0120 0.1463 0.1627
7.8  0.0209 0.1515 0.2463 0.0375 0.0017 0.5421 1.0000 0.0019 0.0153 0.1531 0.1210
8.0  0.0174 0.0348 0.2136 0.0174 0.0183 0.6984 1.0000 0.0018 0.0188 0.1597 0.0957
8.2  0.0609 0.0118 0.4690 0.0028 0.0242 0.4312 1.0000 0.0019 0.0136 0.1566 0.1433
8.4  0.1027 0.0279 0.2647 0.0888 0.0112 0.5047 1.0000 0.0021 0.0138 0.1576 0.1502
8.6  0.1145 0.0686 0.3222 0.0572 0.0030 0.4346 1.0000 0.0021 0.0126 0.1600 0.1640
8.8  0.0824 0.0156 0.1842 0.1753 0.0067 0.5359 1.0000 0.0021 0.0144 0.1644 0.1426
9.0  0.0029 0.1247 0.4423 0.1312 0.0086 0.2903 1.0000 0.0019 0.0111 0.1673 0.1753
9.2  0.0081 0.2452 0.2670 0.0274 0.0025 0.4498 1.0000 0.0019 0.0144 0.1725 0.1350
9.4  0.0131 0.0438 0.3679 0.1741 0.0186 0.3825 1.0000 0.0020 0.0117 0.1729 0.1698
9.6  0.0131 0.0523 0.3877 0.1364 0.0051 0.4053 1.0000 0.0019 0.0125 0.1792 0.1484
9.8  0.0021 0.0266 0.3247 0.3041 0.0021 0.3402 1.0000 0.0020 0.0112 0.1795 0.1739
10.0 0.0845 0.0141 0.4380 0.1838 0.0007 0.2790 1.0000 0.0020 0.0105 0.1804 0.1945
10.2 0.0085 0.1628 0.2839 0.0859 0.0033 0.4557 1.0000 0.0019 0.0135 0.1880 0.1408
10.4 0.0417 0.0355 0.4411 0.0857 0.0017 0.3942 1.0000 0.0018 0.0126 0.1921 0.1449
10.6 0.0019 0.2844 0.1778 0.0530 0.0145 0.4685 1.0000 0.0021 0.0149 0.1910 0.1404
10.8 0.0101 0.0433 0.2278 0.0490 0.0382 0.6316 1.0000 0.0020 0.0168 0.1978 0.1180
11.0 0.0028 0.3241 0.1380 0.0469 0.0040 0.4842 1.0000 0.0021 0.0160 0.1986 0.1289
11.2 0.0730 0.0471 0.1644 0.0430 0.0082 0.6644 1.0000 0.0019 0.0177 0.2060 0.1088
11.4 0.0352 0.1066 0.1666 0.0934 0.0317 0.5666 1.0000 0.0021 0.0151 0.2021 0.1420
11.6 0.0037 0.0508 0.1673 0.0139 0.0046 0.7597 1.0000 0.0017 0.0205 0.2205 0.0809
11.8 0.0516 0.1163 0.3259 0.0428 0.0177 0.4458 1.0000 0.0020 0.0130 0.2090 0.1573
12.0 0.0129 0.0846 0.7341 0.0064 0.0156 0.1463 1.0000 0.0019 0.0132 0.2168 0.1403
12.2 0.0427 0.1320 0.3293 0.0449 0.0253 0.4258 1.0000 0.0021 0.0126 0.2134 0.1667
12.4 0.0100 0.1801 0.3662 0.1218 0.0022 0.3197 1.0000 0.0020 0.0115 0.2189 0.1717
12.6 0.1187 0.0514 0.3215 0.0831 0.0010 0.4243 1.0000 0.0021 0.0123 0.2201 0.1679
12.8 0.0123 0.0395 0.3524 0.3118 0.0044 0.2796 1.0000 0.0020 0.0103 0.2228 0.1967
13.0 0.1315 0.0508 0.0315 0.0153 0.0234 0.7476 1.0000 0.0022 0.0197 0.2283 0.1110
13.2 0.0077 0.0821 0.3480 0.1509 0.0200 0.3912 1.0000 0.0020 0.0118 0.2303 0.1700
13.4 0.0413 0.1186 0.1899 0.1156 0.0186 0.5159 1.0000 0.0021 0.0140 0.2324 0.1496
13.6 0.0439 0.0598 0.3722 0.1548 0.0010 0.3684 1.0000 0.0019 0.0115 0.2382 0.1680
13.8 0.0049 0.1794 0.3734 0.0005 0.0108 0.4311 1.0000 0.0019 0.0138 0.2442 0.1368
14.0 0.0558 0.1615 0.1377 0.0773 0.0052 0.5625 1.0000 0.0020 0.0155 0.2440 0.1320
14.2 0.0071 0.0455 0.3731 0.0212 0.0353 0.5179 1.0000 0.0019 0.0148 0.2497 0.1310
14.4 0.1146 0.1093 0.3022 0.0018 0.0123 0.4597 1.0000 0.0021 0.0132 0.2461 0.1611
14.6 0.0310 0.0619 0.1349 0.2126 0.0076 0.5520 1.0000 0.0020 0.0149 0.2536 0.1352
14.8 0.0049 0.0361 0.1478 0.2053 0.0115 0.5944 1.0000 0.0019 0.0160 0.2601 0.1202
15.0 0.0163 0.1952 0.2865 0.1512 0.0031 0.3477 1.0000 0.0020 0.0116 0.2564 0.1760
15.2 0.0274 0.0174 0.4892 0.0142 0.0296 0.4222 1.0000 0.0019 0.0136 0.2649 0.1408
15.4 0.0165 0.0347 0.1858 0.0008 0.0454 0.7168 1.0000 0.0020 0.0190 0.2693 0.1050
15.6 0.0009 0.1823 0.0268 0.0241 0.0420 0.7239 1.0000 0.0021 0.0196 0.2683 0.1090
15.8 0.0174 0.0665 0.4197 0.0383 0.0051 0.4530 1.0000 0.0018 0.0138 0.2788 0.1280
16.0 0.0615 0.0324 0.1890 0.0549 0.0073 0.6550 1.0000 0.0019 0.0175 0.2811 0.1073
16.2 0.0168 0.0383 0.4727 0.0231 0.0173 0.4317 1.0000 0.0018 0.0137 0.2835 0.1323
16.4 0.0691 0.0605 0.3054 0.0137 0.0029 0.5485 1.0000 0.0019 0.0153 0.2862 0.1218
16.6 0.0046 0.1248 0.3112 0.1576 0.0125 0.3894 1.0000 0.0020 0.0119 0.2822 0.1688
16.8 0.0240 0.0677 0.4645 0.0163 0.0264 0.4012 1.0000 0.0019 0.0130 0.2880 0.1500
17.0 0.0279 0.0224 0.5071 0.0915 0.0173 0.3338 1.0000 0.0019 0.0120 0.2916 0.1587
17.2 0.0672 0.0360 0.3427 0.0331 0.0019 0.5190 1.0000 0.0018 0.0147 0.2989 0.1250
17.4 0.0475 0.0199 0.4232 0.0303 0.0381 0.4410 1.0000 0.0021 0.0132 0.2933 0.1558
17.6 0.0264 0.0495 0.4280 0.0132 0.0205 0.4624 1.0000 0.0019 0.0140 0.3035 0.1340
17.8 0.0134 0.2300 0.2525 0.0759 0.0038 0.4244 1.0000 0.0020 0.0135 0.3016 0.1487
18.0 0.0180 0.0311 0.3048 0.1272 0.0352 0.4838 1.0000 0.0021 0.0134 0.3022 0.1542
18.2 0.0430 0.2299 0.2554 0.0048 0.0056 0.4614 1.0000 0.0020 0.0143 0.3075 0.1416
18.4 0.0072 0.0662 0.3502 0.1341 0.0208 0.4214 1.0000 0.0020 0.0124 0.3110 0.1588
18.6 0.0418 0.0575 0.2842 0.1474 0.0012 0.4678 1.0000 0.0019 0.0132 0.3167 0.1455
18.8 0.0220 0.1401 0.3664 0.0746 0.0108 0.3862 1.0000 0.0020 0.0122 0.3172 0.1608
19.0 0.0438 0.1669 0.0734 0.0034 0.0118 0.7007 1.0000 0.0020 0.0190 0.3249 0.1037
19.2 0.0061 0.1532 0.4306 0.0386 0.0268 0.3446 1.0000 0.0020 0.0121 0.3207 0.1687
19.4 0.0161 0.0198 0.4002 0.3096 0.0018 0.2525 1.0000 0.0020 0.0103 0.3242 0.1933
19.6 0.0500 0.0314 0.4100 0.0823 0.0080 0.4184 1.0000 0.0019 0.0128 0.3328 0.1481
19.8 0.0244 0.0842 0.1338 0.1775 0.0161 0.5640 1.0000 0.0020 0.0151 0.3312 0.1357
20.0 0.0624 0.0377 0.3820 0.1233 0.0004 0.3942 1.0000 0.0019 0.0121 0.3373 0.1593

Based on the optimization results shown in Table-2, it appears that each value of the risk tolerance results in a different composition of investment allocation weights. Because the weight compositions differ, the resulting portfolio return and Value-at-Risk also differ. In the portfolio optimization process above, the global optimum portfolio is achieved at the risk tolerance value τ = 12.8, with the allocation of investment funds in $S_1$, $S_2$, $S_3$, $S_4$, $S_5$, and $S_6$ being 0.0123, 0.0395, 0.3524, 0.3118, 0.0044, and 0.2796, respectively. The global optimum portfolio produces an expected portfolio return of 0.0020, with a Value-at-Risk of 0.0103. At the global optimum the ratio between portfolio return and Value-at-Risk, 0.1967, is the largest compared to the other ratio values. This can of course be used as a reference for investors in making investment decisions on the analyzed stocks.

4. Conclusion

In this paper we used a genetic algorithm to find the global optimum of Mean-VaR portfolio investment under asset liability. At the global optimum position, the expected portfolio return is 0.0020 and the Value-at-Risk is 0.0103. The global optimum is obtained when the portfolio weights $x_1$, $x_2$, $x_3$, $x_4$, $x_5$ and $x_6$ are 0.0123, 0.0395, 0.3524, 0.3118, 0.0044, and 0.2796, respectively, with a risk tolerance of 12.8.

References

Caglayan, M.O. & Pinter, J.D. (2010). Development and Calibration of Currency Market Strategies by Global Optimization. Working Paper. Faculty of Economics and Administrative Sciences; Faculty of Engineering, Ozyegin University, Kusbakisi Caddesi No. 2, 34662 Altunizade, Istanbul, Turkey. [email protected], [email protected]

Dowd, K. (2002). An Introduction to Market Risk Measurement. John Wiley & Sons, Inc., New Delhi, India.

Elton, E.J. & Gruber, M.J. (1991). Modern Portfolio Theory and Investment Analysis, Fourth Edition. John Wiley & Sons, Inc., New York.

Froot, K.A., Venter, G.G. & Major, J.A. (2007). Capital and Value of Risk Transfer. Working Paper. Harvard Business School, Boston, MA 02163. http://www.people.hbs.edu/kfroot/. (Downloaded in December 2012).

Gerstner, T., Griebel, M. & Holtz, M. (2007). A General Asset-Liability Management Model for the Efficient Simulation of Portfolios of Life Insurance Policies. Working Paper. Institute for Numerical Simulation, University of Bonn. Email: [email protected]. (Downloaded on May 12, 2011).

Jinwen Wu. (2007). The Study of the Optimal Mean-VaR Portfolio Selection. International Journal of Business and Management, Vol. 2, No. 5, 2007. http://www.ccsenet.org/journal/index.php/ijbm/article/view/2051. (Downloaded in December 2012).

Keel, A. & Muller, H.H. (1995). Efficient Portfolios in the Asset Liability Context. Astin Bulletin, Vol. 25, No. 1, 1995. (Downloaded in March 2011).

Khindanova, I.N. & Rachev, S.T. (2005). Value at Risk: Recent Advances. Working Paper. University of California, Santa Barbara, and University of Karlsruhe, Germany. http://www.econ.ucsb.edu/papers/wp3-00.pdf. (Downloaded in November 2008).

Leippold, M., Trojani, F. & Vanini, P. (2002). Multiperiod Asset-Liability Management in a Mean-Variance Framework with Exogenous and Endogenous Liabilities. Working Paper. University of Southern Switzerland, Lugano.

Panjer, H.H., Boyle, D.D., Cox, S.H., Dufresne, D., Gerber, H.U., Mueller, H.H., Pedersen, H.W. & Pliska, S.R. (1998). Financial Economics: With Applications to Investments, Insurance and Pensions. The Actuarial Foundation, Schaumburg, Illinois.

Pang, J.S. & Leyffer, S. (2004). On the Global Minimization of the Value-at-Risk. Working Paper. Department of Mathematical Sciences, Rensselaer Polytechnic Institute, Troy, New York 12180-3590, USA. Email: [email protected]

Sukono, Subanar & Rosadi, D. (2009). Mean-VaR Portfolio Optimization under CAPM with Non-Constant Volatility in Market Return. Working Paper. Presented at the 5th International Conference on Mathematics, Statistics and Their Applications, Andalas University, Padang, West Sumatera, Indonesia, June 9-11, 2009.

Weise, T. (2009). Global Optimization Algorithms: Theory and Application, Second Edition. Version 2009-06-26. Newest version: http://www.it-weise.de/


Controlling Robotic Arm Using a Face

Asep SHOLAHUDDIN a, Setiawan HADI b
a,b Department of Informatics Engineering, Universitas Padjadjaran, Indonesia
a [email protected], b [email protected]

Abstract: In this paper, a method for controlling a robotic arm by using the face has been investigated and applied successfully. Based on face detection and tracking with the KLT algorithm, the method has been applied using 5 servo motors with three degrees of freedom on the Arduino microcontroller platform. The computer software consists of two parts: the first is a program for moving the robotic arm, which is stored in the microcontroller; the second is a program for detecting the facial position, which is stored on the computer. The face detection application works on images streamed from a webcam. The result of this detection is the face position, which becomes the input that selects a servo number. The servo number and direction of movement are then transferred to the microcontroller via serial communication to trigger the movement of the robotic arm. One real application of this system is to move an object from one place to another. The method has been implemented on a robotic arm using a camera whose images are processed on a computer, and a microcontroller.

Keywords: Robotic arm, Face detection, Microcontroller

1. Introduction

Similar to informatics, robotics has developed tremendously in various aspects of human life. The combination of the two fields of study, informatics and robotics, is very useful for improving human beings' quality of life, especially for those who have limited abilities (the disabled). A person whose ability is limited to moving parts of his head, for instance, is likely to interact using his head movement, particularly the movement of his face.

With this rapid development, robots have, consciously or unconsciously for us, become present in human life in a great variety of forms. There are even simple robot designs which can carry out simple or repeated activities. Generally, the public defines a robot as "a living creature" in the shape of a human or animal, made from metal and powered by electricity. In a broader sense, a robot is a device which, in a limited capacity, is able to work automatically according to the instructions given by its designer. In this sense, there is a close relationship between robots and automation, so that it can be understood that almost any modern activity depends more and more on robots and automation.

In the case of limited movement of body parts, where only part of the head can be moved, a robotic arm can be a solution. The further problem is then: how can this robotic arm be controlled by the head or the face?

2. LITERATURE REVIEW

2.1 Face Detection

Face detection can be carried out using the vision toolbox available in Matlab (Viola and Jones, 2001; Lucas and Kanade, 1981; Tomasi and Kanade, 1991; Shi and Tomasi, 1994; Kalal, Mikolajczyk and Matas, 2010). The function used detects a face and stores the result in the variable faceDetector:

faceDetector = vision.CascadeObjectDetector();

This function detects the face location in a frame. To get the face box, which will be stored in "bbox", the following function can be used:


bbox = step(faceDetector, videoFrame);

Meanwhile, the conversion from box to polygon (BP) can use the equation

x = bbox(1,1); y = bbox(1,2); w = bbox(1,3); h = bbox(1,4);
BP = [x, y, x+w, y, x+w, y+h, x, y+h]   (2.1)

so that

x1 = BP(1,1); y1 = BP(1,2); x2 = BP(1,3); y2 = BP(1,4);
x3 = BP(1,5); y3 = BP(1,6); x4 = BP(1,7); y4 = BP(1,8).   (2.2)

As a result, the face position described by the polygon array takes the form shown in Figure 2.1.

Figure 2.1 Position of the box points on the detected face

2.2 Robotic Arm

The robotic arm hardware available in the market is the Lynxmotion arm (http://www.lynxmotion.com), shown in Figure 2.2.

Figure 2.2 Robotic Arm

The robotic arm is moved by servo motors which can be controlled using a microcontroller.

3. RESEARCH METHODS

This research employed literature study and experimentation. It was conducted by creating a robotic arm which can be moved by the face, to aid people with limited function of their body parts. The scheme can be seen in Figure 3.1.



Figure 3.1 Scheme of Face Detection and Robotic Arm

Meanwhile, the research procedure can be seen in Figure 3.2.

Figure 3.2 Research Procedure (developing the face-detection software; developing the robotic-arm hardware; combining software and hardware; controlling the device)

Developing Software of Face Detection

The software to detect the face used the KLT algorithm method available in Matlab 2013. According to equation (2.2), the central point of the face $(x_c, y_c)$ can be obtained through the formula

$$ x_c = x_1 + \frac{x_2 - x_1}{2}, \qquad (3.1) $$

and for the central point $y_c$,

$$ y_c = y_2 + \frac{y_3 - y_2}{2}. \qquad (3.2) $$



The value of the central point position $(x_c, y_c)$ determines the position of the face, where $x_c$ is the x position of the centre and $y_c$ the y position of the centre.
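A minimal Matlab sketch of these two formulas applied to a detected box (the bbox values are illustrative):

% Face centre from a detected box, following equations (2.1)-(3.2).
bbox = [120 80 60 60];                  % [x y w h], illustrative values
x1 = bbox(1);          y2 = bbox(2);    % top-left corner; note y2 = y1
x2 = bbox(1)+bbox(3);  y3 = bbox(2)+bbox(4);
xc = x1 + (x2 - x1)/2;                  % equation (3.1)
yc = y2 + (y3 - y2)/2;                  % equation (3.2)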

Criteria for the model and servo positions were determined as follows. There were two models: model = 1 to select a particular servo motor, and model = 2 to move the selected servo motor to the left or to the right. The rule is

$$ Model(x_c, y_c) = \begin{cases} 1, & 40 < x_c < 50,\; 25 < y_c < 35 \\ 2, & 90 < x_c < 100,\; 25 < y_c < 35 \end{cases} \qquad (3.3) $$

Developing Hardware of Robotic Arm

The robotic arm that was developed used 5 units of servo motors with the specification shown in Table 3.1.

Table 3.1 Robotic Arm Specification

NO  NAME                 SPECIFICATION                    QUANTITY
1   Servo GWS S04        torque 10 kg                     2
2   Servo Hitec HS-5685  11.3 kg (6 V); 12 kg (7.4 V)     3
3   Gripper              -                                1
4   Microcontroller      Arduino UNO                      1
5   Robotic frame        Aluminium                        1

The formula for the five servo motors is

$$ Servo(x_c, y_c) = \begin{cases} 1, & 35 < x_c \le 45,\; 45 < y_c < 50 \\ 2, & 45 < x_c \le 55,\; 45 < y_c < 50 \\ 3, & 55 < x_c \le 65,\; 45 < y_c < 50 \\ 4, & 65 < x_c \le 75,\; 45 < y_c < 50 \\ 5, & 75 < x_c \le 85,\; 45 < y_c < 50 \end{cases} \qquad (3.4) $$
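A minimal Matlab sketch of the two selection rules, as one possible implementation of equations (3.3)-(3.4) (not the authors' code):

function [model, servo] = selectFromFace(xc, yc)
  % Map the face centre (xc, yc) to a model and to a servo number.
  model = 0; servo = 0;                        % 0 = no selection
  if yc > 25 && yc < 35                        % model row, equation (3.3)
    if xc > 40 && xc < 50,  model = 1; end     % model 1: choose a servo
    if xc > 90 && xc < 100, model = 2; end     % model 2: turn the servo
  elseif yc > 45 && yc < 50                    % servo row, equation (3.4)
    if xc > 35 && xc <= 85
      servo = ceil((xc - 35)/10);              % bins of width 10 -> servo 1..5
    end
  end
end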

Combining the Device and Controlling

The devices were combined by connecting a webcam and a computer to the robotic arm. The design of the system for controlling the robotic arm using the face can be seen in Figure 3.3.


Figure 3.3 Design of Robotic Arm Controlling (face detection; determining the central point of the face; determining the position of the servo motor; serial data input; servo motor movement)

First of all, face detection was carried out through the webcam and processed in the computer, resulting in a face box. From the face box the face's central point $(x_c, y_c)$ was computed, to determine the model selection ("1" or "2") and to select the servo motor number (1 to 5). Secondly, after obtaining the model and servo number, a message was sent to the robotic arm through a serial cable. Thirdly, the Arduino microcontroller responds to it and moves the robotic arm according to the received servo number and model.
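A minimal sketch of the serial hand-off (assumptions: the board enumerates as COM3 and the Arduino program expects two bytes, [servo number, direction]; both are illustrative, not stated in the paper):

% Send a servo number and direction to the Arduino over a serial cable.
s = serial('COM3', 'BaudRate', 9600);   % Matlab 2013-era serial interface
fopen(s);
fwrite(s, uint8([4 1]));                % e.g. servo 4, direction 1
fclose(s); delete(s);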

4. RESULT

The device developed in this research detects the face's central position to determine a particular servo and model position. The result of the captured image can be seen in Figure 4.1.



Figure 4.1 (a) Face detection in model "1"; (b) face detection in model "2"

Meanwhile, the robotic arm is shown in Figure 4.2.

Figure 4.2 Robotic Arm with 5 servo motors

Figure 4.1(a) shows the movement of the face's central point, indicated in red, in model "1"; this is the process of selecting the servo motor number. Figure 4.1(b) shows the model being used (in the picture, model = 2). Model "2" is used to turn the selected servo motor (for example, servo motor "4"); the servo is turned clockwise or counter-clockwise.

5. CONCLUSION

This research developed software and hardware to aid people with limited use of body parts, for whom only part of the head can be moved. The device developed is able to move a robotic arm by means of the face. The robotic arm's movement is controlled through the face's movement, which selects a servo motor and moves it to the left or right as well as up or down. This research used 5 servo motors controlled through an Arduino microcontroller. The robotic arm can move an object from one place to another within its range.


References

Bruce D. Lucas and Takeo Kanade. (1981). An Iterative Image Registration Technique with an Application to Stereo Vision. International Joint Conference on Artificial Intelligence.

Carlo Tomasi and Takeo Kanade. (1991). Detection and Tracking of Point Features. Carnegie Mellon University Technical Report CMU-CS-91-132.

Jianbo Shi and Carlo Tomasi. (1994). Good Features to Track. IEEE Conference on Computer Vision and Pattern Recognition.

Viola, Paul A. and Jones, Michael J. (2001). Rapid Object Detection using a Boosted Cascade of Simple Features. IEEE CVPR.

Zdenek Kalal, Krystian Mikolajczyk and Jiri Matas. (2010). Forward-Backward Error: Automatic Detection of Tracking Failures. International Conference on Pattern Recognition.

http://www.lynxmotion.com


Twofish Algorithm Implementation in a Payment Gateway as a Device for Securing Information in Web-Based E-Commerce

Akik HIDAYAT, Drs., M.Kom. a, Detik Pristiana WARJAYA b
a Lecturer at the Mathematics Department, Padjadjaran University, Indonesia
b Student at the Mathematics Department, Padjadjaran University, Indonesia

[email protected], [email protected]

Abstract: With so many e-commerce-based websites being developed, online transactions have become one of the options for buying goods or services. In this condition, the development of payment gateway infrastructure is needed to make this process easier and safer. This process exchanges buyer and seller information, so securing it is really needed. This research uses the Twofish algorithm, one of the cryptographic algorithms, implemented in a payment gateway as a tool to secure seller and buyer information.

Keywords: Cryptography, Twofish, RSA, SHA-1, payment gateway, e-commerce

1. Introduction

In the last few years, the security of the payment system used in purchase processing in e-commerce has become a main concern. At present, many payment gateway corporations have appeared to give service to many e-commerce-based websites. To secure the purchasing data from various attacks and to keep the data's integrity, the data must be encrypted before being transmitted or stored. Cryptography is the science of protecting clear, meaningful information using mathematical algorithms [1]. Cryptographic methods will be used to secure the buyer's and the seller's data. This process not only encrypts the purchase data, but also provides authorization between buyer and seller. In this research, three cryptographic methods are used in the security and authentication process: Twofish, RSA, and SHA-1.

2. Payment gateway

A payment gateway is defined as any type of payment method that uses Information and Communications Technology (ICT), including cryptography and telecommunication networks [2]. Seven participants take part in the payment gateway process [3]:

1. The cardholder/client is the person who uses a credit card or bank account to purchase a product from the merchant.
2. The issuer is the bank that provides the account for the client.
3. The merchant/seller is the person who provides the product.
4. The acquirer is the bank that provides the merchant account.
5. The payment gateway is a facility operated by the acquirer to process purchase messages.
6. The brand holder is the company that issues and develops the credit card.
7. A third party: sometimes the acquirer needs a third party to run the payment gateway.

In this research there are just three participants: the cardholder, the merchant, and the payment gateway.


3. Protocol

Communication protocols are needed in order to build secure payment processing. Several kinds of communication protocol are important to implement:

1. Hypertext Transfer Protocol Secure (HTTPS) is a communication protocol for secure communication over a computer network, with especially wide deployment on the Internet [4].
2. Secure Electronic Transaction (SET) is a standard communication protocol for securing credit card transactions over insecure networks, specifically the Internet [3]. In this research DES is substituted by Twofish as the symmetric cryptography method.
3. TLS/SSL are cryptographic protocols that are designed to provide communication security over the Internet [5].

4. Cryptography

Cryptography is the science of protecting clear, meaningful information using mathematical algorithms [1]. In general, cryptographic methods are divided into two forms, symmetric-key and asymmetric-key (public key cryptosystems):

1. Symmetric-key cryptography refers to encryption methods in which both the sender and receiver share the same key. This research uses the Twofish algorithm as the symmetric-key method [6].
2. A public key cryptosystem is a cryptographic technique that enables users to communicate securely on an insecure public network, and to reliably verify the identity of a user via digital signatures [6]. This research uses the RSA algorithm as the PKC.

5. Twofish

Twofish uses a 16-round Feistel-like structure with additional whitening of the input and output [1]; see Figure 1.

Figure 1. Twofish Algorithm


5.1 Whitening

Whitening is the technique of XORing key material before the first round and after the last round [7].
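Since whitening is just an XOR with key material, a one-line Matlab illustration (the 32-bit values are arbitrary):

word = uint32(hex2dec('01234567'));   % one input word
key  = uint32(hex2dec('89abcdef'));   % whitening key material
w1 = bitxor(word, key);               % pre-whitening before round 1
w2 = bitxor(w1, key);                 % undone by the same XOR: w2 == word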

5.2 The Function-F

The function F is a key-dependent permutation on 64-bit values. It takes three arguments: two input words R0 and R1, and the round number r used to select the appropriate subkeys [7].

5.3 The Function g

The function g forms the heart of Twofish. The input word X is split into four bytes, and each byte is run through its own key-dependent S-box [7].

5.4 The Key Schedule

The key schedule has to provide 40 words of expanded key K0, …, K39, and the 4 key-dependent S-boxes used in the g function [7].

6. RSA

RSA stands for Ron Rivest, Adi Shamir and Leonard Adleman; it is one of the PKC methods. The RSA algorithm is as follows:

1. Compute N as the product of two prime numbers p and q (N = p·q).
2. Compute r = (p − 1)·(q − 1).
3. Choose an integer e such that 1 < e < r, with e and r coprime.
4. Find d such that e·d ≡ 1 (mod r); (e, N) is the public key and d is the private key.

For encryption, c ≡ m^e (mod N), where c is the ciphertext and m is the plaintext; for decryption, m ≡ c^d (mod N). In this research RSA is used to secure the key used by the symmetric-key method and in the signature scheme.
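A toy run of these steps in Matlab with the classic small-prime textbook values (illustration only; real keys use large primes; the powmod helper is a small square-and-multiply routine defined in the same file):

function rsaToy()
  p = 61; q = 53; N = p*q;           % N = 3233
  r = (p-1)*(q-1);                   % r = 3120
  e = 17;                            % public exponent, coprime with r
  d = 2753;                          % private exponent: mod(e*d, r) == 1
  m = 65;                            % plaintext block, m < N
  c  = powmod(m, e, N);              % encryption: c = 2790
  m2 = powmod(c, d, N);              % decryption: m2 == 65 == m
  fprintf('c = %d, recovered m = %d\n', c, m2);
end

function y = powmod(b, x, n)         % square-and-multiply, small inputs only
  y = 1; b = mod(b, n);
  while x > 0
    if mod(x, 2) == 1, y = mod(y*b, n); end
    b = mod(b*b, n); x = floor(x/2);
  end
end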

7. SHA-1

SHA-1 is one of the methods to "hash" a message (plaintext); the result is called a "message digest". This method is not like the symmetric-key and asymmetric-key methods, which are reversible: a hash function is irreversible, so the message digest cannot be turned back into the message (plaintext). In this research SHA-1 is used in the signature scheme.

8. Concept and implementation

In this research the participants in the payment gateway are just the client (buyer), the merchant, and the payment gateway. In practice the client, merchant, and payment gateway must have certificates from a certificate authority (CA), but in this research all participants are considered to have certificates, and all processes related to the CA are considered to have been completed.


Figure 2. Scheme 1a

Figure 2 shows the scheme of the cryptographic method from client to merchant. A message (the purchase form) is hashed with SHA-1; the result is called the message digest. The message digest is signed with the client's private key, producing the digital signature. The message and the digital signature are then encrypted with the Twofish algorithm; the result is called the encrypted message. The key used by the Twofish algorithm is encrypted with the merchant's public key to get the digital envelope. So the results of this scheme are the encrypted message and the digital envelope.

In Figure 3, the digital envelope is decrypted with the merchant's private key; the result is the key of the Twofish algorithm. The key is used to decrypt the encrypted message, giving the message and the client's digital signature. The message is hashed with SHA-1, yielding "message digest A". The client's digital signature is verified with the client's public key, yielding "message digest B". "Message digest A" is compared with "message digest B": if they are the same, the authorization process succeeds, but if not, the authorization process fails.

If the authorization process between client and merchant succeeds, the next step is the authorization process between merchant and payment gateway. The process is similar to the process between client and merchant, but in the digital signature process the merchant signs with the merchant's private key. For verification, the payment gateway verifies with the merchant's public key and then with the buyer's public key, and the result is "message digest B". The next step is to compare "message digest A" and "message digest B": if they are the same the authorization is completed, but if not, the authorization fails.


Figure 3. Scheme 1b

9. Conclusion

Cryptography’s method must be impemented to secure information,then to secure the information must

use more then one cryptography’s method.

References

[1] Irfan L., Tasneem B. and Pooja N. (2012). Encryption and Decryption of Data Using Twofish Algorithm.
[2] J. Raja and M.S. Velmurgan, "E-Payments: Problems and Prospects," Journal of Internet Banking and Commerce, vol. 13, no. 1, April 2008.
[3] Secure Electronic Transaction, Journal (2000), Volume 6.
[4] HTTPS Everywhere FAQ, May 2012.
[5] Eric Rescorla (2001). SSL and TLS: Designing and Building Secure Systems. United States: Addison-Wesley Pub Co. ISBN 0-201-61598-3.
[6] Kartik Krishnan (2004). Computer Networks and Computer Security.
[7] Bruce Schneier (1998). Twofish: A 128-bit Block Cipher.


Image Guided Biopsies For Prostate Cancer

Bambang Krismono TRIWIJOYO

STMIK Bumigora Mataram

Abstract: Systematic transrectal ultrasound-guided biopsy is a promising technique for the detection of prostate cancer, but it fails to detect a variety of tumors, while magnetic resonance (MR)-guided biopsy techniques have been widely used without an optimal technique being established. This paper proposes to develop a multi-modal image-guided biopsy technique for detecting prostate cancer accurately: to develop a prostate multimodal image registration method, based on information theory and automated statistical shape analysis, to find the MR axial slice that closely matches the TRUS slice; to design and evaluate efficient and accurate segmentation of the prostate in 2D TRUS image sequences, to facilitate multi-modal image fusion between TRUS and MRI and so improve the sampling of malignant tissue for biopsy; and to design and evaluate techniques for determining, and correcting for, movements of the prostate tissue during the biopsy procedure by incorporating biomechanical modeling.

Keywords: Ultrasound-guided biopsy, magnetic resonance, prostate, cancer

1. Introduction

Prostate cancer is the type of cancer most often diagnosed in men [1]. Its incidence increased dramatically after the introduction of the prostate-specific antigen (PSA) test [2],[3]. However, urologists face the dilemma of patients with elevated and/or increasing PSA levels and negative biopsy results. Because the serum PSA level, used for the early diagnosis of prostate cancer, is a very sensitive but not specific test, other tests are needed to diagnose prostate cancer. Transrectal ultrasound (TRUS) was introduced in 1968 as a means of diagnostic imaging of prostate cancer [4]. The sensitivity of this technique for the detection of prostate cancer is low (20-30% [5]), because more than 40% of prostate tumors are isoechoic and only the peripheral zone can be accurately assessed [6],[7].

Doppler TRUS and the application of contrast agents increase the detection rate of prostate cancer to 74-98% [8]-[12]. More than 1.2 million prostate needle biopsies are performed each year in the United States [13]. TRUS-guided systematic biopsy (TRUSBx) is the gold standard for detecting prostate cancer. The systematic approach is characterized by a low sensitivity (39-52%) and a high specificity (81-82%) [14]. In cases of doubt, an additional biopsy session needs to be done. In some cases, the systematic protocol is extended with additional targeted biopsies of hypoechoic areas detected by TRUS, which increases the detection rate slightly [4].

The role of magnetic resonance imaging (MRI) in the detection of prostate cancer is increasing but highly debated [15]. Anatomical T2-weighted MRI has been disappointing in detecting and localizing prostate cancer. Estimates of the sensitivity of MRI for the detection of prostate cancer using T2-weighted sequences and an endorectal coil vary from 60% to 96% [16]. Several groups have convincingly shown that dynamic contrast enhancement and spectroscopy improve detection, and that the sensitivity of MRI is comparable to, and can exceed, that of transrectal biopsy [16]. Various MRI techniques, such as proton magnetic resonance (MR) spectroscopy and dynamic contrast-enhanced MRI, have been applied for more accurate detection, localization, and staging of prostate cancer [17].

A recent study reported an area under the receiver operating curve of 0.67 to 0.69 for prostate cancer localization with regular anatomical 1.5 T MRI [17]. Localization accuracy increased to 0.80 and 0.91 using MRI spectroscopy and by applying a contrast agent, respectively [17]. Diffusion-weighted MRI is increasingly being used, which can lead to increased detection rates [18]-[20]. A prostate segmentation approach, based on multiple mean parametric models derived from principal component analysis of shape and posterior probability in a multi-resolution framework, reached an average Dice similarity coefficient of 0.91 ± 0.09 for 126 images, comprising 40 apex images, 40 base images and 46 central-region images, in a leave-one-patient-out validation framework. The average time of the segmentation procedure was 0.67 ± 0.02 s [51].


A new approach using multiple statistical shape models and the posterior probability information of the prostate region, aimed at 2D prostate segmentation in TRUS images, has been proposed. The approach is accurate and robust against significant variations in shape, size and image contrast when compared to the traditional TRUS AAM. The posterior probabilities of intensity assist in automatic initialization and give a significant improvement in segmentation accuracy. Multiple mean models, compared to a single AAM model, further improve the segmentation accuracy, and the model has shown significant improvement in segmentation accuracy for base and apex slices compared to AAM. The execution time of the process is 0.67 s with MATLAB code; we expect to speed up the execution time with optimized C++ code.

A new method for non-rigid registration of transrectal ultrasound and magnetic resonance images of the prostate is based on a regularized framework of non-linear point correspondences obtained from a statistical measure of shape context. The registration accuracy of this method was evaluated on 20 pairs of mid-gland prostate ultrasound and magnetic resonance images. The results indicate an average Dice similarity coefficient of 0.980 ± 0.004, an average 95% Hausdorff distance of 1.63 ± 0.48 mm, and average target registration and target localization errors of 1.60 ± 1.17 mm and 0.15 ± 0.12 mm, respectively [52].
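For reference, the Dice similarity coefficient used above compares two binary segmentation masks; a minimal Matlab sketch with toy 4x4 masks (illustrative values only):

% Dice similarity coefficient: 2*|A intersect B| / (|A| + |B|).
A = logical([0 1 1 0; 0 1 1 0; 0 1 1 0; 0 0 0 0]);   % toy mask 1
B = logical([0 1 1 0; 0 1 1 1; 0 0 1 0; 0 0 0 0]);   % toy mask 2
dice = 2*nnz(A & B) / (nnz(A) + nnz(B));             % = 0.833 here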

A novel non-linear diffeomorphic framework with a thin-plate spline (TPS) as the underlying transformation has been proposed to register multimodal prostate images. A method for establishing point correspondences between a pair of TRUS and MR images has also been proposed, computing the Bhattacharyya distance between shape-context representations of the contour points [52]. The bijectivity of the diffeomorphism is maintained by integrating a set of non-linear functions over both the fixed and the moving images. Regularization of the bending energy and of the localization error of the point correspondences established between the fixed and moving images is added as a further constraint to the system of non-linear TPS equations. This additional constraint ensures a regularized deformation of local anatomical structures within the prostate, which is meaningful for clinical interventions such as prostate biopsy.
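To make the role of the TPS concrete, the sketch below warps a set of grid points with a smoothed thin-plate spline fitted to a handful of hypothetical landmark correspondences. It uses SciPy's RBFInterpolator and is only a toy stand-in for the regularized diffeomorphic framework of [52]: the landmarks are random, and the smoothing parameter merely plays the role of the bending-energy penalty discussed above.

    import numpy as np
    from scipy.interpolate import RBFInterpolator

    # hypothetical corresponding landmarks on the fixed (TRUS) and moving (MR) contours
    fixed_pts = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.5, 0.5]], dtype=float)
    moving_pts = fixed_pts + 0.05 * np.random.default_rng(0).standard_normal(fixed_pts.shape)

    # thin-plate-spline map from fixed to moving coordinates; `smoothing`
    # stands in for the bending-energy regularization discussed above
    tps = RBFInterpolator(fixed_pts, moving_pts,
                          kernel='thin_plate_spline', smoothing=1e-3)

    # warp a regular grid of fixed-image points into the moving-image frame
    grid = np.stack(np.meshgrid(np.linspace(0, 1, 4),
                                np.linspace(0, 1, 4)), axis=-1).reshape(-1, 2)
    warped = tps(grid)
    print(warped.shape)  # (16, 2) warped point coordinates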

The performance of the proposed method has been compared with two variants of the non-linear TPS transformation: one in which the control points are placed uniformly on a grid, and one in which the control points are established using the proposed point-correspondence method but without regularization, relying only on the non-linear transformation function. Results obtained on real patient datasets show that the overall performance of the proposed method, in terms of both global and local registration accuracy, is better than the two variants as well as traditional TPS and B-spline-based deformable registration, and it can therefore be applied to prostate biopsy procedures. The proposed method has also been validated against a varying number of control points, which showed that control points inside the prostate are required to keep the deformation clinically meaningful, and that 8 boundary points capture the inflexions of the prostate contour better than more or fewer control points. The method has been shown to be robust to inaccuracies of the automatic segmentation, thanks to the resilience of the automatic segmentation method used. Validation of the registration on base and non-mid-gland slices demonstrated high global and local registration accuracy, illustrating the robustness of the method.

The proposed regularized non-linear TPS framework can be applied to 3D prostate volume registration; slice-by-slice point correspondences can then be established after resampling the prostate volume. The TRUS-MR slice correspondence, selected manually in our experiments, could also be selected automatically using an EM tracker attached to the TRUS probe, which would provide the spatial position of the TRUS slice within a pre-acquired TRUS/MR prostate volume for needle biopsy. An automated method based on information theory and statistical shape analysis to find the MR slice that most closely matches an axial TRUS slice is currently being investigated. The algorithm can be parallelized if programmed on a GPU and may therefore be useful for real-time multimodal image fusion of the prostate during biopsy.

MR-guided biopsy is among the more advanced techniques, but there is currently no consensus on the optimal approach [21]-[23]. Both open and closed MRI settings are used, and several types of biopsy robots [24], some with complex software, are used to guide the needle.

The target area is determined using a combination of different MRI techniques. Some physicians use a transrectal approach, while others prefer a transperineal methodology. Prostate motion during the biopsy procedure is one of the biggest challenges in prostate biopsy [25]. Several solutions to this problem have been proposed, ranging from fixation of the gland with a needle to real-time imaging [26]. Several MR-guided prostate biopsy approaches have been investigated, but so far there is no consensus


on this technique. The purpose of this research is to integrate real-time TRUS and MRI fusion-guided biopsy, and to develop methods that use high-contrast MR data, which is sensitive for detecting tumors, together with the real-time character of TRUS to follow prostate motion during the biopsy.

2. State of the Art

There is a great need for accurate imaging techniques for prostate cancer. Easy access to the prostate gland allows high-resolution imaging of the gland. Among the radiographic technologies that have evolved, magnetic resonance imaging (MRI) is considered the most suitable for this application, since the method offers superior spatial resolution. With further improvements, MRI is expected to be able to give the shape and position of the tumor and will play a central role in guiding the treatment of prostate cancer. Image-guided procedures are also required, although placing instrumentation in the bore of a magnetic resonance (MR) scanner is full of technical challenges.

Pondman and colleagues [45] provide an excellent overview of the engineering and clinical efforts published to date involving MRI-guided prostate biopsy instrumentation and techniques. Several groups have focused on this field and produced a variety of prototype devices capable of safely and accurately placing a biopsy needle in prostate gland tissue to allow sampling of regions identified on MR images. These preliminary results need to be supported by further work on many design elements of these devices, as well as by validation studies of MRI in the characterization of prostate pathology, before the true benefits of these efforts can be realized.

Ghose and colleagues [53] also provide an excellent overview of the methods developed for segmentation of the prostate gland in TRUS, MR, and CT images, the three major imaging modalities that aid in the diagnosis and treatment of prostate cancer. The purpose of their paper is to study the similarities and differences between the main methods, highlighting their strengths and weaknesses to help in choosing the right segmentation methodology. They define a new taxonomy for prostate segmentation strategies that allows the algorithms to be grouped, and then present the main advantages and disadvantages of each strategy. A comprehensive overview of the existing methods in all of the TRUS, MR, and CT modalities is given, highlighting their key points and features. Finally, a discussion on selecting the most appropriate segmentation strategy for a given imaging modality is provided.

The development of MR-compatible instrumentation for image-guided procedures provides a springboard for a variety of diagnostic and therapeutic procedures for the gland in benign and malignant disease. In addition to biopsies, needle-based procedures such as tissue-based focal ablation or therapeutic injection, with monitoring of the treatment effects, may now be equipped with sophisticated imaging techniques such as MR thermography. Similarly, other endocavity pelvic procedures, including gynecological and colorectal procedures, may be adapted to use such devices. The addition of remote actuation, or ''robotic'' automation, further extends the potential of this approach to other forms of procedures and other organ systems.

A large number of studies [46],[47] have recently shown that magnetic resonance (MR), with the addition of proton 1H-spectroscopic analysis and dynamic contrast-enhanced imaging (DCE-MR), can represent a powerful tool for managing many aspects of prostate cancer, including early diagnosis, localization of the cancer, road maps for surgery and radiotherapy, and early detection of local recurrence. Ultrasound-guided biopsy is considered the preferred method for the detection of prostate cancer; however, most studies have reported that sextant biopsies miss up to 30% of cancers, and biopsy results showed an 83% positive predictive value and a 36% negative predictive value when compared with radical prostatectomy for tumor localization [48]. Although MR and MR spectroscopic imaging (MRSI) are not currently used as a first approach to diagnose prostate cancer, they can be useful for directing targeted biopsies, particularly in cases with prostate-specific antigen (PSA) levels indicative of cancer and previous negative biopsies.

In their article, Pondman et al [45] summarize the technical and current clinical applications of MR-guided prostate biopsy. In several reported experiences [49],[50], MR-guided biopsy techniques are widely available, but there is no current consensus on the optimal technique. In addition, there are relevant open issues; in particular, prostate motion during the biopsy is one of the biggest challenges in sampling the gland. Robotic assistance for MR-guided prostate interventions can improve outcomes, but cost may be the more relevant issue. For a long time, a valid diagnostic imaging modality for prostate cancer has not been


available. MRSI can reduce the rate of false-negative biopsies and reduce the need for more extensive or repeat biopsy procedures. MR-guided prostate biopsy will also have an increasing role in this field. Extensive clinical studies are essential to analyze the real value and benefits of MR guidance for biopsy.

A. Supervised and Unsupervised Classification Algorithms

In pattern recognition, a feature can be defined as a measurable quantity that can be used to distinguish two or more regions [59]. More than one feature can be used to distinguish the regions, and the set of features is known as a feature vector; the vector space associated with the feature vectors is known as the feature space. Supervised and unsupervised pattern-recognition (PR) based techniques aim at partitioning the feature space into a set of labels for the different regions, and classifier-based and/or clustering-based techniques are mainly used for this purpose. Classifiers use a training set of objects labeled with a priori information to build a predictor that assigns labels to unlabeled future observations. In contrast, clustering methods are given a set of feature vectors, and the goal is to identify groups of similar objects on the basis of the feature vector associated with each; proximity measures are used to group the data into clusters of similar type.
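The distinction can be illustrated in a few lines. In the sketch below, assumed feature vectors (for example, two texture measures per pixel) are first classified with a labeled training set (supervised) and then grouped without any labels (unsupervised); the data are synthetic and the scikit-learn calls are just one convenient realization.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    # hypothetical 2D feature vectors (e.g. two texture measures per pixel)
    prostate = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(100, 2))
    background = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(100, 2))
    X = np.vstack([prostate, background])
    y = np.array([1] * 100 + [0] * 100)

    # supervised: a classifier trained on labeled feature vectors
    clf = SVC(kernel='rbf').fit(X, y)
    print(clf.predict([[1.8, 2.1]]))   # predicted label for a new feature vector

    # unsupervised: clustering the same feature space without labels
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    print(np.bincount(labels))         # sizes of the two discovered groups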

1. Classifier-based Segmentation

In classifier-based methods, prostate segmentation is viewed as a prediction or learning problem. Each object in the training set is associated with a response variable (class label) and a feature vector. The training set is used to build a predictor that can assign a class label to an object on the basis of its observed feature vector.

a. TRUS.

Intensity heterogeneity, unreliable texture features, and imaging artifacts pose a challenge to partitioning the feature space. Zaim [60] used texture features, spatial values, and gray-level information in a self-organizing-map neural network to segment the prostate. In more recent work [61], the same author used energy, entropy, and symmetric, orthonormal, second-order wavelet coefficients [62] of overlapping windows in a support vector machine (SVM) classifier. Mohamed et al. [63] used spatial- and frequency-domain information from Gabor filters and multi-resolution prior knowledge of the location of the prostate in TRUS images to identify the prostate. Parametric and non-parametric estimates of the Fourier-transform power spectral density, along with ring and wedge filters [64], of the region of interest (ROI) were used as feature vectors to classify TRUS images into prostate and non-prostate areas using a non-linear SVM.
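As a rough illustration of this family of methods, the sketch below builds per-pixel feature vectors from a small Gabor filter bank and trains an SVM on a labeled subset of pixels. It is not any of the cited pipelines: the image is random noise and the training labels are synthetic placeholders standing in for labels derived from manual contours.

    import numpy as np
    from skimage.filters import gabor
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    image = rng.random((64, 64))  # placeholder for a TRUS image

    # per-pixel feature vector: real responses of a small Gabor filter bank
    feats = []
    for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
        real, _ = gabor(image, frequency=0.2, theta=theta)
        feats.append(real.ravel())
    X = np.stack(feats, axis=1)                 # (n_pixels, 4) feature matrix

    # hypothetical labels for a training subset (in practice: manual contours)
    idx = rng.choice(X.shape[0], size=200, replace=False)
    y = (X[idx, 0] > 0).astype(int)             # stand-in labels for illustration

    clf = SVC(kernel='rbf').fit(X[idx], y)
    mask = clf.predict(X).reshape(image.shape)  # prostate / non-prostate label map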

b. MRI.

The use of edge-detection operators on MR images can produce many false edges due to the high soft-tissue contrast. Therefore, Zwiggelaar et al. [98] used first- and second-order directional derivatives of Lindeberg [99] in a polar coordinate system to identify the edges. An inverse transformation of the longest curves, selected after non-maximum suppression of interrupted curves in the vertical direction, is used to obtain the prostate boundary. On the other hand, Samiee et al. [100] used prior shape information of the prostate to refine the prostate boundary: average gradient values obtained from a moving mask, guided by the prior shape information, are used to trace the boundary. In a similar way, Flores-Tapia et al. [101] used a priori shape information of the prostate to trace the boundary with a small moving mask in a feature space constructed from the product of Haar wavelet detail coefficients in a multi-resolution framework.

2. Clustering-based Segmentation

The purpose of clustering-based methods is to determine the intrinsic grouping of a set of unlabeled data under some distance measure. Each datum is associated with a feature vector, and the task is to identify


groups of similar objects on the basis of the set of feature vectors. The number of groups is implicitly assumed to be known, and relevant features and an algorithm to measure the distance must be chosen.

a. TRUS

Richard et al. [65] used a mean-shift algorithm [66] in the texture space to determine the mean and covariance matrix of each cluster. A probabilistic label assigned to each pixel determines its membership with respect to each cluster. Finally, compatibility coefficients and pixel spatial information are used for probabilistic relaxation and refinement of the prostate area.
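A minimal sketch of such a clustering step, using scikit-learn's MeanShift on synthetic texture feature vectors, is given below; the probabilistic relaxation and spatial refinement of [65] are beyond this toy example, and the feature values and bandwidth are invented.

    import numpy as np
    from sklearn.cluster import MeanShift

    rng = np.random.default_rng(2)
    # hypothetical texture feature vectors for the pixels of a TRUS image
    features = np.vstack([
        rng.normal([0.2, 0.8], 0.05, size=(150, 2)),   # one texture cluster
        rng.normal([0.7, 0.3], 0.05, size=(150, 2)),   # another texture cluster
    ])

    ms = MeanShift(bandwidth=0.2).fit(features)
    labels = ms.labels_              # per-pixel cluster membership
    centers = ms.cluster_centers_    # cluster modes found by mean shift
    print(len(centers), np.bincount(labels))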

3. Hybrid Segmentation

Combining a priori boundary, shape, region, and feature information of the prostate gland increases segmentation accuracy. Such methods are robust to noise and produce superior results under variations in the shape and texture of the prostate.

a. TRUS

In axial TRUS slices, the mid-gland prostate often appears as a hypoechoic mass surrounded by a hyperechoic halo. In order to capture this feature, Liu et al. [67] proposed a radial search from the prostate center to determine prostate edge points: the key boundary points are identified as the points of greatest variation in gray values along each ray. An average shape model built from manually segmented contours is used to correct the key points. A similar scheme was adopted by Yan et al. [68]; in this case, the contrast variations in profiles along the normal vectors perpendicular to the point distribution model (PDM) are used to automatically determine the prominent points and the prostate boundary, and important points are determined by removing the points that fall in shadow areas.

Prior shape information helps determine the prostate boundary points that are lost in the shadow areas of TRUS images. An optimal search along profiles perpendicular to the contour for key points is used to determine the prostate boundary with discrete deformable models in a multi-resolution, energy-minimization framework. Modeling shape and texture features and using them to segment a new image has been adopted by many researchers; the schemes mainly vary in the approach used to create the shape and texture models. For example, Zhan et al. [69] proposed to model the texture space by classifying texture features, captured by rotation-invariant Gabor filters, into prostate and non-prostate regions with an SVM. This classified feature space is then used as an external force in a deformable model framework to segment the prostate. In subsequent work [70], the authors proposed to speed up the process by using Zernike moments [71] to detect edges at the low and medium resolutions, while retaining the texture classification with Gabor features and the SVM. In a further contribution [72], the authors also proposed to reduce the number of support vectors by introducing a penalty term in the SVM objective function that punishes and rejects outliers. Finally, Zhan et al. [57] proposed to combine texture and edge information to improve segmentation accuracy: rotation-invariant, multi-resolution Gabor features of the prostate and non-prostate areas are used to train a Gaussian-kernel SVM to classify prostate texture regions. In the deformable segmentation procedure, the SVM is used to label the voxels around the surface of the deformable model as prostate or non-prostate tissue.

The surface of the deformable model is then driven toward the boundary enclosing the voxels labeled as prostate tissue. The voxel-labeling step and the label-based surface deformation depend on each other, and the process is carried out iteratively until convergence. A similar scheme was adopted by Diaz and Castañeda [73]. Asymmetric sticks and an anisotropic filter are first applied to reduce speckle in the TRUS images. A discrete dynamic contour (DDC), produced by cubic interpolation of four points, is initialized by the user; the DDC is deformed under the influence of gradient and damping forces to produce the prostate contour. Features such as the average intensity, variance, filtered back-projection output, and stick filters are used to build the feature vector, and pixels are classified into prostate and non-prostate regions using an SVM.

The DDC is then automatically initialized from the prostate boundary and used to obtain the final contours of the prostate. Cosío et al. [58] used the position and gray-scale value of the


pixels in TRUS images in a Gaussian mixture model of three Gaussians, clustering prostate tissue, non-prostate tissue, and the halo around the prostate. A Bayes classifier is used to identify regions of the prostate. After pixel classification, an active shape model (ASM) is initialized on the binary image using a global optimization method; the optimization problem consists of finding the optimal combination of four pose and two shape parameters, corresponding to an estimate of the prostate boundary in the binary image.

A multi-population genetic algorithm with four pose and ten shape parameters is then used to optimize the ASM in a multi-resolution framework to segment the prostate. Another common hybrid approach is to use both the shape and the intensity distribution of the prostate to segment it. Medina et al. [74] used the AAM framework [75] to model the shape and texture of the prostate region. In this framework, Gaussian shape and intensity models obtained from PCA are combined to produce a combined mean model. The prostate is segmented by exploiting this prior knowledge in an optimization that minimizes the difference between the target image and the mean model. Ghose et al. [76] used the approximation coefficients of Haar wavelets to reduce speckle and improve segmentation accuracy. Later, Ghose et al. [77] developed the model further by introducing contrast-invariant texture features extracted from log-Gabor quadrature filters.

Recently, Ghose et al. [78] used information obtained in a probabilistic Bayesian framework to build an appearance model; multiple mean models of shape and appearance priors were then used to improve segmentation accuracy. Gong et al. [79] proposed using super-elliptical deformable shape models to segment the prostate. Using a deformable super-ellipse as the prior model for the prostate, the ultimate goal is to find the optimal parameter vector that best describes the prostate in a given unsegmented image; the search is formulated as a maximum a posteriori (MAP) criterion using Bayes' rule, and the initial parameters are refined in the MAP framework to obtain optimized parameters for the ellipse. Tutar et al. [80] used an average of three manually delineated prostate contours to construct a three-dimensional mesh with spherical harmonics to represent the mean prostate model. With 8 harmonics, the feature vector of 192 elements was reduced to 20 using PCA. The user initializes the algorithm by delineating the prostate boundary in the mid-gland axial and sagittal images. The problem of finding the shape parameter vector that segments the prostate in the spatial domain is thereby reduced to finding the optimal shape parameters in the parametric domain that maximize the posterior probability density of a cost function measuring the agreement between the model and the prostate edges in the image. Yang et al. [81] proposed to use min/max flow [82] to smooth the contours of a 3D prostate model built from 2D user delineations.

The main modes of shape variation are identified with PCA, and morphological filters are used to extract region-based information of the prostate gland. The shape model and the region-based information are then combined in a Bayesian framework to generate an energy function, which is minimized in a level-set framework. Garnier et al. [83] used 8 user-defined points to initialize a 3D prostate mesh, and two algorithms are used to determine the final prostate segmentation. First, a DDC with edges as external forces and 6 user-defined points of the central gland as landmarks is used to segment the prostate. The initial mesh is then used to create a graph, and in the second stage image features such as gradients are introduced to construct a cost function; finally, a graph cut is used to determine the prostate volume, and the graph-cut result is refined with the DDC to improve the result.
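Graph-cut segmentation of this kind reduces to a minimum s-t cut on a graph whose terminal edges encode regional costs and whose pairwise edges encode smoothness. The toy sketch below shows the construction on a 1-D "image" of five pixels using networkx; the likelihoods and the smoothness weight are invented, and real implementations use dedicated max-flow libraries on 2D/3D grids.

    import networkx as nx

    # 1-D "image" of 5 pixels with hypothetical foreground likelihoods
    likelihood = [0.9, 0.8, 0.5, 0.2, 0.1]
    G = nx.DiGraph()
    for i, p in enumerate(likelihood):
        G.add_edge('src', i, capacity=p)          # terminal link: foreground affinity
        G.add_edge(i, 'sink', capacity=1.0 - p)   # terminal link: background affinity
    for i in range(len(likelihood) - 1):          # pairwise smoothness terms
        G.add_edge(i, i + 1, capacity=0.3)
        G.add_edge(i + 1, i, capacity=0.3)

    cut_value, (fg, bg) = nx.minimum_cut(G, 'src', 'sink')
    print(sorted(n for n in fg if n != 'src'))    # pixels labeled foreground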

b. MRI

Prior knowledge of the shape and size of the prostate was exploited by Vikal et al. [84] to build a mean shape model from manually drawn contours. The authors used a Canny edge filter to determine the edges after pre-processing the image with a stick filter to suppress noise and enhance contrast. The mean model is used to discard pixels that do not follow the same orientation as the model contour, and the contours obtained are further refined by gap removal using polynomial interpolation. The segmented contour obtained in the middle slice is used to initialize the slices lying above and below the central slice. The use of a Bayesian framework to model the prostate texture is common for MR images. For example, Allen et al. [85] proposed to segment the prostate within an EM framework, treating the three distinctive peaks of the intensity distribution as a mixture of three Gaussians (background, central region, and peripheral region of the prostate).

A shape-constrained deformable model, with the clustered pixels providing the deformation force, is then used to segment the prostate. Similarly, in Makni et al. [55], the intensity of the prostate region is modeled as a mixture of Gaussians. They propose a Bayesian approach in which the prior probability of the voxel labels is


obtained using a shape-constrained deformable model and Markov random field modeling. The conditional probabilities are associated with the modeled intensity values, and segmentation is achieved by estimating the optimal labels for the prostate boundary voxels within a MAP decision framework. Although atlas-based registration and segmentation of the prostate has become popular in recent times, the segmentation results obtained usually need to be refined by a deformable model to improve accuracy. Martin et al. [54] used a hybrid registration, minimizing an intensity-based and a geometry-based energy, to register the atlas: minimization of the intensity-based energy is devoted to matching the template image with the reference image, while minimization of the geometric energy fits the model points of the template image to the scene points of the reference image. Finally, a shape-constrained deformable model is used to refine the results.

More recently, Martin et al. [86] used a probabilistic atlas to impose further spatial constraints and segment the prostate in three dimensions. Shape and texture modeling of the prostate have been combined in the work of Tsai et al. [87], which uses a shape- and region-based level-set framework to segment the prostate in MR images. One contour is fixed and used as the reference frame, and all other contours are affine-transformed to minimize their differences in a multi-resolution approach. The main modes of shape variation captured by PCA are incorporated into the level-set function, together with region-based information such as the area, the sum of intensities, the mean intensity, and the variance. Level-set minimization of the objective function produces the segmented prostate. The authors also proposed coupled level-set models of the prostate, rectum, and internal obturator muscles to segment these structures simultaneously from MR images [88]; the algorithm is made robust by allowing the shapes to overlap each other, and the final segmentation is achieved by maximizing the mutual information of the three regions. Similarly, Liu et al. [56] used an ellipse to initialize a deformable

model of the prostate boundary after Otsu thresholding [89] of the image into prostate and non-prostate areas. A shape-constrained level set, initialized from the ellipse fitted to the prostate, is used to further refine the results; finally, post-processing of the gradient maps of the prostate and rectum produces the final segmentation. Firjani et al. [90] modeled the background and foreground pixels with a Gaussian Markov random field and used the probability of a pixel belonging to the prostate to build the shape model; shape and intensity are jointly optimized with a graph-cut-based algorithm, and the authors extended this work to 3D segmentation of the prostate [91]. Zhang et al. [92] proposed an interactive environment for prostate segmentation, in which a region- and edge-based level set segments the prostate from the background depending on the foreground and background information provided interactively by the user. Gao et al. [93] represented the training shapes as point clouds; particle filters are used to register the point clouds, built from a common reference prostate volume, to minimize the differences in pose.
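The Otsu thresholding step used by Liu et al. [56] to obtain an initial prostate/non-prostate partition is easy to reproduce in isolation; the sketch below applies it to a synthetic bimodal image in which a darker square stands in for the gland.

    import numpy as np
    from skimage.filters import threshold_otsu

    rng = np.random.default_rng(4)
    # placeholder bimodal image: a darker gland region on a brighter background
    image = rng.normal(100, 10, (64, 64))
    image[20:44, 20:44] = rng.normal(60, 10, (24, 24))

    t = threshold_otsu(image)  # Otsu's threshold maximizes between-class variance
    mask = image < t           # coarse gland / background separation
    print(t, mask.sum())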

A shape prior and local image statistics are incorporated in the energy function, which is minimized to achieve segmentation of the prostate in a level-set framework. Recently, Toth et al. [94] used a series of 50 variable Gaussian kernels to extract texture features of the prostate. An ASM built from manually drawn contours of the training images is automatically initialized at the most likely location of the prostate boundary to achieve segmentation. Toth et al. [95] then used, in addition to the intensity values, the mean, standard deviation, range, skewness, and kurtosis of the local intensity neighborhood to drive an ASM automatically initialized from magnetic resonance spectroscopy (MRS) information; the MRS information is clustered using replicated k-means clustering to identify the prostate in the mid-slice and initialize the multi-feature ASM. Khurd et al. [96] localized the prostate gland center with Gaussian-mixture-model-based clustering and expectation maximization after reducing the magnetic bias field in the image; thresholding of a probabilistic prostate map obtained with a random-walker-based segmentation algorithm [97] then segments the prostate.
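The final step of [96],[97], thresholding a probabilistic map produced by a random-walker segmentation, can be sketched with scikit-image's random_walker; the image and the two seed points below are synthetic placeholders (in [96] the gland seed would come from the clustering step).

    import numpy as np
    from skimage.segmentation import random_walker

    rng = np.random.default_rng(5)
    image = rng.normal(0.3, 0.05, (64, 64))
    image[24:40, 24:40] += 0.4          # brighter placeholder "gland"

    seeds = np.zeros(image.shape, dtype=int)
    seeds[32, 32] = 1                   # seed inside the gland (e.g. from clustering)
    seeds[2, 2] = 2                     # background seed

    # probabilistic map per label; thresholding it yields the segmentation
    prob = random_walker(image, seeds, beta=130, return_full_prob=True)
    mask = prob[0] > 0.5                # probability of the gland label
    print(mask.sum())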

4. Future Trends

Diagnostic imaging has become an indispensable procedure in medical science. Anatomical imaging of patient structures has improved the diagnosis of pathology, creating new avenues of research in the process. Automatic segmentation of anatomical structures from different imaging modalities, such as US, MRI, and CT, has become an important step in reducing inter- and intra-observer variability and improving contouring time. This paper examines the methods involved in prostate


segmentation. The strengths and limitations of the segmentation methodologies have been discussed, along with their validation and performance evaluation. Finally, the choice of the right segmentation methodology for a specific imaging modality has been discussed. It has been underlined that prostate segmentation should use geometric, spatial, intensity, and texture techniques together with prior imaging physics to improve accuracy.

Prostate segmentation is still an open problem, and with advances in the technology for diagnosis, treatment, and follow-up of prostate diseases, new requirements must be met. Multimodal fusion of at least two imaging modalities provides valuable information; for example, fusion of MRI and TRUS imaging should help in obtaining more accurate samples during biopsy. However, for these methods to work in a real scenario, automatic, accurate, real-time fusion of the two imaging modalities is needed. In such circumstances, automatic real-time segmentation and registration of the prostate will increase the accuracy of the prostate contour and the efficiency of the procedure. Automatic, accurate, real-time prostate segmentation can be achieved with efficient algorithms designed for the graphics processing unit. Moreover, the goal of segmenting the prostate in each frame can be reformulated as tracking the prostate across frames. Improvements in 3D prostate segmentation methods will become a trend in the coming years due to the increasing use of 3D imaging modalities, where efficient and accurate algorithms are necessary. In the same sense, information from dynamic contrast-enhanced MRI and MR spectroscopy will be used as additional features for automatic segmentation. In addition, registration of prostate contours from the same modality over a period of time can also provide valuable information about the development of prostate disease.

Ghose et al. [102] have validated the accuracy and robustness of their approach on 15 public MR datasets, with an image resolution of 256×256 pixels, from the MICCAI prostate challenge [103] in a leave-one-out evaluation strategy. During validation, the test dataset is removed and the probabilistic atlas and the multiple mean models of the apex, central, and base regions are constructed from the remaining 14 datasets. To determine the region of interest for atlas-based registration, the center of a central slice is provided manually by the user; such an interaction is necessary to minimize the influence of intensity heterogeneities around the prostate [104],[105]. The probabilistic atlas produces an initial soft segmentation of the prostate. The centroid of each 2D slice of the prostate volume is computed from the probabilistic values of the soft segmentation. All the mean models of the corresponding regions (apex, central, and base) are initialized at the centroid of each slice to segment the prostate in that slice. The segmentation result of the mean model producing the least fitting error is selected as the final segmentation in 2D. The 2D labels are rigidly registered to the 3D labels generated from the probabilistic atlas to constrain pose variation and generate valid 3D shapes. The method is implemented in Matlab 7 on an Intel Quad Core Q9550 processor with a 2.83 GHz processor speed and 8 GB RAM. The most popular prostate segmentation evaluation metrics, such as the Dice similarity coefficient (DSC), 95% Hausdorff distance (HD), mean absolute distance (MAD), specificity, sensitivity, and accuracy, were used to evaluate the method. The authors compared their method with the results published in the MICCAI prostate challenge 2009 [104],[106] and with the work of Gao et al. [107], and observed that their method performs better than several of the works in the literature. It is to be noted that [106] also used a probabilistic atlas for segmentation; however, the hybrid framework of a probabilistic atlas and multiple SSAMs improves on the overlap and contour accuracies. The accuracy of the method may be attributed to the hybrid framework of optimized 2D segmentation, which incorporates local variabilities, and 3D shape restriction, which produces a valid prostate shape. Multiple mean models of shape and intensity priors for the different regions of the prostate approximate the local variabilities better, as each of these models is capable of producing new instances in a Gaussian space of shape and appearance. Also, the SSAM, being a region-based segmentation technique, performs well in the base and apex regions of the prostate for low-contrast images.
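Apart from the distance-based metrics, the voxel-wise measures used in this evaluation reduce to the entries of a confusion matrix. A minimal sketch on synthetic masks is given below; DSC and the 95% Hausdorff distance were sketched earlier.

    import numpy as np

    def overlap_metrics(pred, truth):
        # voxel-wise sensitivity, specificity and accuracy for binary masks
        pred, truth = pred.astype(bool), truth.astype(bool)
        tp = np.logical_and(pred, truth).sum()
        tn = np.logical_and(~pred, ~truth).sum()
        fp = np.logical_and(pred, ~truth).sum()
        fn = np.logical_and(~pred, truth).sum()
        return {
            'sensitivity': tp / (tp + fn),
            'specificity': tn / (tn + fp),
            'accuracy': (tp + tn) / (tp + tn + fp + fn),
        }

    truth = np.zeros((64, 64), dtype=bool); truth[16:48, 16:48] = True
    pred = np.zeros((64, 64), dtype=bool); pred[18:50, 18:50] = True
    print(overlap_metrics(pred, truth))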

5. Robotics

Robotics is a relatively new field in medicine because of the stringent safety criteria. Robots are most commonly used in minimally invasive procedures, such as cardiac, bladder, prostate, and neurosurgical interventions. Using MRI to guide a robot causes some additional problems: patient access is limited, and MR compatibility is very important. The robot may not interfere with the images obtained by the MR scanner; for this reason, ferromagnetic and electronic devices cannot be used inside the magnet. In addition, the robot must be


registered with the MR images in a common coordinate system to be able to target anatomical areas. These challenges apply both to manual handling of the biopsy needle and to traditional electromechanical robots; therefore, new methods and devices have been developed, including real-time, in-scanner guidance methods to operate the devices.

6. MR-guided Biopsy In An Open Bore

An open-bore MR scanner simultaneously allows access to the patient and near-real-time MRI. Unfortunately, open MR scanners are known for the low signal-to-noise ratio associated with their low field strength (typically 0.5 T); as a result, the image quality is too low to adequately localize the tumor. To reliably identify the target area, 1.5 T images must be obtained in a closed-bore scanner before the biopsy procedure. One of the first devices for needle placement was used by D'Amico et al [22] in an open 0.5 T MR system on a patient who had undergone proctocolectomy; the researchers performed the biopsy with a needle guide with holes placed against the perineum. The transperineal approach is ideal for patients who have undergone rectal surgery, but it is more invasive than the conventional transrectal approach and requires general or regional anesthesia [27].

Hata et al [27] also used a needle-guidance template, with holes spaced 5 mm apart, in an open MR system for targeted biopsy. The template was registered to the MR scanner using an optical tracking system. Planning software indicates the template hole that will guide the biopsy needle through the perineum into the target, and images obtained during needle insertion confirm the position of the needle in the suspicious area. Zangos et al [28] used a 0.2 T open MR scanner for real-time needle tracking during transgluteal prostate biopsy in 25 patients. The tumors were not visible during the intervention on the 0.2 T T1-weighted images, so the target locations had to be derived from previously acquired 1.5 T images. The insertion point is marked by recording markers or by using a finger as an external marker during imaging.

Haker et al [29] used an interventional MR system for transperineal biopsy, with images obtained in a 0.5 T open interventional magnet. The researchers developed an MR-compatible robotic assistant mounted overhead above the surgeon. Two long, parallel linkages form a stiff arm that manipulates the needle-holder guide. The device is coupled with planning and tracking software, and optical sensors measure the displacement of each motion stage. All coordinates are automatically calculated and used to guide the needle position; when the guide is in position, the physician inserts the needle manually.

7. MR-guided Biopsy In A Closed Bore

Fichtinger et al [30] developed a transrectal prostate biopsy device that can be used in a conventional closed MR scanner. The patient lies in the prone position and a sheath is introduced into the rectum; the needle-guidance device can then be slid into the sheath. To guide the device, microcoil antennas wound around small capsules containing dilute gadolinium are used as active fiducials; their positions serve as rigid-body markers of the device to compute the exact orientation of the entire device. The needle-guiding device has 3 degrees of freedom (DOF) of motion, including translation and rotation of the end-effector in the rectum. Images are acquired continuously, and computers continuously calculate the motion parameters, allowing real-time dynamic control. The motion stages are currently manual, but they may be motorized in the future.

An MR-biopsy device suitable for use in clinical practice was used by Beyersdorff et al [21]. A needle guide is inserted into the rectum and connected to the device, which allows rotation, angulation, and translation of the needle. The guide does not contain active fiducial coils, but it is visible on both T1- and T2-weighted MR images, and a fast imaging method is used to detect the position of the needle guide. Beyersdorff et al [21] performed MR-guided biopsy with this device in 12 patients with elevated PSA levels and a previous negative TRUSBx round. In seven patients, the suspicious areas could be defined on a fast sequence; in the five other cases, the areas of interest were clearly visible on the pre-biopsy images and could be marked on the images obtained during biopsy. Positioning the needle guide was time consuming: the guide first had to be identified on the localizer images, and then, to adjust the position of the device, the patient had to glide in and out of the scanner. After repositioning of the needle guide, MR images in two perpendicular planes had to be obtained to guide the device to the target. Prostate cancer was detected in 5 of 12 patients (42%).


Engelhard et al [31] used a similar, modified device in a study of 37 patients who had undergone a previous negative prostate biopsy. The researchers concluded that suspicious lesions with a diameter of 10 mm can be successfully punctured using this device; prostate cancer was detected in 14 of 37 patients (38%). In a study of 27 patients with previous negative TRUSBx, Anastasiadis et al [32] also used this device and found prostate cancer in 15 (55.5%) patients. Detection rates after one round of negative biopsies thus ranged between 38% and 55.5% [21],[31],[32]; these data are promising and demonstrate the potential clinical value of MR-guided biopsy. In a second round of TRUSBx, only 15% to 20% of prostate cancers will be detected, and in a third round of biopsies, only 8% [33].

The transrectal needle-guide system called APT-MRI (standing for ''access to prostate tissue under MRI guidance'') was developed by Krieger et al [34] and Susil et al [35]. It can be used in a closed-bore 3 T magnet with the patient in the prone position. The needle-placement device is, to some extent, similar to the devices used by Beyersdorff et al [21] and Fichtinger et al [30]. The APT-MRI device combines endorectal coil imaging with a hybrid tracking method that consists of passive fiducial marker tracking and MR-compatible fiber-optic joint encoders. The coordinates for positioning the interventional device in the scanner are obtained from MR images by segmentation of gadolinium fiducial markers: a tube inserted along the main axis of the device and two markers placed along the needle tract.

Thin-slab, 1-mm isotropic, sagittal, proton-density-weighted TSE images are obtained over the marker field. Automatic segmentation of the markers is achieved using custom targeting software, which reformats the sagittal images into axial planes along the main axis of the device and along the needle axis. Three DOF are available to reach the target: rotation of the endorectal probe, pitch (angle) of the needle, and needle insertion depth. The targeting software provides the necessary guidance parameters, which are set through quick manual adjustment of the device. Commercially available MR-compatible core biopsy needles can be used with the APT-MRI device.

In three biopsy procedures performed by Susil et al, the needle placement accuracy averaged 1.8 mm (range 0.4 to 4.0 mm) [34],[35]. Using this device in a study of 13 patients with at least one negative prostate biopsy 12 months or more earlier, Singh et al [20] found only one patient with a directed biopsy positive for prostate cancer.

DiMaio et al [29],[36] and Fischer et al [23] designed a robotic manipulator to perform transperineal biopsy with the patient in the supine position. This position may be more comfortable for patients than the more commonly used prone position. The system consists of visualization, planning, and navigation software and a robotic needle-placement machine. The robot is actuated with pneumatic actuators, and position information is provided by passive tracking fiducials on the robot base. The system has proven to be MRI compatible; in free space, the needle-tip localization accuracy is between 0.25 mm and 0.5 mm. Zangos et al [37] used a device for transgluteal biopsy in a closed-bore system.

Transgluteal biopsy minimizes the risk of injury to the bladder, bowel, and iliac vessels, and no intestinal bacteria are carried into the prostate. Disadvantages of this method are the need for local or general anesthesia and the longer biopsy path. The device uses markers on a guiding hydraulic arm, a basket system to control the drive, and optical sensors, and the Innomotion interactive software is used to plan and control the biopsy. The device was used in a cadaver study, with the body placed in a prone or lateral position. The insertion point and the target are marked on T2-weighted MR images, and the biopsy path is calculated automatically; the needle is inserted manually by the physician after the body is moved out of the scanner. The average deviation of the needle tip was 0.35 mm (range 0.08 to 1.08 mm), and in all cases the target was reached on the initial attempt. This technique has not yet been used in clinical practice. As mentioned above, immobilization of the prostate is a problem that affects targeting accuracy, and the transrectal approach seems to have particular difficulty with prostate movement.

Transrectal approach seems to have some difficulty with prostate movement.

The transrectal prostate biopsy devices used by Fichtinger et al [30] and by Krieger et al [34] have been modified to account for this problem. The device used by Beyersdorff et al [21] guides an MRI-compatible prostate biopsy needle; it has been described as having problems with prostate motion during the biopsy, but the needle guide itself can be used to immobilize the prostate.


3. Discussion

MR-guided prostate biopsy will have a growing role in diagnosing prostate cancer. One of the biggest challenges in prostate biopsy is correcting for the motion of the prostate tissue during the biopsy procedure, and research is needed to design and evaluate techniques to measure and reduce this movement. Lattouf et al [38] evaluated 26 patients to determine whether the use of endorectal MRI before TRUSBx improved the diagnosis of prostate cancer in a high-risk population. They found that MRI before TRUSBx tended to yield more cancer diagnoses, but the difference was not statistically significant; the reasons for this finding may include suboptimal MRI localization of the biopsy site. Real-time TRUS and MRI fusion-guided biopsy has been proposed as a method that uses high-contrast, tumor-sensitive MR data together with the real-time character of TRUS to follow the movement of the prostate during a biopsy [39],[40].

The techniques used for biopsy can also be used for the treatment of prostate cancer, for example with brachytherapy. In this case, accurate seed placement is very important to provide accurate dose coverage, but only a few reports with initial data are available. A study by Singh showed promising results in three patients with seed implantation using MR data fused with computer-assisted tomography (CT) data for treatment planning [41]. Future studies should investigate the role of robot-guided treatment. Robotic assistance for MR-guided prostate interventions in a closed-bore 3 T magnet has been investigated by researchers at the Urobotics Laboratory, Department of Urology, Johns Hopkins University, led by Dr. Dan Stoianovici. The MR-compatible robotic device uses optical and pneumatic actuation and sensing mechanisms and pneumatic motors (PneuStep) [42]. The motors are free of magnetic and electromagnetic interference because they are made entirely of nonmagnetic and dielectric components [42].

Pneumatic actuation is an ideal choice for MRI compatibility, as well as for achieving very high precision and reproducibility in reaching the target [43]. The robot was tested with impressive accuracy for image-guided needle targeting in mock-up, ex vivo, and animal studies [44]. A new holder is currently being developed so that the robot can handle a transperineal biopsy needle for access to the prostate. The robot mounts on the imager table with the patient, who is placed in a decubitus position, and it is able to orient and operate a biopsy needle under direct MRI guidance. T2-weighted TSE sequence data transferred from the imager are used by custom software to determine the target and entry points for the needle in the three-dimensional coordinate system of the MR image. The software calculates the coordinates of each position in the coordinate system of the robot and guides the robot to perform automated, target-centered needle placement.

4. Conclusions

Prostate biopsy is an essential procedure for determining optimal treatment, yet systematic TRUSBx still fails to detect a variety of tumors. Using MR images during the procedure improves the diagnostic quality of the biopsy. Various MR-compatible robots have been developed: all of the robots investigated are powered mechanical devices that can be adjusted from outside the scanner, while needle insertion must still be executed manually. Unfortunately, little is known about the accuracy of these robots.

This paper has proposed a multi-modal image-guided biopsy technique for detecting prostate cancer accurately. The work plan is: to develop a multimodal prostate image registration method based on information theory and automated statistical shape analysis to find the MR slice that most closely matches an axial TRUS slice; to design and evaluate efficient and accurate segmentation of the prostate in 2D TRUS image sequences, facilitating multi-modal image fusion between TRUS and MRI to improve the sampling of malignant tissue during biopsy; to use multiparametric MRI, merging spectroscopy with both T1- and T2-weighted MR images, to improve prostate cancer detection; to design and evaluate techniques for determining and correcting for the movement of the prostate tissue during the biopsy procedure by incorporating biomechanical modeling; to implement software to define the targets and the entry points for the needle in the three-dimensional coordinate system of the image; and to determine the correspondence points on a pair of TRUS and MR images and calculate the coordinates of the respective positions in the coordinate system for target-centered needle placement.


5. Acknowledgment

The author would like to thank the College of Computer of Bumigora, Mataram, West Nusa Tenggara, Indonesia, for funding this preliminary research.

References

[1] Jemal A, Siegel R, Ward E, Murray T, Xu J, Thun MJ. Cancer statistics, 2007. CA Cancer

J Clin 2007;57:43–66.

[2] Barry MJ. Clinical practice. Prostate-specific-antigen testing for early diagnosis of prostate

cancer. N Engl J Med 2001;344:1373–7.

[3] Frankel S, Smith GD, Donovan J, Neal D. Screening for prostate cancer. Lancet

2003;361:1122–8.

[4] Heijmink SW, van Moerkerk H, Kiemeney LA, Witjes JA, Frauscher F, Barentsz JO. A comparison of the diagnostic performance of systematic versus ultrasound-guided biopsies of prostate cancer. Eur Radiol 2006;16:927–38.

[5] Terris MK, Wallen EM, Stamey TA. Comparison of midlobe versus lateral systematic

sextant biopsies in the detection of prostate cancer. Urol Int 1997;59:239–42.

[6] Salo JO, Rannikko S, Makinen J, Lehtonen T. Echogenic structure of prostatic cancer

imaged on radical prostatectomy specimens. Prostate 1987;10:1–9.

[7] Ellis WJ, Brawer MK. The significance of isoechoic prostatic carcinoma. J Urol

1994;152:2304–7.

[8] de la Rosette JJ, Aarnink RG. New developments in ultrasonography for the detection of

prostate cancer. J Endourol 2001;15:93–104.

[9] Ismail M, Petersen RO, Alexander AA, Newschaffer C, Gomella LG. Color Doppler

imaging in predicting the biologic behavior of prostate cancer: correlation with disease-

free survival. Urology 1997;50:906–12.

[10] Kimura G, Nishimura T, Kimata R, Saito Y, Yoshida K. Random systematic sextant biopsy

versus power doppler ultrasound-guided target biopsy in the diagnosis of prostate cancer:

positive rate and clinicopathological features. J Nippon Med Sch 2005;72:262–9.

[11] Lavoipierre AM, Snow RM, Frydenberg M, et al. Prostatic cancer: role of color Doppler

imaging in transrectal sonography. AJR Am J Roentgenol 1998;171:205–10.

[12] Scattoni V, Zlotta A, Montironi R, Schulman C, Rigatti P, Montorsi F. Extended and saturation prostatic biopsy in the diagnosis and characterisation of prostate cancer: a critical analysis of the literature. Eur Urol 2007;52:1309–22.

[13] Haker SJ, Mulkern RV, Roebuck JR, et al. Magnetic resonance-guided prostate

interventions. Top Magn Reson Imaging 2005;16:355–68.

[14] Norberg M, Egevad L, Holmberg L, Sparen P, Norlen BJ, Busch C. The sextant protocol

for ultrasound-guided core biopsies of the prostate underestimates the presence of cancer.

Urology 1997;50:562–6.

[15] Schiebler ML, Schnall MD, Pollack HM, et al. Current role of MR imaging in the staging

of adenocarcinoma of the prostate. Radiology 1993;189:339–52.

[16] Kirkham APS, Emberton M, Allen C. How good is MRI at detecting and characterising

cancer within the prostate? Eur Urol 2006;50:1163–75, discussion 1175.

[17] Futterer JJ, Heijmink SW, Scheenen TW, et al. Prostate cancer localization with dynamic

contrast-enhanced MR imaging and proton MR spectroscopic imaging. Radiology

2006;241:449–58.

[18] Hosseinzadeh K, Schwarz SD. Endorectal diffusionweighted imaging in prostate cancer to

differentiate malignant and benign peripheral zone tissue. J Magn Reson Imaging

2004;20:654–61.


[19] Shimofusa R, Fujimoto H, Akamata H, et al. Diffusionweighted imaging of prostate

cancer. J Comput Assist Tomogr 2005;29:149–53.

[20] Singh AK, Krieger A, Lattouf JB, et al. Patient selection determines the prostate cancer

yield of dynamic contrast-enhanced magnetic resonance imaging-guided transrectal

biopsies in a closed 3-Tesla scanner. BJU Int 2008;101:181–5.

[21] Beyersdorff D, Winkel A, Hamm B, Lenk S, Loening SA, Taupitz M. MR imaging-guided

prostate biopsy with a closed MR unit at 1.5 T: initial results. Radiology 2005; 234:576–

81.

[22] D’Amico AV, Tempany CM, Cormack R, et al. Transperineal magnetic resonance image

guided prostate biopsy. J Urol 2000;164:385–7.

[23] Fischer GS, DiMaio SP, Iordachita II, Fichtinger G. Robotic assistant for transperineal

prostate interventions in 3 T closed MRI. Med Image Comput Comput Assist Interv Int

Conf Med Image Comput Comput Assist Interv 2007; 10:425–33.

[24] Cleary K, Melzer A, Watson V, Kronreif G, Stoianovici D. Interventional robotic systems:

applications and technology state-of-the-art. Minim Invasive Ther Allied Technol

2006;15:101–13.

[25] Stone NN, Roy J, Hong S, Lo YC, Stock RG. Prostate gland motion and deformation

caused by needle placement during brachytherapy. Brachytherapy 2002;1:154–60.

[26] Dattoli M, Waller K. A simple method to stabilize the prostate during transperineal prostate

brachytherapy. Int J Radiat Oncol Biol Phys 1997;38:341–2.

[27] Hata N, Jinzaki M, Kacher D, et al. MR imaging-guided prostate biopsy with surgical

navigation software: device validation and feasibility. Radiology 2001;220:263–8.

[28] Zangos S, Eichler K, Engelmann K, et al. MR-guided transgluteal biopsies with an open

low-field system in patients with clinically suspected prostate cancer: technique and

preliminary results. Eur Radiol 2005;15:174–82.

[29] DiMaio SP, Pieper S, Chinzei K, et al. Robot-assisted needle placement in open MRI:

system architecture, integration and validation. Comput Aided Surg 2007;12:15–24.

[30] Fichtinger G, Krieger A, Susil RC, Tanacs A, Whitcomb LL, Atalar E. Transrectal prostate

biopsy inside closed MRI scanner with remote actuation, under real-time image guidance.

Lecture Notes in Comput Sci 2002;2488:91–8.

[31] Engelhard K, Hollenbach HP, Kiefer B, Winkel A, Goeb K, Engehausen D. Prostate biopsy

in the supine position in a standard 1.5-T scanner under real time MR-imaging control

using a MR-compatible endorectal biopsy device. Eur Radiol 2006;16:1237–43.

[32] Anastasiadis AG, Lichy MP, Nagele U, et al. MRI-guided biopsy of the prostate increases

diagnostic performance in men with elevated or increasing PSA levels after previous

negative TRUS biopsies. Eur Urol 2006;50:738–49, discussion 748–9.

[33] Roehl KA, Antenor JA, Catalona WJ. Serial biopsy results in prostate cancer screening

study. J Urol 2002;167: 2435–9.

[34] Krieger A, Susil RC, Menard C, et al. Design of a novel MRI compatible manipulator for

image guided prostate interventions. IEEE Trans Biomed Eng 2005;52:306–13.

[35] Susil RC, Menard C, Krieger A, et al. Transrectal prostate biopsy and fiducial marker

placement in a standard 1.5 T magnetic resonance imaging scanner. J Urol 2006; 175:113–

20.

[36] Dimaio S, Fischer GS, Haker S, et al. A system for MRIguided prostate interventions.

Proceedings for the First IEEE/RAS-EMBS International Conference on Biomedical

Robotics and Biomechatronics. 2006:68–73.

[37] Zangos S, Herzog C, Eichler K, et al. MR-compatible assistance system for punction in a

high-field system: device and feasibility of transgluteal biopsies of the prostate gland. Eur

Radiol 2007;17:1118–24.

[38] Lattouf JB, Grubb 3rd RL, Lee SJ, et al. Magnetic resonance imaging-directed transrectal

ultrasonography-guided biopsies in patients at risk of prostate cancer. BJU Int

2007;99:1041–6.

[39] Singh AK, Kruecker J, Xu S, et al. Initial clinical experience with real-time transrectal

ultrasonography-magnetic resonance imaging fusion-guided prostate biopsy. BJU Int

2008;101:841–5.


[40] Kaplan I, Oldenburg NE, Meskell P, Blake M, Church P, Holupka EJ. Real time MRI-ultrasound image guided stereotactic prostate biopsy. Magn Reson Imaging 2002;20:295–9.

[41] Singh AK, Guion P, Sears-Crouse N, et al. Simultaneous integrated boost of biopsy

proven, MRI defined dominant intra-prostatic lesions to 95 Gray with IMRT: early results

of a phase I NCI study. Radiat Oncol 2007;2:36.

[42] Stoianovici D, Song D, Petrisor D, et al. ‘‘MRI Stealth’’ robot for prostate interventions.

Minim Invasive Ther Allied Technol 2007;16:241–8.

[43] Stoianovici D. Multi-imager compatible actuation principles in surgical robotics. Int J Med

Robotics Comput Assist Surg 2005;1:86–100.

[44] Muntener M, Patriciu A, Petrisor D, et al. Transperineal prostate intervention: robot for fully automated MR imaging: system description and proof of principle in a canine model. Radiology 2008;247:543–9.

[45] Pondman KM, Fütterer JJ, ten Haken B, et al. MR-guided biopsy of the prostate: an overview of techniques and a systematic review. Eur Urol 2008;54:517–27.

[46] Kirkham APS, Emberton M, Allen C. How good is MRI at detecting and characterising

cancer within the prostate? Eur Urol 2006;50:1163–75.

[47] Sciarra A, Panebianco V, Salciccia S, et al. Role of dynamic contrast-enhanced magnetic resonance imaging and proton MR spectroscopic imaging in the detection of local recurrence after radical prostatectomy for prostate cancer. Eur Urol 2008;54:589–600.

[48] Hricak H. MR imaging and MR spectroscopic imaging in the pre-treatment evaluation of

prostate cancer. Br J Radiol 2005;78:103–11.

[49] Hata N, Jinzaki M, Kacher D, et al. MR imaging-guided prostate biopsy with surgical navigation software: device validation and feasibility. Radiology 2001;220:263–8.

[50] Zangos S, Eichler K, Engelmann K, et al. MR-guided transgluteal biopsies with an open low-field system in patients with clinically suspected prostate cancer: technique and preliminary results. Eur Radiol 2005;15:174–82.

[51] Ghose S, Oliver A, Mitra J, et al. A supervised learning framework of statistical shape and probability priors for automatic prostate segmentation in ultrasound images. Medical Image Analysis 2013;17:587–600.

[52] Mitra J, Kato Z, Martí R, et al. A spline-based non-linear diffeomorphism for multimodal

prostate registration. Medical Image Analysis 2012; 16:1259–1279.

[53] Ghose S, Oliver A, Martí R, et al. A survey of prostate segmentation methodologies in

ultrasound, magnetic resonance and computed tomography images. computer methods and

program sinbiomedicine 2012; 108:262–287.

[54] S. Martin, V. Daanen, J. Troccaz, Atlas-based prostate segmentation using an hybrid

registration, International Journal of Computer Assisted Radiology and Surgery 3 (2008)

485–492.

[55] N. Makni, P. Puech, R. Lopes, R. Viard, O. Colot, N. Betrouni, Combining a deformable

model and a probabilistic framework for an automatic 3d segmentation of prostate on MRI,

International Journal of Computer Assisted Radiology and Surgery 4 (2009) 181–188.

[56] X. Liu, D.L. Langer, M.A. Haider, Y. Yang, M.N. Wernick, I.S. Yetik, Prostate cancer

segmentation with simultaneous estimation of markov random field parameters and class,

IEEE Transactions on Medical Imaging 28 (2009) 906–915.

[57] Y. Zhan, D. Shen, Deformable segmentation of 3D ultrasound prostate images using

statistical texture matching method, IEEE Transactions on Medical Imaging 25 (2006)

256–272.

[58] F.A. Cosío, Automatic initialization of an active shape model of the prostate, Medical

Image Analysis 12 (2008) 469–483.

[59] N.N. Kachouie, P. Fieguth, A medical texture local binary pattern For TRUS prostate

segmentation, in: Proceedings of the 29th Annual International Conference of the IEEE

Engineering in Medicine and Biology Society, IEEE Computer Society Press, USA, 2007,

pp. 5605–5608.

[60] A. Zaim, Automatic segmentation of the prostate from ultrasound data using feature-based

self organizing map, in: H. Kalviainen, J. Parkkinen, A. Kaarna (Eds.),Proceedinggs of

Proceedings of the International Conference on Mathematical and Computer Sciences

Jatinangor, October 23rd-24th , 2013

228

Scandinavian Conference in Image Analysis, Springer, Berlin/Heidelberg/New York,

2005, pp.1259–1265.

[61] A. Zaim, Y. Taeil, R. Keck, Feature based classification of prostate US images using

multiwavelet and kernel SVM, in: Proceedings of International Joint Conference on Neural

Networks, IEEE Computer Society Press, USA, 2007, pp. 278–281.

[62] J.S. Geronimo, D.P. Hardin, P.R. Massopust, Fractal functions and wavelet expansions

based on several scaling functions, Journal of Approximation Theory 78 (1994) 373–401.

[63] S.S. Mohamed, A.M. Youssef, E.F. El-Saadany, M.M.A. Salama, Prostate Tissue

Characterization Using TRUS Image Spectral Features, Springer, Berlin/Heidelberg/New

York, 2006, pp. 589–601.

[64] T. Randen, J.H. Husoy, Filtering for texture classification: a comparative study,

Transactions on Pattern Analysis and Machine Intelligence 21 (1999) 291–310.

[65] W.D. Richard, C.G. Keen, Automated texture based segmentation of ultrasound images of

the prostate, Computerized Medical Imaging and Graphics 20 (1996) 131–140.

[66] D. Comaniciu, P. Meer, Mean shift: a robust approach toward feature space analysis, IEEE

Transactions on Pattern Analysis and Machine Intelligence 24 (2002) 603–619.

[67] H. Liu, G. Cheng, D. Rubens, J.G. Strang, L. Liao, R. Brasacchio, E. Messing, Y. Yu’,

Automatic segmentation of prostate boundaries in transrectal ultrasound (TRUS) Imaging,

in: M. Sonka, J.M. Fitzpatrick (Eds.), Proceedings of the SPIE Medical Imaging: Image

Processing’s, SPIE, USA, 2002, pp. 412–423.

[68] P. Yan, S. Xu, B. Turkbey, J. Kruecker, Discrete deformable model guided by partial active

shape model for TRUS image segmentation, IEEE Transactions on Biomedical

Engineering 57 (2010) 1158–1166.

[69] Y. Zhan, D. Shen, Automated segmentation of 3D US prostate images using statistical

texture-based matching method, in: R.E. Ellis, T.M. Peters (Eds.), Medical Image

Computing and Computer-Assisted Intervention – MICCAI, Springer,

Berlin/Heidelberg/New York, 2003, pp. 688–696.

[70] Y. Zhan, D. Shen, An efficient method for deformable segmentation of 3D US

PROSTATE IMAGes, in: G.-Z. Yang, T.-Z. Jiang (Eds.), Second International Workshop

on Medical Imaging and Augmented Reality, Springer, Berlin/Heidelberg/New York,

2004, pp. 103–112.

[71] S. Ghosal, R. Mehrotra, Orthogonal moment operators for subpixel edge detection, Pattern

Recognition 26 (1993) 295–306.

[72] Y. Zhan, D. Shen, Increasing efficiency of SVM by adaptively penalizing outliers, in: A.

Rangarajan, B. Vemuri, A.L. Yuille (Eds.), 5th International Workshop on Energy

Minimization Methods in Computer Vision and Pattern Recognition, Springer,

Berlin/Heidelberg/New York, 2005, pp. 539–551.

[73] K. Diaz, B. Castaneda, Semi-automated segmentation of the prostate gland boundary in

ultrasound images using a machine learning approach, in: J.M. Reinhardt, J.P.W. Pluim

(Eds.), Proceedings of SPIE Medical Imaging: Image Processing, SPIE, USA, 2008, pp.

1–8.

[74] R. Medina, A. Bravo, P. Windyga, J. Toro, P. Yan, G. Onik, A 2D active appearance model

for prostate segmentation in ultrasound images, in: 27th Annual International Conference

of the IEEE Engineering in Medicine and Biology Society, IEEE Computer Society Press,

USA, 2005, pp. 3363–3366.

[75] T. Cootes, G. Edwards, C. Taylor, Active appearance models, in: H. Burkhardt, B.

Neumann (Eds.), Proceedings of European Conference on Computer Vision, Springer,

Berlin/Heidelberg/New York, 1998, pp. 484–498.

[76] S. Ghose, A. Oliver, R. Martí, X. Lladó, J. Freixenet, J.C. Vilanova, F. Meriaudeau,

Texture guided active appearance model propagation for prostate segmentation, in: A.

Madabhushi, J. Dowling, P. Yan, A. Fenster, P. Abolmaesumi, N. Hata (Eds.), Prostate

Cancer Imaging, volume 6367 of Lecture Notes in Computer Science, Springer, 2010, pp.

111–120.

[77] S. Ghose, A. Oliver, R. Martí, X. Lladó, J. Freixenet, J. Mitra, J.C. Vilanova, J. Comet, F.

Meriaudeau, Statistical shape and texture model of quadrature phase information for

Proceedings of the International Conference on Mathematical and Computer Sciences

Jatinangor, October 23rd-24th , 2013

229

prostate segmentation, International Journal of Computer Assisted Radiology and Surgery

7 (2012) 43–55.

[78] S. Ghose, A. Oliver, R. Martí, X. Lladó, J. Freixenet, J. Mitra, J.C. Vilanova, J. Comet, F.

Meriaudeau, Multiple mean models of statistical shape and probability priors for automatic

prostate segmentation, in: A. Madabhushi, J.Dowling, H.J. Huisman, D.C. Barratt (Eds.),

Prostate Cancer Imaging, volume 6963 of Lecture Notes in Computer Science, Springer,

2011, pp. 35–46.

[79] L. Gong, S.D. Pathak, D.R. Haynor, P.S. Cho, Y. Kim, Parametric shape modeling using

deformable superellipses for prostate segmentation, IEEE Transactions on Medical

Imaging 23 (2004) 340–349.

[80] I.B. Tutar, S.D. Pathak, L. Gong, P.S. Cho, K. Wallner, Y. Kim, Semiautomatic 3D

prostate segmentation from TRUS images using spherical harmonics, IEEE Transactions

on Medical Imaging 25 (2006) 1645–1654.

[81] F. Wang, J. Suri, A. Fenster, Segmentation of prostate from 3D ultrasound volumes using

shape and intensity priors in level set framework, in: Proceedings of the 28th IEEE

Engineering in Medicine and Biology Society, 2006, pp. 2341–2344.

[82] R. Malladi, J.A. Sethian, Image processing via level set curvature flow, Proceedings of the

National Academy of Sciences 92 (1995) 7046–7050.

[83] C. Garnier, J.-J. Bellanger, K. Wu, H. Shu, N. Costet, R.Mathieu, R. de Crevoisier, J.-L.

Coatrieux, Prostate segmentation in HIFU therapy, IEEE Transactions on Medical Imaging

30 (2011) 792–803.

[84] S. Vikal, S. Haker, C. Tempany, G. Fichtinger, Prostate contouring in MRI guided biopsy,

in: J.P.W. Pluim, B.M.Dawant (Eds.), Proceedings of SPIE Medical Imaging: Image

Processing, SPIE, USA, 2009, pp. 7259–72594A.

[85] P.D. Allen, J. Graham, D.C. Williamson, C.E. Hutchinson,Differential segmentation of the

prostate in MR images using combined 3D shape modelling and voxel classification, in:

3rd IEEE International Symposium on Biomedical Imaging: Nano to Macro, IEEE

Computer Society Press, USA, 2006, pp. 410–413.

[86] S. Martin, J. Troccaz, V. Daanen, Automated segmentation of the prostate in 3D MR

images using a probabilistic atlas and a spatially constrained deformable model, Medical

Physics 37 (2010) 1579–1590.

[87] A. Tsai, J. Anthony Yezzi, W. Wells, C. Tempany, D. Tucker, A. Fan, W.E. Grimson, A.

Willsky, A Shape-based approach to the segmentation of medical imagery using level sets,

IEEE Transactions on Medical Imaging 22 (2003) 137–154.

[88] A. Tsai, W.M. Wells, C. Tempany, E. Grimson, A.S. Willsky,Coupled multi-shape model

and mutual information for medical image segmentation, in: C. Taylor, J.A. Noble (Eds.),

International Conference, Information Processing in Medical Imaging, Springer,

Berlin/Heidelberg/New York,2003, pp. 185–197.

[89] N. Otsu, A threshold selection method from gray-level histograms systems, IEEE

Transactions on System, Man and Cybernetics 9 (1979) 62–66.

[90] A. Firjani, A. Elnakib, A. El-Baz, G.L. Gimel’farb, M.A.El-Ghar, A. Elmaghraby, Novel

stochastic framework for accurate segmentation of prostate in dynamic contrast enhanced

MRI, in: A. Madabhushi, J. Dowling, P. Yan, A. Fenster, P. Abolmaesumi, N. Hata (Eds.),

Prostate Cancer Imaging, volume 6367 of Lecture Notes in Computer Science, Springer,

2010, pp. 121–130.

[91] A. Firjani, A. Elnakib, F. Khalifa, G.L. Gimel’farb, M.A.El-Ghar, J. Suri, A. Elmaghraby,

A. El-Baz, A new 3D automatic segmentation framework for accurate segmentation of

prostate from DCE-MRI, in: International Symposium on Biomedical Imaging: From

Nano to Macro, IEEE, 2011, pp. 1476–1479.

[92] Y. Zhang, B.J. Matuszewski, A. Histace, F. Precioso, J. Kilgallon, C.J. Moore, Boundary

delineation in prostate imaging using active contour segmentation method with

interactively defined object regions, in: A. Madabhushi, J.Dowling, P. Yan, A. Fenster, P.

Abolmaesumi, N. Hata (Eds.), Prostate Cancer Imaging, volume 6367 of Lecture Notes in

Computer Science, Springer, 2010, pp. 131–142.

Proceedings of the International Conference on Mathematical and Computer Sciences

Jatinangor, October 23rd-24th , 2013

230

[93] Y. Gao, R. Sandhu, G. Fichtinger, A.R. Tannenbaum, A Coupled Global Registration and

Segmentation Framework with Application to Magnetic Resonance Prostate Imagery,

IEEE Transactions on Medical Imaging 10 (2010) 17–81.

[94] R. Toth, B.N. Bloch, E.M. Genega, N.M. Rofsky, R.E. Lenkinski, M.A. Rosen, A.

Kalyanpur, S. Pungavkar, A. Madabhushi, Accurate prostate volume estimation using

multifeature active shape models on t2-weighted mr, Academic Radiology 18 (2011) 745–

754.

[95] R. Toth, P. Tiwari, M. Rosen, G. Reed, J. Kurhanewicz, A. Kalyanpur, S. Pungavkar, A.

Madabhushi, A magnetic resonance spectroscopy driven initialization scheme for active

shape model based prostate segmentation, Medical Image Analysis 15 (2011) 214–225.

[96] P. Khurd, L. Grady, K. Gajera, M. Diallo, P. Gall, M. Requardt, B. Kiefer, C. Weiss, A.

Kamen, Facilitating 3D spectroscopic imaging through automatic prostate localization in

mr images using random walker segmentation initialized via boosted classifiers, in: A.

Madabhushi, J. Dowling, H.J. Huisman, D.C. Barratt (Eds.), Prostate Cancer Imaging,

volume 6963 of Lecture Notes in Computer Science, Springer, 2011, pp. 47–56.

[97] L. Grady, Random walks for image segmentation, IEEE Transactions on Pattern Analysis

and Machine Intelligence 28 (2006) 1768–1783.

[98] R. Zwiggelaar, Y. Zhu, S. Williams, Semi-automatic segmentation of the prostate, in: F.J.

Perales, A.J. Campilho, N.P. de la Blanca, A. Sanfeliu (Eds.), Pattern Recognition and

Image Analysis, Proceedings of First Iberian Conference, IbPRIA, Springer,

Berlin/Heidelberg/New York/Hong Kong/London/Milan/Paris/Tokyo, 2003, pp. 1108–

1116.

[99] T. Lindeberg, Edge detection and ridge detection with automatic scale selection, in:

Proceedings of Computer Vision and Pattern Recognition, IEEE Computer Society Press,

Los Alamitos/California/Washington/Brussels/Tokyo, 1996, pp. 465–470.

[100] M. Samiee, G. Thomas, R. Fazel-Rezai, Semi-automatic prostate segmentation of MR

images based on flow orientation, in: IEEE International Symposium on Signal Processing

and Information Technology, IEEE Computer Society Press, USA, 2006, pp. 203–207.

[101] D. Flores-Tapia, G. Thomas, N. Venugopal, B. McCurdy, S. Pistorius, Semi automatic

MRI prostate segmentation based on wavelet multiscale products, in: 30th Annual

International Conference of the IEEE Engineering in Medicine and Biology Society, IEEE

Computer Society Press, USA, 2008, pp. 3020–3023.

[102] S. Ghose, J. Mitra, A. Oliver, R. Marti, X. Llado, J. Freixenet, J. C. Vilanova, D. Sidibe,

F. Meriaudeau, A Coupled schema of probabilistic atlas and statistical shape and

appearance model for 3D prostate segmentation in MR images in: IEEE ICIP, United

States, 2012, pp. 0069-5550.

[103] MICCAI, “2009 prostate segmentation challenge MICCAI,”

wiki.namic.org/Wiki/index.php, accessed on [1st April, 2011].

[104] A. Gubern-Merida et al., “Atlas based segmentation of the prostate in MR images,”

wiki.namic.org/Wiki/images/d/d3/Gubern-Merida Paper.pdf, accessed on [20th July,

2011], 2009.

[105] S. Klein et al., “Automatic Segmentation of the Prostate in 3D MR Images by

AtlasMatching Using LocalizedMutual Information,” Med.Physics, vol. 35, pp. 1407–

1417, 2008.

[106] J. Dowling et al., “Automatic atlas-based segmentation of the prostate,” wiki.na-

mic.org/Wiki/images/f/f1/Dowling 2009 MICCAIProstate v2.pdf, accessed on [20th July,

2011], 2009.

[107] Y. Gao et al., “A Coupled Global Registration and Segmentation Framework with

Application toMagnetic Resonance Prostate Imagery,” IEEE Trans. Med. Imaging, vol.

10, pp. 17–81, 2010.

Proceedings of the International Conference on Mathematical and Computer Sciences

Jatinangor, October 23rd-24th , 2013

231

Measuring the Value of Information Technology Investment Using the Val IT Framework (Case Study: PT Best Stamp Indonesia Head Office, Bandung)

Rita KOMALASARI a*, Zen MUNAWAR a

a Polytechnic LP3I Bandung, Indonesia

*[email protected]

Abstract: The research was carried out as a case study at the PT Best Stamp Indonesia head office, Trade Center Metro A-10 Bandung, a company engaged in the manufacture of colour stamps. The result of this study is the maturity level of each IT investment management process, giving PT Best Stamp Indonesia an overview of the processes that need further development. The results show that most of the maturity indices are at maturity level 3 (defined).

Keywords: IT Investment, Val IT, Maturity Level

1. Introduction

Investment in information technology (IT) is a business strategy that a company must pursue to remain competitive and not be left behind by similar companies. On this subject, Hwang (2005), citing Lucas and Turner (1982), explains that information technology can be used to achieve corporate strategy by helping companies gain operational efficiency, improve the planning process and open new markets. In addition, the company's strategy should be considered in the planning phase of information technology, since information technology plays an important role in the implementation of corporate strategies (McFarlan, 1984). If an IT investment is judged only by its cost at the procurement stage, the company will become reluctant to invest; as Winanti and Falahah (2007: 31) state, the cost arises not only at the start of procurement but continues through maintenance for as long as the investment is used.

For PT Best Stamp Indonesia to compete among similar companies, it must recognise that its strategy for implementing information technology needs to be improved.

For a company such as PT Best Stamp Indonesia, whose core business is the manufacture of stamps, information technology can help record transactions, track inventory, pay employees, buy new merchandise and evaluate sales trends. Furthermore, to obtain maximum results and benefits from developing technology, a thorough framework is needed for calculating its value; one such framework is Val IT. The Val IT approach is used to provide a clear picture of the information technology assets in an organization.


2. Literature Review

2.1 Value of Information Technology Investments

Value is based on the benefits derived from competition, reflected in current and future business performance; it increases the company's advantage over its competitors and drives management to invest. Information technology can be regarded as the discipline of managing information so that it can be found easily and accurately.

According to Ross and Beath, in their study published in the MIT Sloan Management Review (2002) under the title Beyond the Business Case: New Approaches to IT Investment, there are two general dimensions of investing in information technology: strategic objective, which weighs short-term profit against long-term growth, and technology scope, which treats the information technology infrastructure as a business solution.

2.2 VAL IT Framework

The IT Governance Institute (ITGI), the body that issues IT governance frameworks, released a complementary framework in April 2006 for measuring the value of IT, called Val IT. Currently Val IT focuses on new IT investments and will be expanded to include all services and IT assets (Bell, 2006). The goals of the Val IT initiative include research, publications and support services that help management understand the value of IT investments and ensure that organizations obtain optimal value from IT investments at an acceptable cost and risk.

2.2.1 VAL IT Domain

There are three main domains for measuring the value of IT investments. Each domain consists of several processes and has its own purpose, as follows:

1. Value Governance (VG) consists of 11 processes and aims to optimize the value of IT investments by:
a. Establishing a governance, control and monitoring framework.
b. Providing strategic direction for investment.
c. Defining the characteristics of the investment portfolio.

2. Portfolio Management (PM) consists of 14 processes and aims to ensure that the entire IT investment portfolio is aligned with, and contributes optimally to, the organization's strategic objectives by:
a. Establishing and managing the resource profile.
b. Defining investment restrictions.
c. Evaluating, prioritizing and selecting, deferring or rejecting new investments.
d. Managing the overall portfolio.
e. Monitoring and evaluating portfolio performance.

3. Investment Management (IM) consists of 15 processes and aims to ensure that the organization's IT investment programmes deliver optimal results at reasonable cost and acceptable risk by:
a. Identifying business requirements.
b. Developing a clear understanding of candidate investment programmes.
c. Analyzing alternatives.
d. Defining the programme and documenting a detailed business case, including a clear and detailed description of the programme's benefits for the company.
e. Establishing clear accountability and ownership of the programme.
f. Monitoring and reporting on programme performance.
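For later tallying of the assessment results, these three domains and their process counts can be kept in a small data structure; the Python sketch below is illustrative only (the names are ours, not part of the framework):

    VAL_IT_DOMAINS = {
        "VG": ("Value Governance", 11),
        "PM": ("Portfolio Management", 14),
        "IM": ("Investment Management", 15),
    }

    # 11 + 14 + 15 = 40 processes are assessed in Section 4.
    total_processes = sum(count for _, count in VAL_IT_DOMAINS.values())
    assert total_processes == 40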


2.2.2 Important Terms

The Val IT framework defines several terms related to IT investments (Definitions of Key Terms Used in the Val IT Initiative, The Val IT Business Case, 2006):
1. Value: The expected results from investments that support the business.
2. Portfolio: The group of programmes, projects, services or assets selected, managed and monitored to optimize the value returned to the business.
3. Programme: A structured group of projects. The investment programme is the main unit of investment in Val IT.
4. Project: A set of activities focused on producing a specific capability.
5. Implement: Covers the full economic life cycle of the investment programme.

2.2.3 Maturity Model

Val IT distinguishes six maturity levels, as follows:
1. Level 0 (Non-existent): The process is not recognized at all.
2. Level 1 (Initial): The organization knows that issues or problems exist and need to be addressed, but the overall approach to management is not organized.
3. Level 2 (Repeatable): The process has developed to the stage where similar procedures are followed by different people carrying out the same task, but there is no formal training or standard communication of procedures.
4. Level 3 (Defined): Procedures have been standardized, documented and communicated through training, but implementation still depends on individual adherence to the procedures.
5. Level 4 (Managed): It is possible to monitor and measure compliance with procedures, so action is easily taken when a process does not work effectively. The process is improved continuously and provides good practice.
6. Level 5 (Optimized): The process has reached the level of best practice, based on continuous improvement and maturity comparison with other organizations.
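As a worked illustration of this scale (not part of the Val IT specification), a small Python helper can map an averaged assessment score onto the nearest of the six levels:

    LEVELS = ["Non-existent", "Initial", "Repeatable",
              "Defined", "Managed", "Optimized"]

    def maturity_label(score: float) -> str:
        # Clamp to the 0-5 range and round to the nearest whole level.
        level = max(0, min(5, round(score)))
        return f"Level {level} ({LEVELS[level]})"

    print(maturity_label(3.05))  # -> Level 3 (Defined)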

2.2.4 Business Case

The Val IT framework can be implemented by building a business case for the project whose investment value is to be measured. Through the business case, we can evaluate how much value a business proposal offers.

Building the business case consists of 8 stages, as follows:
1. Creating a fact sheet with the relevant data and performing data analysis, which comprises the following four analyses:
2. Alignment analysis.
3. Financial benefit analysis.
4. Analysis of non-financial benefits.
5. Risk analysis, resulting in:
6. Assessment/valuation and optimization of the results and risks generated by the IT investment, which is expressed by:
7. Structured recording of the results of the previous stages and documentation of the business case, whose end results are kept up to date by:
8. Evaluating the business case during programme execution, throughout the life cycle of the programme.

3. Research Methodology

This section explains the methodology used in conducting this study:

1. A literature study was conducted to collect data and information from a variety of literature related to IT management and IT investment. The literature study and a preliminary survey were carried out to study Val IT governance as applied to IT. The Val IT analysis focused on describing the company's IT applications within the Val IT framework in order to formulate the problem. An analysis of the company was done to map the state of its IT applications as IT investment assets, and the literature study also served to determine the company profile, the management and organizational structure, and the work processes of the IT applications. The overall flow is shown in Figure 1.

[Flowchart: background and problem identification; objectives and limitations; literature study; selection of research methodology; questionnaire planning and selection of respondents; reliability and validity tests; questionnaire data collection; data processing; analysis of questionnaire data in five phases (Phase 1: identification of Value Governance processes; Phase 2: identification of Portfolio Management processes; Phase 3: identification of Investment Management processes; Phase 4: maturity levels; Phase 5: business case analysis); recommendations for maturity level improvement.]

Figure 1. Research Systematics Methodology

2. The Val IT framework was used to analyse and measure the company's IT investment.
3. A questionnaire was used to survey the company. Its design was based on the Key Management Practices and the Val IT maturity model, and the resulting values were used to determine the maturity level. The questionnaire determines the maturity of information systems governance through questions answered on a scale of zero to five, following the Val IT maturity model.
4. The respondents selected for this study were only staff working at the PT Best Stamp Indonesia head office, A-10 MTC Bandung, owing to the time and cost constraints of the research.
5. The completed questionnaires were processed by computing the average value of the respondents' answers for each process (a sketch of this step appears after this list).
6. The company's current condition was mapped against the IT investment management processes of the Val IT framework. The standard value of each process in the framework was compared with the situation and condition of the company and its IT management, and then evaluated. The Val IT reference processes yield a maturity analysis for each IT application management process studied; the resulting gap analysis identifies the processes requiring further development and provides input for improving IT management, so that the company has a picture of the potential of its IT applications within the Val IT framework.
7. Val IT maturity level recommendations for improving each process were proposed so that PT Best Stamp Indonesia can reach level 5 (Optimised), where processes follow best practice based on continuous improvement.
8. In the final stage, conclusions were drawn after the whole process was completed and the findings were evaluated. Suggestions were then given as a reference for future research.
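A minimal sketch of the averaging in step 5, assuming each respondent scores each process on the 0-5 maturity scale (the function name is ours):

    from statistics import mean

    def process_score(answers: list[int]) -> float:
        # Average one process's questionnaire answers, each on the 0-5 scale.
        if not all(0 <= a <= 5 for a in answers):
            raise ValueError("answers must lie on the 0-5 maturity scale")
        return mean(answers)

    # A handful of illustrative answers for one process:
    print(process_score([3, 3, 4, 2, 3]))  # 3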

4. Implementation of Information Technology Investment Value

4.1 Identification of the Val IT Processes

The data were collected by distributing questionnaires to 30 respondents; while the questionnaires were being filled out, the researcher accompanied the respondents in order to answer any questions that might arise.

Table 1. The respondents' profile by management level

Management level: Number (Percentage)
High-level management: 2 (7%)
Middle management: 8 (27%)
Lower-level management: 20 (67%)
Total: 30 (100%)

Source: HRM PT Best Stamp Indonesia, 2012

Table 2. The result of Value Governance identification (process: maturity level)

VG1. Ensure informed and committed leadership: 4
VG2. Define and implement processes: 3
VG3. Define roles and responsibilities: 4
VG4. Ensure appropriate and accepted accountability: 4
VG5. Define information requirements: 4
VG6. Establish reporting requirements: 4
VG7. Establish organisational structures: 3
VG8. Establish strategic direction: 4
VG9. Define investment categories: 4
VG10. Determine a target portfolio mix: 4
VG11. Define evaluation criteria by category: 4


Table 3. The result of Portfolio Management identification (process: maturity level)

PM1. Maintain a human resource inventory: 4
PM2. Identify resource requirements: 2
PM3. Perform a gap analysis: 3
PM4. Develop a resourcing plan: 2
PM5. Monitor resource requirements and utilisation: 3
PM6. Establish an investment threshold: 3
PM7. Evaluate the initial programme concept business case: 3
PM8. Evaluate and assign a relative score to the programme business case: 2
PM9. Create an overall portfolio view: 3
PM10. Make and communicate the investment decision: 3
PM11. Stage-gate (and fund) selected programmes: 3
PM12. Optimise portfolio performance: 2
PM13. Re-prioritise the portfolio: 2
PM14. Monitor and report on portfolio performance: 2

Table 4. The result of Investment Management identification (process: maturity level)

IM1. Develop a high-level definition of investment opportunity: 3
IM2. Develop an initial programme concept business case: 2
IM3. Develop a clear understanding of candidate programmes: 2
IM4. Perform alternatives analysis: 3
IM5. Develop a programme plan: 3
IM6. Develop a benefits realisation plan: 2
IM7. Identify full life cycle costs and benefits: 3
IM8. Develop a detailed programme business case: 3
IM9. Assign clear accountability and ownership: 3
IM10. Initiate, plan and launch the programme: 4
IM11. Manage the programme: 3
IM12. Manage/track benefits: 3
IM13. Update the business case: 3
IM14. Monitor and report on programme performance: 3
IM15. Retire the programme: 3

Based on the calculation of the maturity model level across all Value Governance, Portfolio Management and Investment Management processes, the resulting index is 3, meaning that PT Best Stamp Indonesia is at the third level, Defined: procedures have been standardized, documented and communicated, but implementation still depends on the individual.
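The arithmetic behind this index can be checked directly against the values in Tables 2 to 4; a minimal sketch:

    # Maturity levels copied from Tables 2-4.
    vg = [4, 3, 4, 4, 4, 4, 3, 4, 4, 4, 4]              # Value Governance (11)
    pm = [4, 2, 3, 2, 3, 3, 3, 2, 3, 3, 3, 2, 2, 2]     # Portfolio Management (14)
    im = [3, 2, 2, 3, 3, 2, 3, 3, 3, 4, 3, 3, 3, 3, 3]  # Investment Management (15)

    scores = vg + pm + im
    index = sum(scores) / len(scores)  # 122 / 40 = 3.05
    print(round(index, 2), "-> level", round(index))  # 3.05 -> level 3 (Defined)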

Maturity scale: 0 Non-existent; 1 Initial/ad hoc; 2 Repeatable but intuitive; 3 Defined process; 4 Managed and measurable; 5 Optimised. The symbol in the figure marks the status of the company on this scale.

Ranking descriptions:
0 - Management processes are not applied at all
1 - Processes are ad hoc and disorganized
2 - Processes follow a regular pattern
3 - Processes have been documented and communicated
4 - Processes are monitored and measured
5 - Good practices are followed and automated

Figure 2. The Val IT maturity scale of PT Best Stamp Indonesia

Figure 3. The mapping of PT Best Stamp Indonesia's IT investment position against each Val IT process on the maturity model

4.2 Business Case Analysis

A business case can be made after analyzing the processes identified in Val IT that describe the information technology investment programme. According to the survey, PT Best Stamp Indonesia has invested in a web-based information system as a means of supporting the company's operations. The results of the analysis are shown in Table 5 below.


Table 5. Business case analysis results

Fact-sheet data analysis results:
  Acceptable risk: Yes
  Financially fulfilled: No
  Non-financial benefits apparent: Yes
  Aligned with the strategy: Yes

Decision at the application programme level: place in the priority portfolio subject to further financial targets; quantification of the non-financial benefits should be pursued as far as reasonably possible.

Source: Figure 12, Decision Matrix, The Business Case, ITGI (2006)

4.3 Recommendations

The analysis of the IT investment implementation maturity level produced a number of recommendations for each process, to be carried out by PT Best Stamp Indonesia to optimize the management of its IT investments using Val IT, as follows:

Table 6. Recommendations for process improvement

1 Value Governance
1.1 Ensure informed and committed leadership: provide reports periodically and ensure that every executive has an understanding of strategic IT issues.
1.2 Define and implement processes: follow each process consistently so that there is a clear link between the company's strategy and the IT investment programme portfolio.
1.3 Define roles and responsibilities: define and communicate the roles and responsibilities of all personnel in the company in relation to the IT business investment programme portfolio, so that authority and responsibility are exercised.
1.4 Ensure appropriate and accepted accountability: establish appropriate support systems and a monitoring framework consistent with the company's overall environment and with generally accepted principles.
1.5 Define information requirements: collect accurate data and monitor it regularly using suitable methods (such as the balanced scorecard).
1.6 Establish reporting requirements: report the results of the IT programme portfolio implementation accurately and on time to the directors and executive management.
1.7 Establish organisational structures: define committees and support structures, including an IT strategy committee and an IT planning committee.
1.8 Establish strategic direction: the company and the IT department agree on the impact of IT on the vision, mission and business strategy, and on the role of IT in the company.
1.9 Define investment categories: categorize IT investments.
1.10 Determine a target portfolio mix: set a target portfolio mix and align it with the company's strategic goals.
1.11 Define evaluation criteria by category: establish criteria for evaluating investments in terms of both return and risk.

2 Portfolio Management
2.1 Maintain a human resource inventory: retain human resources in the IT department and place them according to their competence.
2.2 Identify resource requirements: identify, and give attention to, shortages of human resources in the IT department.
2.3 Perform a gap analysis: identify current and future demand arising from shortages of IT and business resources.
2.4 Develop a resourcing plan: plan the IT resources needed to support the IT investment programme portfolio and IT strategic planning.
2.5 Monitor resource requirements and utilisation: periodically review the IT department and the organizational structure to match staffing and resources to strategic business objectives and to respond to expected changes.
2.6 Establish an investment threshold: determine the overall budget for the IT investment programme portfolio.
2.7 Evaluate the initial programme concept business case: determine whether the investment programme concept has enough potential to continue.
2.8 Evaluate and assign a relative score to the programme business case: conduct a detailed assessment and scoring of the programme business case, evaluating strategic alignment; financial and non-financial benefits; risks; and the availability of resources.
2.9 Create an overall portfolio view: identify any changes needed to other programmes in the portfolio as a result of the additional investment programme.
2.10 Make and communicate the investment decision: communicate to, and review with, the directors the decision on whether the proposed investment programme has the potential to be continued and implemented in the portfolio.
2.11 Stage-gate (and fund) selected programmes: finance the investment programmes, providing funding for each next stage.
2.12 Optimise portfolio performance: review the portfolio regularly to identify and exploit opportunities and to mitigate and minimize risk.
2.13 Re-prioritise the portfolio: evaluate and review the portfolio to ensure that it stays in line with the business strategy and that investment targets are maintained, so that the portfolio reaches maximum value.
2.14 Monitor and report on portfolio performance: provide a status report covering the achievement of planned objectives, performance targets and risk mitigation, then review it to check whether management is taking appropriate action.

3 Investment Management
3.1 Develop a high-level definition of investment opportunity: recognize opportunities for investment programmes to create value in support of the business strategy or to solve operational issues.
3.2 Develop an initial programme concept business case: describe the business outcomes to which the potential programme will contribute, the nature of the programme's contribution, and how that contribution will be measured.
3.3 Develop a clear understanding of candidate programmes: use appropriate methods and techniques, involving all stakeholders, to develop and document a shared understanding of the expected results of candidate programmes, how they will be measured, and the full scope of the initiatives required to achieve them.
3.4 Perform alternatives analysis: choose the alternative course of action with the highest potential value at an affordable cost and acceptable risk.
3.5 Develop a programme plan: define and document all projects, covering business, business processes, people, technology and project organization, required to achieve the programme's expected results.
3.6 Develop a benefits realisation plan: identify and document the measurement targets for each key outcome, the method for measuring each outcome, the responsibility for achieving it, the delivery schedule and the monitoring process; include a register of benefits, with an explanation of the risks that may threaten achievement of the outcomes and how those risks will be mitigated.
3.7 Identify full life cycle costs and benefits: prepare a budget reflecting the full economic life cycle costs and the financial and non-financial benefits of the investment programme, and submit it to the business sponsor for review, correction and approval.
3.8 Develop a detailed programme business case: develop a business case that includes an executive summary; a description of the programme objectives, approach and scope; programme dependencies, risks and milestones; the impact of the organizational changes the programme entails; the value assessment; and the programme plan.
3.9 Assign clear accountability and ownership: assign responsibility for achieving the benefits, controlling the costs, managing the risks and coordinating activities and dependencies.
3.10 Initiate, plan and launch the programme: plan the necessary resources and commission the projects required to achieve the programme outcomes.
3.11 Manage the programme: manage and monitor the performance of individual projects with respect to delivery, schedule, cost and risk; identify potential impacts on programme performance; and take timely corrective action when necessary.
3.12 Manage/track benefits: monitor performance against targets regularly to ensure that the planned benefits are achieved, sustained and optimized.
3.13 Update the business case: update the business case to reflect the status of the programme whenever projected costs or benefits change, whenever risks change, and at each review stage.
3.14 Monitor and report on programme performance: produce a report covering the performance of the entire portfolio, conformance of the IT strategy with policies and standards, benefits realization, process maturity, end-user satisfaction and the status of IT internal controls.
3.15 Retire the programme: remove the programme from the portfolio when it is terminated, with the directors' approval, and document the lessons learned.

5. Conclusions and Suggestions

5.1 Conclusions

From the review of PT Best Stamp Indonesia's information technology investment, the following can be concluded:
1. Measuring the value of IT investments is important so that the operational support systems implemented stay in line with the company's business strategy.
2. The Val IT framework provides detailed processes and can be tailored to the characteristics of PT Best Stamp Indonesia's IT investment in the form of a web system application.
3. The business case analysis shows that the strategic objectives PT Best Stamp Indonesia intends to achieve should be matched with its planned investment in a web-based information system.
4. Based on the Val IT processes applied to the web system application, PT Best Stamp Indonesia has only reached maturity level 3 (Defined).
5. The maturity level can be raised to level 5 (Optimized) by continuously improving the operational procedures of the web application system and by providing recommendations of best practice to information users.

5.2 Suggestions

Suggestions arising from this study are as follows:
1. In this study the value of information technology investments was measured using the Val IT Initiative framework; further research could apply other frameworks such as COBIT, Val IT 2.0, ITIL, Six Sigma or ISO 27001.
2. Owing to cost and time constraints, this study involved only the management staff of the PT Best Stamp Indonesia head office at A-10 Metro Bandung Trade Center; further research could involve staff at every branch in Indonesia and in Sydney, Australia.
3. Data were collected through forum discussions and questionnaires, but there is a concern that the answers given were overly favourable, mainly because respondents had to write their name and position at the start of the questionnaire and so may not have dared to describe the real situation. Further research should use an anonymous questionnaire so that the assessment can be as accurate as possible.


References

Bell, S. (2006). Val IT: Helping Companies Add Dollar Value with ICT. Computer World New Zealand, 8 June.

Hwang, H.-G., Yeh, R., Jiang, J. J., & Klein, G. (2005). IT Investment Strategy and IT Infrastructure Services. The Review of Business Information Systems Journal, 6, 56.

IT Governance Institute (2006). Enterprise Value: Governance of IT Investments, The Val IT Framework. United States of America: IT Governance Institute.

IT Governance Institute (2006). Enterprise Value: Governance of IT Investments, The Business Case. United States of America: IT Governance Institute.

Ross, J. W., & Beath, C. M. (2002). Beyond the Business Case: New Approaches to IT Investment. MIT Sloan Management Review, USA.

Winanti, W., & Falahah (2007). Val IT: Kerangka Kerja Evaluasi Investasi Teknologi Informasi [Val IT: A Framework for Evaluating Information Technology Investment]. Seminar Nasional Aplikasi Teknologi Informasi 2007 (SNATI 2007), Yogyakarta.


Assessment of Information Technology Governance Using the COBIT Framework Domains Plan and Organise and Acquire and Implement (Case Study: PT Best Stamp Indonesia Bandung Head Office)

Zen MUNAWAR

Polytechnic LP3I Bandung, Indonesia

*[email protected]

Abstract: The author conducted a study on the use of the COBIT framework to assess IT governance in the Plan and Organise and Acquire and Implement domains, in order to measure the value of information technology investments and align them with corporate goals. The assessment of information technology governance maps the planning and organization, acquisition and implementation processes onto maturity levels. Using the maturity levels, management can determine the current position of its information systems and assess what is needed to improve them. At PT Best Stamp Indonesia, 17 processes were assessed, and the company's information technology processes are currently at maturity level 3 (defined). The results show that the COBIT IT governance and information systems audit models can be applied to the company's information technology processes, although some adjustment or modification of the processes is necessary.

Keywords: IT Investment, COBIT Framework, Maturity Level.

1. Introduction

In this era of globalization, the use of Information Technology (IT) has become a necessity that plays an important role in many aspects of life. IT has also provided many benefits for the development of business processes. “Effective strategy development is becoming vital for today’s organizations. As the impact of IT has grown in organizations, IT strategy is finally getting the attention it deserves in business” (Heather, James and Satyendra, 2007). Companies increasingly strive to be the best, and PT Best Stamp Indonesia likewise attempts to implement the best strategies in its use of IT.

The use of IT provides solutions and profit opportunities through its strategic role in achieving the organization's vision and mission.

On the other hand, IT investments are relatively expensive to implement, and they carry a risk of failure. This condition demands consistent management, which makes well-tuned IT governance an essential requirement. Good IT governance enables the company to survive and helps it develop strategies and realize its goals.

COBIT (Control Objectives for Information and related Technology) is an IT governance standard developed by the IT Governance Institute (ITGI), a United States-based organization that conducts research on IT governance models. In contrast to other IT governance standards, COBIT takes a broader, more comprehensive and more in-depth look at the process of managing IT. In support of IT governance, COBIT provides a framework that ensures that IT is aligned with business processes, that IT resources are used responsibly and that IT risks are dealt with appropriately.


2. Literature Review

2.1 Information Technology Governance

There are several definitions of IT governance. According to ITGI (2007: 5):

IT governance is the responsibility of executives and the board of directors, and consists of the

leadership, organisational structures and processes that ensure that the enterprise’s IT sustains and

extends the organisation’s strategies and objectives.

Based on this definition, IT governance is the responsibility of an enterprise's top and executive management. It is part of the overall management of the company, consisting of the leadership, organizational structures and processes that ensure that the organization's IT sustains and extends its strategies and goals.

According to Van Grembergen and De Haes (2004: 1):

IT governance is the organizational capacity exercised by the Board, executive management and IT management to control the formulation and implementation of IT strategy and in this way ensure the fusion of business and IT.

In this definition, information technology governance is the organizational capacity exercised by the board, executive management and IT management to control the formulation and implementation of IT strategy and thereby ensure the fusion of business and IT.

Opportunities are created by optimizing IT resources within the organization's resources, such as data, application systems, infrastructure and human resources. As Stacey Hamaker, CISA, and Austin Hutton (2004) put it:

“IT governance is an integral part of corporate governance in raising the bar of corporate

integrity and enhancing shareholder value. IT governance goes beyond IT audit and beyond what the

CIO can accomplish by himself/herself. Depending on the organization, IT governance may be the

enabler for an organization to move to the next level or it may be the only way an organization can

meet regulatory and legal requirements”.

Based on the definitions above, the emphasis of IT governance is on creating strategic alignment between information technology and the business, and company management plays a very important role in applying information technology governance.

2.2 COBIT Framework

Control Objectives for Information and Related Technology (COBIT) is a framework developed by the IT Governance Institute, a United States-based organization that conducts research on IT management models.

COBIT provides clear policies and good practice for information technology governance, helping senior management understand and manage the associated risks by providing an IT governance framework and a guide of detailed control objectives for management, business process owners, users and auditors.

2.2.1 COBIT Domain

COBIT describes information technology as a process model divided into 4 domains and 34 processes, summarized by 210 detailed control objectives according to areas of responsibility: planning, building, running and monitoring the IT implementation. This provides an end-to-end view of information technology. The main characteristics of the COBIT framework are its business process orientation and its control-based measurement.

The 34 COBIT IT processes (ITGI, 2007) are:


1. Plan and Organise (PO)

This domain concentrates on planning and on aligning the IT strategy with the corporate strategy. The high-level control objectives of this domain are as follows:

1. PO1 Define a Strategic IT Plan

2. PO2 Define the Information Architecture

3. PO3 Determine Technological Direction

4. PO4 Define the IT Processes, Organisation and Relationships

5. PO5 Manage the IT Investment

6. PO6 Communicate Management Aims and Direction

7. PO7 Manage Human Resources

8. PO8 Manage Quality

9. PO9 Assess and Manage IT Risks

10. PO10 Manage Projects

2. Acquire and Implement (AI)

This domain concentrates on the procurement and implementation of the IT that is used. The high-level control objectives of this domain are as follows:

11. AI1 Identify Automated Solutions

12. AI2 Acquire and Maintain Application Software

13. AI3 Acquire and Maintain Technology Infrastructure

14. AI4 Enable Operation and Use

15. AI5 Procure IT Resources

16. AI6 Manage Changes

17. AI7 Install and Accredit Solutions and Changes

3. Deliver and Support (DS)

This domain concentrates on technical support for the IT service process. The high-level control objectives of this domain are as follows:

18. DS1 Define and Manage Service Levels

19. DS2 Manage Third-Party Services

20. DS3 Manage Performance and Capacity

21. DS4 Ensure Continuous Service

22. DS5 Ensure Systems Security

23. DS6 Identify and Allocate Costs

24. DS7 Educate and Train Users

25. DS8 Manage Service Desk and Incidents

26. DS9 Manage the Configuration

27. DS10 Manage Problems

28. DS11 Manage Data

29. DS12 Manage the Physical Environment

30. DS13 Manage Operations

4. Monitor and Evaluate (ME)

This domain concentrates on the controls implemented within the company and on internal and external reviews. The high-level control objectives of this domain are as follows:

31. ME1 Monitor and Evaluate IT Performance

32. ME2 Monitor and Evaluate Internal Control

33. ME3 Ensure Compliance With External Requirements

34. ME4 Provide IT Governance
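Of these 34 processes, the present study assesses the 17 in the first two domains (PO and AI); a quick sanity check of that count (the snippet itself is ours, only the identifiers follow COBIT):

    # The 17 processes assessed in this study: PO1-PO10 and AI1-AI7.
    assessed = [f"PO{i}" for i in range(1, 11)] + [f"AI{i}" for i in range(1, 8)]
    print(len(assessed))  # 17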

2.2.2 Control Base

COBIT defines control as the policies, procedures, practices and organizational structures designed to provide reasonable assurance that business objectives will be achieved and that undesired events will be prevented or detected and corrected. Information technology control objectives, in turn, are statements of the intent or result expected from implementing control procedures in a particular IT activity. The COBIT control framework provides a clear link between the need for information technology governance, the information technology processes and the information technology controls, because the control objectives are organized by information technology process. Each information technology process in COBIT has one high-level control objective and several detailed control objectives. Together, these controls are the characteristics of a well-managed process.

2.2.3 Control by Measurement

An organization needs to understand the status of its information technology systems in order to decide what level of management and control must be provided. Companies therefore need to know what should be measured and how the measurements are done, so that the performance level status can be obtained.

One tool for measuring the performance of a system is the information technology maturity model (maturity level). The maturity model for managing and controlling information technology processes is based on an evaluation method by which an organization can rate itself from non-existent (0) to optimized (5). The maturity model is intended to identify existing problems and to prioritize improvements. It is designed as a profile of information technology processes, so that the organization can recognize a description of the current situation and of future possibilities. Using the maturity models developed for each of the 34 processes, information technology management can identify (ITGI, 2007):
1. the current condition of the company;
2. the current condition of the industry, for comparison;
3. the condition the company desires; and
4. the desired growth between as-is and to-be.

The maturity model is built from a generic qualitative model to which the following attributes are added in increasing degrees (ITGI, 2007):
1. awareness and communication
2. policies, plans and procedures
3. tools and automation
4. skills and expertise
5. responsibility and accountability
6. goal setting and measurement

The definition of the information technology process maturity model, referring to the COBIT framework, is in general as follows (ITGI, 2007):

1. Level 0: non-existent. No identifiable IT processes exist at all. The company has not recognized that there is an issue to be addressed.

2. Level 1: initial / ad hoc. There is evidence that the company is aware of issues that need to be addressed. There are no standard processes; instead, there are ad hoc approaches that tend to be applied case by case. The overall approach to management is not organized.

3. Level 2: repeatable but intuitive. Processes have developed to the stage where similar procedures are followed by different people performing the same task. There is no formal training or communication of standard procedures, and responsibility is left to the individual. There is a high degree of reliance on the knowledge of individuals, and therefore errors often occur.

4. Level 3: defined process. Procedures have been standardized, documented, and communicated through training. It is, however, left to the individual to follow these processes, and therefore deviations are difficult to detect. The procedures themselves are not sophisticated but are a formalization of activities that are already carried out.

5. Level 4: managed and measurable. It is possible to monitor and measure compliance with procedures and to take action where processes appear not to be working effectively. Processes are under constant improvement and provide good practice. Automation and auxiliary tools are used in a limited or fragmented way.


6. Level 5: optimised. Processes have been refined to a level of best practice, based on the results of continuous improvement and maturity modelling with other enterprises. Integrated information technology is used to automate the workflow, providing tools to improve quality and effectiveness and enabling the company to adapt quickly to change.

3. Research Methodology

This section explains the methodology used by the authors in conducting this research. The methodology is the manner and sequence of work used in this study.

[Flowchart: literature study; data collection and corporate documents; determining the research methods; design of Questionnaires I and II and selection of respondents; data collection (Questionnaires I and II, interviews, documentation); data processing in five phases (phase 1: identification of business goals; phase 2: identification of IT goals; phase 3: identification of IT processes; phase 4: identification of control objectives; phase 5: maturity level); data analysis of the processing results; design of the maturity-level improvement proposal.]

Figure 1. Research Methodology

The following research methods were used in the preparation of the research:

1. Studying literature sources such as the COBIT framework, the audit stages, and the system procedures at PT. Best Stamp Indonesia.

2. Collecting data and internal documents about the company, such as its vision, mission, goals, objectives, organizational structure, IT architecture, and development plan, including IT management policies.

3. Determining the research method, namely a case study.

4. Designing the questionnaires, accompanied by the selection of respondents.

5. Collecting data by identifying expectations for the implementation of IT management at PT. Best Stamp Indonesia, by distributing the first questionnaire containing questions with three performance levels as answer options: low (L), medium (M), and high (H); if the questionnaire results are at level M or H, the research proceeds to the next stage.

6. Observing the existing condition by distributing the second questionnaire, which follows the COBIT 4.1 framework, to obtain an overview of the implemented IT conditions, to measure the gap against management's expectations, and to determine the information system governance maturity level based on COBIT 4.1 in the Plan and Organise and Acquire and Implement domains.

7. Conducting interviews related to the research (top-level, middle, and bottom management).

8. Performing data analysis and providing a design proposal to increase the maturity level.

4. Analysis and Discussion

4.1 Management Awareness Evaluation

From the first questionnaire (management awareness), the level of fulfilment of each Detailed Control Objective (DCO) can be seen from the processed data. Table 1 shows which parts of the management policies are considered good and which still need to be improved.

Based on the recapitulation of the first (management awareness) questionnaire, the current condition of information technology performance in the field can be stated as follows:

1. The majority of respondents, 67.53%, stated that the performance of information technology in the field is sufficient;

2. 29.76% of respondents stated that the performance of information technology in the field is good; and

3. only 2.59% of respondents stated that information technology performance is still lacking.

Table 1. Recapitulation of the First Questionnaire (Management Awareness) Results

No   DCO    L(%)   M(%)   H(%)   Score   Target
(answer weights:    1.00   2.00   3.00)
1    PO1      0     63     37     2.37    3.00
2    PO2      0     72     28     2.28    3.00
3    PO3      2     63     35     2.33    3.00
4    PO4      1     67     32     2.31    3.00
5    PO5      1     62     37     2.35    3.00
6    PO6      3     74     23     2.21    3.00
7    PO7      2     65     33     2.31    3.00
8    PO8      1     71     28     2.27    3.00
9    PO9      2     56     40     2.38    3.00
10   PO10     3     72     25     2.22    3.00
11   AI1     19     44     37     2.18    3.00
12   AI2      1     82     17     2.15    3.00
13   AI3      2     64     34     2.33    3.00
14   AI4      2     78     20     2.18    3.00
15   AI5      3     64     33     2.31    3.00
16   AI6      0     77     23     2.36    3.00
17   AI7      2     74     24     2.22    3.00
Average      2.59  67.53  29.76   2.28    3.00

Source: research, 2013. Notes:
2.15 : the lowest activity performance score
2.38 : the highest activity performance score
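As an illustration of how the scores in Table 1 are obtained, the following minimal Python sketch (not the authors' code) computes the weighted DCO score from an L/M/H answer distribution, using the answer weights 1.00, 2.00, and 3.00 from the table:

    # Minimal sketch: weighted DCO score from an L/M/H answer distribution.
    # The weights follow the "answer weights" row of Table 1.
    WEIGHTS = {"L": 1.00, "M": 2.00, "H": 3.00}

    def dco_score(dist_percent):
        """Weighted average of the answer distribution (percentages sum to 100)."""
        return sum(WEIGHTS[k] * dist_percent[k] for k in WEIGHTS) / 100.0

    # PO1 from Table 1 (0% L, 63% M, 37% H) gives 2.37, matching the table:
    print(round(dco_score({"L": 0, "M": 63, "H": 37}), 2))  # 2.37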


The recapitulation of the data shows that the DCO scores are moderate (adequate), with an average value of 2.28. Since, according to the COBIT framework, the best value is 3.00, it can be concluded that the performance level of information technology is approaching a good position. This condition must be maintained and, where possible, improved, so that the expectations of all stakeholders can be achieved.

Figure 2. Evaluation graph of management policies (Source: research result, 2013)

In general, the interpretation of the results of Questionnaire I is that almost all activities of the data management process are adequately controlled. Only one activity needs to be fixed, the activity with DCO = 2.15, namely acquiring and maintaining application software. The other activities are more than sufficient, approaching a good value. This condition needs to be maintained and, if possible, improved in order to achieve a good value.

4.2. Maturity Level Analysis

Based on the results of the second questionnaire, on the IT maturity level, answers were obtained from the respondents as shown in Table 2. The 12 questions cover the six attributes of the COBIT IT maturity framework and together represent the current IT maturity condition (as-is) and the expected future condition (to-be). The tabulated questionnaire results are presented in Table 2.

Based on Table 2, several things can be identified:

1. Manage Changes (AI6) is the attribute with the highest maturity value, 3.10. This can be interpreted to mean that a formal change management process has been established, including categorization, prioritization, emergency procedures, and change authorization. The results of assessing the effect of IT changes on the implementation of activity plans are formulated to support the deployment of new applications and technologies.

2. Define a Strategic IT Plan (PO1) is the attribute with the smallest maturity value, 2.89. The interpretation of Questionnaire II is that nearly all processes of the web data management system are in the defined-process category; only one process, PO1, needs to be fixed. In this case the IT planning process is reasonably sound and ensures that proper planning can be implemented. However, authority over the execution of the process was given to individual managers, and there is no procedure for checking the process; in the future, a standard procedure needs to be established to check the processes in the IT strategic plan.

Table 2 shows the IT maturity values and levels. The value and the level of IT maturity differ: the index, a fractional value, describes the condition as a process or a step toward maturity, while the maturity level is an integer giving the absolute measure of the existing IT maturity.

Overall, the IT maturity level at PT. Best Stamp Indonesia is at level 3. This condition indicates that the company has defined standard procedures for all activities associated with the web data management system process, but more supervision of the execution of the procedures is still needed to avoid deviations.

Table 2. Recapitulation of Questionnaire 2 Results: Maturity Level

No     Process   Total Answer   Total Questions   Index   Maturity Level
1      Plan and Organize
1.1    PO1       34.67          12                2.89    3
1.2    PO2       35.33          12                2.94    3
1.3    PO3       35.27          12                2.94    3
1.4    PO4       36.20          12                3.02    3
1.5    PO5       35.43          12                2.95    3
1.6    PO6       35.70          12                2.98    3
1.7    PO7       35.30          12                2.94    3
1.8    PO8       34.97          12                2.91    3
1.9    PO9       35.67          12                2.97    3
1.10   PO10      35.07          12                2.92    3
2      Acquisition and Implementation
2.1    AI1       35.23          12                2.94    3
2.2    AI2       35.80          12                2.98    3
2.3    AI3       36.30          12                3.03    3
2.4    AI4       36.50          12                3.04    3
2.5    AI5       36.60          12                3.05    3
2.6    AI6       37.20          12                3.10    3
2.7    AI7       35.57          12                2.96    3

Source: research result, 2013. Notes:
2.89 : the lowest average IT maturity value
3.10 : the highest average IT maturity value
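The relation between the index and the level in Table 2 can be sketched in a few lines of Python. This is an illustration (not the authors' code), assuming the maturity level is simply the index rounded to the nearest integer, which is consistent with every row of the table:

    # Index = total answer / number of questions; level = rounded index
    # (an assumption consistent with all rows of Table 2).
    def maturity(total_answer, total_questions=12):
        index = total_answer / total_questions   # fractional maturity index
        return round(index, 2), round(index)     # (index, maturity level)

    print(maturity(34.67))  # (2.89, 3)  -- PO1, the lowest
    print(maturity(37.20))  # (3.1, 3)   -- AI6, the highest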


Figure 3. Mapping of the company's IS position for each IS process against the maturity level

Table 3. Information system (IS) audit recommendations for PT. Best Stamp Indonesia

No   Process   Recommendation

1 Planning and Organizing

1.1 Define a Strategic IT Plan
    1. The IT strategic plan should be planned and documented.
    2. IT strategic planning should take into account all aspects of the institution's requirements.
    3. Institutional leaders should be involved directly in the IT strategic planning process.
    4. IT strategic planning should be monitored continuously by the head of the institution.

1.2 Establish the Information Architecture
    1. Information architecture planning should be well planned and documented.
    2. Information architecture development should use appropriate techniques and methods.
    3. A data dictionary should be established and documented.

1.3 Establish the Technology Direction
    1. Planning needs to consider the direction of technology trends.
    2. The impact of the use of new technologies should be anticipated.
    3. Staff should be given guidance on plans that will use new technology.
    4. Maintenance needs to be anticipated for any adoption of new technologies.
    5. Documentation should be made of the institution's technology direction.

1.4 Establish the IT Processes, Organization and Relationships
    1. There needs to be a clear distribution of tasks and responsibilities for the Web System.
    2. The Web Systems department needs to be involved in the decision-making process.
    3. Briefings are needed to raise awareness of the importance of internal controls, security, and discipline among all staff.
    4. The Web Systems department needs to establish relationships with external parties to facilitate its duties and responsibilities.


Table 3. IS audit recommendations for PT. Best Stamp Indonesia (cont.)

No   Process   Recommendation

1 Planning and Organizing

1.5 Manage the IT Investment
    1. IT investments should be planned and budgeted adequately.
    2. An inventory of IT investments should be recorded and documented.

1.6 Communicate Management Aims and Direction
    1. Corporate leaders must structurally provide direction on IT controls on a continuous basis.
    2. There should be feedback on the direction delivered by the corporate leaders.
    3. Communication activities and management direction should be documented.

1.7 Manage IT Human Resources
    1. Web Systems personnel need to be trained in line with the development of the system.
    2. IT personnel training needs to be conducted to improve skills and abilities.
    3. Qualifications for IT personnel need to be established.
    4. Job descriptions for human resources need to be made and well documented.

1.8 Manage Quality
    1. There should be a conscious and structured effort to manage and improve the quality of the IS of PT. Best Stamp Indonesia.
    2. There should be guidelines for quality assurance of the outcomes of PT. Best Stamp Indonesia.

1.9 Assess and Manage IT Risks
    1. Risk management guidelines for the environment of PT. Best Stamp Indonesia should be made.
    2. IT personnel should be risk-aware and engage in risk assessment.

1.10 Manage Projects
    1. Guidelines for the management processes and methodologies of IT projects need to be made.
    2. Senior IT management involvement in projects needs to be improved.
    3. The implementation of IT projects should be monitored continuously throughout the life of the project.

2 Procurement and Implementation

2.1 Identify Automated Solutions
    1. Identification of automated solutions needs to be done in a structured way.
    2. Plans and surveys of automated solutions should be made, with benchmarking if needed, in order to obtain the right solution.

2.2 Acquire and Maintain Application Software
    1. Procurement and maintenance of application software need to be conducted continuously, given that the dynamics and demands of the system change continuously.
    2. Application development should follow the software development life cycle.
    3. Software development needs to be well documented.
    4. There should be user approval of the implementation of application software and of changes to it.

2.3 Acquire and Maintain the Technology Infrastructure
    1. A plan should be made for the technology platform that will be used.
    2. There should be regulation of migration from old technology to new technology.
    3. Documentation needs to be made of acquiring and maintaining the technology infrastructure.


Table 3. IS audit recommendations for PT. Best Stamp Indonesia (cont.)

No   Process   Recommendation

2 Procurement and Implementation

2.4 Enable Operation and Use
    1. There should be a structured effort to define, validate, and maintain operating procedures for users.
    2. Procedures need to be socialized properly.
    3. There should be monitoring of the implementation of the procedures in force.

2.5 Procure IT Resources
    1. IT personnel need to be added, considering the workload.
    2. Placement of IT personnel needs to be done according to skills and abilities.
    3. Qualifications for IT personnel need to be established.

2.6 Manage Changes
    1. Guidelines need to be made for managing changes, covering impact, authorization, changes in organizational structure, business process re-engineering, and so on.
    2. The changes that occur need to be documented.
    3. Endorsement of changes by the leadership needs to be obtained.

2.7 Install and Accredit Solutions and Changes
    1. Endorsement of system changes by the user and/or corporate leader should be obtained before the system is installed.
    2. Users need to be included in testing and installation.
    3. There should be testing of the safety factor and acceptance of the system.
    4. There should be documentation of hardware and software installation.
    5. There should be planning of, and effort devoted to, installing the system.

5. Conclusion and Recommendation

5.1. Conclusion

Based on the analysis and evaluation discussed previously, the following conclusions can be drawn:

The implementation of information technology has been adapted to the company's strategic plan, which provides direction and guidance for the application of information technology at PT. Best Stamp Indonesia.

The seventeen information technology processes at PT. Best Stamp Indonesia are, overall, at the defined level.

There are still some weaknesses in the company's IT processes, such as the lack of an adequate help desk system.

Monitoring and evaluation of IT performance has not been done optimally; this is shown by the fact that an information systems audit has never been conducted for the overall information systems running at PT. Best Stamp Indonesia.

5.2. Recommendation

Based on the research that has been done, the authors have suggestions that can later be used as a basis for further research. The suggestions concern the inadequate information technology security measures; one concern of the authors is that there is no adequate disaster management system.

In terms of internal controls, management is aware of the need to improve the company's internal control conditions, but this has not been done optimally. The authors suggest paying more attention to internal control management so that the company can later create better information technology governance.


References

Board Briefing on IT Governance, 2nd edition. (2007). ITGI.

COBIT Framework 4.1. (2007). ITGI.

Grembergen, Wim Van, and Steven De Haes. (2004). IT Governance and Its Mechanisms. Information Systems Audit and Control Association.

Grembergen, Wim Van. (2004). Strategies for Information Technology Governance. Idea Group Publishing.

Grembergen, Wim Van, Steven De Haes, and Eric Guldentops. (2004). Structures, Processes and Relational Mechanisms for IT Governance. Idea Group Publishing.

Smith, Heather A., James D. McKeen, and Satyendra Singh. (2007). Developing Information Technology Strategy for Business Value. Journal of Information Technology Management, Volume XVIII, Number 1, p. 56.

IT Governance Institute. (2007). COBIT 4.1. IT Governance Institute, United States of America.

Hamaker, Stacey, and Austin Hutton. (2004). Principles of IT Governance. Information Systems Control Journal, Volume 2.


Curve Fitting Based on a Physical Model of Accelerated Creep Phenomena for Material SA-213 T22

Cukup MULYANAa*, Agus YODIb
aPhysics Department, University of Padjadjaran, Indonesia
bMathematics Department, Bandung Institute of Technology, Indonesia

*[email protected]

Abstract: Critical components made from metal with high temperature and mechanical resistance are commonly used in power plants and refineries. A number of tubes made from ferritic carbon steel SA-213 T22 are used in boilers to generate steam in power plants. Failure of these tubes is commonly caused by the creep phenomenon. The creep behavior of materials can be simulated using accelerated creep tests in the laboratory. In this research, a model of creep behavior has been formulated for two different creep stages. Stage I is the region where the material is given a sudden load and experiences strain hardening, followed by a balance of strain hardening and softening due to high-temperature effects. Stage II is the region where this equilibrium is broken because of the dominant effect of the metal temperature. The first step is formulating a stage I creep model that describes the strain-hardening mechanism of the metal, with the equation dε/dt ∝ (k − ε) for 0 < t < t1, where t1 is the transition time. The stage II creep model describes the failure mechanism of the material due to temperature effects, which happens very quickly, with the equation dε/dt ∝ ε for t1 < t < t2. At time t = t1, the functions in equations (3) and (4) should be continuous and differentiable. The coefficients A, B, C, α, and β are obtained by fitting equations (3) and (4) to the creep curves from the accelerated creep tests in the laboratory. The solution of the stage I creep model is an exponential function that converges at a certain time (t1), and the solution for stage II is a positive exponential function. The transition time t1 physically describes the breaking of the balance between strain hardening and softening and is called the terminate creep time. The higher the temperature, the shorter the terminate creep time; the higher the stress, the shorter the terminate time. The coefficients A, B, C, α, and β in the solution functions of the creep models are unique, depending on the given temperature and stress.

Keywords: Curve fitting, accelerated creep, creep curve, transition time, creep stage

1. Introduction

Critical components made from metal with high temperature and mechanical resistance are commonly used in power plants and refineries. An important characteristic that should be considered is thermal resistance: if a metal is exposed to high temperature and pressure for a long term, it will undergo plastic deformation called creep (Viswanathan, 1995). A number of tubes made from ferritic carbon steel SA-213 T22 are used in boilers to generate steam in power plants. The creep behavior of materials can be simulated using accelerated creep tests in the laboratory: a constant stress is applied to the material at a given temperature for a long period of time, and the elongation or strain of the material is measured as a function of time.

In this research, a model of creep behavior has been formulated for two different creep stages. Stage I is the region where the material is given a sudden load and experiences strain hardening, followed by a balance of strain hardening and softening due to high-temperature effects. Stage II is the region where this equilibrium is broken because of the dominant effect of the metal temperature (Fujio Abe, 2008 and Rusinko, 2011). In this stage the metal physically experiences crack initiation; the cracks then propagate rapidly, eventually causing fracture.


The formulated mathematical model is tested by fitting it to data from accelerated creep tests in the laboratory, to obtain a curve-fitting function that satisfies the real conditions.

2. Methodology

The first step is formulating a stage I creep model that describes the strain-hardening mechanism of the metal:

dε/dt ∝ (k − ε),  for 0 < t < t1   (1)

where dε/dt is the strain rate, k is a constant, ε is the instantaneous strain, and t1 is the time at which the equilibrium between strain hardening and softening is broken. In this stage the strain rate is proportional to the part of the metal that has not yet experienced strain.

The stage II creep model describes the failure mechanism of the material due to temperature effects, which happens very quickly:

dε/dt ∝ ε,  for t1 < t < t2   (2)

where t2 is the time of fracture. In this stage the strain rate is directly proportional to the strain, and it increases rapidly from time to time.

The solution of the stage I creep model is equation (3), which describes the strain as a function of time, ε(t):

ε(t) = A − B exp(−αt),  for 0 < t < t1   (3)

The solution of the stage II creep model is equation (4):

ε(t) = C exp(βt),  for t1 < t < t2   (4)

At time t = t1, the functions in equations (3) and (4) should be continuous and differentiable. The coefficients A, B, C, α, and β are obtained by fitting equations (3) and (4) to the creep curves from the accelerated creep tests in the laboratory.
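The fitting procedure can be sketched as follows. This is a minimal Python illustration (not the authors' code), using scipy.optimize.curve_fit for stage I; the transition time t1 and coefficient C then follow analytically from requiring continuity and differentiability of equations (3) and (4) at t = t1. The strain data and the value of β below are placeholders; in practice β is fitted to the stage II data.

    # Minimal sketch (assumed, not the authors' code) of the two-stage fit.
    import numpy as np
    from scipy.optimize import curve_fit

    def stage1(t, A, B, alpha):
        """Stage I creep, equation (3): strain hardening."""
        return A - B * np.exp(-alpha * t)

    def stage2(t, C, beta):
        """Stage II creep, equation (4): accelerating failure."""
        return C * np.exp(beta * t)

    def transition(A, B, alpha, beta):
        """t1 and C from matching the value and slope of (3) and (4) at t = t1.
        Matching gives exp(-alpha*t1) = beta*A / (B*(alpha + beta))."""
        t1 = np.log(B * (alpha + beta) / (beta * A)) / alpha
        C = (A - B * np.exp(-alpha * t1)) * np.exp(-beta * t1)
        return t1, C

    # Hypothetical strain measurements (t in hours, strain in %):
    t_data = np.array([0.2, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0])
    eps_data = np.array([0.45, 0.90, 1.35, 1.60, 1.75, 1.90, 1.97])

    (A, B, alpha), _ = curve_fit(stage1, t_data, eps_data, p0=(2.0, 1.9, 0.7))
    t1, C = transition(A, B, alpha, beta=1.0)  # beta: placeholder stage II rate
    print(f"A={A:.4f}, B={B:.4f}, alpha={alpha:.4f}, t1={t1:.2f} h, C={C:.5f}")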

The second step is the accelerated creep tests in the laboratory. The material is taken from a steam distribution pipe of material SA-213 T22 in a power plant, and the tube is then cut in the longitudinal direction as shown in Figure 1.

Figure 1. Sample preparation for the creep test (ASTM E-139, 2000)


The next step, the accelerated creep test, is done at B2TKS PUSPIPTEK Serpong, as shown in Figure 2.

Figure 2. (a) Creep machine; (b) schematic diagram of the creep test (J. Rösler, 2007)

The first test is done at a constant stress of 190 MPa, while the temperature is adjusted around the operating conditions, from about 550 °C to 580 °C; the results are shown in Figure 4. The second test is done at constant temperature while the applied stress ranges from about 152 MPa to 240 MPa. The applied stress is higher than the operational stress in order to accelerate creep, and the test is continued until the samples fracture, as shown in Figure 3.

Figure 3. Photos of specimens before and after accelerated creep testing at constant stress

3. Result and Analysis

The creep tests are done in two ways: constant stress and constant temperature (Marc Andre, 2009).

3.1 Creep Curve for a Constant Stress of 190 MPa

In Figure 4, the points in the graph are obtained from strain measurements as a function of time from the accelerated creep test, while the continuous line is the curve fitted to the points using the creep model functions in equations (3) and (4).


Figure 4. Curve fitting of creep behavior (creep strain versus time) at a constant stress of 190 MPa for the four specimens

For T = 550 °C and tr = 10.59 hours, with dε/dt = 0.20% per hour:

ε(t) = 2.01228 − 1.86667 exp(−0.697207 t),  0 < t < 6.6 hours   (5)

For T = 560 °C and tr = 5.9 hours, with dε/dt = 0.36% per hour:

ε(t) = { 2.89306 − 2.17877 exp(−0.408887 t),  0.26 ≤ t < 4.6 hours
       { 0.019050 exp(1.02844 t),             4.6 ≤ t < 6 hours      (6)

For T = 570 °C and tr = 3.45 hours, with dε/dt = 1.1% per hour:

ε(t) = { 1.77877 − 1.695696 exp(−0.825645 t),  0.08 < t ≤ 1.8 hours
       { 0.262177 exp(0.95177 t),              1.8 < t ≤ 3.45 hours   (7)

For T = 580 °C and tr = 1.95 hours, with dε/dt = 6.12% per hour:

ε(t) = { 5.29328 − 4.62584 exp(−0.489035 t),  0.01 < t ≤ 1.75 hours
       { 0.00272996 exp(4.09812 t),           1.75 < t ≤ 1.95 hours   (8)

Each creep curve has a different transition time. The transition time is the time when the balance between strain hardening and softening is broken, after which the material begins cracking and finally fractures. For T = 580 °C the transition time is 1.75 hours, for T = 570 °C it is 1.8 hours, and for T = 560 °C it is 4.6 hours. For T = 550 °C there is no transition time, meaning the material will not fracture for a long period of time. So the higher the temperature, the smaller the transition time, which means the remaining life of the material is shorter.

3.2 Creep Curve for Constant Temperature

In Figure 5, the points in the graph are obtained from strain measurements as a function of time from the accelerated creep test, while the continuous line is the curve fitted to the points using the creep model functions in equations (3) and (4).


Figure 5. Curve fitting of creep behavior (creep strain versus time) at a constant temperature of 580 °C for the five specimens

For σ = 152.51 MPa and tr = 31.8 hours, with dε/dt = 0.39% per hour; as the figure shows, this curve does not exhibit stage II creep:

ε(t) = 2.57495 − 1.96113 exp(−0.264482 t)   (9)

For σ = 228.17 MPa and tr = 4.3 hours, with dε/dt = 0.66% per hour:

ε(t) = { 40.4643 − 40.5169 exp(−0.0137855 t),  0.68 < t < 1.5 hours
       { 0.16345 exp(0.85979 t),               1.5 ≤ t < 4.3 hours    (10)

For σ = 229.25 MPa and tr = 3.2 hours, with dε/dt = 1.6% per hour:

ε(t) = { 63.2105 − 63.0704 exp(−0.0269024 t),  0.06 < t ≤ 1.75 hours
       { 0.7606 exp(0.781321 t),               1.75 < t < 3.3 hours   (11)

For σ = 234.05 MPa and tr = 0.48 hours, with dε/dt = 13.3% per hour:

ε(t) = { 4.5344 − 4.34028 exp(−3.18695 t),  0.1 < t < 0.2 hours
       { 0.559826 exp(5.5517 t),            0.2 ≤ t < 0.5 hours   (12)

For σ = 239.94 MPa and tr = 0.3 hours, dε/dt increases rapidly because the stress is very high:

ε(t) = { 3.84697 − 3.904 exp(−5.41964 t),  0.1 < t < 0.15 hours
       { 0.003756 exp(24.7537 t),          0.15 ≤ t ≤ 0.3 hours   (13)

In Figure 5, each creep curve consists of two segments separated by the transition time from creep stage I to stage II, denoted t1, t2, t3, and t4. The curve for σ = 152.51 MPa has no transition time because the stress is very low; the material will not fracture for a long period. For σ = 228.17 MPa the transition time is 1.5 hours, for σ = 229.25 MPa it is 1.75 hours, for σ = 234.05 MPa it is 0.2 hours, and for σ = 239.94 MPa it is 0.15 hours. The higher the stress, the shorter the transition time and the shorter the remaining life. From the curve fitting, each curve has its own constants A, B, C, α, and β, as shown in equations (5) to (13).


4. Conclusion

The mathematical model for stage I creep can be expressed by the differential equation dε/dt ∝ (k − ε) for 0 < t < t1, and for stage II by dε/dt ∝ ε for t1 < t < t2. The solution of the stage I creep model is an exponential function that converges at a certain time (t1), and the solution for stage II is a positive exponential function. The transition time t1 physically describes the breaking of the balance between strain hardening and softening and is called the terminate creep time. The higher the temperature, the shorter the terminate creep time; the higher the stress, the shorter the terminate time. The coefficients A, B, C, α, and β in the solution functions of the creep models are unique, depending on the given temperature and stress. The functions can be obtained by curve fitting.

Acknowledgements

We would like to thank Mr. Ilham Hatta from B2TKS BPPT Serpong, who helped us collect data and test materials during the accelerated creep tests, and Mr. Sari Purwono from the engineering division of Indonesian Power Suralaya for providing the test samples for the experiments.

References

Viswanathan, R. (1995). Damage Mechanisms and Life Assessment of High-Temperature Components (3rd ed.). Metals Park, Ohio: ASM International.

Fujio Abe, Torsten Ulf-Kern and R. Viswanathan. (2008). Creep-Resistant Steels. United States of America: CRC Press LLC.

J. Rösler, H. Harders and M. Bäker. (2007). Mechanical Behavior of Materials. United States of America: Cambridge University Press.

Marc Andre Meyers and Khrisnan Kumar Chawla. (2009). Mechanical Behavior of Engineering Materials. United States of America: Cambridge University Press.

Rusinko, Andrew and Konstantin Rusinko. (2011). Plasticity and Creep of Metals. Berlin: Springer-Verlag Berlin Heidelberg.

ASTM E-139. (2000). Standard Test Methods for Conducting Creep, Creep-Rupture, and Stress-Rupture Tests of Metallic Materials (2003 update). USA: ASTM International.


Growing Effects in Metamorphic Animation of

Plant-like Fractals based on Transitional IFS

Code Approach

Tedjo DARMANTOa*, Iping S. SUWARDIb & Rinaldi MUNIRc

aDepartment of Informatics, STMIK Amik Bandung, Indonesia

b,cSTEI, Bandung Institute of Technology, Indonesia

*[email protected]

Abstract: In this paper, growing effects in the metamorphic animation of plant-like fractals are presented. Metamorphic animation techniques are always interesting, especially animations involving objects in nature that can be represented by fractals. Through the inverse problem process, objects in nature can be encoded into IFS fractal form by means of the collage theorem and self-affine functions. Multidirectional, bidirectional, and unidirectional growing effects in the metamorphic animation of plant-like fractals can be simulated naturally, based on a family of transitional IFS codes between the source and target objects, using an IFS rendering algorithm.

Keywords: Fractal, metamorphic animation, growing effects, collage theorem, transitional IFS code, self-affine function

1. Introduction

This paper has six sections. The first and last sections are the introduction and the conclusion; between them are four further sections: related works, transitional IFS code, metamorphic animation, and simulation. This introductory section begins with basic terminology such as fractal geometry, self-affine functions, IFS codes, and IFS rendering algorithms, in conjunction with metamorphic animation in fractal form.

1.1 Fractal Geometry

The term fractal was first coined by Mandelbrot, from the Latin word fractus, meaning fractured or broken (Mandelbrot, 1982). One way to generate a fractal object is the iterated function system (IFS), first introduced by Barnsley with Hutchinson's idea as its mathematical background (Barnsley, 1993 and Hutchinson, 1979). Another way to generate a fractal is the Lindenmayer or L-system, first introduced by Lindenmayer and suitable for generating tree-like objects (Lindenmayer et al., 1992). Fractal geometry, as a superset of Euclidean geometry, allows dimension to range continuously over fractional numbers, not only over discrete integers as in Euclidean geometry.

1.2 Self-affine

The self-affine function is a special case of the affine transformation function for fractal objects that have the self-similarity property. Self-similarity means that parts of an object can represent the object as a whole at a smaller scale and in different orientations. The self-affine function (in 2D) maps a point (x, y) to its next position (x', y') as a vector, using a 2 × 2 matrix with four coefficients a, b, c, and d, and a translation vector with two coefficients e and f, giving six coefficients in total, as described in equation (1) below.


W(x, y) = (x', y'), with

[x']   [a  b] [x]   [e]
[y'] = [c  d] [y] + [f]        (1)

1.3 IFS Code

Fractal objects in iterated function system (IFS) form are represented by IFS code sets, which are actually collections of self-affine function coefficients. Typically, a 2D object in IFS fractal form can be encoded as one or more collections of six coefficients: a, b, c, d, e, and f. One IFS code set represents one part of a fractal object that is similar to the object as a whole, as mentioned in the previous section. The coefficients a, b, c, and d represent and determine the shape of the fractal object in the x and y directions, while the other two coefficients, e and f, represent and determine the position and scale of the object (Barnsley, 1993). A typical IFS code, with probability factors p, is displayed in Table 1, with the corresponding figure in Figure 1 below. The first row of the table represents the center part, the second and third rows represent the right and left side parts, and the last two rows represent the trunk (right and left, as a pair in opposite orientations, filling the void area complementarily).

Table 1. An example of an IFS code set (tree-like)

   a      b       c      d       e       f      p
-0.637   0.000   0.000   0.501   0.025   0.570  20.0
 0.215  -0.537   0.378   0.487   0.047   0.966  20.0
 0.510   0.455  -0.277   0.397  -0.049   1.001  20.0
-0.058  -0.070   0.453  -0.111   0.117   0.728  20.0
-0.035   0.070  -0.469   0.022  -0.105   0.578  20.0

Figure 1. Parts of a tree-like fractal

1.4 IFS Rendering Algorithms

In general there are two major families of IFS rendering algorithms: randomized and deterministic. The randomized family, also called the random iteration algorithms, is divided into the classic random iteration algorithm and the colour-stealing random iteration algorithm. The deterministic family, also called the deterministic iteration algorithms, is divided into the classic deterministic iteration algorithm, the deterministic iteration algorithm with superfractals, the minimal plotting algorithm, the constant mass algorithm, and the escape-time algorithm (Nikiel, 2008).
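As an illustration of the classic random iteration algorithm, the following minimal Python sketch (an assumption of how such a renderer might look, not code from the references) renders the attractor of the tree-like IFS code of Table 1:

    # Minimal sketch: classic random iteration (chaos game) for the
    # tree-like IFS code of Table 1. Each row is (a, b, c, d, e, f, p).
    import random

    IFS = [
        (-0.637,  0.000,  0.000,  0.501,  0.025, 0.570, 0.2),
        ( 0.215, -0.537,  0.378,  0.487,  0.047, 0.966, 0.2),
        ( 0.510,  0.455, -0.277,  0.397, -0.049, 1.001, 0.2),
        (-0.058, -0.070,  0.453, -0.111,  0.117, 0.728, 0.2),
        (-0.035,  0.070, -0.469,  0.022, -0.105, 0.578, 0.2),
    ]

    def render(n_points=100_000, skip=20):
        x, y = 0.0, 0.0
        pts = []
        weights = [row[6] for row in IFS]
        for i in range(n_points):
            # pick one self-affine map according to the probability factors
            a, b, c, d, e, f, _ = random.choices(IFS, weights=weights)[0]
            x, y = a * x + b * y + e, c * x + d * y + f   # equation (1)
            if i >= skip:                                 # drop transient points
                pts.append((x, y))
        return pts

    points = render()  # scatter-plot the points to see the tree-like attractor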

2. Related Works

2.1 Algorithm

In their paper, Chen et al. presented a new fractal-based algorithm for metamorphic animation. The conceptual roots of fractals can be traced to attempts to measure the size of objects for which traditional definitions based on Euclidean geometry or calculus fail. The objective of their study was therefore to design a fractal-based algorithm and produce a metamorphic animation based on a fractal idea. The main method is to weight two IFS codes, of the start and the target object, by an interpolation function. Their study shows that the fractal idea can be applied effectively in metamorphic animation. The main feature of the algorithm is that it can deal with fractal objects that conventional algorithms cannot. In application, their algorithm has many practical benefits: it can improve the efficiency of animation production and at the same time greatly reduce its cost (Chen et al., 2006).

In their research, Zhang et al. proposed the general formula and the inverse algorithm for the multi-dimensional piece-wise self-affine fractal interpolation model, presented in their paper to provide



the theoretical basis for the multi-dimensional piece-wise hidden-variable fractal model (Zhang et al., 2007).

In his research, Chang proposed a hierarchical fixed-point-searching algorithm that can determine the coarse shape, the original coordinates, and the scales of 2-D fractal sets directly from their IFS codes. The IFS codes are then modified to generate new 2-D fractal sets that can be arbitrary affine transformations of the original fractal sets. The transformations for 2-D fractal sets include translation, scaling, shearing, dilation/contraction, rotation, and reflection. Compositions of these transformations can also be accomplished through matrix multiplication, represented by a single matrix, and synthesized into a complicated image frame with an elaborate design (Chang, 2009).

2.2 IFS Model

Starting from the original definitions of iterated function systems (IFS) and iterated function systems with probabilities (IFSP), Kunze et al. introduced in their paper the notions of iterated multifunction systems (IMS) and iterated multifunction systems with probabilities (IMSP). They considered the IMS and IMSP as operators on the space H(H(X)), the space of (nonempty) compact subsets of the space H(X) of (nonempty) compact subsets of the complete metric base space, or pixel space, (X, d) on which the attractors are supported (Kunze et al., 2008).

2.3 Interpolation

In his thesis, Scealy studied primarily V-variable fractals, as recently developed by Barnsley, Hutchinson, and Stenflo. He extended fractal interpolation functions to the random (and in particular the V-variable) setting and calculated the box-counting dimension of a particular class of V-variable fractal interpolation functions. The extension of fractal interpolation functions to the V-variable setting yields a class of random fractal interpolation functions for which the box-counting dimensions may be approximated computationally, and may be computed exactly for the {V, k}-variable subclass. In addition, he presented a sample application of V-variable fractal interpolation functions to the generation of random families of curves, adapting existing curve-generation schemes based on iteration of affine transformations (Scealy, 2009).

2.4 IFS Fractal Morphing

In their paper, Zhuang et al. proposed a new IFS correspondence method based on rotation matching and local coarse convex hulls, which ensures both that each local attractor of one IFS morphs to the most similar local attractor of the other IFS, and that the fractal features are preserved during the morphing procedure. The coarse convex hull and rotation matching are very easy to create; furthermore, they can be used for controlling and manipulating 2D fractal shapes (Zhuang et al., 2011).

3. Transitional IFS Code

For a transitional IFS code there are two things to consider. The first is the number of self-affine functions in the IFS code sets of the source and the target. It is easy to interpolate the IFS code coefficients between the source and target if the number of self-affine functions in the source and target is the same. If it is not, dummy functions should be inserted into one of the IFS code sets, either the source or the target, so that the number of self-affine functions in both sets becomes the same. A dummy function has its probability factor set to zero and its six coefficients equal to those of its counterpart (in the source or target). The second consideration is that the order of the self-affine functions in the source and target should correspond as well, based on the parts of the object they represent having relatively the same position and scale. If these two conditions are satisfied, then the pair of IFS code sets, source and target, forms a family of transitional IFS codes (Darmanto et al., 2012).
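Once the two conditions above are satisfied, a family of transitional IFS codes can be generated by linear interpolation of the coefficients. The sketch below is a minimal illustration under that assumption (not the authors' implementation); the optional mask restricts interpolation to selected coefficient positions, so it covers both the partial interpolation of section 4.1 below (e.g. mask = {0, 2} for coefficients a and c only) and the whole interpolation of section 4.2 (mask = None):

    # Minimal sketch: a transitional IFS code at interpolation parameter s,
    # s in [0, 1], where s = 0 gives the source and s = 1 gives the target.
    def transitional_code(source, target, s, mask=None):
        frames = []
        for src, tgt in zip(source, target):   # matching function order assumed
            row = []
            for k, (u, v) in enumerate(zip(src, tgt)):
                keep = mask is None or k in mask
                row.append(u + s * (v - u) if keep else u)
            frames.append(tuple(row))
        return frames

    # e.g. ten in-between code sets for an animation, with SRC and TGT being
    # a source/target pair such as Tables 4a and 4b:
    # steps = [transitional_code(SRC, TGT, i / 9) for i in range(10)]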


4. Metamorphic Animation

In this paper, there are two kinds of metamorphic animation using the random iteration algorithm: with partial interpolation of the IFS code and with whole interpolation of the IFS code between the coefficients of the functions of the source and the target.

4.1 Partial Interpolation of the IFS Code

There are two types of partial interpolation of an IFS code. The first interpolates all functions of the source to all functions of the target in the IFS code set, but not all coefficients in each function are interpolated. The second interpolates all coefficients in each interpolated function, but not all functions of the IFS code set. An example pair of IFS code sets of the first type is displayed in Tables 2a and 2b below, and an example pair of the second type in Tables 3a and 3b below. In the first example, basically only coefficients a and c, which influence the x direction, are interpolated.

Table 2. A pair of IFS code sets of a grass-like fractal (a. source; b. target): 5 functions

a. Source:
  a      b      c      d      e       f     p
 0.03   0.06  -0.06  -0.42  -0.033   0.40  10.0
 0.10  -0.15   0.04   0.37  -0.015   0.20  20.0
 0.08  -0.08   0.03   0.19  -0.025   0.30  05.0
 0.15  -0.19   0.06   0.46  -0.031   0.40  25.0
 0.23  -0.27   0.09   0.65  -0.035   0.40  40.0

b. Target:
  a      b      c      d      e       f     p
 0.04   0.06  -0.06  -0.50  -0.033   0.40  10.0
 0.24  -0.15   0.09   0.37  -0.015   0.20  20.0
 0.18  -0.08   0.07   0.19  -0.025   0.30  05.0
 0.36  -0.19   0.14   0.46  -0.031   0.35  25.0
 0.54  -0.27   0.22   0.65  -0.033   0.40  40.0

Table 3. A pair of IFS code sets of a bushes-like fractal (a. source; b. target): 4 functions

a. Source:
  a      b      c       d     e      f     p
 0.01   0.06  -0.004   0.17  0.00   0.00   5.0
 0.01  -0.09   0.004   0.26  0.00   0.00   5.0
 0.50  -0.30   0.208   0.79 -0.09   0.22  45.0
 0.50  -0.30  -0.208   0.79  0.04   0.12  45.0

b. Target:
  a      b      c       d     e      f     p
 0.01   0.06  -0.004   0.17  0.00   0.00   5.0
 0.01  -0.09   0.004   0.26  0.00   0.00   5.0
 0.60  -0.30   0.238   0.79 -0.09   0.22  45.0
 0.60   0.30  -0.238   0.79  0.04   0.12  45.0

4.2 Whole Interpolation of the IFS Code

Whole interpolation of the IFS code interpolates every coefficient of all functions of the IFS code set between the source and the target. An example pair of IFS code sets is displayed in Tables 4a and 4b below (almost all coefficients are interpolated). This kind of interpolation produces the multidirectional growing effect described in the next section.

Table 4. A pair of IFS code sets of a tree-like fractal (a. source; b. target): 5 functions

a. Source:
  a      b      c      d      e       f     p
-0.64   0.00   0.00   0.50   0.025   0.52  20.0
 0.19  -0.49   0.34   0.44  -0.007   0.92  20.0
 0.46   0.42  -0.25   0.36  -0.003   0.94  20.0
-0.06  -0.07   0.45  -0.11   0.110   0.62  20.0
-0.03   0.07  -0.47   0.02  -0.098   0.48  20.0

b. Target:
  a      b      c      d      e       f     p
-0.64   0.00   0.00   0.50   0.025   0.57  20.0
 0.21  -0.54   0.38   0.49   0.047   0.97  20.0
 0.51   0.45  -0.28   0.40  -0.049   1.00  20.0
-0.06  -0.07   0.45  -0.11   0.117   0.73  20.0
-0.03   0.07  -0.47   0.02  -0.105   0.58  20.0

5. Simulation

In this paper there are three kinds of simulation, producing three different types of growing effect: the unidirectional, the bidirectional, and the multidirectional growing effects. For the simulations, three different plant-like fractal objects are used. The first simulation, which shows the unidirectional growing effect, uses a grass-like fractal object; the transitional images resulting from the metamorphic animation of that object are displayed in Figure 2 below. The second simulation, which shows the bidirectional growing effect, uses a bushes-like fractal object; the resulting transitional images are displayed in Figure 3 below. Finally, the third simulation, which shows the multidirectional growing effect, uses a tree-like fractal object; the resulting transitional images are displayed in Figure 4 below.

5.1 Unidirectional Growing Effect

The eight images below show the top of the object moving toward the lower right (south-east direction) as the object grows.

Figure 2. The transitional images in the metamorphic animation of the grass-like fractal

5.2 Bidirectional Growing Effect

The nine images below show the right side of the object growing to the right and the left side growing to the left (horizontally).

Figure 3. The transitional images in the metamorphic animation of the bushes-like fractal


5.3 Multidirectional Growing Effect

The nine images below show the top of the object growing to the north, the right side growing to the east, and the left side growing to the west (both horizontally and vertically).

Figure 4. The transitional images in the metamorphic animation of the tree-like fractal

6. Conclusion

From the metamorphic animations and simulations in the sections above, we can conclude that there are three kinds of growing effect resulting from metamorphic animations of plant-like fractals, depending on the kind of interpolation chosen: unidirectional, bidirectional, and multidirectional growing effects.

Acknowledgements

We would like to thank the Organizing Committee of IC-MCS 2013, which informed and invited researchers and lecturers in the 'Aptikom' society forum through its official invitation letter.

References

Mandelbrot, Benoit B. (1982). The Fractal Geometry of Nature. W.H. Freeman and Company.

Hutchinson, John E. (1979). Fractals and Self-similarity. Indiana University Mathematics Journal 30. http://gan.anu.edu.au/~john/Assets/Research%20Papers/fractals_self-similarity.pdf [visited 2013-07-31]

Barnsley, Michael F. (1993). Fractals Everywhere, 2nd edition. Morgan Kaufmann / Academic Press.

Lindenmayer, A., Fracchia, F.D., Hanan, J., Krithivasan, K., Prusinkiewicz, P. (1992). Lindenmayer Systems, Fractals, and Plants. Springer Verlag. http://www.booksrating.com/Lindenmayer-Systems-Fractals-and-Plants/p149012/ [visited 2013-07-31]

Chen, Chuan-bo, Zheng, Yun-ping, Sarem, M. (2006). A Fractal-based Algorithm for the Metamorphic Animation. Information and Communication Technologies, ICTTA '06, 2nd, Volume 2, DOI: 10.1109/ICTTA.2006.1684885, pages 2957-2962.

Nikiel, S. (2007). Iterated Function Systems for Real-Time Image Synthesis. Springer-Verlag London Limited.

Zhang, Tong, Zhuang, Zhuo. (2007). Multi-Dimensional Piece-Wise Self-Affine Fractal Interpolation Model. Tsinghua Science and Technology, ISSN 1007-0214, Volume 12, Number 3, pp. 244-251, June 2007. DOI: 10.1016/S1007-0214(07)70036-6

Kunze, H.E., La Torre, D., Vrscay, E.R. (2008). From Iterated Function Systems to Iterated Multifunction Systems. Communications on Applied Nonlinear Analysis 15 (4), 1-14.

Chang, Hsuan T. (2009). Arbitrary Affine Transformation and Their Composition Effects for Two-Dimensional Fractal Sets. Photonics and Information Laboratory, Department of Electrical Engineering, National Yunlin University of Science and Technology, Taiwan. http://teacher.yuntech.edu.tw/htchang/geof.pdf [visited 2013-07-29]

Scealy, Robert. (2009). V-Variable Fractals and Interpolation. Doctoral thesis, the Australian National University. http://www.superfractals.com/pdfs/scealyopt.pdf [visited 2013-07-29]

Zhuang, Yi-xin, Xiong, Yue-shan, Liu, Fa-yao. (2011). IFS Fractal Morphing based on Coarse Convex-hull. IEEE.

Darmanto, T., Suwardi, I.S., Munir, R. (2012). Cyclical Metamorphic Animation of Fractal Images based on a Family of Multi-transitional IFS Code Approach. Control, Systems & Industrial Informatics (ICCSII), IEEE Conference on, DOI: 10.1109/CCSII.2012.6470506, pages 231-234.


Mining Co-occurrence Crime Type Patterns for Spatial Crime Data

Arief F. HUDAa, Ionia VERITAWATIb
aMathematics Department, UIN Sunan Gunung Djati, Indonesia
bInformatics Engineering Department, Pancasila University, Indonesia

Abstract: Crime is an unlawful act that happens at a particular location. The linkage between crime types and the socio-economic and geographic conditions of a region has been the subject of much research; such conditions may give rise to certain types of crime, and several types of crime may occur in locations where these conditions are the same. In this study, therefore, a method is proposed to observe the relationships between the types of crime at a particular location and the co-location patterns of crime variation among regions. The proposed method uses aggregate data for the locations under the jurisdiction of police offices; the aggregate data are converted into non-numeric data, and all crime types at one location are then arranged in a sequence. Clustering is performed on the sequences formed. The experiments were performed using crime data from the Jakarta Metropolitan Police area. Co-locations were found of one type of crime with other types, as well as similarities of various crime types across certain regions.

Keywords: Spatial analysis, co-location, crime dataset

1. Introduction

Crime is understood as an interaction between the victim, the offender, and the neighborhood of the crime [Wang et al., 2013]. In practice, this concept can be measured using a variety of socio-economic circumstances and the chances of a crime occurring, such as population density, type of location (market, mall, or village), and the level of regional security [Anselin 2000, Ratcliffe 2010].

Criminal analysis is the effort to parse (break up) violations of the law into parts, to obtain their basic properties, and to report the findings. The goal is to obtain useful information from large amounts of data and deliver it to the parties concerned with crime prevention and suppression [Osborne, 2003]. Bruce defines crime analysis as linking a crime to a location, a time, certain characteristics, and commonalities with other crimes [Bruce, 2004].

It can thus be understood that a criminal offense is related to location, time, geographic conditions, economic conditions, and other crimes. Recently, several statistical methods for spatial data [Anselin et al. 2000, Ahmadi 2003, Ratcliffe 2010, Patrick et al. 2011] and some data mining techniques [Philip 2012, Wang et al. 2013] have been proposed for crime data analysis, providing ways to overcome some of the weaknesses of traditional methods. However, these methods still have some weaknesses, namely the use of aggregates over all crime types and the involvement of data other than crime data. Aggregating over crime types reduces the accuracy of the analysis results in their spatial elements, because several low-level crime types are summed with high-level ones (or vice versa), which lowers the spatial correlation compared with analyzing each type separately [Andresen et al. 2012]. On the other hand, the availability of crime data in a developing country (region) such as Indonesia is not as complete as in developed countries such as America. The data recorded at the Jakarta Metropolitan Police are aggregates of each crime type at the level of the police resort area; more detailed data, such as crime locations, are not recorded in the reports. As a consequence, the data cannot be analyzed as spatial point patterns, but only as spatial data of region type (area/polygon).

To overcome these two shortcomings, in this study the researchers developed a spatial crime data analysis method based on crime types per police jurisdiction at the resort level. The advantage of this method is that the data used are crime-type data by region, so it does not require other data related to an area, e.g. economic conditions or whether the area type is residential, industrial, or commercial (note: such data would have to be obtained from different agencies). Another advantage is that the results take the form of the crime level of an area (to determine hot-spot areas) and the pattern of crimes in an area, forming a common pattern for areas that share it.

The method is based on methods for clustering data sequences [Dorohanceanu 2000, Okkunen 1999]. The data on crime types in an area are converted into non-numeric form and then arranged in a sequence. The patterns obtained are the sequence patterns whose similarity exceeds a specified threshold.

2. Related Work

Reported studies of criminal data began with the early ecological studies of Guerry and Quetelet, in which they explained differences in crime levels by the varying social conditions of localities. This was the beginning of studies using spatially defined population aggregates as the unit of analysis [Anselin et al. 2000, Ratcliffe 2010]. The development of computer technology has driven the development of spatial data analysis methods, including for criminal data, supported by Geographic Information Systems.

There are two approaches to spatial crime data analysis: the statistical approach and the data mining approach. Statistical methods are usually based on spatial autocorrelation, using Moran's I and Geary's c as global measures, while the local autocorrelation measures LISA, Gi, and Gi* are used to assess particular locations [Anselin et al. 2000, Chakravorty 1995, Ahmadi 2003, Zhang 2007, Ratcliffe 2010].

3. Methodology

3.1 Data Model

The data model is an area data model whose content is the aggregate of each kind of crime in an area. The set of crime-type data at a particular location forms a single entity, which in this study is called a particle. A particle is a sequence taken from the data of an observation area, consisting of several types of crime.

3.2 Particle Pattern Formation

A particle pattern is an arrangement of elements, namely the level of each crime type at some location. Particle patterns are grouped according to the similarity of their constituent elements. The particle similarity measure is as follows:

- P is a particle with members {p1, p2, ..., p8}.
- For n particles, there are P1, P2, ..., Pn.
- The similarity of Pi and Pj, for i, j = 1, 2, ..., n, is

  sim(Pi, Pj) = Σ (k=1..8) S(pik, pjk) ≥ θ   (1)

  where S(pik, pjk) = 1 if pik = pjk and 0 if pik ≠ pjk, and θ is a threshold.
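A minimal Python sketch of this similarity test (an illustration, not the authors' code), with particles represented as strings of crime-level symbols:

    # Minimal sketch of equation (1): count matching positions of two particles.
    def similarity(p_i, p_j):
        """sim(Pi, Pj): number of positions k with p_ik == p_jk."""
        return sum(1 for a, b in zip(p_i, p_j) if a == b)

    def is_similar(p_i, p_j, theta):
        return similarity(p_i, p_j) >= theta

    # e.g. particles over 8 crime types, with levels encoded as letters:
    print(similarity("ABACCBAA", "ABACDBCA"))  # 6 matching positions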

3.3 Clustering Algorithm

In general, the algorithm developed in this study is as follows:

1. Prepare the spatial data, converting the level of each crime type to non-numeric form.

2. Determine the candidate subsequences.

3. Determine the core subsequences from the candidate subsequences. These core subsequences are the patterns of the spatial data.

The algorithm was implemented using simulated data, arranged so that the data are presented in the model with 9 elements whose values are in non-numeric form.

The implementation steps are as follows:

1. Prepare the spatial data in non-numeric form.

2. Define the particles; each location forms one particle. Some of the particles obtained from the example data above are as follows:


Figure 1. Particles formed as sequences of characters

3. Determine the candidate subsequences.

Candidate subsequences are the similar subsequences among all the particles formed in the previous step. Similarities are found using equation (1) with a threshold value of 2.

Figure 2. Candidate subsequences

Each result consists of {subsequence, number of identical characters, starting character position, number of particles, [particle indices]}.

4. Determine the core subsequences from the candidate subsequences.

The core subsequences are determined by combining the candidate subsequences of all particles. Candidates are merged into cores using one of the following criteria:

$$\frac{|P_1 \cap P_2|}{|P_1 \cup P_2|} \ge \Phi \qquad (2)$$

where $P_1, P_2$ are the particle sets that make up the 1st and 2nd candidates; the threshold we use is 10%.

$$|P_1 \cap P_2| \ge \Phi \qquad (3)$$

Here the threshold can be set according to the conditions of the data and the desired clusters. In the example described, equation (2) is used.
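A minimal sketch of the merging test of equation (2), assuming each candidate carries the set of particle indices that produced it (the index lists are hypothetical; this is not the authors' implementation):

# Sketch of the core-forming merge test (equation 2): merge two
# candidates when the particle sets behind them overlap enough,
# i.e. |P1 intersect P2| / |P1 union P2| >= phi (phi = 10% here).
def mergeable(particles_1, particles_2, phi=0.10):
    s1, s2 = set(particles_1), set(particles_2)
    return len(s1 & s2) / len(s1 | s2) >= phi

# Hypothetical particle index lists of two candidates.
print(mergeable([8, 11, 34], [11, 21, 34]))  # 2/4 = 0.5 >= 0.1 -> True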


4. Experiment

4.1 Data

The criminal data in this study are counts of crimes that occur in an area. There are two approaches to associating crime data with an area: the area where the crime occurred, and the area of the police station where the crime was reported. The Jakarta crime data use the second model, in which a crime is attributed to the police station where the incident was reported.

Crimes may be reported to a Resort Police Office (POLRES), a Sector Police Office (POLSEK), or the Regional Police Office (POLDA). Under POLDA METRO JAYA there are 11 POLRES, 110 POLSEK, and 2 Port Security Implementation Units (KPPP): the Soekarno-Hatta Airport Police Station and the Port of Tanjung Priok. The jurisdiction of POLDA METRO JAYA covers the administrative areas of DKI Jakarta, Tangerang Regency, Tangerang City, South Tangerang, Depok, Bekasi Regency, and Bekasi City, i.e. Jakarta and its surrounding areas. An example of the crime data obtained from the police is as follows:

Table 1. Crime levels by type reported in POLRES CENTRAL JAKARTA (I. POLRES JAKPUS), year 2011 (Distik, POLDA METRO JAYA, 2012). Columns are crime types; JML is the row total.

No.  Region       ANIRAT  CURAT  TODONG  RAMPAS  RAMPOK  BAJAK  RODA 2  RODA 3  RODA 4  JML
1    GAMBIR           38    117       3      11       0      0     209       0      17  395
2    SW. BESAR        54     78       8       8       1      0     158       0      10  317
3    KEMAYORAN        31    121       6      16       1      0     223       0      32  430
4    MENTENG          23     74       6       0       2      0      74       0       8  187
5    TN. ABANG        28     78       0       8       2      0      61       0       7  184
6    SENEN            13     54       2       7       0      0     102       1       9  188
7    C. PUTIH          3     97       2       1       1      0     120       1      13  238
8    JH. BARU          7     50       3       1       0      0     157       0       7  225

From simple statistical analysis, the crime rate of each area under a given police station can be seen. Elementary statistical methods can determine the level of crime, the average incidence, and so on, but they cannot reveal the pattern of relationships between crime types, nor the pattern linking them to the region where the crime took place or was reported. The method developed in this research addresses these problems: it can reveal a pattern of crimes for each area and point out whether areas with the same or similar patterns exist. Results using simple statistics can be seen in Figures 3 and 4. Figure 3 illustrates the incidence rate of crime categorized into very high, high, medium, low, and none, whereas Figure 4 describes the incidence rate for all types of theft crimes with the same categories.

The step-by-step analysis for clustering criminal data based on location (spatial clustering analysis) is (a short conversion sketch follows the list):

1. Convert the crime counts into intervals, each interval symbolized by a letter a, b, ...
2. Apply the clustering steps described previously (methodology section).
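The conversion in step 1 can be sketched in Python as follows. The interval edges here are hypothetical; the paper derives them from the distribution of each crime type, and zero counts keep the symbol '0', as in Table 2:

# Sketch of step 1: map a crime count to '0' or a letter a, b, c, ...
def to_symbol(count, edges):
    if count == 0:
        return "0"
    for i, edge in enumerate(edges):
        if count <= edge:
            return chr(ord("a") + i)
    return chr(ord("a") + len(edges))

edges = [10, 25, 50, 75, 100, 150, 200]    # hypothetical interval edges
row = [38, 117, 3, 11, 0, 0, 209, 0, 17]   # GAMBIR row from Table 1
print("".join(to_symbol(c, edges) for c in row))  # -> 'cfab00h0b' here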


Table 2. Crime levels as non-numeric symbols, derived from Table 1 (I. POLRES JAKPUS)

No.  Region       Symbols (crime types 1-9)
0    GAMBIR       0 d c c h 0 g d a
1    SW. BESAR    0 f b d f a f b b
2    KEMAYORAN    0 d c e i a h g a
3    MENTENG      a c b b c b c b a
4    TN. ABANG    a c b b c b c b a
5    SENEN        0 b b b d 0 d b c
6    C. PUTIH     a a b a e a d c b
7    JH. BARU     0 a a a f 0 f b a

An illustration of the patterns generated by the clustering algorithm is:

Core 0- {-aa-a0a--,1,5,17,[33, 44, 45, 48, 49, 50, 52, 55, 56, 62, 72, 73, 90, 91, 97, 107, 108]}

Core 1- {--aaa-aa-,2,5,16,[30, 33, 38, 46, 47, 51, 75, 76, 79, 81, 82, 88, 92, 103, 105, 107]}

Core 65- {0c--b0ba-,0,6,3,[8, 11, 34]}

Core 66- {-ca-b0ba-,1,6,3,[11, 21, 34]}

Core 94- {-ba---ab-,1,4,3,[23, 102, 106]}

Core 95- {--a0---0a,2,4,3,[49, 62, 109]}

Explanation:

Core ii {xxxxxxxxx, j, n, m, [k, k, ...]}

Core ii : the ii-th pattern; e.g. Core 1 means the 1st pattern.
xxxxxxxxx : the resulting pattern, e.g. -aa-a0a--. This pattern indicates similar incidence rates for crime types 2, 3, 5, 6, and 7, while crime types 1, 4, 8, and 9 have different incidence levels.
j : position of the first fixed character of the pattern, counting from 0
n : number of characters in the formed pattern
m : number of areas that have the pattern
[k, k, ...] : indices of the areas that have the pattern, numbered from 0. For example, 0 is the Gambir area, 1 is Sawah Besar, and so on.
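A core pattern string can be checked against a particle mechanically. This small sketch (illustrative, not the authors' code) treats '-' as a wildcard for positions where the areas differ:

# True if the particle agrees with the core at every fixed position.
def matches(core, particle):
    return all(c == "-" or c == p for c, p in zip(core, particle))

core = "-ca-b0ba-"                 # Core 66 from the results below
print(matches(core, "0cabb0bac"))  # True: all fixed positions agree
print(matches(core, "0dabb0bac"))  # False: position 2 differs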


Figure 3. Crime level in resort region POLDA METRO JAYA

For the POLDA METRO JAYA spatial crime data of the year 2011, 96 crime patterns were produced.

4.2 Analysis of Results

Using this method, we obtain the patterns of crimes that occur in an area and can also identify regions that have a similar pattern. Let us take patterns number 0 and 66 as examples.

Pattern number 0:

Core 0- {-aa-a0a--,1,5,17,[33, 44, 45, 48, 49, 50, 52, 55, 56, 62, 72, 73, 90, 91, 97, 107, 108]}

Figure 4. Theft level in resort regions of POLDA METRO JAYA

This pattern is shared by the largest number of areas: 17 areas have it. We use 4 of the constituent regions to view the pattern graphically.

Figure 5. Pattern number 0, clustering result for Tangerang, Batu Ceper, Jati Uwung, and Tigaraksa

In the graph, the same crime rate appears for crime types 2, 3, 5, 6, and 7, while the other types differ. There are 17 regions with this same pattern, including Pamulang, Tangerang Kota, Batu Ceper, etc. The graph for the data before conversion is:


Figure 6. Pattern of Crime Type Level of Tangerang, Batu Ceper and Tigaraksa

The chart of crime-type patterns using the data before conversion (Figure 6) shows a different pattern from the chart after conversion. This is caused by the conversion method used, which divides the data distribution of each crime type into intervals. Significant differences in the number of events for each crime type lead to different converted values. This is why Figure 6 does not show the same pattern for the three areas above.

Pattern number 66:

Core 66- {-ca-b0ba-,1,6,3,[11, 21, 34]}

In pattern number 66, 6 crime types are the same, and 3 areas share this pattern.

Figure 7. Pattern number 66: Tanjung Priok, Pasar Rebo, and Kembangan have a similar pattern

Pattern number 66 makes it easy to see the differences and similarities in incidence rate for each crime type. Crime types 1, 4, and 9 have different incidence rates across regions, whereas the other crime types are the same for all three constituent regions, namely Tanjung Priok, Kembangan, and Pasar Rebo.


Figure 8. Crime level in the areas of Tanjung Priok, Kembangan, and Pasar Rebo

Looking at the graph of the source data (before conversion) for the areas of Tanjung Priok, Kembangan, and Pasar Rebo, the difference in incidence rates for crime types 1 and 4 is not clearly visible. Similarly, the similarity of the areas for crime types 2 and 3 is also not clear. However, by using the data conversion method, both the patterns and the pattern similarities can be found.

5. Conclusion and Future Work

The snapshot model for storing spatial data can preserve the spatial properties inherent in the data. For the clustering process, data values are presented in non-numeric form so that the process can be carried out with the subsequence clustering method.

Particle formation in the clustering process retains the spatial elements of the data. The proposed algorithm can find patterns that form in 2-D. However, the computational cost of this algorithm is still quite large because the formation of core candidate subsequences still uses exhaustive search.

We therefore continue to develop a more efficient algorithm using the existing framework of this research.

6. Acknowledgement

This work is funded by "Hibah Strategis Nasional", Ministry of Education, Indonesia, 2013.

References

Wang, D., Ding, W., Lo, H., Morabito, M., Chen, P., Salazar, J., & Stepinski, T. (2013). Understanding

the spatial distribution of crime based on its related variables using geospatial discriminative

patterns. Computers, Environment and Urban Systems, 39, 93–106.

doi:10.1016/j.compenvurbsys.2013.01.008

Zhang, B. H., & Peterson, M. P. (2007). A Spatial Analysis Of Neighborhood Crime In Omaha ,

Nebraska Using Alternative Measure Of Crime Rates. Internet Journal of Criminology.

Osborne, D., & Wernicke, S. (2003). What Is Crime Analysis? In Introduction to Crime Analysis, Basic

Resources for Criminal Justice Practice (pp. 1–11).

Bruce, C. W. (2004). Fundamentals of Crime Analysis. In Exploring Crime Analysis (pp. 7–11).

Anselin, L., Cohen, J., Cook, D., Gorr, W., & Tita, G. (2000). Spatial Analysis of Crime. Measurement

and Analysis Of Crime and Justice, 4, 213–262.

Ahmadi, M. (2003). Crime Mapping and Spatial Analysis. International Institute for Geo-Information Science and Earth Observation, Enschede.

Ratcliffe, J. (2010). Crime Mapping: Spatial and Temporal Challenges. In A. R. Piquero & D. Weisburd

(Eds.), Handbook of Quantitative Criminology (pp. 5–25). New York, NY: Springer New York.

doi:10.1007/978-0-387-77650-7


Patrick, D. L., Murray, T. P., & Governor, L. (2011). Analysis of Massachusetts Hate Crimes Data (pp.

1–34).

Andresen, M. A., & Linning, S. J. (2012). The (in)appropriateness of aggregating across crime types.

Applied Geography, 35(1-2), 275–282. doi:10.1016/j.apgeog.2012.07.007

Dorohanceanu, B., & Nevill-Manning, C. (2000). A Practical Suffix-Tree Implementation for String Searching. Dr. Dobb's Journal.

Ukkonen, E. (1995). On-line construction of suffix trees. Algorithmica, 14(3), 249–260.

doi:10.1007/BF01206331


An Improved Accuracy CBIR using Clustering

and Color Histogram in Image Database

Juli REJITO
Informatics Engineering Department, Faculty of Mathematics and Natural Sciences, Unpad

Email: [email protected]

Abstract: The increasing number of digital images stored on computer media, and the capability of databases to serve users' information requirements rapidly, are important demands today. Efficient search of large image databases in a CBIR (content-based image retrieval) system is in particularly high demand.

This paper aims to optimize access time and improve the accuracy of CBIR on the WANG database of 1,000 image records at resolutions of 256 x 384 pixels and 384 x 256 pixels. The proposed solution has a system architecture that integrates query optimization and partition clustering, with the expectation of reducing image search time without reducing the accuracy of the search output.

Cluster formation in this study uses the minimum and maximum PSNR (peak signal-to-noise ratio) values, a form of image similarity computed by comparing the color feature extraction of each image record against basic images, to form 2, 4, 8, 16, and 32 clusters used as a cluster index for filtering and for locating the cluster of an image record. Meanwhile, the CBIR query application measures the similarity of the searched image to the image records using color feature extraction with color histogram measurements.

The results of implementing this model show that the F-Score accuracy from the non-clustered query to the 5-cluster query, with K-Means clustering and image color histogram extraction, increases from 0.22 to 0.23.

Keywords : CBIR, Clustering, PSNR, WANG Database, Color Histogram, F-Score

1. INTRODUCTION

Query optimization in computational data access is an alternative to consider when attempting to minimize the access time to a database record via a query. Query operations are widely used in relational databases, such as text, web, picture, and multimedia databases. CBIR is a computer vision application related to digital image retrieval. 'Content' in this context means colors, shapes, textures, or other information that can be derived from the image; several feature types are used in image retrieval, such as color, texture, and shape.

A new rank-join algorithm, independent of the join strategy, was proposed together with a proof of its accuracy; the algorithm used an ordering of the input relations to produce scores on combined inputs (Ihab, 2004). The results obtained for the rank-join algorithm proved that its optimality is related to the number of tuples accessed in the query joins. The statistics of a query clustering process and the probability of results visible through a traditional query approach were shown to differ significantly (Roussinov, 2000). Traditional query applications require users to be skilled in the query language when selecting keywords while searching for information on the web.

From that research it can be concluded that search effectiveness improves if the information searching process uses adaptive search together with query clustering and summarizing. In the early 1990s, CBIR, which performs retrieval based on visual content in the form of image color composition, began to be developed (Ghosh, 2002). Currently, retrieval systems also involve user feedback on whether or not a retrieved image is relevant (relevance feedback), used as a reference for modifying the retrieval process to obtain more accurate results. The query optimization proposed in this paper relates to CBIR in image databases and aims at obtaining records with a high image


content similarity level (above 90%) and a minimum access time during the image database record search. The proposal starts by carrying out a color extraction process on each image database record; a clustering of the image database records based on base images is then conducted, and the clustering results for each record are used in a filtering process based on the searched image, so that the record content search is conducted only on those records that fall into the same cluster.

2. MEASUREMENT OF IMAGE QUALITY

Measurements of image quality fall into 2 (two) groups based on the HVS (human visual system). One group is conventional measurement, computing SNR (Signal-to-Noise Ratio) and PSNR (Peak Signal-to-Noise Ratio) values. The other group is based on criteria for the accuracy of changes in the information signal and image distortions, computing SSIM (Structural Similarity Index) and VroiWQI (Visual Region of Interest Weighted Quality Index) values.

Image quality measurement can use the conventional system, computing SNR (Signal-to-Noise Ratio), MSE (Mean Square Error), and PSNR (Peak Signal-to-Noise Ratio) values. The formulas for computing these values are as follows:
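(The equations themselves did not survive in the source; the standard definitions, consistent with the variable list below, are:)

$$MSE = \frac{1}{MN}\sum_{x=0}^{M-1}\sum_{y=0}^{N-1}\bigl[f(x,y)-g(x,y)\bigr]^{2}, \qquad SNR = 10\log_{10}\frac{\sum_{x,y} f(x,y)^{2}}{\sum_{x,y}\bigl[f(x,y)-g(x,y)\bigr]^{2}}\ \mathrm{dB}$$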

where M, N : image width and height
f(x,y) : intensity of the initial/original image at position (x,y)
g(x,y) : intensity of the result image at position (x,y)

Meanwhile, PSNR compares the maximum possible pixel value of the reconstructed image with the mean square error (MSE). For an 8-bit image the maximum pixel value is 255. The higher the PSNR, the better the picture quality. PSNR is expressed in decibel (dB) units; the minimum PSNR for an interpolation to be categorized as good is 30 dB. Mathematically, it is formulated as follows:

$$PSNR = 10 \log_{10}\!\left(\frac{255^{2}}{MSE}\right)\ \mathrm{dB}$$

3. CLUSTERING USING K-MEANS

K-Means is an algorithm that groups a number of objects into k groups based on their attributes, where k is a positive integer. The grouping is conducted by minimizing the distance between each data point and the corresponding cluster center. Let P(n, K) be a partition that assigns each of the n objects to one of the clusters 1, 2, ..., K. The mean of the jth variable in the lth cluster is denoted $\bar{x}(l, j)$, and the number of objects in the lth cluster is denoted n(l). The distance between objects can be computed with the Euclidean formula:

$$d_{ij} = \sqrt{\sum_{k=1}^{p} (x_{ik} - x_{jk})^{2}}$$

K-means is one of the simplest unsupervised learning algorithms, in which each point is assigned to exactly one cluster. The procedure follows a simple, iterative way to classify a given data set into a certain number of clusters (assume k clusters) fixed a priori. It consists of the following steps:

Step 1: Set the number of clusters k.
Step 2: Determine the centroid coordinates.
Step 3: Determine the distance of each object to the centroids.
Step 4: Group the objects based on minimum distance.
Step 5: Continue from Step 2 until convergence, i.e. no object moves from one group to another.

The K centroids used as initial estimates of the cluster centers can be determined by several methods (a compact k-means sketch follows the list), namely:

1. selecting the first K objects in the sample as the initial mean vectors of the K clusters,
2. selecting K objects that are very far away from one another,
3. beginning with an experimental value of K greater than necessary, and spacing the cluster centers at intervals of one standard deviation in each variable, and
4. selecting K and the initial cluster configuration based on prior knowledge.
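The five steps above can be condensed into a short Python sketch (illustrative only; the paper itself uses the special initialization described in Section 5 rather than random centers):

# Sketch of k-means (steps 1-5) for 2-D points such as
# (PSNRMin, PSNRMax) pairs.
import random

def dist2(p, q):
    # squared Euclidean distance between two tuples
    return sum((a - b) ** 2 for a, b in zip(p, q))

def mean(group):
    n = len(group)
    return tuple(sum(v) / n for v in zip(*group))

def kmeans(points, k, iters=100):
    centers = random.sample(points, k)   # steps 1-2: pick k centroids
    for _ in range(iters):
        # steps 3-4: assign each point to its nearest centroid
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: dist2(p, centers[c]))
            groups[j].append(p)
        # step 5: recompute centroids; stop when nothing moves
        new_centers = [mean(g) if g else centers[i]
                       for i, g in enumerate(groups)]
        if new_centers == centers:
            break
        centers = new_centers
    return centers, groups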

4. COLOR HISTOGRAM FEATURE EXTRACTION

Color histogram feature extraction is obtained from the image pixels: for each color component R, G, and B, the frequency of each color index from 0 to 255 is calculated and expressed as a histogram value for that component, written as a vector as in the following equation:

$$H = \{H[0], H[1], H[2], H[3], \dots, H[i], \dots, H[n]\}$$

where i is a color in the color histogram, H[i] is the number of pixels of color i in the image, and n is the number of colors used in the histogram.

Given the histogram values for each color component (x, y, z) of the searched image (Hq) and of a record image (Hi), their resemblance is calculated as a distance using the Histogram Intersection Technique (HIT), with the following formula:

$$S(H_q, H_i) = \frac{\sum_{x \in X,\, y \in Y,\, z \in Z} \min\bigl(H_q(x,y,z), H_i(x,y,z)\bigr)}{\sum_{x \in X,\, y \in Y,\, z \in Z} H_q(x,y,z)}$$

With this formula the distance values tend to differ only slightly, so the formula was developed into the following equation:

$$S(H_q, H_i) = \frac{\sum_{x \in X,\, y \in Y,\, z \in Z} \min\bigl(H_q(x,y,z), H_i(x,y,z)\bigr)}{\min\Bigl(\sum_{x \in X,\, y \in Y,\, z \in Z} H_q(x,y,z),\ \sum_{x \in X,\, y \in Y,\, z \in Z} H_i(x,y,z)\Bigr)}$$
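The extraction and the modified intersection can be sketched briefly in Python; PIL is assumed here for image loading, and the file names are hypothetical:

# Sketch of RGB histogram extraction and the min-normalised
# histogram intersection of the second formula above.
from PIL import Image

def rgb_histogram(path):
    # 768 bins: R[0..255], then G[0..255], then B[0..255]
    return Image.open(path).convert("RGB").histogram()

def intersection(hq, hi):
    common = sum(min(a, b) for a, b in zip(hq, hi))
    return common / min(sum(hq), sum(hi))

# s = intersection(rgb_histogram("query.jpg"), rgb_histogram("record.jpg"))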

5. CLUSTERING DESIGN AND IMAGE CONTENT RETRIEVAL

The clustering design refers to a clustering algorithm that uses both the minimum and maximum PSNR values as the basis for forming clusters, as shown in Fig. 1. The image clustering stages start with a specially formed initialization of every cluster: rather than following prior researchers, an initial computation is performed before the initialization. Before adopting this special initialization, the authors tried the available initialization models, but they could not be applied because, during the image retrieval query process in the next stage, a considerably large number of records that should have had a similarity above 90.00% were not found.


Fig. 1 Clustering and Image Content Retrieval Procedure

The intended cluster initialization first computes, for each record in the image database, the square root of the sum of squares of the minimum and maximum PSNR, used as the record ordering key. The spacing between clusters is then determined by dividing the total number of records by the number of clusters, and finally each cluster center is taken from the ordered records at successive multiples of this spacing. This original initialization algorithm was written as follows:

// Compute the ordering key of each record from PSNRMin and PSNRMax
while not tablex.Eof do
begin
  XPsnrMin := tablex.FieldByName('PSNRMin').AsExtended;
  XPsnrMax := tablex.FieldByName('PSNRMax').AsExtended;
  tablex.Edit;
  tablex.FieldByName('PSNRAvg').AsExtended :=
    Sqrt(XPsnrMin * XPsnrMin + XPsnrMax * XPsnrMax);
  tablex.Post; // commit the edited record before moving on
  tablex.Next;
end;

// Compute the spacing (reach) for each cluster
xrange := reccount div jmlklas;
query1.SQL.Add('select gbrname, PSNRMin, PSNRMax, PSNRAvg from prcitra order by PSNRAvg');
query1.Open; // open the ordered query before reading (omitted in the original listing)

// Original initialization of each cluster from the ordered records
for i := 1 to jmlklas do
begin
  centerx[i, 1] := i;
  centerx[i, 2] := query1.FieldByName('PSNRMin').AsExtended;
  centerx[i, 3] := query1.FieldByName('PSNRMax').AsExtended;
  query1.MoveBy(xrange);
end;

After this original initialization of each newly formed cluster, each record of the image database table is adjusted by computing the distance of the image record to the nearest cluster using the Euclidean distance d_ij. The d_ij distance is computed as the sum of the squared difference between the minimum PSNR and the jth cluster's minimum center plus the squared difference between the maximum PSNR and the jth cluster's maximum center; the square root of this sum gives the searched distance. The center of the nearest cluster is then recomputed. This process is repeated from the beginning of the image database records until there is no further change in the formed cluster centers.

After the clustering process, the next step is to run the cluster-based image retrieval query (a sketch follows). The cluster-based query begins by computing the minimum and maximum PSNR values of the base image; the image's group is then determined by finding the nearest cluster among the records contained in the prcitra table. After finding the cluster of the searched image, the records are filtered to that cluster. The remaining steps follow the ordinary query retrieval process.
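The cluster-based retrieval just described can be summarized in a short sketch (names and record layout are hypothetical, not the paper's code):

# Sketch of the clustered retrieval query: assign the query image to
# the nearest (PSNRMin, PSNRMax) cluster centre, filter the records
# to that cluster, then rank the survivors by histogram similarity.
def nearest_cluster(query_psnr, centers):
    def d2(c):
        return sum((a - b) ** 2 for a, b in zip(query_psnr, c))
    return min(range(len(centers)), key=lambda i: d2(centers[i]))

def clustered_query(query_psnr, records, centers, similarity, top=10):
    c = nearest_cluster(query_psnr, centers)
    candidates = [r for r in records if r["cluster"] == c]  # filtering
    return sorted(candidates, key=similarity, reverse=True)[:top]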

6. RESULTS

Image query testing with index clusters began with a clustering process over a total of 1,000 images stored in an image database (WANG database) table containing a blob field for the JPEG files. Clustering was applied for several cluster groups, namely 2, 4, and 8 clusters; the number of iterations and the minimum and maximum PSNR centroid values for each cluster are shown in Table 1 and Figure 2.

TABLE 1. CLUSTERING OF 1,000 WANG DATABASE RECORDS INTO 2, 4, AND 8 CLUSTERS

Cluster   Cluster   Centroid PSNR (dB)          Record
Group     Number    Minimum       Maximum       Count
2         1         7.238144      12.298757     838
2         2         3.217335      7.589639      162
4         1         6.318496      11.190640     529
4         2         6.894847      14.625369     155
4         3         2.638533      6.818268      119
4         4         9.449756      12.881852     197
8         1         10.119414     13.778683     110
8         2         8.000599      11.858669     209
8         3         5.688476      11.963560     188
8         4         6.923792      10.427970     147
8         5         7.305264      17.609415     24
8         6         4.749914      9.994102      95
8         7         6.782889      13.985441     121
8         8         2.518710      6.560355      106

Fig. 2 Plot of K-Means clustering into 2, 4, and 8 clusters for the WANG database

Table 2 shows in detail the results of testing the plain CBIR query and the image query with index cluster, using the 1,000 records stored in the WANG database. The testing was conducted with 30 different searching base images. The CBIR query testing gave an average F-Score of 0.22, whereas the image query with index cluster gave average F-Scores of 0.21, 0.22, 0.22, and 0.23 for 2, 3, 4, and 5 clusters, respectively.

TABLE 2. PRECISION (P), RECALL (R), AND F-SCORE BY CATEGORY FOR THE CBIR QUERY AND THE IMAGE QUERY WITH INDEX CLUSTER, USING THE COLOR HISTOGRAM METHOD

Category    Without Cl.  2 Cluster   3 Cluster   4 Cluster   5 Cluster   6 Cluster   7 Cluster   8 Cluster
            P     R      P     R     P     R     P     R     P     R     P     R     P     R     P     R
Tribal      0.92  0.18   0.92  0.18  0.90  0.18  0.90  0.18  0.92  0.18  0.92  0.18  0.92  0.18  0.68  0.14
Beach       0.33  0.07   0.33  0.07  0.35  0.07  0.38  0.08  0.42  0.08  0.28  0.06  0.33  0.07  0.27  0.05
Monument    0.35  0.07   0.33  0.07  0.37  0.07  0.35  0.07  0.38  0.08  0.28  0.06  0.35  0.07  0.32  0.06
Bus         0.47  0.09   0.45  0.09  0.48  0.10  0.48  0.10  0.52  0.10  0.50  0.10  0.55  0.11  0.55  0.11
Dinosaur    1.00  0.20   1.00  0.20  1.00  0.20  1.00  0.20  1.00  0.20  1.00  0.20  1.00  0.20  1.00  0.20
Elephant    0.67  0.13   0.67  0.13  0.67  0.13  0.67  0.13  0.68  0.14  0.58  0.12  0.60  0.12  0.60  0.12
Rose        0.87  0.17   0.87  0.17  0.85  0.17  0.92  0.18  0.92  0.18  1.00  0.20  0.95  0.19  0.93  0.19
Horse       0.93  0.19   0.93  0.19  0.93  0.19  0.93  0.19  0.97  0.19  0.97  0.19  0.93  0.19  0.95  0.19
Mountain    0.30  0.06   0.28  0.06  0.27  0.05  0.23  0.05  0.25  0.05  0.17  0.03  0.13  0.03  0.23  0.05
Food Dish   0.77  0.15   0.65  0.13  0.75  0.15  0.75  0.15  0.78  0.16  0.68  0.14  0.70  0.14  0.67  0.13
Average     0.66  0.13   0.64  0.13  0.66  0.13  0.66  0.13  0.68  0.14  0.64  0.13  0.65  0.13  0.62  0.12
F-Score     0.22         0.21        0.22        0.22        0.23        0.21        0.22        0.21

Figure 3. Precision, recall, and F-Score results of K-Means clustering for the WANG database

Figure 3 shows detailed overall average values of precision, recall, and F-Score for the clustered queries and the non-clustered CBIR query. The highest overall average F-Score, 0.23, is achieved by the CBIR query using 5 clusters, while the others have values below 0.23; the overall average F-Score declines as more clusters are used, with the lowest value at 32 clusters.

7. CONCLUSION

1. The image database record clustering used in this research is based on computing the minimum and maximum PSNR (Peak Signal-to-Noise Ratio) values of each image database record against random base images.

2. Cluster initialization is a main key in the query-based information retrieval process. This research found a new algorithm for cluster initialization: first compute, for each record in the image database, the square root of the sum of the squares of the minimum and maximum PSNR, used as the record ordering key; then determine the spacing between clusters as the total number of records divided by the number of clusters; and finally take each cluster center from the ordered records at successive multiples of this spacing.

3. The results of implementing this model on CBIR queries over 1,000 randomly taken records of the WANG database show that the highest accuracy is achieved with 5 clusters, and accuracy decreases as the number of clusters grows. These results also show that accuracy increases from 0.22 for the query without clusters to 0.23 for the query with clusters.

8. REFERENCES

[1] Bandyopadhyay, S., and Maulik, U., 2002, An evolutionary technique based on k-means algorithm for optimal clustering in R^N, Information Sciences, 146:221-237.

[2] Boncz, P.A., Manegold, S., and Kersten, M.L., 1999, Database architecture optimized for the new bottleneck: memory access, in Proc. of VLDB, pages 54-65.

[3] Chaudhuri, S. and Shim, K., 1999, Optimization of queries with user-defined predicates, TODS 24(2):177-228.

[4] Ganguly S., 1998, Design and Analysis of Parametric Query Optimization Algorithms, Proc.

of 24th Intl. Conf. on Very Large Data Bases (VLDB).

[5] Gassner P., Lohman G., Schiefer K. and Wang Y., 1993, Query Optimization in the IBM

DB2 Family, Data Engineering Bulletin, 16 (4).

[6] Ghosh A., Parikh J., Sengar V. and Haritsa J., 2002, Query Clustering for Plan Selection,

Tech Report, DSL/SERC, Indian Institute of Science.

[7] Gopal R. and Ramesh R., 1995, The Query Clustering Problem: A Set Partitioning Approach,

IEEE Trans. on Knowledge and Data Engineering, 7(6).

[8] Ilyas, I. F., Aref, W. G., and Elmagarmid, A. K., 2004, Supporting top-k join queries in relational databases, The VLDB Journal 13: 207-221.

[9] Ioannidis Y., Ng R., Shim K. and Sellis T., 1992, Parametric Query Processing, Proc. of Intl.

Conf. on Very Large Data Bases (VLDB).

[10] Kossmann, and Konrad Stocker, 1998, Iterative Dynamic Programming: A New Class of

Query Optimization Algorithms, The VLDB Journal.

[11] Park J. and Segev A., 1993,Using common sub-expressions to optimize multiple queries, Proc. of IEEE Intl. Conf. On Data Engineering (ICDE).

[12] Roy P., Seshadri S., Sudarshan S. and Bhobe S., 2000, Efficient and Extensible Algorithms

for Multi Query Optimization, Proc. of ACM SIGMOD Intl. Conf. on Management of Data.

[13] Roussinov, D. G., and McQuaid, M. J., 2000, Information Navigation by Clustering and Summarizing Query Results, Proceedings of the 33rd Hawaii International Conference on System Sciences.

[14] Sellis T., 1988, Multiple Query Optimization, ACM Trans. On Database Systems, 13(1).

[15] Shim K., Sellis T. and Nau D., 1994, Improvements on a heuristic algorithm for multiple-query

optimization, Data and Knowledge Engineering, 12.

[16] Smith, K. A., and Ng, A., 2003, Web page clustering using a self-organizing map of user navigation patterns, Decision Support Systems, 35:245-256.

[17] Zhang T., Ramakrishnan R. and Livny M., 1996, BIRCH: An Efficient Data Clustering

Method for Very Large Databases, Proc. of ACM SIGMOD Intl. Conf. on Management of Data.
