DOI: 10.23883/IJRTER.2017.3207.UIDJ0
A Systematic Study of Real time Distributed Systems in the Context of Distributed DBMS
Shetan Ram Choudhary1, C.K. Jha2
1,2Department of Computer Science, Banasthali University, P.O. Banasthali Vidyapith-304 022 Rajasthan, India
Abstract - Nowadays, everything is moving online, and accordingly people increasingly try to do everything from home over the Internet. Real-Time Systems (RTS) are characterized by managing huge volumes of data processing within a defined time limit. The same holds for database accesses: efficient database management algorithms for manipulating and accessing data are required to satisfy the timing constraints of the supported applications. Real-Time Database Systems (RTDBS) form a research area investigating ways of applying database systems technology to Real-Time Systems (RTS). Managing real-time information through a database system requires incorporating concepts from both database systems and real-time systems. New criteria need to be developed to incorporate the timing constraints of real-time applications into many database system design issues, such as query and transaction processing, data buffering, and CPU and I/O scheduling. In this work, some major issues in real-time database systems in a distributed environment are explored, along with the ongoing research efforts in this area. Different approaches to various problems of real-time database systems in a distributed environment are briefly described, and some promising directions for future research are presented.
Keywords: DRTDBS, Real-Time Database Systems (RTDBS), Management, Transaction, ACID
I. INTRODUCTION
Real time means implicit or explicit time constraints (soft, firm, and hard). A high-performance database that is simply fast, without the capability of specifying and enforcing time constraints, is not appropriate for real-time applications. The correctness of such an application depends not only on logical correctness, but also on the timeliness of its actions.
In the context of databases, a single logical operation on the data is called a transaction. A Real-Time Database (RTDB) is a data store whose operations execute with predictable response times and with application-acceptable levels of logical and temporal consistency of data, in addition to timely execution of transactions with the ACID (Atomicity, Consistency, Isolation, Durability) properties.
In a Distributed Real-Time Database System (DRTDBS), it is very important to design efficient commit protocols to guarantee transaction atomicity. The performance of a commit protocol is typically measured in terms of the number of transactions that complete before their deadlines. Transactions that miss their deadlines before completing their processing are aborted, while successful transactions are committed. Commit processing in a DRTDBS can significantly increase the execution time of a transaction [1, 2, 14, 110, 120, 125].
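A minimal sketch of this deadline-sensitive commit behaviour, with all class and variable names hypothetical, might look as follows: a transaction is committed only if it finishes before its deadline, and is otherwise aborted and contributes no value.

```python
class FirmDeadlineTransaction:
    """Toy model of a firm-deadline transaction (illustrative only)."""

    def __init__(self, txn_id, deadline):
        self.txn_id = txn_id
        self.deadline = deadline  # absolute time by which it must commit
        self.state = "active"

    def try_commit(self, now):
        # A firm transaction has zero value after its deadline,
        # so a late transaction is aborted rather than committed.
        if now <= self.deadline:
            self.state = "committed"
        else:
            self.state = "aborted"
        return self.state


t1 = FirmDeadlineTransaction("T1", deadline=10.0)
t2 = FirmDeadlineTransaction("T2", deadline=10.0)
print(t1.try_commit(now=9.5))    # finishes in time
print(t2.try_commit(now=10.5))   # misses its deadline
```

A real DRTDBS commit protocol would additionally coordinate this decision across all participating sites (e.g., via two-phase commit messages), which is precisely the overhead the cited works try to reduce.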
Real-Time Database Systems (RTDBS) are growing in importance for a large variety of operations in distributed environments. In the twenty-first century, computers have become faster and more powerful tools used in widespread real-time database systems. For example, they are used in military tracking, factory automation, robotics, stock arbitrage systems, aircraft control, shipboard control, network management, medical monitoring, virtual environments, telephone switching, computer-integrated manufacturing (CIM), railway reservation, traffic control, sensor systems, and banking, to name a few that touch our day-to-day lives. In the context of an application like the stock market, in particular, we must monitor the state of the market and update the database with new information. If the database is to contain a precise picture of the current market, this updating and monitoring process must meet certain timing constraints. In such a system, we also need to satisfy real-time constraints when reading and analyzing information in the database, in order to respond to a client promptly or to initiate a trade in the stock market. Similar time-constrained operations arise in the other examples given above.
II. DISTRIBUTED SYSTEMS
Distributed systems (DS) are characterized by their structure: a typical distributed system consists of a large number of interacting devices that each run their own programs but are affected by receiving messages or by observing shared-memory updates or the states of other devices. Examples of distributed computing systems range from simple systems in which a single client talks to a single server, to huge amorphous networks like the Internet as a whole. A distributed database appears to a user as a single database, but it is in fact a set of databases stored on multiple computers at multiple locations. The data on several computers can be accessed and modified simultaneously over a network. Each database server in the distributed database is controlled by its local DBMS, and the servers cooperate to maintain the consistency of the global database [127]. Some important characteristics of a distributed database system are:
• A collection of shared data.
• Data is split into fragments.
• Fragments can be replicated.
• Sites are linked to one another by a communication network.
• Data at each site is controlled by the local DBMS.
• Each site participates in at least one global application.
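The fragmentation and replication characteristics above can be sketched as a toy global catalog; all fragment and site names below are hypothetical, and a real DDBMS catalog would also record fragmentation predicates and lock/ownership metadata.

```python
# Hypothetical global catalog for a distributed database:
# shared data is split into fragments, and a fragment may be
# replicated at several of the sites linked by the network.
catalog = {
    "customers_A_M": ["site1", "site3"],  # replicated fragment
    "customers_N_Z": ["site2"],
    "orders_2017":   ["site2", "site3"],
}


def sites_holding(fragment):
    """Return every site whose local DBMS stores a copy of the fragment."""
    return catalog.get(fragment, [])


print(sites_holding("customers_A_M"))
```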
2.1 Architecture of Distributed DBMS
Conventional database systems deal with persistent data whose validity does not decay with time. The correctness of the output
depends only on the logical computations, but not on the time at which the results are delivered. The purpose of transaction and query
processing approaches in conventional database systems is to achieve good throughput or to minimize average response time.
International Journal of Recent Trends in Engineering & Research (IJRTER)
Volume 03, Issue 05; May - 2017 [ISSN: 2455-1457]
@IJRTER-2017, All Rights Reserved
In contrast, a distributed database supports two kinds of applications through which a database user accesses the database: global applications, which require data from other sites, and local applications, which do not require data from other sites.
There is a growing need for Real-Time (RT) data services in distributed environments. New electronic commerce and electronic service communication applications, characterized by high transaction volumes, cannot survive without the online support of computer systems and up-to-date database technology, as suggested by Davidson and Watters (1988) [104].
Database management systems (DBMSs) handle shared, persistent data; Real-Time Systems (RTS) are more usually concerned with transient data with very short meaningful lifetimes. However, transient and persistent, exclusive and shared, are relative terms. Data may be considered shared and persistent if it lasts long enough to be used by subsystems written by different organizations or people. Database technology facilitates communication among engineers, not just among their products.
The goal of a Real-Time Database System is to meet the time constraints of transactions in a distributed environment. All these real-time database applications are characterized by time-constrained access to data that has temporal validity in a distributed environment. Their processing ranges from gathering data from the environment and processing it in the context of information acquired earlier, to providing timely responses. They involve processing not only archival data but also temporal data, which loses its validity after a particular time interval. Both the temporal nature of the data and the response-time requirements imposed by the distributed environment give transactions time constraints in the form of either periods or deadlines. Thus, the correctness of real-time database operation in a distributed environment depends not only on the logical computations carried out, but also on the time at which the results are delivered. While the goal of real-time computing in a distributed environment is to meet the individual timing constraint of each activity, one key point to note is that real-time computing does not mean fast computing. Rather than speed, the more significant properties of a real-time database system in a distributed environment are timeliness, i.e., the ability to produce expected results early or at the right time, and predictability, i.e., the ability to function as deterministically as necessary to satisfy system specifications, including timing constraints [102]. Fast computing that is busy doing the wrong activity, without taking individual timing constraints into account, is not helpful for real-time computing in a distributed environment. Fast computing helps in meeting stringent timing constraints, but fast computing alone does not guarantee correctness and predictability. In order to guarantee timeliness and predictability, we need to handle explicit timing constraints and to use time-cognizant techniques to meet the deadlines or periodicity associated with activities.
Another important point is that the characteristics of Real-Time Database Systems (RTDBS) in distributed environments are unique, and their problems cannot be solved by simple integration or modification of the solutions proposed for traditional real-time systems and database systems. In recent times, there have been efforts to apply the benefits of traditional database technology to the problems of managing data in RTS, and attempts to apply the time-driven scheduling and resource allocation policies/algorithms of real-time systems to Real-Time Database Systems (RTDBS) in distributed environments. These efforts, however, have shown that a simple integration of concepts, mechanisms, and tools from database systems with those from real-time systems is not feasible. The reason is that the two environments, real-time systems and database systems, have different assumptions and objectives, which result in an impedance mismatch between them [17,88]. Thus, meeting the timing constraints of an RTDBS demands new approaches to data and transaction management in distributed environments.
Transactions in an RTDBS in a distributed environment are characterized, according to their nature, along two dimensions: the source of their timing constraints, and the implication of executing a transaction by its deadline, or more precisely, the implication of missing its specified timing constraints. The characteristics of these temporal constraints are typically application-dependent. With regard to the source of timing constraints, as explained previously, some timing constraints come from temporal consistency requirements, and some from requirements imposed on system reaction time.
To understand the effect of missing a transaction's deadline, we partition real-time transactions into three categories: soft deadline, firm deadline, and hard deadline transactions in a distributed environment. This classification can be explained using the value function model for real-time scheduling [51,70,103]. The idea of the value function model is that the completion of a transaction has a worth (value), i.e., an importance to the application, that can be expressed as a function of the completion time. While, in reality, arbitrary types of value functions can be associated with activities, the above categorization confines itself to simple value functions. A hard deadline transaction may result in a catastrophe if its deadline is missed; that is, if a hard deadline is missed, a large negative value is imparted to the system. A soft deadline transaction retains some (positive) value even after its deadline; typically, the value drops to zero at a certain point past the deadline. Lastly, a firm deadline transaction is a special case of a soft deadline transaction: its value drops to zero exactly at its deadline.
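Under simple stated assumptions (a unit reward, and a hypothetical penalty and linear grace period chosen only for illustration), the three categories of value functions can be sketched as:

```python
def hard_value(completion, deadline, reward=1.0, penalty=-100.0):
    # Missing a hard deadline imparts a large negative value (a catastrophe).
    return reward if completion <= deadline else penalty


def firm_value(completion, deadline, reward=1.0):
    # A firm transaction's value drops to zero exactly at its deadline.
    return reward if completion <= deadline else 0.0


def soft_value(completion, deadline, reward=1.0, grace=5.0):
    # A soft transaction keeps some positive value past its deadline,
    # decaying linearly to zero at deadline + grace.
    if completion <= deadline:
        return reward
    late = completion - deadline
    return max(0.0, reward * (1 - late / grace))


print(soft_value(12.0, deadline=10.0))  # late, but still worth something
print(firm_value(12.0, deadline=10.0))  # firm value vanishes at the deadline
```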
The different characteristics of transactions lead to different performance objectives and processing strategies. The main difference between hard deadline transactions and soft (including firm) deadline transactions is that all transactions with hard deadlines must meet their timing constraints; it is impossible to provide such a guarantee in a dynamic transaction management environment.
To support such a guarantee, the resource and data requirements, in addition to the computation requirements, of such transactions have to be available before the actual execution starts, i.e., static planning is required for transaction execution. For a variety of reasons, however, typical database systems do not provide such information ahead of time, and thus appear infeasible for supporting transactions with hard deadlines [42,88]. A database system supporting hard deadline transactions has to know when a transaction is likely to be invoked, but this information is readily available only for periodic transactions. Further, the system has to determine the data and resource requirements and worst-case execution time of the transaction. This information is often not given, and the worst-case execution time may not be useful because of the large variance between average-case and worst-case execution times caused by disk I/O and the dynamic paging mechanisms of typical database systems.
With soft deadline transactions, we have extra flexibility in processing because their deadlines need not be met all the time. Here, the primary objective of the system should be to maximize the number of transactions that meet their deadlines (or, equivalently, to minimize the number that miss them). When transactions have different values, the total value of the transactions that complete by their deadlines should also be maximized. One way to deal with hard deadline transactions is to use approaches analogous to those of real-time task scheduling, where many restrictions are placed on the transactions so that their characteristics are known ahead of time. With these restrictions, however, the functionality and applicability of transactions would be significantly limited, and poor resource utilization may result from the worst-case assumptions made about the transactions. For all these reasons, the problem of dealing with hard real-time transactions has remained intractable under the state-of-the-art database technology, although there have been some efforts in this area [13,77,93,94].
In this work, we restrict our attention to Real-Time Database Systems in distributed environments that process transactions with firm or soft deadlines, as the majority of the work has been done in this area. A further justification for this approach is that a great part of real-world real-time applications in distributed environments involve firm or soft deadlines, while the range of applications imposing hard real-time constraints is rather limited [92].
Finally, we note that there is a certain tendency to reserve the term "real time" to refer only to hard deadlines, because conventionally the real-time scheduling problem has meant scheduling activities with hard deadlines. A number of researchers prefer the term "time critical" to "real time" for activities with firm and soft deadlines, as in "processing of time-critical transactions". The term "Real-Time Database System (RTDBS)" is broadly accepted in the real-time systems research community as denoting a system for time-constrained transaction and query management, whereas "Distributed Real-Time Database System (DRTDBS)" refers to a collection of multiple, logically interrelated databases distributed over a computer network, where transactions have explicit timing constraints, usually in the shape of deadlines. Accordingly, the term DRTDBS in our study refers to a system for transactions with firm deadlines.
2.2 Distributed Real-Time Database Management System (DRTDBMS)
An integrated and shared base of persistent data, used by different kinds of users in a variety of programs, is called a database. The management software that handles all user requests for access to the database is the database management system (DBMS). The DBMS is also responsible for maintaining the central base of data, and a database buffer has to be maintained for the purpose of interfacing disk and main memory. In a Distributed Database Management System (DDBMS), each participating server needs to be administered independently of the other databases for security purposes. A DDBMS consists of a collection of sites connected via some kind of communication network, in which:
• Each site is a database system site in its own right.
• The sites have agreed to work together so that a user at any site can access data anywhere in the network exactly as if the data were all stored at the user's own site.
All the databases can work together, yet each is unique, and there are separate repositories to store data for each database. A DDBMS uses external magnetic disks for the storage of mass data; these offer a low cost per bit and non-volatility, which makes them indispensable in today's DBMS technology. However, under commercially available operating systems, data can only be manipulated (i.e., compared, modified, inserted, and deleted) in the main storage of the computer. Therefore, part of the database has to be loaded into main storage before manipulation and written back to disk after modification, and a database buffer has to be maintained for the purpose of interfacing main memory and disk.
In the information era, the information dispersed worldwide through the Internet and other media is bulky, constantly changing, and dynamic in nature. As our society becomes more integrated with computer technology, the information processed for human activities necessitates computing that responds to requests in Real Time (RT) rather than just with best effort, and DBMSs too have entered the Internet era. If too many users request information at once, system performance degrades; the degradation may cause trouble and delay for a particular end user in accessing the information. Accessing information easily and within a certain time limit, keeping it fresh, assessing users' requirements, and then providing them information in time is an important aspect.
Conventional databases are mainly characterized by their strict data consistency requirements, whereas Database Systems (DBS) for Real-Time (RT) applications must also satisfy the timing constraints associated with transactions. Real-Time Database Systems (RTDBS) combine concepts from Real-Time Systems (RTS), which are mainly characterized by their strict timing constraints, and conventional DBS. Therefore, an RTDBS should satisfy both the data integrity and consistency constraints and the timing constraints.
Classically, a timing constraint is expressed in the form of a deadline, a certain time in the future by which a transaction needs to be completed. In an RTDBS, the correctness of transaction processing depends not only on maintaining consistency constraints and producing correct results, but also on the time at which a transaction is completed. Transactions must be scheduled in such a way that they can be completed before their corresponding deadlines expire. Applications that handle great amounts of data and have stringent timing requirements include radar tracking, telephone switching, and stock arbitrage systems. Arbitrage trading, for example, involves trading commodities in different markets at different prices. Since price discrepancies are usually short-lived, automated searching and processing of large amounts of trading information are very desirable. To capitalize on the opportunities, sell-buy decisions have to be made promptly, often under a time constraint, so that the financial overheads of performing the trade are well compensated by the benefit resulting from it. The goal of transaction and query processing in an RTDB is to maximize the number of successful transactions in the system.
The remainder of the paper is organized as follows. Section 3 reports a survey of the domain. Section 4 presents the issues and challenges in Real-Time Database Systems (RTDBS). Section 5 presents the issues and challenges in real-time database systems in distributed environments (DRTDBS). Section 6 concludes with the lessons drawn from the study.
III. DOMAIN SURVEY
Managing transactions in real-time distributed database systems is not easy. Appropriate management of transactions is required at arrival, during execution, and at every other phase of a transaction, bearing in mind that transaction processing in any database system can have real-time constraints.
A significant issue in transaction management is that if a database was in a consistent state prior to the initiation of a transaction, then the database should return to a consistent state after the transaction completes. This should hold irrespective of whether the transactions executed successfully in parallel or there were failures during execution. Thus, a transaction becomes a unit of reliability and consistency [3, 27, 71, 95]. Much research deals with transaction management problems; this section presents these works.
3.1 Locking Method/Policy
According to Ramakrishnan et al. (2003) and Ullman (1992), one of the essential properties of a transaction is isolation. When a number of transactions execute concurrently in the database, the isolation property must be preserved. To ensure this, the system must control the interaction among the concurrent transactions; this control is achieved through concurrency control schemes [84,115].
Some of the main concurrency control techniques, such as two-phase locking (2PL), are based on the concept of locking data items. Locks are used to ensure the non-interference of concurrently executing transactions and to guarantee serializability of the schedules. A transaction is said to follow the two-phase locking protocol if all of its locking operations precede its first unlock operation. Such a transaction passes through two phases: a growing or expanding phase, during which new locks on data items can be acquired but none can be released, and a shrinking phase, during which existing locks can be released but no new locks can be acquired, as suggested by Mittal et al. (2004) [72]. 2PL ensures serializability, but not freedom from deadlock.
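A minimal, illustrative sketch of the two-phase rule (not any particular system's implementation, and ignoring lock modes and conflicts between transactions) is:

```python
class TwoPhaseLockingTxn:
    """Toy 2PL transaction: every lock acquisition must precede
    the first unlock (growing phase, then shrinking phase)."""

    def __init__(self):
        self.held = set()
        self.shrinking = False  # becomes True at the first release

    def lock(self, item):
        if self.shrinking:
            # Acquiring a lock after a release would violate the 2PL rule.
            raise RuntimeError("2PL violation: lock after first unlock")
        self.held.add(item)

    def unlock(self, item):
        self.shrinking = True   # enter the shrinking phase
        self.held.discard(item)


t = TwoPhaseLockingTxn()
t.lock("x")
t.lock("y")        # growing phase
t.unlock("x")      # shrinking phase begins
try:
    t.lock("z")    # illegal under 2PL
except RuntimeError as e:
    print(e)
```

Note that nothing in this rule prevents two such transactions from each holding a lock the other needs, which is why 2PL alone does not guarantee deadlock freedom.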
Lam (1994) noted that two-phase locking can be classified as either dynamic or static. The working principle of dynamic two-phase locking (D2PL) is similar to that of static two-phase locking (S2PL), except for the procedure of setting locks [57].
According to Tay et al. (1984), in D2PL transactions acquire locks on data items on demand and release locks upon termination or commit [106]. A related study by Tay et al. (1985) notes that in S2PL the locks required by a transaction are assumed to be known before its execution [109].
Prior knowledge of the data items required by a transaction is easier to obtain in a DRTDBS, as it is generally agreed that the behavior and the data items accessed by real-time transactions, especially hard real-time transactions, are much more well-defined and predictable. Therefore, as a consequence of the better-defined environment of real-time transactions, it is usual to assume that the locking information of a transaction is known before its processing.
Sha et al. (1991) observed that in the priority ceiling protocol (PCP) the locks required by transactions must be known before their arrival, along with predefined priorities [93].
A transaction has to obtain all its required locks before the start of execution; if any one of its locks is being used by another transaction, it is blocked and releases all the locks it has seized. The locks to be accessed by a transaction at each site can be packed into a single message for transmission.
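The all-or-nothing preclaiming just described can be sketched as follows; the lock-table structure and names are hypothetical, and a real system would also handle shared/exclusive lock modes and the per-site messaging.

```python
def preclaim(lock_table, txn, needed):
    """S2PL-style preclaiming (illustrative): the transaction gets all
    of its required locks atomically, or none of them and blocks."""
    if any(lock_table.get(item) not in (None, txn) for item in needed):
        return False            # some lock is held by another txn: block
    for item in needed:
        lock_table[item] = txn  # grant every lock before execution starts
    return True


locks = {}
print(preclaim(locks, "T1", ["a", "b"]))  # T1 gets a and b
print(preclaim(locks, "T2", ["b", "c"]))  # b is held, so T2 gets nothing
print(locks.get("c"))                     # no partial acquisition
```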
In a DRTDBS, the number of messages for setting locks is usually smaller for S2PL than for D2PL, so the number of messages and the time delay for remote locking can be considerably reduced. There is no local deadlock, and a distributed deadlock is much easier to resolve with S2PL than with D2PL. Shiow Chen et al. (1990) showed that the S2PL protocol is deadlock-free [96], since blocked transactions cannot hold locks. Over the last fifteen years, many works have compared D2PL with S2PL, e.g., Lee et al. (1999) [66]. Most researchers agree that D2PL is a better choice than S2PL for conventional non-real-time database systems, for the following reasons, as suggested by Huang et al. (1994) and Ryu et al. (1994) [49, 90]:
I. a lower probability of lock conflicts, due to the shorter average lock-holding time in D2PL;
II. the difficulty of determining the required locks before a transaction is processed.
However, the meaning of improved performance in a DRTDBS is quite different from that in conventional non-real-time database systems: the major performance measures in conventional database systems are mean system throughput and mean system response time, whereas minimizing the number of missed deadlines is the main concern in a DRTDBS.
Under non-real-time S2PL, a transaction is blocked if any of its required locks is seized in a conflicting mode by another transaction. While it is blocked, some of its required locks that may initially have been free can be seized by other transactions. Hence, even when the originally conflicting locks are released, the transaction may be blocked by transactions that arrived after it. Thus, the blocking time of a higher-priority transaction can be arbitrarily long, owing to long-standing blocking that results from waiting for multiple locks, as suggested by Dogdu et al. (1997) [26]. An alternative for concurrency control in DRTDBS is to use real-time S2PL (RT-S2PL).
Lam (1994), Lam et al. (1997), and Takkar et al. (1998) suggest that in RT-S2PL each lock in the database carries a priority equal to the priority of the highest-priority transaction waiting for that lock. All the locks on the data items to be accessed by a transaction have to be set, in suitable modes, before the transaction is processed. If any of the required locks is in a conflicting mode or has a priority higher than that of the requesting transaction, none of the required locks is set and the transaction is blocked instead; the priorities of the locks with lower priorities, however, are updated to that of the requesting transaction. These characteristics of RT-S2PL make it attractive for DRTDBS [57, 59, 105].
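A simplified sketch of this RT-S2PL rule, assuming exclusive locks only and hypothetical names throughout, might look as follows:

```python
class Lock:
    def __init__(self):
        self.holder = None
        self.priority = 0  # priority of the highest-priority waiter


def rt_s2pl_request(locks, txn, txn_prio, needed):
    """RT-S2PL sketch: grant all locks only if none is held in a
    conflicting mode and none carries a higher waiter priority;
    otherwise block and raise the priority of lower-priority locks."""
    blocked = any(locks[i].holder is not None or locks[i].priority > txn_prio
                  for i in needed)
    if blocked:
        for i in needed:
            # A waiting higher-priority transaction lifts the lock priority.
            locks[i].priority = max(locks[i].priority, txn_prio)
        return False
    for i in needed:
        locks[i].holder = txn
    return True


locks = {"x": Lock(), "y": Lock()}
locks["x"].holder = "T_low"                        # x is already held
ok = rt_s2pl_request(locks, "T_high", 5, ["x", "y"])
print(ok)                   # T_high blocks and acquires nothing
print(locks["y"].priority)  # y now carries T_high's priority
```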
Thomasian et al. (1991; 1993; 1998) observed that the problem of locking-induced thrashing can be prevented, because lock-requesting transactions can be blocked on a lock conflict in RT-S2PL protocols [111,112,113]. The typical strict two-phase locking protocol with high priority, extended as E2PL with high priority (E2PL-HP), is used in that work. Here, when the requested data item is held by one or more higher-priority cohorts in a conflicting mode, the lock-requesting cohort waits for the data item to be released. Alternatively, if the data item is held by a lower-priority cohort in a conflicting mode, the lower-priority cohort is aborted and the requesting cohort is granted the desired locks.
The extensions of E2PL with high priority (E2PL-HP) are: (i) requesters are allowed to access locked data in a controlled manner; (ii) on receipt of the PREPARE message from the master, a cohort releases all its read locks but retains its update locks until it receives and applies the global decision from the master; and (iii) a cohort that is in the prepared state cannot be aborted, irrespective of priority inversion.
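The conflict-resolution rule of E2PL-HP, including the protection of prepared cohorts, can be condensed into the following illustrative decision function (names hypothetical):

```python
def resolve_conflict(holder_prio, requester_prio, holder_prepared):
    """2PL-HP-style rule (sketch): a lower-priority holder is aborted
    in favour of the requester, except that a cohort already in the
    prepared state of commit processing can never be aborted."""
    if holder_prepared:
        return "requester_waits"  # prepared cohorts are inviolable
    if holder_prio >= requester_prio:
        return "requester_waits"  # holder has equal or higher priority
    return "abort_holder"         # priority abort: requester wins


print(resolve_conflict(2, 5, holder_prepared=False))  # abort_holder
print(resolve_conflict(2, 5, holder_prepared=True))   # requester_waits
```

The prepared-state exception matters because a cohort that has voted to commit has promised the master it can carry out the global decision; aborting it would violate the atomicity guarantee of the commit protocol.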
S. R. Choudhary et al. (2013) [3] studied a new concept for managing transactions based on the database size at the remote site and at the originating site, rather than static database-size computing parameters. The new approach keeps track of the timing of transactions to prevent aborts: it exposes information about the remaining execution time of the transactions, which helps the system inject extra time into such transactions.
S. Takkar and S. P. Dandamudi (1998) [91] studied the behaviour of Real-Time (RT) transactions in parallel databases, and reported the performance of a new priority-based policy for scheduling hard real-time transactions in parallel database systems, using miss ratio as the performance metric. The new policy provides superior performance over its peers across the system and workload parameters compared.
Yumnam et al. (2010) [126] discussed a new approach called the "intelligent agent", which keeps track of and records the status of failing transactions in order to improve the performance of the system.
J. Cowling and B. Liskov (2012) [50] presented Granola, a transaction coordination infrastructure for building reliable distributed systems. Granola uses a novel timestamp-based coordination mechanism to order distributed transactions, offering lower latency and higher throughput than previous systems that offer strong consistency.
Y. Jayanta Singh et al. (2010) [123] reported a new concept for managing transactions in dynamic ways, rather than setting computing parameters statically. Their study follows a real-time processing model. The new dynamic management approach, using either a dynamic "intelligent agent" or dynamic "slack management", gives a significant improvement in the performance of the system.
M. S. Khataib and Mohammad Atique (2014) [71] presented a simulation study analyzing performance under different workloads, transaction scheduling and priority policies, arrival rates, altered slack factors, and pre-emption policies. The throughput of their system depends on the arrival rate of transactions.
Gyanendra Kumar Gupta et al. (2011) [39] proposed a new notion of hybrid transaction management, rather than setting computing parameters in purely dynamic or static ways. It tracks the status of mixed transactions both dynamically and statically, so that it can improve the performance of the system with the advantages of both.
3.2 Priority Policy
Hong et al. (1993), Kim et al. (1995), and Kim (1995) have described the real-time database system as a portion of a larger, complex real-time system. Tasks in RTS and transactions in a real-time database system in a distributed environment (DRTDBS) are the same in the general sense that both are units of work as well as units of scheduling [47,55,56].
However, transactions and tasks are different computational concepts, and their differences affect how they should be processed and scheduled. Unlike transactions, tasks in real-time systems do not consider the consistency of the data items they use. Although many real-time task scheduling techniques are still reused for scheduling real-time transactions, transaction scheduling in a Real-Time Database System needs a different approach from that used for scheduling tasks in real-time systems. The following subsections review the literature on task/transaction scheduling in centralized and distributed environments, respectively.
3.2.1 Priority Policies in Centralized Environment
Liu and Layland 1973 studied a rate monotonic static priority assignment scheme to determine the schedulability of a set of
periodic tasks for real time systems in a centralized environment [69].
The priority assignment algorithms/techniques proposed for real time systems in a centralized environment can be divided into three
broad types: static, dynamic, and hybrid. A scheduling algorithm is said to be static if priorities are assigned to tasks once and for all. A
scheduling algorithm is said to be dynamic if the priority of a task changes from request to request. The Earliest Deadline First (EDF)
algorithm is the most widely used in this class; under EDF, the priorities assigned to tasks are inversely proportional to the absolute deadlines
of the active jobs, where the deadline of a job depends on the arrival time of its next occurrence. A scheduling algorithm is known as a hybrid
scheduling algorithm if the priorities of some of the tasks are fixed while the priorities of the remaining tasks differ from request to request.
Most concurrency controllers restart or block transactions when data conflicts are detected. The victim selection policy
depends upon the rules of the particular concurrency controller. Ulusoy et al. 1992 observed that the traditional non-priority-based
concurrency control algorithm penalizes the transaction that requests the lock last [117].
Abbott et al. 1988 carried out a performance evaluation of real time transaction scheduling [5]. A cohort remains blocked until the
conflicting locks are released, and the real time priority of the cohort is not considered in processing the lock request. Subsequent studies
demonstrate that system performance can be further improved by using priority based scheduling. Under EDF, which was first
designed for scheduling real time tasks, the transaction with the closest deadline is assigned the highest priority. If two cohorts have
matching deadlines, the one with the earlier arrival time is assigned the higher priority, on a First Come First Serve (FCFS) basis. Other
priority assignment policies are Random and Least Slack First (LSF). Ulusoy et al. 1993 also worked on real time transaction
scheduling in database systems [116]. The random priority assignment policy assigns each transaction a priority at random,
independent of the transaction deadline. Under the LSF assignment policy, the transaction with less slack time has higher
priority. The slack time, as explored by Ulusoy 1998 [118], can be defined as the amount of time a cohort can afford to wait and still
complete before its deadline.
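As a concrete illustration of the policies just described, the sketch below (hypothetical field and function names, not code from the cited studies) computes the scheduling keys for EDF, FCFS, and LSF; in each case a smaller key means a higher priority:

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    arrival_time: float    # when the transaction entered the system
    deadline: float        # absolute deadline
    remaining_exec: float  # estimated remaining execution time

def edf_key(t: Transaction) -> float:
    """Earliest Deadline First: the closest absolute deadline wins."""
    return t.deadline

def fcfs_key(t: Transaction) -> float:
    """First Come First Serve: the earliest arrival wins."""
    return t.arrival_time

def lsf_key(t: Transaction, now: float) -> float:
    """Least Slack First: slack = the time the cohort can afford to wait,
    i.e. deadline - current time - remaining execution time."""
    return t.deadline - now - t.remaining_exec

def next_under_edf(ready: list) -> Transaction:
    """Pick the next transaction under EDF, breaking deadline ties by FCFS."""
    return min(ready, key=lambda t: (t.deadline, t.arrival_time))
```

For example, a transaction with deadline 10 and 3 time units of work left has slack 5 at time 2; under LSF it loses to any ready transaction whose slack is smaller.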
Abbott et al. 1988 first presented the performance of different scheduling policies for transactions with soft deadlines [5]. Through
simulation, they studied the performance of three priority assignment techniques, EDF, FCFS, and LSF, with different concurrency control
methods, i.e., high priority (HP), serial execution (SE), and conditional restart (CR).
The pioneering work on RTDBS performance evaluation of various scheduling options for a real time database system with shared
locks and disk was extended by Abbott et al. 1989 [4]. The scheduling algorithms used in this study are EDF, FCFS, and LSF,
together with the concurrency control algorithms wait-promote, wait conditional restart, and high priority.
The problem of bias against longer transactions under earliest-deadline-based scheduling policies in a centralized RTDBS environment
was studied by Pang et al. 1994, 1995 [80,81]. Their approach to solving this favouritism problem assigns virtual
deadlines to all transactions. A transaction with an earlier virtual deadline is served before one with a later virtual deadline. The virtual
deadline of a transaction is tuned dynamically as the transaction progresses and is computed as a function of the size of
the transaction.
Haritsa et al. 1992, 1991a explained that an application may assign a value to a transaction to mirror the return it expects to receive if
the transaction commits before its deadline in a real time database system [41,42]. Haritsa et al. 1991b investigated the problem of
establishing a priority ordering among transactions characterized by both values and deadlines so as to maximize the
realized value [44]. They developed the Adaptive Earliest Deadline (AED) protocol for priority assignment and load control of
transactions. AED was later enhanced into the Adaptive Earliest Virtual Deadline (AEVD) policy, which uses a virtual deadline based on both
arrival time and deadline. Datta et al. 1996 showed some of the weaknesses in AEVD and proposed the Adaptive Access
Parameter (AAP) methods for explicit admission control [21].
Dogdu et al. 1997 derived new priority assignment and load control policies for repeating real time transactions [26]. Based on
transaction execution histories, they noted that the widely used priority assignment technique EDF is biased towards scheduling short
transactions favourably, and developed protocols that attempt to eliminate this discriminatory behaviour of EDF by adjusting
priorities using the execution history information of transactions. They introduced the notion of "fair scheduling" of transactions, in
which the objective is to have similar success ratios for all transaction classes (from short to long in size).
International Journal of Recent Trends in Engineering & Research (IJRTER)
Volume 03, Issue 05; May - 2017 [ISSN: 2455-1457]
@IJRTER-2017, All Rights Reserved 139
3.2.2 Priority Policies in Distributed Environment
A transaction is usually divided into numerous sub-transactions (cohorts) in a real time database system in a distributed
environment. These cohorts execute at different sites. System performance is greatly dependent on the local scheduling of the cohorts
at the different sites.
Kao et al. 1993a showed that distributed real time system performance can be enhanced by assigning suitable priorities to the
sub-tasks of a task in terms of meeting the task deadline [52]. They examined four different heuristics for assigning deadlines to
sub-tasks: (i) Ultimate Deadline (UD), (ii) Effective Deadline (ED), (iii) Equal Slack (EQS), and (iv) Equal Flexibility (EQF). These heuristics
consider only real time constraints and may not be appropriate for DRTDBS, as they do not consider their impact on data contention,
which can seriously affect system performance.
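Under one common formulation of these four heuristics for serially executed subtasks (the formulas below are a hedged reconstruction for illustration; the survey itself does not give them), the deadline assigned to the i-th subtask, given the global deadline D, the current time t, and estimated subtask execution times ex, can be sketched as:

```python
# D: global deadline, t: current time, ex: list of estimated execution
# times for all subtasks, i: index of the subtask being submitted.
# All formulas assume subtasks ex[i], ex[i+1], ... still remain to run.

def ultimate_deadline(D, t, ex, i):
    """UD: every subtask simply inherits the global deadline."""
    return D

def effective_deadline(D, t, ex, i):
    """ED: leave just enough room for the subtasks that follow."""
    return D - sum(ex[i + 1:])

def equal_slack(D, t, ex, i):
    """EQS: divide the remaining slack evenly among the remaining subtasks."""
    slack = D - t - sum(ex[i:])
    return t + ex[i] + slack / len(ex[i:])

def equal_flexibility(D, t, ex, i):
    """EQF: divide the slack in proportion to each subtask's execution time."""
    slack = D - t - sum(ex[i:])
    return t + ex[i] + slack * ex[i] / sum(ex[i:])
```

With D = 100, t = 0, and ex = [10, 20, 30], the slack is 40; the first subtask's deadline is 100 under UD, 50 under ED, about 23.3 under EQS, and about 16.7 under EQF.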
Lee et al. 1996 evaluated the performance of these four heuristics and recommended three other alternatives that take into consideration the
impact of data conflicts [65]. These alternative priority assignment strategies are (i) Number of Locks (NL) held, (ii) Static EQS (SEQS),
and (iii) Mixed Method (MM). The NL policy assigns priority to cohorts on the basis of the number of locks held by the parent
transaction, while the other two heuristics are improved versions of the heuristics of Kao et al. 1993a [52]. However, both of the
above studies assume sequential execution of cohorts/tasks. These heuristics, except UD, are not appropriate for cohorts executing in
parallel.
Complex distributed tasks frequently involve parallel execution of subtasks at different sites/nodes. All of the parallel subtasks have
to finish in time for the global task to meet its deadline. In comparison to a local task (which executes at only one node), a
global task may find it much harder to meet its deadline because it is fairly likely that at least one of its subtasks runs into an
overloaded node. Another problem with complex tasks in a distributed environment occurs when a global task consists of parallel and
serial subtasks: if one parallel subtask is delayed, the whole task is tardy.
Kao et al. 1993b investigated the problem of assigning deadlines to the parallel and serial subtasks of complex distributed
tasks [53]. Their study addressed the problem of automatically translating the deadline of a real time activity into deadlines for all the
sequential and parallel subtasks constituting the activity. Each subtask deadline is assigned just before the subtask is submitted for
execution. The structure of the complex tasks is assumed to be known in advance. To meet the deadline of a global task, the
scheduler must estimate the execution times of the subtasks and assign them to processors in such a way that all finish before the
deadline of the global task. A number of strategies for assigning a deadline to each parallel subtask have been proposed, and strategies
have also been investigated for assigning deadlines to sequential subtasks. The problem of assigning deadlines to parallel and serial
subtasks of complex distributed tasks in a real time system was studied through simulation, and results are provided for assigning
virtual deadlines to the parallel subcomponents of a task in order to meet the global task deadline.
Lam et al. 1997, 1999 studied the effects of different priority assignment heuristics with an optimistic concurrency control
protocol and high priority two phase locking [61,62]. The outcomes of their performance experiments demonstrate that optimistic
concurrency control protocols are more affected by the priority assignment policy than locking based protocols. They showed that
considering both the transaction deadline and current data contention when assigning transaction priority provided the best
performance among a variety of priority assignment techniques.
Conventional real time schedulers do not consider the impact of the communication delay involved in transferring remote data and results. To
reduce the miss percentage of transactions and the time wasted by remote transactions due to communication delay, Chen et al. 2007
proposed a new real time scheduler called Flexible High Reward with a concurrency control factor (FHR-CF). FHR-CF
tries to reduce the miss percentage by giving slightly higher priority to remote transactions [20].
Haritsa et al. 2000 observed that the majority of the previous work on priority assignment focuses either on distributed databases or
on centralized databases where subtasks (cohorts) of transactions are executed sequentially. Only a few studies have considered the
scheduling of cohorts executing in parallel in a distributed setting [46].
3.3 Commit Protocols
NG 1998 conducted a study on a commit protocol for checkpointing transactions. A distributed transaction executes at various
sites [76]. The transaction may decide to commit at some sites whereas at other sites it may decide to abort, resulting in a
violation of transaction atomicity in the distributed environment; this issue is considered by Ramakrishnan et al. 2003 [84].
Al-Houmaily et al. 1999 examined atomicity with incompatible presumptions. To overcome this problem, distributed database systems use a
distributed commit protocol which ensures uniform commitment of the distributed transaction, i.e., all the participating sites agree
on the final outcome (commit/abort) of the transaction. Lee Inseon et al. 2004 proposed a causal commit protocol as a new
approach for distributed main memory database systems [9,64].
A commit protocol is required to ensure that either all the effects of the transaction persist or none of them do, in spite of site or
communication link failures and loss of messages.
Database researchers have been working in this area for many years and a variety of commit protocols have been
proposed. These protocols include one phase protocols like Early Prepare (EP), two phase protocols like the classical Two Phase
Commit (2PC), three phase protocols like Three Phase Commit (3PC), and many of their optimizations.
3.3.1 One Phase Commit Protocol/Early Prepare
The one phase commit (1PC) protocol was first introduced by Skeen Dale and appears in the notes on database operating
systems of Gray 1978 [32]. There is no explicit "voting phase" to decide the outcome of the transaction, because cohorts enter the
"prepared state" at the time of sending the work done (WORKDONE) message itself, i.e., commit processing and data processing
are effectively overlapped. By eliminating a complete phase, and thus reducing commit processing overheads and durations,
1PC protocols appear potentially capable of completing more transactions before their deadlines.
Stamos et al. 1990 described a low-cost atomic commit protocol. Several variants of 1PC have been described in the literature. The
Early Prepare (EP) protocol, a 1PC variant, forces each cohort to enter the PREPARED state after the execution of each operation [100].
It makes the cohort's vote implicit and thereby exploits the key feature of 1PC. The Coordinator Log (CL) protocol avoids the disk blocking time of a
cohort by centralizing the cohort's log on the coordinator [100]. On the other hand, this violates site autonomy and is practical only
when the entire distributed system sits in a single machine room rather than at several geographically distant locations connected by
communication lines. The Implicit Yes Vote (IYV) protocol adapts CL to the setting of gigabit-network database systems. The IYV
protocol allows failed participants to perform part of the recovery procedure independently, without involving the coordinator, and to
resume the execution of transactions that are still active in the system instead of aborting them.
Abdallah Maha et al. 1998 explained, lastly, that the non-blocking single phase atomic commit (NB-SPAC) protocol is a non-blocking
version of CL that preserves site autonomy by logging logical operations in place of physical records on the coordinator [6]. According
to the authors, these protocols have mostly been ignored in the implementation of distributed transaction systems
due to the strong assumptions they make about cohort behaviour, as identified and formalized by Abdallah M. [6]. These protocols
cause longer blocking times because prepared cohorts cannot be aborted before the final consensus is reached. Thus, 1PC and its
variants are best suited for distributed transactions with small cohorts.
3.3.2 Two Phase Commit Protocol (2PC)
Gray 1978 and Gray et al. 1993 introduced the two phase commit protocol (2PC), also referred to as the Presumed Nothing 2PC
protocol (PrN), which is the most commonly used protocol in the field of DDBS [32,31]. It guarantees uniform commitment of distributed
transactions by maintaining logs and exchanging multiple explicit messages among the sites. In the absence of failures,
the protocol is straightforward in that a commit decision is reached if all cohorts are ready to commit; otherwise, an abort decision is
taken.
Stamos et al. 1990 presented a low-cost atomic commit protocol that assumes no failures. It works as follows [100]:
(i) The coordinator sends a vote request (VOTE REQ) message to all cohorts.
(ii) When a cohort receives a VOTE REQ, it responds by sending a yes or no vote message (YES or NO VOTE; a YES VOTE is
also known as a PREPARED message) to the coordinator. If the cohort's vote is NO, it decides to abort and stops.
(iii) The coordinator collects the vote messages from all cohorts. If all of them are YES and the coordinator's vote is also
YES, the coordinator decides to commit and sends commit (COMMIT) messages to all cohorts. Otherwise, the coordinator decides to abort
and sends abort (ABORT) messages to all cohorts that voted YES.
(iv) Each cohort that voted YES waits for a COMMIT or ABORT message from the coordinator. When it receives the message,
it decides accordingly, sends an acknowledgement (ACK) message to the coordinator, and stops.
(v) Lastly, after receiving the acknowledgement (ACK) messages from all the cohorts, the coordinator writes an end log record
and then forgets the transaction.
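The global decision rule in steps (i)-(v) reduces to a simple predicate; the sketch below (illustrative names, not the cited authors' pseudocode) captures only that rule, omitting logging and failure handling:

```python
def two_phase_commit(coordinator_vote: str, cohort_votes: list) -> str:
    """Return the global decision once all votes have been collected."""
    # Phase 1: any NO vote, including the coordinator's own, forces an abort.
    if coordinator_vote == "YES" and all(v == "YES" for v in cohort_votes):
        decision = "COMMIT"
    else:
        decision = "ABORT"
    # Phase 2 (not modelled here): the decision is sent to every cohort that
    # voted YES; each replies with an ACK, after which the coordinator writes
    # an end log record and forgets the transaction.
    return decision
```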
Mohan Chandrasekaran et al. 1986 showed that 2PC satisfies the aforementioned rules for atomic commitment
provided failures do not occur. However, if for some reason the communication between cohorts and the coordinator is disrupted, it
is possible for a cohort to be in an uncertain state in which it can neither abort nor commit, because it does not know what the other
cohorts and the coordinator have decided. The cohort is then blocked and may wait until the failure is corrected. Unfortunately, this blocking may
continue for an indefinitely long period of time. To handle this situation, 2PC ensures that sufficient information is force-written to
stable storage to reach a consistent global decision about the transaction [74].
Therefore, 2PC imposes a great deal of overhead on transaction processing. There has been much research to mitigate the overhead
of 2PC, and a number of 2PC variants have been proposed. These variants can be classified into the following four groups (Lee et al.,
2002) [63]: (I) Presumed Abort/Presumed Commit Protocol (Al-Houmaily Yousef J. et al., 1997a, 1997b) [12,11], (II) One Phase
Commit Protocol (Abdallah Maha et al., 1998; Al-Houmaily Yousef J. et al., 1996; Haritsa et al., 1992) [6,10,41], (III) Pre
Commit/Optimistic Commit Protocol, and (IV) Group Commit Protocol (Ramakrishnan et al.) [85].
(I) Presumed Abort/Presumed Commit Protocol
A variant of the two phase commit (2PC) protocol, called Presumed Abort (PA), attempts to reduce 2PC overheads by requiring all cohorts
to follow an "in the no-information case, abort" rule, as suggested by Al-Houmaily Yousef J. et al., 1997a; Chandrasekaran et al., 1994
& 1986; Stamos et al., 2002; Lee et al., 2002 [12,73,100,74,63].
If, after recovering from a failure, a site queries the coordinator about the final result of a transaction and finds no information available
with the coordinator, the transaction is presumed to be aborted. The coordinator of the transaction neither logs information nor waits
for acknowledgement (ACK) messages regarding aborted transactions. Thus, PA behaves identically to 2PC for committing transactions,
but has reduced message and logging overheads for aborted transactions.
Al-Houmaily Yousef J. et al. 1997b and Lindsay Bruce G. et al. 1984 have demonstrated that, in general, the number of committed
transactions is much larger than the number of aborted transactions [11,67]. Based on this observation, another variant of 2PC, the
Presumed Commit (PC) protocol, was proposed, which attempts to reduce the message and logging overheads for committing transactions rather
than for aborting transactions. The coordinator interprets missing information about a transaction as a commit decision. Unlike PA, the
coordinator has to force-write a log record at the start of the voting phase. This ensures that an undecided transaction is not presumed
committed when the coordinator recovers from a failure.
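The two presumption rules can be contrasted in a small sketch (hypothetical function and record names; the recovery machinery of the actual protocols is omitted):

```python
def resolve_after_failure(protocol: str, coordinator_record):
    """What a recovering site concludes when it queries the coordinator.
    coordinator_record is "COMMIT", "ABORT", or None (no information kept)."""
    if coordinator_record is not None:
        return coordinator_record      # an explicit decision always wins
    if protocol == "PA":
        return "ABORT"                 # Presumed Abort: no information => abort
    if protocol == "PC":
        return "COMMIT"                # Presumed Commit: no information => commit
    raise ValueError("unknown protocol")
```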
(II) One-Phase Commit Protocol
The second group is the One-Phase commit protocol, which eliminates the voting phase of 2PC by enforcing certain properties on the
cohorts' behaviour during transaction execution. It has already been described in section 3.3.1.
(III) Pre Commit/Optimistic Commit Protocol
According to Ramesh et al. 1997b, the optimistic commit protocol concentrates on reducing the lock waiting time by lending the
locks held by committing transactions [36]. Because the lock lending is done in a controlled manner, there is no possibility of
cascading aborts even if the committing transaction is aborted. This type of protocol performs well due to the reduction of the
blocking arising from locks held on prepared data.
(IV) Group Commit Protocol
Gray et al. 1993 and Gray 1978 stated that in the Group commit protocol, transactions are not committed as soon as they are ready [31,
32].
To reduce the number of disk writes, several transactions are grouped together and committed with one disk write. Even though
group commit can enhance system performance by reducing the number of disk writes, it also increases the lock holding time of
committing transactions. Therefore, group commit is usually applied in conjunction with pre-commit. According to Park Taesoon et al.
1999, when pre-commit is applied to a distributed main memory database system, it is possible for global transactions to commit in
an inconsistent order. To prevent such inconsistency, they propose propagating transaction dependency information [82]. Note also
that pre-commit can result in cascading aborts if there is a site failure in the middle of the commit process, which can be
catastrophic for system performance.
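The batching idea behind group commit can be sketched as follows (the class and its interface are hypothetical, chosen only to show how one forced log write can serve a whole batch of commit records):

```python
class GroupCommitLog:
    """Batch commit records so several transactions share one disk write."""

    def __init__(self, group_size: int):
        self.group_size = group_size
        self.pending = []       # transactions waiting for the group flush
        self.committed = []
        self.disk_writes = 0    # forced log flushes actually performed

    def request_commit(self, txn_id) -> None:
        self.pending.append(txn_id)
        if len(self.pending) >= self.group_size:
            self.flush()

    def flush(self) -> None:
        # A single disk write covers the commit records of the whole batch.
        self.disk_writes += 1
        self.committed.extend(self.pending)
        self.pending.clear()
```

Committing six transactions with a group size of three costs two disk writes instead of six, at the price of the earlier transactions in each batch holding their locks until the batch fills, which is exactly the lock-holding-time trade-off noted above.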
3.3.3 Three Phase Commit Protocol
A fault tolerant computer may contain several forms of redundancy, depending on the types of faults it is designed to tolerate. Fault
tolerant computer systems prevent the disruption of services provided to users. A basic problem with all of the above protocols is that
cohorts may become blocked in the case of a site failure and remain blocked until the failed site recovers. For example, if the
coordinator fails after initiating the protocol but before conveying the decision to its cohorts, these cohorts become blocked and remain
so until the coordinator recovers and informs them of the final decision. During the blocked period, the cohorts may continue to
hold system resources, for example locks on data items, making them unavailable to other transactions, which in turn become blocked
waiting for the resources to be relinquished. It is easy to see that, if the blocked period is long, it may result in a major disruption of
transaction processing activity.
As described in Haritsa et al. 2000, to address this failure-blocking problem a three-phase commit (3PC) protocol was proposed
by Skeen Dale [46]. This protocol achieves a non-blocking capability by inserting an extra phase, called the "pre-commit phase",
between the two phases of the 2PC protocol.
In the pre-commit phase, a preliminary decision is reached concerning the fate of the transaction. The information made available to the
participating sites as a consequence of this preliminary decision allows a global decision to be made in spite of a subsequent failure of the
coordinator. Note, however, that the price of gaining non-blocking functionality is an increase in communication overheads, because
there is an extra round of message exchange between the coordinator and the cohorts. These overheads can be reduced by having only
some sites execute the three-phase protocol while the remaining sites execute the cheaper two-phase
protocol. The transaction does not block as long as one of the sites executing the three-phase protocol remains operational.
3.4 Real Time Commit Protocols in Distributed Environment
Owing to its sequence of synchronous messages and its logging cost, commit processing can result in a considerable increase in
transaction execution time. In a distributed real time environment, this is obviously undesirable. It may also result in priority
inversion, since once a cohort reaches the prepared state, it has to retain all its data locks until it receives the global decision from the
coordinator. This retention is vitally necessary to maintain atomicity. So, if a high priority transaction requests access to a data item that
is locked by a "prepared cohort" of lower priority, it is not possible to forcibly obtain access by pre-empting/aborting the low priority
cohort. In this sense, the commit phase in DRTDBS is inherently susceptible to priority inversion. More importantly, the priority
inversion interval is not bounded, since the time that a cohort spends in the prepared state can be arbitrarily long (e.g., due to
network delays or blocking). This is particularly problematic in the distributed context. Therefore, in order to meet transaction
deadlines, the choice of a suitable commit protocol is vital for DRTDBS. In designing commit protocols for DRTDBS, two
questions arise:
(i) How can we decrease the number of missed transactions in the real time system?
(ii) How do we adapt the standard commit protocols to the real time domain?
Researchers have proposed several real time commit protocols in the literature to address these issues. Soparkar et al. 1994 proposed a
protocol that allows individual sites to unilaterally commit [98]. Later, if it is found that the decision is not globally consistent,
compensation transactions are executed to rectify the errors. The problem with this approach is that many actions are irreversible in
nature.
According to Yongik Y. et al. 1996, their scheme is likewise based on allowing individual sites to unilaterally commit;
the idea is that unilateral commitment results in greater timeliness of actions [124]. If it is later found that the decision is not
globally consistent, "compensation" transactions are executed to rectify the errors. Although the compensation-based approach certainly
appears to have the potential to improve timeliness, there remain quite a few practical difficulties: (I) the standard
notion of transaction atomicity is not supported; instead, a "relaxed" notion of atomicity is provided; (II) the design of a
compensating transaction is an application-specific task because it is based on the application semantics; (III) some real actions, for
example firing a weapon or dispensing cash, may not be compensable at all; and (IV) compensation transactions need to be
designed in advance so that they can be executed the instant an error is detected, which means that the transaction workload must be
fully characterized a priori. There are also some difficulties from a performance evaluation viewpoint:
(i) The execution of compensation transactions imposes an additional burden on the system.
(ii) It is not clear how the database system should schedule compensation transactions relative to the normal
transactions.
According to Davidson et al. 1989, 1991, a centralized timed 2PC protocol guarantees that the fate of a transaction (commit or abort) is
known to all the cohorts before the expiry of the deadline when there are no processor, communication, or clock faults [23,24].
In the case of faults, however, it is not possible to provide such guarantees, and an exception state is allowed which indicates
violation of the deadline. Further, the protocol assumes that it is possible for a DRTDBS to guarantee allocation of resources for a
duration of time within a given time interval. Lastly, the protocol is predicated upon knowledge of worst-case communication
delays.
Ramesh Gupta et al. 1996, 2000, 1997a, 1997b, 1996a, 1996b conducted a detailed study of the relative performance of different
commit protocols [33,34,35,36,37,38]. Using a detailed simulation model of a firm-deadline DRTDBS, the authors
evaluated the deadline miss performance of a variety of standard commit protocols including 2PC, PC, PA, and 3PC. They then
investigated and evaluated the performance of a new commit protocol called OPT, designed specifically for the real
time environment, as described in Agrawal, D. et al., 1995; Gupta Ramesh et al., 1997b, 1996a [8,36,37].
OPT is a 2PC variant that attempts to mitigate priority inversion in 2PC and includes features such as controlled optimistic access to
uncommitted data, silent kill, and active abort. These protocols allow a high priority transaction to borrow (i.e., access) data items held by a
low priority transaction that is waiting for the commit message, under the assumption that the low priority transaction will most probably
commit. This creates dependencies among transactions. If a transaction depends on other transactions, it is not allowed to start commit
processing and is blocked until the transactions on which it depends have committed. A blocked committing transaction may accumulate
a chain of dependencies as other executing transactions develop data conflicts with it. Gupta et al. 1997a suggested two variants
of OPT, the Shadow-OPT and Healthy-OPT protocols [35].
In Healthy-OPT, a health factor is associated with each transaction, and a transaction is allowed to lend its data only if its health factor
is greater than a minimum value. However, it does not consider the type of dependency between two transactions. The abort of a lending
transaction aborts all transactions that have borrowed data from it. The performance of the system depends on the selected threshold
value of the health factor (HF), which is defined as the ratio of TimeLeft to MinTime, where TimeLeft is the time left until the deadline of the
transaction and MinTime is the minimum time required for commit processing.
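The Healthy-OPT lending rule stated above can be sketched directly (the function names and the threshold parameter are illustrative, not taken from the cited paper):

```python
def health_factor(deadline: float, now: float, min_commit_time: float) -> float:
    """HF = TimeLeft / MinTime: the time left until the transaction's
    deadline, divided by the minimum time required for commit processing."""
    return (deadline - now) / min_commit_time

def may_lend(deadline: float, now: float,
             min_commit_time: float, threshold: float) -> bool:
    """A transaction lends its data only if its HF exceeds the threshold."""
    return health_factor(deadline, now, min_commit_time) > threshold
```

For instance, a transaction with 6 time units left and a minimum commit time of 2 has HF = 3 and may lend under a threshold of 2.5; with only 1 unit left its HF drops to 0.5 and lending is refused.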
In Shadow-OPT, whenever a cohort borrows a data page, it creates a replica of itself called a shadow. The original cohort continues
its execution while the shadow is blocked at the point of borrowing. If the lending cohort commits, the borrowing cohort continues and
the shadow is discarded; otherwise, if the lender aborts, the borrowing cohort is aborted.
Haritsa et al. 2000 presented a new protocol, Permits Reading of Modified Prepared-data for Timeliness (PROMPT). It is also designed
specifically for the real time environment and includes the features of controlled optimistic access to uncommitted data, active abort,
silent kill, and healthy lending [46]. The performance results for PROMPT demonstrate that it provides considerable improvement
over the classical commit protocols and makes extremely efficient use of the lending premise. Finally, the authors also
evaluated the priority inheritance approach to the priority inversion problem associated with prepared data, but found it to be
inherently unsuitable for the distributed environment. Owing to the sharing of data items, PROMPT creates abort or commit dependencies between the
conflicting transactions. These dependencies constrain the commit order of the transactions and thus may cause a transaction to miss its
deadline while it waits for the transactions it depends on to commit. The impact of buffer space and admission control is
not studied, since buffer space is assumed to be sufficiently large to allow the retention of data updates until commit time.
Lam et al. 1997, 1999 proposed the deadline-driven conflict resolution (DDCR) protocol, which integrates transaction commitment and
concurrency control for firm real time transactions [58,62]. DDCR resolves transaction conflicts according to the
dependency relationship between the lock requester and the lock holder, maintaining three copies of each modified data item (the
before, after, and further copies). This not only creates an additional workload on the system but also suffers from priority inversion problems.
The serializability of the schedule is ensured by checking the before and after sets when a transaction needs to enter the
decision phase. The protocol aims to reduce the impact of a committing transaction on the executing transactions that depend on
it. Conflict resolution in DDCR is divided into two parts: (I) resolving conflicts at conflict time; and (II) reversing the commit
dependency when a transaction that depends on a committing transaction needs to enter the decision phase and its deadline is
approaching. If a data conflict occurs between a committing and an executing transaction, system performance is affected.
Based on DDCR, Pang et al. (1998) proposed an enhancement called DDCR with similarity (DDCR-S) to resolve
executing-committing conflicts in DRTDBS with mixed criticality and consistency requirements among transactions [79]. In DDCR-S,
conflicts involving transactions with looser consistency requirements are resolved using the concept of similarity, so that a higher
degree of concurrency can be achieved while the consistency requirements of the transactions are still met. The
simulation outcomes demonstrate that DDCR-S considerably improves overall system performance compared with
the original DDCR approach.
Based on the PROMPT and DDCR protocols, Qin Biao et al. (2003) suggested the double space commit (2SC) protocol [83]. They
classified all dependencies that may arise from data access conflicts between transactions into two types: commit
dependencies and abort dependencies.
Building on this, B. Qin and Y. Liu proposed the 2SC (double space) real-time commit protocol [16]. 2SC
allows a non-healthy transaction to lend its locked data to the transactions in its commit dependency set. When a prepared transaction
aborts, only the transactions in its abort dependency set are aborted, while the transactions in its commit dependency set execute as
normal. These two properties of 2SC reduce data inaccessibility and the priority inversion inherent in distributed
real-time commit processing. The 2SC protocol uses a blind write model. Extensive simulation experiments comparing
the performance of 2SC with that of other protocols such as PROMPT and DDCR show that 2SC has the
best performance. Furthermore, it is easy to incorporate into any current concurrency control protocol.
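The two dependency sets of 2SC can be sketched as follows. This is an illustrative model of the behaviour described above, not the published algorithm; how transactions enter each set is stated in the comments as an assumption.

```python
# Illustrative sketch of 2SC's dependency bookkeeping: a reader of the
# prepared transaction's dirty write joins its abort-dependency set,
# while a writer over data it read joins its commit-dependency set.
# On abort, only the abort-dependency set is aborted.

class Prepared:
    def __init__(self):
        self.abort_dep = set()    # must abort if the lender aborts
        self.commit_dep = set()   # may commit only after the lender commits

def on_abort(lender, active):
    """When the prepared lender aborts, abort only its abort-dependency
    set; commit-dependent transactions keep executing normally."""
    return {t for t in active if t in lender.abort_dep}

p = Prepared()
p.abort_dep.add("T2")             # T2 read p's uncommitted write
p.commit_dep.add("T3")            # T3 overwrote data p had read
victims = on_abort(p, {"T2", "T3", "T4"})
assert victims == {"T2"}          # T3 and T4 survive the abort
```

Separating the two sets is exactly what limits the abort chain compared with treating every dependent transaction alike.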
3.5 Memory Optimization
According to Garcia-Molina et al. (1992), the most important database system resources are the data items, which can be viewed as
logical resources, and the CPU, main memory, and disks, which are physical resources [28]. The cost of main memory is falling
rapidly and its size is growing, but database sizes are also growing very quickly. When databases are of limited size, or grow more
slowly than memory capacities increase, they can be kept in main memory for real-time applications. As Ramakrishnan
et al. observed, however, many real-time applications handle large amounts of data and require support for
intensive transaction processing [85].
The amount of data they store is too large (and too expensive) to keep in non-volatile main memory. Examples include satellite
image data, telephone switching, radar tracking, computer-aided manufacturing, and media servers; in these cases the database
cannot easily be accommodated in main memory. Therefore, such database systems are disk resident. The buffer
space in main memory is used to store the execution code, copies of files and data pages, and any temporary objects produced.
Ramamritham et al. (2004) suggested that, with the new functionalities and features of lightweight devices, new protocols and
policies are needed to improve memory utilization [87]. They utilized a novel storage model, ID-based
storage, which reduces storage costs significantly [87], and presented an exact algorithm for allocating memory among the database
operators. Because of its high complexity, they also proposed a heuristic solution based on the advantage of an operator per unit of
memory allocation.
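The flavour of such a heuristic can be sketched as a greedy allocation by "advantage per unit memory". This is not the authors' algorithm; the operator names, numbers, and the fractional-knapsack-style greedy rule are illustrative assumptions.

```python
# Sketch of a heuristic in the spirit described above: spend a fixed
# memory budget on operators in decreasing order of benefit per unit of
# memory, granting each as much as remains of its request.

def allocate(operators, budget):
    """operators: list of (name, memory_needed, benefit)."""
    plan = {}
    # Highest benefit-per-unit-memory ("advantage") first.
    for name, mem, benefit in sorted(operators,
                                     key=lambda o: o[2] / o[1],
                                     reverse=True):
        grant = min(mem, budget)
        if grant > 0:
            plan[name] = grant
            budget -= grant
    return plan

ops = [("hash_join", 40, 100), ("sort", 30, 90), ("scan", 50, 50)]
# sort has the best ratio (3.0/unit), then hash_join (2.5), then scan (1.0)
assert allocate(ops, budget=80) == {"sort": 30, "hash_join": 40, "scan": 10}
```

The greedy rule is cheap to compute, which is the point of replacing the exact (high-complexity) allocation algorithm.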
3.6 Real Time Buffer Management
W. Effelsberg and T. Haerder (1984) observed that data storage in a database management system is organized into a hierarchy, the
lowest level being secondary storage (disk), followed by primary memory and high-speed cache. A DBMS must optimize the
organization of and access to these storage areas with constrained policies to increase data manipulation throughput and to provide
real-time support [121].
Huang Jiandong and Stankovic (1990) noted that buffer management plays a very important role in database systems, but that little
work had been done to study it in real-time database systems. They proposed and evaluated algorithms for real-time-oriented
buffer allocation and buffer replacement based on the existing organization of a real-time database testbed, with the goal of
increasing the percentage of transactions meeting their deadlines. The experimental outcomes obtained from the testbed indicate that
under two-phase locking, the real-time-oriented buffer management schemes do not significantly improve system performance; rather,
other integrated processing components such as conflict resolution and CPU scheduling play a more important role in the system.
International Journal of Recent Trends in Engineering & Research (IJRTER)
Volume 03, Issue 05; May - 2017 [ISSN: 2455-1457]
@IJRTER-2017, All Rights Reserved 143
They showed that data contention is a constraint on the performance improvement of buffer management; under data contention,
conflict resolution becomes a key factor in real-time transaction processing. In addition, CPU scheduling is more important than buffer
allocation, even if the system is not CPU bound. They discuss reasons for these results and give suggestions as to where and how
real-time buffer management may improve real-time transaction performance.
Ozsoyoglu and Snodgrass (Aug. 1995) discussed that primary memory is organized into a limited number of physical pages shared
between active transactions. These pages and their management form the critical link between the database's high-level
functions (transaction processing, concurrency control, recovery, etc.) and the physical realization of data within the logical database as
viewed by these upper-level functions. For each logical page reference, the buffer manager performs the following tasks [78]:
(i) The buffer is searched for the requested page.
(ii) If the page is not currently in the primary memory buffers, the buffer manager must retrieve the page from secondary memory.
(a) If free frames are available, the page is assigned to a physical memory frame.
(b) If there are no free frames, a replacement page must be selected and then moved to secondary storage.
(iii) The page reference is recorded in the buffer manager's page table.
(iv) The requested page, once retrieved, is placed in the buffer memory frame.
(v) The address of the corresponding buffer frame is returned to the transaction manager to be used in physically accessing data
items held on that page.
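The five steps above can be sketched as a small fetch routine. The fixed frame count, the FIFO eviction policy, and the disk-I/O stubs are illustrative assumptions, not part of the cited description.

```python
# The buffer manager's page-reference tasks (i)-(v), sketched with a
# tiny frame pool and FIFO replacement.

from collections import OrderedDict

FRAMES = 3
buffer = OrderedDict()            # page_id -> frame contents (page table)

def read_from_disk(page_id):      # stand-in for secondary-storage I/O
    return f"data:{page_id}"

def write_to_disk(page_id, data): # stand-in for flushing a victim page
    pass

def fetch(page_id):
    # (i) search the buffer for the requested page
    if page_id in buffer:
        return buffer[page_id]
    # (ii) page miss: bring the page in from secondary memory
    if len(buffer) >= FRAMES:     # (ii.b) no free frame: pick a victim
        victim, data = buffer.popitem(last=False)
        write_to_disk(victim, data)
    # (ii.a)/(iii)/(iv) load the page and record it in the page table
    buffer[page_id] = read_from_disk(page_id)
    # (v) return a handle the transaction manager can use
    return buffer[page_id]

for p in [1, 2, 3, 4, 1]:
    fetch(p)
assert set(buffer) == {3, 4, 1}   # page 1 was evicted, then re-fetched
```

Real buffer managers differ mainly in step (ii.b): the replacement policy is where real-time-oriented schemes intervene.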
Ersan Kayan and Özgür Ulusoy (1999) [27] introduced critical issues related to real-time (RT) transaction management in mobile
computing systems. Taking into account the timing requirements of applications supported by mobile computing systems, they provide
a transaction execution model with two alternative execution strategies for mobile transactions, and evaluate the performance of the
system under various mobile system characteristics. The performance metric used in all the experiments was the Success Ratio
(SR); they noted that the handoff process leads to a decrease in the performance of ESMH.
Udai Shanker et al. (2010) [114] discussed a modified real-time (RT) commit protocol for distributed real-time database
systems (DRTDBS), which allows Commit dependent and in Time borrowers for Incredible Value added data lending without
Extended abort chain.
Shishir Kumar and Sonali Barvey (2009) [97] evaluated two-phase commit protocols and their variants on the basis of both cost and
time. They presented a new commit protocol which is non-blocking (NB) and can give even better performance in reliable systems
where the failure rate is not very high.
Xiai Yan et al. (2012) [122] analyzed a protocol adapted to distributed real-time transaction commit, which avoids the blocking
problem in transaction processing through coordinator redundancy.
IV. ISSUES AND CHALLENGES IN REAL TIME DATABASE SYSTEMS (RTDBS)
DiPippo and Wolfe (1997) and Hansen and Hansen (2000) describe a database as an integrated and shared base of persistent data used
by different kinds of users in a multiplicity of programs. Databases and database systems have become a very important part
of day-to-day life in modern society; in the course of a day, most of us engage in numerous activities that involve some interaction with
databases [25, 40].
Data management in Real Time Systems (RTS) has traditionally been implemented via application-dependent designs. The main
drawback of this approach is that, as applications grow in complexity and amount of data, the code that deals with data
management becomes increasingly difficult to develop and maintain. Real Time Database Systems (RTDBS) are the most promising
alternative for managing the data with a systematic and structured approach.
Database Systems (DBS) have been successfully applied in many manufacturing systems, and many websites rely on conventional
databases to provide a repository for data and to organize its retrieval and manipulation. However, this type of database is not
suitable for applications with timing requirements (e.g. telecommunications, industrial control, and air traffic control), so it becomes
interesting to apply techniques from Real Time Systems (RTS) research to provide timeliness in addition to the common features of
database systems.
Real Time Systems (RTS) are characterized by managing huge volumes of data. Efficient database management algorithms for
manipulating and accessing data are required to satisfy the timing constraints of supported applications. Real Time Database
Systems (RTDBS) constitute a research area investigating possible ways of applying database systems technology to
RTS. Management of real-time information through a database system requires the incorporation of concepts from both database
systems and real-time systems. A number of new criteria need to be developed to incorporate the timing constraints of real-time
applications into many database system design issues, such as query and transaction processing, data buffering, and CPU and I/O
scheduling. The next section presents a basic understanding of the issues in real-time database systems in a distributed
environment (DRTDBS). Different approaches to various problems of real-time database systems in a distributed environment are
briefly described, and some important possible future research directions are presented.
4.1 Real Time Transaction in Distributed Environment
When user programs interact with the database, partially ordered sets of read and write operations are generated [29]. Such a
sequence of operations on the database is called a transaction; it transforms the current consistent state of the database system
into a new consistent state. There are two types of transactions in DRTDBS:
I. Local Transactions: local transactions are executed at the generating site only.
II. Global Transactions: global transactions are distributed real-time transactions executed at more than one site.
A common model of a transaction in a distributed environment has one process, called the coordinator, which executes at the
site where the transaction is submitted, and a set of other processes, called cohorts, which execute on behalf of the transaction at the
other sites accessed by it.
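The coordinator/cohort structure above can be sketched in a few lines. The class and site names are illustrative assumptions; the only point encoded is that the coordinator routes each operation to a cohort at the site owning the data.

```python
# Minimal sketch of the coordinator/cohort transaction model: the
# coordinator runs at the submitting site and keeps one cohort per
# remote site that the transaction accesses.

class Cohort:
    def __init__(self, site, operations):
        self.site = site
        self.operations = operations   # subtransaction executed at `site`

class Coordinator:
    def __init__(self, home_site):
        self.home_site = home_site
        self.cohorts = {}

    def add_operation(self, site, op):
        """Route each operation to the cohort at the site owning the data."""
        self.cohorts.setdefault(site, Cohort(site, [])).operations.append(op)

coord = Coordinator(home_site="S1")
coord.add_operation("S2", "read(x)")
coord.add_operation("S3", "write(y)")
coord.add_operation("S2", "write(x)")
assert len(coord.cohorts) == 2            # one cohort per accessed site
```

A local transaction, in this picture, is simply one whose only cohort lives at the coordinator's own site.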
In fact, real-time transaction processing in a distributed environment is a form of transaction processing that supports transactions
whose operations are dispersed among different computers or among different databases from different vendors. Thus, in a distributed
real-time transaction, the operations are executed at the sites where the required data items reside and are associated
with time constraints. Transfer of money from one account to another (banking), filing of tax returns, entering marks on a
student's grade sheet, and reservation of train tickets are some examples of real-time transactions in a distributed environment.
The transaction is an atomic unit of work, which is either completed in its entirety or not at all. According to Ramsay et al. (1997),
[Figure: transaction execution model, showing a Transaction Source and Transaction Sink interacting with the CC Manager, Resource Manager, and Recovery Manager via resource requests, CC requests/replies, and start/end-transaction events]
a distributed commit protocol is needed to guarantee the uniform commitment of transaction execution in a distributed setting
[89]. A commit operation implies that the transaction has succeeded, and thus all of its updates should be incorporated into the
database permanently. An abort operation indicates that the transaction has failed, and thus requires the database management system
to cancel all of its effects in the database system. In brief, a transaction is an "all or nothing" unit of execution.
V. ISSUES AND CHALLENGES IN REAL TIME DATABASE SYSTEMS IN DISTRIBUTED ENVIRONMENT (DRTDBS)
The design and implementation of DRTDBS introduce several other interesting problems. Among these, predictability and
consistency are fundamental to real-time (RT) transaction processing, but they sometimes require conflicting actions: to ensure
consistency, we may have to block certain transactions, which may make transaction executions unpredictable and lead
to the violation of timing constraints. Other design issues of DRTDBS are data access mechanisms and invariance, new metrics for
database correctness and performance, maintaining global system information, fault tolerance, security, and failure recovery. There are
a number of other sources of unpredictability, such as site failures, communication delays, and the interaction of transactions with the
underlying operating system (OS) and I/O subsystems.
Though a lot of research has been done on these issues, many challenges remain unresolved. Many real-time (RT)
applications need to share data distributed among multiple sites. In such applications, remote data access involves multi-hop
network operations and takes substantially more time than local data access. Another problem is that, because of the long remote
access time, by the time a transaction obtains all the data it needs, some of the data items may already have become stale.
5.1 Types of Real Time Transaction in Distributed Environment
Real-time transactions fall into three categories: I. hard real-time transactions, II. firm real-time transactions, and III. soft real-time
transactions.
I. Hard Real Time Transaction: it must meet its deadline strictly; a missed deadline may result in a catastrophe. Examples include
embedded control applications, avionics systems, and defense systems. Unfortunately, the problem of implementing hard real-time
transactions on multiprocessors has received relatively little attention (Lam et al., 1996; Liu et al., 1973) [60, 69].
II. Firm Real Time Transaction: missing the deadline does not result in a catastrophe, but the results have no value
after the deadline expires. This is in contrast to a hard real-time database system. StarBase differs from previous real-time databases
in that (a) it relies on a real-time operating system that provides priority-based scheduling and time-based synchronization, and (b) it
deals explicitly with data contention and deadline handling in addition to transaction scheduling, the traditional focus of simulation
studies (Datta et al., 1996; Haritsa et al., 1992) [21, 41].
III. Soft Real Time Transaction: nothing catastrophic happens if some deadlines are missed, but performance degrades below an
acceptable level; still, a substantial fraction of the design effort in these systems goes into making sure that task deadlines are met
(Huang et al., 1992; Kao et al., 1993) [48, 54].
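The three deadline semantics above are often pictured as value functions: what a transaction's result is worth if it completes at time t. The numeric values below are illustrative, not taken from any of the cited papers.

```python
# Value functions for the three deadline categories: hard (catastrophic
# miss), firm (worthless after the deadline), soft (gracefully degrading).

def hard_value(t, deadline):
    # missing a hard deadline is catastrophic: model it as -infinity
    return 100 if t <= deadline else -float("inf")

def firm_value(t, deadline):
    # the result is simply worthless after the deadline
    return 100 if t <= deadline else 0

def soft_value(t, deadline, decay=10):
    # value degrades gradually after the deadline, down to zero
    return 100 if t <= deadline else max(0, 100 - decay * (t - deadline))

assert hard_value(12, 10) == -float("inf")
assert firm_value(12, 10) == 0
assert soft_value(12, 10) == 80
```

The distinction matters operationally: a firm transaction that misses its deadline should be killed outright, while a soft one may still be worth finishing.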
5.2 Transaction Execution Model in Distributed Environment
According to Agrawal et al. (1992) and Mittal et al. (2004), there are two types of distributed transaction execution model:
I. sequential and II. parallel [7, 72].
I. Sequential Execution Model: in the sequential execution model, there can be at most one cohort of a transaction at each execution
site, and only one cohort can be active at a time. After the successful completion of one operation, the next operation in the sequence
is executed by the appropriate cohort. At the end of execution of the last operation, the transaction can be committed [7].
II. Parallel Execution Model: in the parallel execution model, the coordinator of the transaction spawns all cohorts together and sends
them for execution at their respective sites [13]. All cohorts then execute in parallel. The assumption here is that the operations
performed by one cohort during its execution at one site are independent of the results of the operations performed by any other
cohort at another site; in other words, the sibling cohorts do not need to share any information with each other
(Audsley et al., 1993) [13].
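The contrast between the two models can be sketched as follows. The cohort work is simulated and the names are illustrative; the only point encoded is that sequential execution activates one cohort at a time, while parallel execution spawns them all at once (valid only under the independence assumption above).

```python
# Sequential vs. parallel cohort execution, with simulated cohort work.

from concurrent.futures import ThreadPoolExecutor

def run_cohort(site, ops):
    return f"{site}:done({len(ops)} ops)"

def execute_sequential(cohorts):
    # at most one active cohort; each waits for the previous to finish
    return [run_cohort(site, ops) for site, ops in cohorts]

def execute_parallel(cohorts):
    # the coordinator spawns all cohorts together; they run concurrently
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(run_cohort, s, ops) for s, ops in cohorts]
        return [f.result() for f in futures]

cohorts = [("S2", ["read(x)"]), ("S3", ["write(y)", "read(z)"])]
# same results either way when cohorts are independent
assert execute_sequential(cohorts) == execute_parallel(cohorts)
```

Parallel execution shortens the critical path at the cost of that independence assumption, which is why it suits real-time deadlines better when it applies.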
Time, expressed in the form of a deadline, is a critical factor in distributed real-time transactions [22]. The completion of a
transaction on or before its deadline is one of the most important performance objectives of DRTDBS, and one of the most significant
obstacles is data conflict among transactions [75]. Other issues include scheduling and management of distributed transactions, the
possibility of distributed deadlocks, optimizing the use of memory, and deadline assignment strategies. The framework of a real-time
database system in a distributed environment is shown in Figure 1.1.
Fig. 1.1: The Framework of Real Time Database System in Distributed Environment Model [75]
According to Ozsoyoglu et al. (1995), time expressed in the form of a deadline is a critical factor to be considered in distributed
real-time transactions [78].
Stankovic et al. (1991) and Tay (1995) suggested that the completion of transactions on or before their deadlines is one of the vital
performance objectives of DRTDBS [101, 107]. Several important factors contribute to the difficulty of meeting transaction deadlines
in a distributed environment (Ginis et al., 1996) [30]. Datta et al. (2000) found that one of the most
significant factors is data conflict among transactions [22].
According to Burger et al. (1997), Chakravarthy et al. (1994), Gehani et al. (1996), Haritsa et al. (1990; 2001), Huang (1991),
Lindström (1999), Ramamritham et al. (1992), Srinivasa (2002), Tay et al. (1985), Ulusoy et al. (1998), and Ulusoy (1992), a data
conflict between executing transactions is referred to as an executing-executing conflict, while a conflict between an executing and a
committing transaction is termed an executing-committing conflict. Executing-executing conflicts are resolved by concurrency control
protocols in the distributed environment, and a number of real-time concurrency control protocols have been proposed for this
purpose [18, 19, 29, 43, 45, 48, 68, 86, 99, 108, 118, 119].
When a data conflict occurs between an executing and a committing transaction, a commit protocol has to work with the concurrency
control protocol to handle it and to ensure transaction atomicity. The traditional commit protocols block the lock-requesting
transactions until the lock-holding transaction releases the lock. The blocked transactions may critically affect the performance of
DRTDBS, particularly when failures occur during the commitment phase.
Some of the other issues in DRTDBS are scheduling of distributed transactions, management of distributed transactions, fault
tolerance, security, failure recovery, optimizing the use of memory, the possibility of distributed deadlocks (Bestavros et al., 1997)
[15], deadline assignment strategies (Lam et al., 2000; Lam et al., 1997) [60, 61], and the difficulty of maintaining global system
information. Among these issues, only priority policies for transaction scheduling, commit protocols, memory optimization, and buffer
management are considered here. The following sections review the literature that addresses these issues.
Embedding a DBMS in an OS environment, in which it is usually treated like a normal application program, can have compounding
effects on buffer management. Since the DBMS runs as program code, its virtual address space, as well as the database
management system (DBMS) buffer, is paged by OS memory management unless it is made resident in main memory. While
replacement of buffer pages is done by the database management system according to logical references, paging of main memory
frames is performed by independent OS algorithms based on the addressing behaviour within the main memory frames. In such an
environment, the following kinds of faults can occur:
1. Buffer faults
2. Page faults
3. Double-page faults
Because the database management system (DBMS) can keep several pages per transaction in FIX status, it is possible that a shortage
of buffer frames will cause a resource deadlock: an additional page is requested, yet no page in the buffer can be replaced if all are
flagged with FIX status. This situation is especially threatening with small buffer sizes.
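The FIX-status hazard described above can be made concrete with a small sketch. The class, the two-frame pool, and the error handling are illustrative assumptions; the point encoded is that only unfixed pages are eligible for replacement.

```python
# Sketch of a buffer where pages can be fixed (pinned). If every frame
# is fixed, a further page request cannot be satisfied by replacement,
# producing the resource deadlock described in the text.

class Buffer:
    def __init__(self, frames):
        self.frames = frames
        self.pages = {}           # page_id -> fixed? (pin flag)

    def fetch(self, page_id, fix=False):
        if page_id in self.pages:
            self.pages[page_id] = self.pages[page_id] or fix
            return page_id
        if len(self.pages) < self.frames:
            self.pages[page_id] = fix
            return page_id
        # need a victim: only unfixed pages are replaceable
        victims = [p for p, fixed in self.pages.items() if not fixed]
        if not victims:
            raise RuntimeError("buffer deadlock: all frames are fixed")
        del self.pages[victims[0]]
        self.pages[page_id] = fix
        return page_id

buf = Buffer(frames=2)
buf.fetch("p1", fix=True)         # transaction T1 pins p1
buf.fetch("p2", fix=True)         # transaction T2 pins p2
try:
    buf.fetch("p3")               # no unfixed frame can be replaced
except RuntimeError as e:
    print(e)
```

This is why the situation is most threatening with small buffers: the fewer the frames, the easier it is for concurrent transactions to fix them all.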
5.3 Fault Tolerance and Failure Recovery
Failure recovery is frequently the most important aspect of security engineering, yet it is one of the most neglected. For many years,
most research papers on computer security dealt with confidentiality, and most of the rest with authenticity and
integrity; availability has been neglected. Yet the actual expenditures of a typical bank are the other way round: perhaps a third of all
IT costs go to availability and recovery mechanisms, such as hot-standby processing sites and multiply redundant networks; a
few percent are invested in integrity mechanisms such as internal audit; and an almost insignificant amount is spent on confidentiality
mechanisms such as encryption boxes.
Many other applications, from burglar alarms through electronic warfare to protecting a company from Internet-based
denial-of-service attacks, are fundamentally about availability. Fault tolerance and failure recovery are a huge part of the security
engineer's job.
Classical system fault tolerance is usually based on mechanisms such as logs and locking, and is greatly complicated when it must be
made resilient in the face of malicious attacks on these mechanisms. It interacts with security in a number of ways: the failure model,
the nature of resilience, the location of the redundancy used to provide it, and defence against denial-of-service attacks. We use the
following definitions: a fault may cause an error, which is an incorrect state; this may lead to a failure, which is a deviation from the
system's specified behaviour. The resilience that we build into a system to tolerate faults and recover from failures will have a number
of components, such as error recovery, fault detection, and, if necessary, failure recovery. The meanings of mean time before failure
(MTBF) and mean time to repair (MTTR) should be obvious.
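MTBF and MTTR combine into the standard steady-state availability ratio, which helps explain why the spending pattern described above favours recovery mechanisms: lowering MTTR buys availability as directly as raising MTBF does. The example figures are illustrative.

```python
# Steady-state availability from MTBF and MTTR:
# availability = MTBF / (MTBF + MTTR)

def availability(mtbf_hours, mttr_hours):
    return mtbf_hours / (mtbf_hours + mttr_hours)

# e.g. a hot-standby site that cuts effective repair time from 10 h to 0.5 h:
assert round(availability(1000, 10), 4) == 0.9901
assert round(availability(1000, 0.5), 4) == 0.9995
```

In the example, a twenty-fold reduction in repair time turns roughly 99% availability into 99.95% without touching the failure rate at all.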
5.4 QoS Routing
The primary goals of QoS routing (constraint-based routing) are:
• to select routes that can meet the QoS requirements of a connection;
• to increase the utilization of the network.
While determining a route, QoS routing schemes consider not only the topology of the network but also the requirements of the flow,
the resource availability at the links, and so on. Therefore, QoS routing may find a longer but lightly loaded path rather than a
shortest path that is heavily loaded. QoS routing schemes are therefore expected to be more successful than traditional
routing schemes in meeting the QoS guarantees requested by connections.
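One simple way to capture the "longer but lightly loaded path" idea is a widest-path variant of Dijkstra's algorithm, maximizing the bottleneck of available bandwidth instead of minimizing hop count. The topology and bandwidth figures below are illustrative assumptions.

```python
# Widest-path routing: among all routes, pick the one whose narrowest
# link (bottleneck of available bandwidth) is largest.

import heapq

def widest_path(graph, src, dst):
    """graph: {node: [(neighbor, available_bandwidth), ...]}.
    Returns the best achievable bottleneck bandwidth from src to dst."""
    best = {src: float("inf")}
    heap = [(-best[src], src)]            # max-heap on bottleneck width
    while heap:
        width, node = heapq.heappop(heap)
        width = -width
        if node == dst:
            return width
        for nbr, bw in graph.get(node, []):
            w = min(width, bw)            # bottleneck along this path
            if w > best.get(nbr, 0):
                best[nbr] = w
                heapq.heappush(heap, (-w, nbr))
    return 0

# The direct link A-C is heavily loaded (2 Mb/s free); the longer path
# A-B-C has 8 Mb/s free at its bottleneck, so QoS routing prefers it.
net = {"A": [("C", 2), ("B", 10)], "B": [("C", 8)], "C": []}
assert widest_path(net, "A", "C") == 8
```

A shortest-hop router would pick the direct A-C link here and fail an 8 Mb/s flow; the QoS-aware choice succeeds at the cost of one extra hop.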
VI. CONCLUSION
During the course of this study, new methods for priority assignment policies (PAP), transaction commitment (TC), memory
optimization (MO), buffer management (BM), fault tolerance and failure recovery, and QoS routing in real-time database systems in a
distributed environment, together with their simulation, emerged as the fields of special interest. The development of commit
protocols for traditional database systems has been an area of intensive research over the past decade; in the case of distributed
real-time commit protocols, however, very little work has been reported in the literature.
REFERENCES
[1] Shetan Ram Choudhary, C.K. Jha, “Performance Transaction‘s Assessment of Real Time Database System in Distributed Environment”, International Journal of
Engineering Trends and Technology (IJETT), Vol. 4, No. 9, pp. 4113-4118, Sep 2013. (1)
[2] Shetan Ram Choudhary, C.K. Jha, “Performance Evaluation of Real Time Database Systems in Distributed Environment”, Int. J. Computer Technology &
Applications, Vol. 4, No. 5, pp. 785-792, Sept-Oct 2013. (2)
[3] S. R. Choudhary, C.K. Jha, “A Study on Transaction Processing for Real Time Database System in Distributed Environment”, International Journal of
Computer Science and Information Technologies, Vol. 4 (5), pp.721-724,2013. (3)
[4] Abbott Robert and Garcia-Molina, H., “Scheduling Real-Time Transactions with Disk Resident Data,” in Proceedings of the 15th International Conference on
Very Large Databases, Amsterdam, The Netherlands, pp. 385–395, 1989. (3)
[5] Abbott Robert and Garcia-Monila, H., “Scheduling Real-Time Transaction: A Performance Evaluation,” in Proceedings of the 14th International Conference on
Very Large Databases, pp. 1–12, August 1988. (4)
[6] Abdallah Maha, Guerraoui R. and Pucheral P., “One-Phase Commit: Does It Make Sense,” Proceedings of the International Conference on Parallel and
Distributed Systems (ICPADS’98), Tainan, Taiwan, Dec. 14-16, 1998. (5)
[7] Agrawal Divyakant, Abbadi A. El, and Jeffers R. , “Using Delayed Commitment Locking Protocols for Real-Time Databases,” Proceedings of the ACM
International Conference on Management of Data (SIGMOD), San Diego, California, pp. 104–113, June 2-5, 1992. (6)
[8] Agrawal Divyakant, Abbadi A. El, Jeffers R. and Lin, L., “Ordered Shared Locks for Real-Time Databases” International Journals of Very Large Data Bases
(VLDB Journal) Vol. 4, No. 1, pp. 87-126, January 1995. (7)
[9] Al-Houmaily Yousef J. and Chrysanthis P.K., “Atomicity with Incompatible Presumptions,” Proceedings of the 18th ACM Symposium on Principles of
Database Systems (PODS), Philadelphia, June 1999. (10)
[10] Al-Houmaily Yousef J. and Chrysanthis P.K., “In Search for An Efficient Real-Time Atomicity Commit Protocol,” Url = citeseer.nj.nec.com/47189.html.
http://www.cs.bu.edu/techreports/pdf/ 1996-027-ieee-rtss9.pdf/13.ps (11)
[11] Al-Houmaily Yousef J., Chrysanthis P.K. and Levitan Steven P., “An Argument in Favor of the Presumed Commit Protocol,” Proceedings of the IEEE
International Conference on Data Engineering, Birmingham, April 1997b. (12)
[12] Al-Houmaily Yousef J., Chrysanthis P.K. and Levitan Steven P., “Enhancing the Performance of Presumed Commit Protocol,” Proceedings of the ACM
Symposium on Applied Computing, San Jose, CA, USA, February 28-March 1, 1997a. (13)
[13] Audsley Neil C., Burns A., Richardson M.F. and Wellings, A.J., “Data Consistency in Hard Real-Time Systems,” YCS 203, Department of Computer Science,
University of York, June 1993. (4) (17)
[14] Bandaru Vishnu Roopini, “Transaction Management Policy in Distributed Real Time System”, International Journal of Soft Computing and Engineering
(IJSCE) ISSN: 2231-2307, Vol. 3, Issue-2, May 2013. (5)
[15] Bestavros Azer, Lin K.J. and Son S.H., “Real-Time Database Systems: Issues and Applications,” Kluwer Academic Publishers, 1997. (21)
[16] Qin Biao, Liu Yunsheng and Yang JinCai, “A Commit Strategy for Distributed Real-Time Transaction”, J. Comput. Sci. Technol., Vol. 18, No. 5, pp. 626-631,
2003. (22)
[17] Buchmann, A., D. McCarthy, M. Hsu, and U. Dayal, “Time-Critical Database Scheduling: A Framework for Integrating Real-Time Scheduling and
Concurrency Control,” Proceedings of the 5thData Engineering Conference, February 1989. (6)
[18] Burger Albert, Kumar Vijay and Hines Mary Lou, “Performance of Multiversion and Distributed Two-Phase Locking Concurrency Control Mechanisms in
Distributed Databases,” International Journal of Information Sciences, Vol. 96, No. 1-2, pp. 129-152, 1997. (26)
[19] Chakravarthy Sharma, Hong D. and Johnson T., “Real Time Transaction Scheduling: A Framework for Synthesizing Static and Dynamic Factors,” TR-008,
CISE Dept., University of Florida, 1994. (30)
[20] Chen, Hong-Ren and Chin, Y.H., “Efficient Priority Assignment Policies for Distributed real time database systems”, Proceedings of the 2007 WSEAS
International Conference on Computer Engineering and Applications, Gold Coast, Australia, January 17-19, 2007. (33)
[21] Datta Anindya, Mukherjee S., Konana F., Yiguler I.R. and Bajaj, A., “Multiclass Transaction Scheduling and Overload Management in Firm Real-Time Database
Systems,” Information Systems, Vol. 21, No. 1, pp. 29-54, 1996. (35)
[22] Datta Anindya, Son S. H. and Kumar Vijay, “Is a Bird in the Hand Worth More Than Two in the Bush? Limitations of Priority Cognizance in Conflict
Resolution for Firm Real-Time Database Systems,” IEEE Transactions on Computers, Vol. 49, No. 5, pp. 482-502, May 2000. (36) (7)
[23] Davidson Susan B., Lee I. and Wolfe V., “A Protocol for Timed Atomic Commitment,” in Proceedings of the IEEE 9th International Conference on Distributed
Computing Systems, pp. 199-206, June 5-9, 1989. (37)
[24] Davidson Susan B., Lee I. and Wolfe V., “Timed Atomic Commitment,” IEEE Transactions on Computer, Vol. 40, No. 5, pp. 573-583, 1991. (38)
[25] DiPippo Lisa C. and Wolfe Victor F., “Real-Time Databases,” Database Systems Handbook, Multiscience Press, 1997. (40)
[26] Dogdu Erdogan and Ozsoyoglu G., “Real-Time Transactions with Execution Histories: Priority Assignment and Load Control,” in Proceedings of the 6th
International Conference on Information and Knowledge Management, Las Vegas, Nevada, United States, pp. 301-308, 1997. (41)
[27] Kayan Ersan and Ulusoy Özgür, “Real-Time Transaction Management in Mobile Computing Systems”, DASFAA 1999 Proceedings of the Sixth International
Conference on Database Systems for Advanced Applications pp. 127-134. (8)
[28] Garcia-Molina Hector and Salem K., “Main Memory Database Systems: An Overview,” IEEE Transactions on Knowledge and Data Engineering, Vol 4, No. 6,
pp. 509-516, 1992. (45)
[29] Gehani Narain, Ramamritham K., Shanmugasundaram, J. and Shmueli, O., “Accessing Extra Database Information: Concurrency Control and Correctness,”
Information Systems, Vol. 23, No. 7, pp. 439-462, Nov. 1996. (46)
[30] Ginis Roman and Wolfe Victor Fay, “Issues in Designing Open Distributed Real-Time Databases,” Proceedings of the 4th IEEE International Workshop on
Parallel and Distributed Real-Time Systems, Honolulu, HI, USA, April 15-16, pp. 106-109, 1996. (48)
[31] Gray Jim and Reuter A., “Transaction Processing Concepts and Technique,” Morgan Kaufman, San Mateo, CA, 1993. (49)
[32] Gray Jim, “Notes on Database Operating Systems,” Operating Systems: an Advanced Course, Lecture Notes in Computer Science, Springer Verlag, Vol. 60, pp.
397-405, 1978. (50)
[33] Gupta Ramesh and Haritsa J.R., “Commit Processing in Distributed Real-Time Database Systems,” Proceedings of the National Conference on Software for Real-Time Systems, Cochin, India, pp. 195-204, January 1996.
[34] Gupta Ramesh, “Commit Processing in Distributed On-Line and Real-Time Transaction Processing Systems,” Master of Science (Engineering) Thesis, Supercomputer Education and Research Centre, I.I.Sc. Bangalore, India, 2000.
[35] Gupta Ramesh, Haritsa J.R. and Ramamritham K., “More Optimism About Real-Time Distributed Commit Processing,” Technical Report TR-97-04, Database System Lab, Supercomputer Education and Research Centre, I.I.Sc. Bangalore, India, 1997a.
[36] Gupta Ramesh, Haritsa J.R. and Ramamritham K., “Revisiting Commit Processing in Distributed Database Systems,” Proceedings of the ACM International Conference on Management of Data (SIGMOD), Tucson, May 1997b.
[37] Gupta Ramesh, Haritsa J.R., Ramamritham K. and Seshadri S., “Commit Processing in Distributed Real-Time Database Systems,” Proceedings of the Real-Time Systems Symposium, Washington DC, IEEE Computer Society, San Francisco, Dec. 1996a.
[38] Gupta Ramesh, Haritsa J.R., Ramamritham K. and Seshadri S., “Commit Processing in Distributed Real-Time Database Systems,” Technical Report TR-96-01, Database System Lab, Supercomputer Education and Research Centre, I.I.Sc. Bangalore, India, 1996b.
[39] Gyanendra Kumar Gupta, A.K. Sharma and Vishnu Swaroop, “Hybrid Transaction Management in Distributed Real-Time Database System,” International Journal of Advances in Engineering & Technology, Vol. 1, No. 4, pp. 315-321, Sept. 2011.
[40] Hansen Gary W. and Hansen James V., “Database Management and Design,” Prentice-Hall of India, 2000.
[41] Haritsa Jayant R., Carey M.J. and Livny M., “Data Access Scheduling in Firm Real-Time Database Systems,” Journal of Real-Time Systems, Vol. 4, No. 3, pp. 203-241, 1992.
[42] Haritsa Jayant R., Carey M.J. and Livny M., “Value-Based Scheduling in Real-Time Database Systems,” Technical Report TR-1204, CS Department, University of Wisconsin, Madison, 1991.
[43] Haritsa Jayant R., Carey Michael J. and Livny Miron, “On Being Optimistic about Real-Time Constraints,” Proceedings of the ACM PODS Symposium, April 1990.
International Journal of Recent Trends in Engineering & Research (IJRTER)
Volume 03, Issue 05; May - 2017 [ISSN: 2455-1457]
@IJRTER-2017, All Rights Reserved 147
[44] Haritsa Jayant R., Livny M. and Carey M.J., “Earliest Deadline Scheduling for Real-Time Database Systems,” Proceedings of the 12th IEEE Real-Time Systems Symposium (RTSS), San Antonio, Texas, USA, pp. 232-242, December 1991.
[45] Haritsa Jayant R., Ramamritham K. and Gupta R., “Real-Time Commit Processing,” in Real-Time Database Systems: Architecture and Techniques, eds. Tei-Wei Kuo and Kam-Yiu Lam, Kluwer International Series in Engineering and Computer Science, Vol. 593, Kluwer Academic Publishers, Dordrecht, pp. 227-243, 2001.
[46] Haritsa Jayant R., Ramamritham K. and Gupta R., “The PROMPT Real-Time Commit Protocol,” IEEE Transactions on Parallel and Distributed Systems, Vol. 11, No. 2, pp. 160-181, 2000.
[47] Hong Dong-Kweon, Johnson Theodore and Chakravarthy Sharma, “Real-Time Transaction Scheduling: A Cost Conscious Approach,” Proceedings of the SIGMOD Conference, pp. 197-206, 1993.
[48] Huang Jiandong, “Real-Time Transaction Processing: Design, Implementation and Performance Evaluation,” Ph.D. Thesis, University of Massachusetts, May 1991.
[49] Huang Sheung-Lun, Lam K.W. and Lam K.Y., “Efficient Technique for Performance Analysis of Locking Protocols,” Proceedings of the 12th IEEE International Workshop on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems, Durham, NC, USA, pp. 276-283, Jan. 31-Feb. 2, 1994.
[50] James Cowling and Barbara Liskov, “Granola: Low-Overhead Distributed Transaction Coordination,” Proceedings of the 2012 USENIX Annual Technical Conference, Boston, MA, USA, June 2012.
[51] Jensen E.D., Locke C.D. and Tokuda H., “A Time-Driven Scheduling Model for Real-Time Operating Systems,” Proceedings of the 6th IEEE Real-Time Systems Symposium, December 1985.
[52] Kao Ben and Garcia-Molina H., “Deadline Assignment in a Distributed Soft Real-Time System,” Proceedings of the 13th International Conference on Distributed Computing Systems, pp. 428-437, 1993.
[53] Kao Ben and Garcia-Molina H., “Subtask Deadline Assignment for Complex Distributed Soft Real-Time Tasks,” Technical Report 93-149, Stanford University, 1993.
[54] Kao Ben and Garcia-Molina H., “Deadline Assignment in a Distributed Soft Real-Time System,” IEEE Transactions on Parallel and Distributed Systems, Vol. 8, No. 12, pp. 1268-1274, December 1997.
[55] Kim Young-Kuk and Son S.H., “Predictability and Consistency in Real-Time Database Systems,” in Advances in Real-Time Systems, ed. S.H. Son, Prentice Hall, New York, pp. 509-531, 1995.
[56] Kim Young-Kuk, “Predictability and Consistency in Real-Time Transaction Processing,” Ph.D. Thesis, University of Virginia, May 1995.
[57] Lam Kam-Yiu, “Concurrency Control in Distributed Real-Time Database Systems,” Ph.D. Thesis, City University of Hong Kong, Hong Kong, Oct. 1994.
[58] Lam Kam-Yiu, Cao Jiannong, Pang Chung-Leung and Son S.H., “Resolving Conflicts with Committing Transactions in Distributed Real-Time Databases,” Proceedings of the Third IEEE International Conference on Engineering of Complex Computer Systems, Como, Italy, pp. 49-58, September 8-12, 1997a.
[59] Lam Kam-Yiu, Hung S.L. and Son S.H., “On Using Real-Time Static Locking Protocols for Distributed Real-Time Databases,” Real-Time Systems, Vol. 13, pp. 141-166, 1997.
[60] Lam Kam-Yiu, Law Gary C.K. and Lee Victor C.S., “Priority and Deadline Assignment to Triggered Transactions in Distributed Real-Time Active Databases,” Journal of Systems and Software, Vol. 51, No. 1, pp. 49-60, April 2000.
[61] Lam Kam-Yiu, Lee Victor C.S., Kao Ben and Hung S.L., “Priority Assignment in Distributed Real-Time Databases Using Optimistic Concurrency Control,” IEE Proceedings on Computers and Digital Techniques, Vol. 144, No. 5, pp. 324-330, Sept. 1997.
[62] Lam Kam-Yiu, Pang C., Son S.H. and Cao J., “Resolving Executing-Committing Conflicts in Distributed Real-Time Database Systems,” The Computer Journal, Vol. 42, No. 8, pp. 674-692, 1999.
[63] Lee Inseon and Yeom Heon Y., “A Single Phase Distributed Commit Protocol for Main Memory Database Systems,” 16th International Parallel & Distributed Processing Symposium (IPDPS 2002), Fort Lauderdale, Florida, USA, April 15-19, 2002.
[64] Lee Inseon, Yeom Heon Y. and Park Taesoon, “A New Approach for Distributed Main Memory Database Systems: A Causal Commit Protocol,” IEICE Transactions on Information and Systems, Vol. E87-D, No. 1, pp. 196-204, January 2004.
[65] Lee Victor C.S., Lam K.Y., Kao Benjamin C.M., Lam K.W. and Hung S.L., “Priority Assignment for Sub-Transactions in Distributed Real-Time Databases,” 1st International Workshop on Real-Time Database Systems, 1996.
[66] Lee Victor C.S., Lam Kam-Yiu and Kao B., “Priority Scheduling of Transactions in Distributed Real-Time Databases,” International Journal of Time-Critical Computing Systems, Vol. 16, pp. 31-62, 1999.
[67] Lindsay Bruce G., Haas Laura M., Mohan C., Wilms Paul F. and Yost Robert A., “Computation and Communication in R*: A Distributed Database Manager,” ACM Transactions on Computer Systems (TOCS), Vol. 2, No. 1, pp. 24-38, Feb. 1984.
[68] Lindstrom Jan, “Using Priorities in Concurrency Control for RTDBS,” Seminar on Real-Time and Embedded Systems, Department of Computer Science, University of Helsinki, Autumn 1999.
[69] Liu C.L. and Layland J.W., “Scheduling Algorithms for Multiprogramming in a Hard Real-Time Environment,” Journal of the ACM, Vol. 20, No. 1, pp. 46-61, Jan. 1973.
[70] Locke C.D., “Best-Effort Decision Making for Real-Time Scheduling,” Ph.D. Dissertation, Carnegie-Mellon University, 1986.
[71] M.S. Khatib and Mohammad Atique, “An Analysis of Distributed Real-Time Systems: An Overview,” International Journal of Advanced Technology in Engineering and Science, Vol. 2, No. 6, June 2014.
[72] Mittal Abha and Dandamudi Sivarama P., “Dynamic Versus Static Locking in Real-Time Parallel Database Systems,” Proceedings of the 18th International Parallel and Distributed Processing Symposium (IPDPS’04), Santa Fe, New Mexico, April 26-30, 2004.
[73] Mohan Chandrasekaran and Dievendorff D., “Recent Work on Distributed Commit Protocols, and Recoverable Messaging and Queuing,” Data Engineering Bulletin, Vol. 17, No. 1, 1994.
[74] Mohan Chandrasekaran, Lindsay B. and Obermarck R., “Transaction Management in the R* Distributed Database Management System,” ACM Transactions on Database Systems, Vol. 11, No. 4, pp. 378-396, December 1986.
[75] Mohania Mukesh K., Kambayashi Yahiko, Tjoa A Min, Wagner Roland and Bellatreche Ladjel, “Trends in Database Research: DEXA,” pp. 984-988, 2001.
[76] NG Pui, “A Commit Protocol for Checkpointing Transactions,” Proceedings of the 7th Symposium on Reliable Distributed Systems, Columbus, OH, USA, pp. 22-31, Oct. 10-12, 1988.
[77] O’Neil Patrick, Ramamritham K. and Pu C., “Towards Predictable Transaction Executions in Real-Time Database Systems,” Technical Report 92-35, University of Massachusetts, August 1992.
[78] Ozsoyoglu Gultekin and Snodgrass Richard T., “Temporal and Real-Time Databases: A Survey,” IEEE Transactions on Knowledge and Data Engineering, Vol. 7, No. 4, pp. 513-532, August 1995.
[79] Pang Chung-Leung and Lam K.Y., “On Using Similarity for Resolving Conflicts at Commit in Mixed Distributed Real-Time Databases,” Proceedings of the 5th International Conference on Real-Time Computing Systems and Applications, Oct. 27-29, 1998.
[80] Pang HweeHwa, “Query Processing in Firm Real-Time Database Systems,” Ph.D. Thesis, University of Wisconsin, Madison, 1994.
[81] Pang HweeHwa, Carey Michael J. and Livny Miron, “Multiclass Query Scheduling in Real-Time Database Systems,” IEEE Transactions on Knowledge and Data Engineering, Vol. 7, No. 4, pp. 533-551, August 1995.
[82] Park Taesoon and Yeom H.Y., “A Distributed Group Commit Protocol for Distributed Database Systems,” Seoul National University, Korea, March 1999.
[83] Qin Biao and Liu Y., “High Performance Distributed Real-Time Commit Protocol,” Journal of Systems and Software, Elsevier Science Inc., Vol. 68, No. 2, pp. 145-152, November 15, 2003.
[84] Ramakrishnan Raghu and Gehrke Johannes, “Database Management Systems,” McGraw Hill Publication, New York, January 2003.
[85] Ramakrishnan Raghu and Ullman Jeffrey D., “A Survey of Research on Deductive Database Systems,” www-db.stanford.edu/~ullman/dscb/ch1.pdf.
[86] Ramamritham Krithi and Chrysanthis P.K., “In Search of Acceptability Criteria: Database Consistency Requirements and Transaction Correctness Properties,” Proceedings of the International Workshop on Distributed Object Management, Canada, pp. 211-229, August 1992.
[87] Ramamritham Krithi and Sen Rajkumar, “DELite: Database Support for Embedded Lightweight Devices,” EMSOFT, Pisa, Italy, pp. 3-4, September 27-29, 2004.
[88] Ramamritham Krithi, “Real-Time Databases,” Distributed and Parallel Databases, Special Issue: Research Topics in Distributed and Parallel Databases, Vol. 1,
No. 2, pp. 199-226, April 1993.
[89] Ramsay S., Nummenmaa J., Thanisch P., Pooley R.J. and Gilmore S.T., “Interactive Simulation of Distributed Transaction Processing Commit Protocols,” in Luker P. (ed.), Proceedings of the Third Conference of the United Kingdom Simulation Society (UKSIM’97), Department of Computer Science, University of Edinburgh, pp. 112-127, 1997.
[90] Ryu In Kyung and Thomasian A., “Analysis of Database Performance with Dynamic Locking,” Journal of the ACM, Vol. 37, No. 3, pp. 491-523, July 1990.
[91] S. Takkar and S.P. Dandamudi, “Performance of Hard Real-Time Transaction Scheduling Policies in Parallel Database Systems,” Proceedings of the Sixth International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems, 1998.
[92] Schwan K. (panel chair), “Semantics of Timing Constraints: Where Do They Come From and Where Do They Go?,” Panel Discussion at the 10th IEEE Workshop on Real-Time Operating Systems and Software, New York, NY, May 1993.
[93] Sha Lui, Rajkumar R., Son S.H. and Chang C.H., “A Real-Time Locking Protocol,” IEEE Transactions on Computers, Vol. 40, No. 7, pp. 793-800, 1991.
[94] Sha L., Rajkumar R. and Lehoczky J.P., “Concurrency Control for Distributed Real-Time Databases,” ACM SIGMOD Record, March 1988.
[95] Sheetlani Jitendra and Gupta V.K., “Concurrency Issues of Distributed Advance Transaction Process,” Research Journal of Recent Sciences, Vol. 1, ISC-2011.
[96] Shyu Shiow-Chen and Li Victor O.K., “Performance Analysis of Static Locking in Distributed Database Systems,” IEEE Transactions on Computers, Vol. 39, No. 6, pp. 741-751, June 1990.
[97] Shishir Kumar and Sonali Barvey, “Non-Blocking Commit Protocol,” IJCSNS International Journal of Computer Science and Network Security, Vol. 9, No. 8, August 2009.
[98] Soparkar Nandit, Levy E., Korth H.F. and Silberschatz A., “Adaptive Commitment for Real-Time Distributed Transactions,” Technical Report TR-92-15, Dept. of Computer Science, University of Texas, Austin, 1992; also in CIKM’94 Proceedings of the 3rd International Conference on Information and Knowledge Management, Gaithersburg, Maryland, United States, pp. 187-194, 1994.
[99] Srinivasa Rashmi, “Network-Aided Concurrency Control in Distributed Databases,” Ph.D. Thesis, University of Virginia, Jan. 2002.
[100] Stamos James W. and Cristian Flaviu, “A Low-Cost Atomic Commit Protocol,” Proceedings of the 9th IEEE Symposium on Reliable Distributed Systems, Huntsville, AL, USA, pp. 66-75, Oct. 9-12, 1990.
[101] Stankovic John A., Ramamritham K. and Towsley D., “Scheduling in Real-Time Transaction Systems,” Foundations of Real-Time Computing: Scheduling and Resource Management, eds. Andre van Tilborg and Gary Koob, Kluwer Academic Publishers, pp. 157-184, 1991.
[102] Stankovic J.A. and Ramamritham K., “What is Predictability for Real-Time Systems?,” Journal of Real-Time Systems, Vol. 2, pp. 247-254, Kluwer Academic Publishers, 1990.
[103] Strayer W.T., “Function-Driven Scheduling: A General Framework for Expression and Analysis of Scheduling,” Ph.D. Dissertation, Department of Computer Science, University of Virginia, May 1992.
[104] Susan B. Davidson and Aaron Watters, “Partial Computation in Real-Time Database Systems,” University of Pennsylvania, 1988.
[105] Takkar Sonia and Dandamudi Sivarama P., “An Adaptive Scheduling Policy for Real-Time Parallel Database Systems,” www.mpcs.org/MPCS98/Final_papers/Paper.48.pdf.
[106] Tay Yong Chiang and Suri R., “Choice and Performance in Locking for Databases,” Proceedings of the 10th International Conference on Very Large Databases (VLDB), August 1984.
[107] Tay Yong Chiang, “Some Performance Issues for Transactions with Firm Deadlines,” Proceedings of the Real-Time Systems Symposium, Pisa, Italy, pp. 322-331, Dec. 1995.
[108] Tay Yong Chiang, Goodman N. and Suri R., “Locking Performance in Centralized Databases,” ACM Transactions on Database Systems, Vol. 10, No. 4, pp. 415-462, 1985.
[109] Tay Yong Chiang, Suri R. and Goodman N., “A Mean Value Performance Model for Locking in Databases: The No-Waiting Case,” Journal of the ACM (JACM), Vol. 32, No. 3, pp. 618-651, July 1985.
[110] Teresa K. Abuya, Richard M. Rimiru and Cheruiyot W.K., “An Improved Failure Recovery Algorithm in Two-Phase Commit Protocol for Transaction Atomicity,” Journal of Global Research in Computer Science, Vol. 5, No. 12, December 2014.
[111] Thomasian Alexander and Ryu I.K., “Performance Analysis of Two-Phase Locking,” IEEE Transactions on Software Engineering, Vol. 17, No. 5, pp. 386-402, 1991.
[112] Thomasian Alexander, “Concurrency Control: Methods, Performance and Analysis,” ACM Computing Surveys (CSUR), Vol. 30, No. 1, pp. 70-119, March 1998.
[113] Thomasian Alexander, “Two-Phase Locking Performance and Its Thrashing Behavior,” ACM Transactions on Database Systems, Vol. 18, No. 4, pp. 579-625, 1993.
[114] Udai Shanker, Nikhil Agarwal, Shalabh Kumar Tiwari, Praphull Goel and Praveen Srivastava, “ACTIVE-A Real Time Commit Protocol,” Wireless Sensor Network, Vol. 2, No. 3, pp. 254-263, March 2010.
[115] Ullman Jeffrey D., “Principles of Database Systems,” Galgotia Publication Pvt. Ltd., 1992.
[116] Ulusoy Ozgur and Belford G., “Real-Time Transaction Scheduling in Database Systems,” Information Systems, Vol. 18, No. 8, pp. 559-580, Dec. 1993.
[117] Ulusoy Ozgur and Belford Geneva G., “A Simulation Model for Distributed Real-Time Database Systems,” Proceedings of the 25th Annual Symposium on Simulation, Orlando, Florida, United States, pp. 232-240, 1992.
[118] Ulusoy Ozgur, “Analysis of Concurrency Control Protocols for Real-Time Database Systems,” Information Sciences, Vol. 111, No. 1-4, pp. 19-47, 1998.
[119] Ulusoy Ozgur, “Concurrency Control in Real-Time Database Systems,” Ph.D. Thesis, Department of Computer Science, University of Illinois, Urbana-Champaign, 1992.
[120] V. Manikandan, R. Ravichandran, R. Suresh and F. Sagayaraj Francis, “An Efficient Non-Blocking Two Phase Commit Protocol for Distributed Transactions,” International Journal of Modern Engineering Research (IJMER), Vol. 2, No. 3, pp. 788-791, May-June 2012.
[121] W. Effelsberg and T. Haerder, “Principles of Database Buffer Management,” ACM Transactions on Database Systems, Vol. 9, No. 4, pp. 560-595, Dec. 1984.
[122] Xiai Yan, Jinmin Yang and Qiang Fan, “An Improved Two-Phase Commit Protocol Adapted to the Distributed Real-Time Transactions,” PRZEGLĄD ELEKTROTECHNICZNY (Electrical Review), R. 88, NR 5b, 2012.
[123] Y. Jayanta Singh, Yumnam Somananda Singh, Ashok Gaikwad and S.C. Mehrotra, “Dynamic Management of Transactions in Distributed Real-Time Processing System,” International Journal of Database Management Systems (IJDMS), Vol. 2, No. 2, May 2010.
[124] Yoon Yongik, Han Mikyung and Cho Juhyun, “Real-Time Commit Protocol for Distributed Real-Time Database Systems,” Proceedings of the 2nd IEEE International Conference on Engineering of Complex Computer Systems, pp. 221-225, Oct. 21-25, 1996.
[125] Yousef J. Al-Houmaily et al., “Enhancing the Performance of Presumed Commit Protocol,” in Proceedings of the ACM Symposium on Applied Computing, San Jose, CA, USA, 28 February-1 March 1997.
[126] Yumnam Somananda and Ashok Gaikwad, “Management of Missed Transactions in a Distributed System through Simulation,” IEEE, 978-1-4244-5540, 2010.
[127] Rakshith Basavaraj, “Distributed Database Management System Overview,” https://www.linkedin.com/pulse/distributed-database-management-system-overview-rakshith, 2015.