Košice No. 4, Vol. 4, 2004 Slovak Republic Acta Electrotechnica et Informatica ISSN 1335-8243


C O N T E N T S
WU, X., BRYANT, R. B., MERNIK, M.:
Object-Oriented Pattern-Based Language Implementation 5
KOLLÁR, J.:
MÜHLBACHER, J., NECHANICKÝ, M.:
Renewable Energy Sources in Czech Republic Related with Integration to EU 21
STOIMENOV, L., STANIMIROVIĆ, A., DJORDJEVIĆ-KAJAN, S.:
Interoperable Component-Based GIS Application Framework 25
HÜTTNER, Ľ.:
Thermal Stress of Contacts and Current Path of Low-Voltage Switchgears 32
BOKR, J., JÁNEŠ, V.:
Monadic Predicate Formulae 37
JOUTSENSALO, J.:
SAMUELIS, L., FAŠIANOK, Ľ.:
The Role of Inductive Inference in the Design of Intelligent Tutoring Systems 54
MOSTÝN, V., NEBORÁK, I.:
Methods of Investigation of the Interaction of Driving and Mechanical Subsystems 59
HAVLICE, Z., KOLLÁR, J., CHLADNÝ, V.:
Object-Oriented Analysis/Design of Systems and Data Flow Modeling 64
Instructions for Authors of Contributions to Acta Electrotechnica et Informatica Journal (in Slovak) 71
Instructions for Authors of Contributions to Acta Electrotechnica et Informatica Journal (in English) 73
Object-Oriented Pattern-Based Language Implementation
*Department of Computer and Information Sciences, University of Alabama at Birmingham, USA
E-mail: {wuxi, bryant, mernik}@cis.uab.edu
**Faculty of Electrical Engineering and Computer Science, University of Maribor, Slovenia
E-mail: [email protected]
SUMMARY
Formal methods are often used for programming language description because they can specify syntax and semantics precisely and unambiguously. However, their adoption is hampered by poor reusability and extendibility when they are applied to non-toy programming language design. One cause of this problem is that classical formal methods lack modularity. Moreover, semantic analysis always needs informal constructs, and there is no simple, precise way to express such constructs in a formal specification, which makes the specification too complicated to understand. To address these problems with modern software engineering technology, we combine object-oriented Two-Level Grammar with Java to modularize language components, and we apply design patterns to achieve modularity and to implement the informal constructs properly.
Keywords: design patterns, object-oriented technology, programming language implementation, two-level grammar
1. INTRODUCTION
The advantages of using formal methods for programming language definition are well known: they can specify syntax and semantics in a precise and unambiguous manner and offer the possibility of automatically constructing compilers or interpreters [1]. However, despite these obvious advantages, the most widely used formal methods, such as attribute grammars, axiomatic semantics, operational semantics and denotational semantics [2], have yet to gain wide application due to poor readability, reusability and extendibility [3]. There are several reasons for this; the following two are the most critical:
· Traditional formal specification for language implementation lacks modularity [3]. The different phases of interpreter or compiler implementation (e.g. lexical analysis, syntax analysis and semantics analysis) are always tangled together, and the specification for real programming languages is always very large and complex. However, the traditional formal methods lack mechanisms to encapsulate the language components for tight cohesion inside a module and loose coupling between modules.
· Formal specification for language implementation lacks abstraction. The semantics of a programming language are diverse, which hinders specification by pure formal methods. Many mathematics-based formal specification languages do not provide a strong library mechanism or I/O capabilities, which makes low-level semantics implementation complicated and hard to comprehend; as a result, the specification is hard to reuse even when it is well modularized. On the other hand, general-purpose programming languages (GPLs) such as Java offer an abundant class library that can be used directly with ease.
In order to address these two issues by providing more readability, extendibility and reusability for programming language specifications, we apply object-oriented technology and design patterns [4] on formal specifications and GPL Java to design a framework for automatic parser generation and semantics implementation.
In this framework, we use Two-Level Grammar (TLG) [5] as an object-oriented formal method to encapsulate the entwined lexical/syntax rules and abstract semantics of each grammar symbol into a class, and use Java, a GPL, to address the semantics implementation details and obtain the interpreter for the desired language. We thereby maximize the automatic code generation capacity of formal specification to specify syntax and semantics precisely, and utilize the massive class library of programming languages such as Java to avoid overly complicated use of formal methods. As a result, reusing or extending the language can be achieved easily by rewriting or extending the terminal symbol classes.
The remainder of this paper is structured as follows. Section 2 introduces the TLG specification and the concepts of abstract semantics and concrete semantics. Section 3 gives an overview of the whole framework and some of its salient features. Section 4 details the object-oriented design of language implementation in this framework with respect to the interpreter pattern [4]. Section 5 presents how we use the chain-of-responsibility pattern [4] to separate the formal and informal concerns in semantics analysis, and Section 6 demonstrates in depth how readability, extendibility and reusability are obtained with our approach. Section 7 describes related work. We conclude and suggest future research in Section 8.
2. BACKGROUND KNOWLEDGE
2.1 Two-Level grammar specification
TLG (also called W-grammar) was originally developed as a specification language for programming language syntax and semantics and was used to completely specify ALGOL 68 [6]. It has been shown that TLG may be used as an object-oriented formal specification language to be translated into existing GPLs [7]. The name “two-level” comes from the fact that TLG contains two context-free grammars corresponding to the set of type domains and the set of function definitions operating on those domains respectively. The syntax of TLG class definitions is:
class Identifier-1 extends Identifier-2, …, Identifier-n.
end class.
Identifier-1 is declared to be a class which may inherit from classes Identifier-2, …, Identifier-n (the extends clause is optional).
The type domain declarations (also called meta rules) have the following form:
Id-1, …, Id-m :: DataType-1; …; DataType-n.
which means that the union of DataType-1, …, DataType-n forms the type definition of Id-1, …, Id-m.
The function definitions (also known as hyper rules) have the following form:
function-signature : function-body-1 ; … ; function-body-n.
The function body on the right-hand side of ':' specifies the rules for the function signature on the left-hand side. The symbol ';' delimits multiple rule bodies that share the same function signature. For more details on the TLG specification language, see [5].
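Putting the pieces together, a TLG class mixing meta rules and hyper rules has the following overall shape (a purely structural sketch using the placeholder names above, not a fragment from any actual specification). The first inner line is a meta rule declaring two type domains; the remaining lines are hyper rules, the last of which has two alternative bodies delimited by ';':

```text
class Identifier-1.
   Id-1, Id-2 :: DataType-1; DataType-2.
   function-signature-1 : function-body-1.
   function-signature-2 : function-body-2a ; function-body-2b.
end class.
```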
In this framework, we rewrite the TLG keyword class as terminal and nonterminal, and use the following TLG notations for different constructs in the class for each grammar symbol:
· Meta-level keyword Lexeme for lexical rules
· Meta-level keyword Syntax for syntax rules
· Hyper-level keyword semantics for semantics
2.2 Abstract semantics and concrete semantics
One distinguishing feature of our approach is that we separate abstract semantics and concrete semantics in compiler design, using formal specification and a GPL to handle them respectively. In this paper, abstract semantics refers to the semantics of a nonterminal that describe the composition of this nonterminal from other grammar symbols. This kind of semantics can easily be specified by a formal specification such as TLG. For example, if a program is composed of declarations and statements, then the semantics for program can be specified in TLG as:
nonterminal Program.
  Syntax: Declarations Statements.
  semantics: Declarations semantics, Statements semantics.
end nonterminal.
which means that the semantics of Program is simply the composition of the semantics of Declarations and the semantics of Statements.
On the other hand, concrete semantics refers to semantics whose implementation is very low-level or operating-system related, such as a calculation on two complex objects (e.g. two matrices) or any I/O operation. Such semantics is difficult to specify with formal methods and can make the specification quite complex and low-level; however, it is easy to implement directly in a GPL. So our goal is to separate the abstract semantics from the concrete semantics and to have them specified and implemented by TLG and Java, respectively. We elaborate on this in the following sections.
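The split can be sketched in Java as below. This is a hypothetical illustration, not the framework's generated code: all class and method names are assumptions, and string return values stand in for real semantic actions. The point is that the nonterminal's semantics method merely composes its children (easily stated in TLG), while the leaves carry the concrete work written directly in Java.

```java
// Hypothetical sketch of the abstract/concrete split; names are
// illustrative, not the framework's generated code.

// Concrete semantics live at the leaves and are written directly in Java.
class DeclarationsNode {
    String semantics() { return "int a"; }   // e.g. process a declaration
}
class StatementsNode {
    String semantics() { return "print a"; } // e.g. execute a statement
}

// Abstract semantics: Program merely composes its children, which is
// exactly what the TLG rule for Program states.
class ProgramNode {
    DeclarationsNode declarations;
    StatementsNode statements;
    ProgramNode(DeclarationsNode d, StatementsNode s) {
        declarations = d;
        statements = s;
    }
    String semantics() {
        return declarations.semantics() + "; " + statements.semantics();
    }
}

class SemanticsSplit {
    public static void main(String[] args) {
        ProgramNode p = new ProgramNode(new DeclarationsNode(), new StatementsNode());
        System.out.println(p.semantics()); // prints "int a; print a"
    }
}
```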
3. OVERVIEW
Figure 1 provides the control flow of programming language implementation in this framework. Tools are shown in ellipses, shaded boxes contain generated code, and arrows denote control flow. To describe a language, the user specifies the lexical rules, syntax rules and abstract semantics for each grammar symbol (terminal or nonterminal) in a single TLG class. The framework takes the TLG class file as input and extracts the lexical and syntax rules, which are compiled by the lexer generator JLex [8] and the parser generator CUP [9] to produce the corresponding lexer and parser in Java. Meanwhile, Java classes and interfaces for nonterminals and class structures (class names, method signatures, etc.) for terminals are generated into two separate files. Users can later add Java code for concrete semantics analysis to the second file, which is the only file users need to manage at the programming language level. Once the lexer, parser and semantics classes are compiled together (using javac), an interpreter in Java byte code is produced. The lexer, parser and semantics Java classes are related as follows: the parser takes tokens produced by the lexer as input, creates semantic objects of the Java classes, and builds the Abstract Syntax Tree (AST) by calling the construction methods of these objects.
4. OBJECT-ORIENTED MODULARIZATION
Fig. 2  The context-free grammar of Sam
To design the TLG specification in this framework, we apply the interpreter pattern [4], treating each grammar symbol as a class. For illustrative purposes, we will explore how the framework models grammars based on a sample language named Sam, which is a very simple language for specifying computations involving integer arithmetic only. Figure 2 is the context-free grammar of the Sam language. Symbols in bold stand for terminals, in which quoted strings and characters stand for keywords/meta-symbols/operators, integer and id stand for integer values and identifiers, respectively. The other symbols are nonterminals.
Nonterminal symbol classes: Each nonterminal symbol must have an associated class. For each production rule of the form R ::= R1 R2 ... Rn, we create a class for the left-hand-side (LHS) nonterminal R and specify the syntax rule using the TLG keyword Syntax followed by the right-hand side (RHS) of the production, R1 R2 ... Rn. The syntax rule not only helps direct the grammar specification in CUP but also generates the constructor method of each Java class to store the instance variables for R1 R2 ... Rn. For example, the TLG class for nonterminal Binary_expression is:
nonterminal Binary_expression.
  Syntax: Expression Binary_operator Expression.
end nonterminal.
and the generated Java class is:
class Binary_expression {
  Expression expression1;
  Binary_operator binary_operator;
  Expression expression2;
}
The semantics of R is represented by the keyword semantics, followed by the semantics operations of R1 R2 ... Rn; in the generated Java code, the semantics implementation is obtained by applying the method semantics() iteratively on the instance variables representing R1 R2 ... Rn. For example, the semantics of nonterminal program in Sam is composed of the semantics of nonterminals declaration-list and statement-list. However, the nonterminals can also directly delegate the responsibility of implementing concrete semantics to terminals, as described in the next section.
Notice that if a nonterminal is the LHS of several different productions, then all the corresponding productions should be unit productions [10], i.e. have only one RHS variable (if a non-unit production exists for such a nonterminal, we can easily eliminate it by rewriting the original grammar). We make the LHS variable a super class (i.e. an interface in Java), with each RHS variable as its subclass. Since interfaces cannot be instantiated in Java, all the semantics of this super class are implemented entirely by its subclasses. This technique reduces the number of generated AST nodes and provides a proper level of abstraction for those LHS nonterminals, as illustrated in Figure 3, where the AST of a print statement "print a" is presented and shaded boxes represent the actual AST nodes.
Fig. 3  The AST for a print statement
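The unit-production technique can be sketched in Java as follows. This is a hypothetical illustration with assumed names (Statement, PrintStatement, AssignStatement), not the framework's generated code: the LHS nonterminal becomes an interface, each RHS alternative implements it, and the AST holds the subclass directly with no separate node for the LHS symbol.

```java
// Sketch of the unit-production technique (hypothetical names): the LHS
// nonterminal Statement becomes a Java interface, and each RHS
// alternative implements it, so no separate Statement node is built.
interface Statement {
    String semantics();
}
class PrintStatement implements Statement {
    String target;
    PrintStatement(String target) { this.target = target; }
    public String semantics() { return "print " + target; }
}
class AssignStatement implements Statement {
    String id;
    int value;
    AssignStatement(String id, int value) { this.id = id; this.value = value; }
    public String semantics() { return id + " = " + value; }
}
class UnitProductions {
    public static void main(String[] args) {
        // the AST holds the subclass directly where a Statement is expected
        Statement s = new PrintStatement("a");
        System.out.println(s.semantics()); // prints "print a"
    }
}
```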
Terminal symbol classes: Each terminal symbol in a grammar may have its own class. The lexical rule for each terminal is defined in its own class using the keyword Lexeme followed by the quoted regular expression of the symbol. However, to avoid creating too many terminal classes, we give a class only to the non-trivial tokens. Here, non-trivial tokens are the semantics-related tokens such as Identifier, Integer, Operator, etc. Trivial tokens are those with no significant semantic contribution, such as meta-symbols, whose lexemes can be specified in the syntax of the parent class. The semantics interface associated with a terminal symbol is also introduced in the terminal class, following the hyper-level keyword semantics. A corresponding Java class interface is generated, into which the user can add concrete Java code directly. For example, the binary operator "+" in Sam has the following TLG class:
terminal PLUS.
  Lexeme :: "+".
end terminal.
Notes: The generated Java class for this TLG class will contain the interface of a method for concrete semantics implementation in Java, with Expression1 and Expression2 as parameters (both of type Expression).
The use of the interpreter pattern has two benefits. First, it is easy to change and extend the grammar: as each grammar is composed of a number of terminals and nonterminals, the designer can always modify the grammar by class manipulation or extend it using inheritance. Second, implementing the grammar becomes much easier: in our specification, each AST node is represented by a TLG class, the semantics part is easy to write node by node, and the generation of the corresponding Java objects can be automated with a parser generator such as CUP. Beyond these two benefits, we also make some adaptations to the sample approach introduced in [4]. First, instead of using recursive-descent parsing [10], we reuse the lexer and parser generator components JLex and CUP to generate a bottom-up parser, and then traverse the generated abstract syntax tree to implement semantics. Thus we combine the LALR(1) parsing power of bottom-up parsing with the natural traversal property of top-down semantic analysis. Second, we create classes for nonterminals and terminals, in contrast to the approach in [4] of creating classes for productions. Therefore, we can delegate concrete semantics of the nonterminals to terminal classes to separate the formal and informal concerns of semantic analysis, and we only need to add Java code to terminal classes, which reside in a separate file. This actually solves the major drawback of the interpreter pattern, namely that the user must manage too many classes.
5. SEPARATION OF FORMAL AND INFORMAL SEMANTICS
In some interpreter generation approaches such as SableCC [11] and JJForester [12], once the AST is built, semantic actions are added to every AST node and the interpreter or compiler is implemented by iterative traversal of this tree. However, this kind of method tangles the abstract semantics and concrete semantics together and breaks the formal property of the AST. As a result, the syntax grammar is bound to the embedded semantic actions and is hard to extend or reuse.
Another drawback of the traditional method is that the formal specification of concrete semantics is very low-level and hard to read. For example, in a grammar production for I/O operations, the specification must implement the input or output interaction with the environment, which is operating-system related; in an expression for addition, the specification may have to calculate the sum of two expression values. It is not hard for a formal specification to handle the addition of two integers; however, the specification becomes quite complicated when facing the addition of two matrices, unless additional functions are pre-defined in the formal specification on demand. This hampers the designer in reusing any implementation components of an existing language, because they are bound to low-level, domain-related details. For example, the designer of a matrix calculator derives no benefit from the existing implementation of an integer calculator, although they share quite a few syntax productions.
In order to address these problems, we apply the chain-of-responsibility design pattern [4] in the AST to recursively pass the responsibility of implementing concrete semantics from upper nodes to lower nodes, until it reaches the leaf nodes, i.e. the nodes for terminal symbols. The applicability of this delegation method is explored below. Given a program written in a certain language, each concrete semantics operation is actually represented, and distinguished from the others, by at least one terminal symbol. For example, an I/O operation is indicated by the terminal "print" and a requirement for addition is expressed by the terminal "+". Since each semantic action is represented by such a terminal, concrete semantics operating on a nonterminal node can always find a terminal node to which to delegate the analysis responsibility. Even if no such terminal node can be found, or the path between the nonterminal node and the terminal node is too long, we can introduce a dummy terminal, which has no lexeme at all and is used only to take over the concrete semantics. This idea is similar to the well-known mechanism of inserting markers in attribute grammars [10].
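The delegation can be sketched in Java as follows. This is a hypothetical illustration with assumed names (Node, ExpressionNode, PlusTerminal, IntegerTerminal), not the framework's generated code: the nonterminal node stays abstract and simply forwards the concrete-semantics request downward until a terminal leaf handles it.

```java
// Chain-of-responsibility sketch (hypothetical names): nonterminal nodes
// stay abstract and pass the concrete-semantics request down the tree
// until a terminal leaf handles it.
interface Node {
    int evaluate(); // the concrete-semantics request being delegated
}
// Terminal leaves: the only places where concrete semantics live.
class IntegerTerminal implements Node {
    int value;
    IntegerTerminal(int v) { value = v; }
    public int evaluate() { return value; }
}
class PlusTerminal implements Node {
    Node left, right;
    PlusTerminal(Node l, Node r) { left = l; right = r; }
    public int evaluate() { return left.evaluate() + right.evaluate(); }
}
// Nonterminal node: merely delegates the responsibility downward.
class ExpressionNode implements Node {
    Node child;
    ExpressionNode(Node child) { this.child = child; }
    public int evaluate() { return child.evaluate(); }
}
class DelegationSketch {
    public static void main(String[] args) {
        Node ast = new ExpressionNode(
                new PlusTerminal(new IntegerTerminal(2), new IntegerTerminal(3)));
        System.out.println(ast.evaluate()); // prints 5
    }
}
```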
Figure 4 is a partial UML diagram of the generated Java classes. Since we separate the concrete semantics from the abstract semantics, we keep all the middle nodes (nodes for nonterminals) abstract and formal, and leave the concrete and informal semantics implementation to the terminal nodes. For example, the TLG classes for nonterminal print-statement and terminal "print" in Sam are as follows:
nonterminal print_statement.
  Syntax: PRINT Expression.
end nonterminal.

terminal PRINT.
  Lexeme :: "print".
end terminal.
Notes: The generated Java class for this TLG class will contain the interface of a method for concrete semantics implementation in Java, with Expression as the method's parameter.
Now the user only needs to add concrete semantics into all the generated terminal classes (represented by gray boxes in Figure 4), using the full-featured operation library of Java. Continuing with the above example, the completed Java class for PRINT is:
class PRINT{
  void semantics(Expression expression){
    // user-supplied concrete semantics: print the value
    // of the expression to the console
    System.out.println(expression.semantics());
  }
}
If we want to modify the language to handle matrix computation instead of integer computation, we need to adapt the semantics, since different calculation methods and I/O strategies apply to integers and matrices. In our approach, we only need to rewrite the terminal Java classes to achieve this adaptation. As shown in Figure 4, we only need to rewrite the Java classes of the leaf nodes represented by the gray boxes, e.g. Print, Integer and Plus, leaving the middle nodes intact. In the case of terminal class Print, the new semantics class could be as below:
class PRINT{
  void semantics(Expression expression){
    // rewritten concrete semantics: print the matrix value
    // of the expression, e.g. row by row
  }
}
In our actual implementation of this language, we utilized many existing Java APIs, such as ArrayList to store matrix values, and we used Java applets for polished output of the matrices.
6. SIGNIFICANCE
With the help of T-Clipse [13], an Integrated Development Environment (IDE) for Two-Level Grammar based on the Eclipse framework [14], we have developed an interpreter for the Sam language. We then reused the Sam specification to quickly develop a language called BasicM for matrix calculation, reusing the interpreter for Sam to build an interpreter for BasicM. Our experience in developing these two languages shows that our approach does improve readability, extendibility and reusability, as described below:
· Readability. The TLG classes embrace a one-to-one mapping with grammar symbols (except punctuation such as the comma or semicolon). Each grammar symbol's lexical/syntax rules and semantics are defined in the same class, which is easy to read. At the TLG level, we specify only the abstract semantics in TLG classes, which keeps the formal specification concise; at the Java code level, the user only needs to manage the file that contains the terminal Java classes. This reflects the separation-of-concerns principle of software engineering.
· Extendibility. Adding another output operation to this language (e.g. output to a window instead of the console) can be achieved by making the terminal PRINT a nonterminal, i.e. making it abstract, and letting two new terminal classes named GUIPRINT and BASICPRINT extend PRINT, as in Figure 5. Terminal BASICPRINT can reuse the semantics component of the original terminal PRINT. In this manner, to extend the output statements the user only needs to write a semantics class for terminal GUIPRINT.
· Reusability. Switching the domain of expressions from integer calculation to matrix calculation can be achieved as follows. A new grammar symbol Matrix, composed of some nonterminals and terminals, is created to replace the Integer class at the TLG level, and the lexer, parser and abstract semantics (nonterminal Java classes) are regenerated for the new language. To maximize the reuse of the concrete semantics components, only the Java classes of new terminals are regenerated automatically, while the Java classes of existing terminals are changed manually, which is the same approach JavaCC uses when regenerating AST node classes [15].
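The kind of terminal-level rewrite this reuse relies on can be sketched in Java. This is a hypothetical illustration (the class name MatrixPlus and the static method are assumptions, not the generated code): only the terminal's concrete semantics changes, from integer addition to element-wise matrix addition, while every nonterminal class above it is untouched.

```java
import java.util.Arrays;

// Hypothetical sketch: switching Sam's expressions from integers to
// matrices only requires rewriting the terminal's concrete semantics;
// the nonterminal classes above it stay untouched.
class MatrixPlus {
    // rewritten concrete semantics: element-wise matrix addition
    static int[][] semantics(int[][] a, int[][] b) {
        int[][] sum = new int[a.length][a[0].length];
        for (int i = 0; i < a.length; i++)
            for (int j = 0; j < a[0].length; j++)
                sum[i][j] = a[i][j] + b[i][j];
        return sum;
    }
}
class MatrixReuse {
    public static void main(String[] args) {
        int[][] a = {{1, 2}, {3, 4}};
        int[][] b = {{5, 6}, {7, 8}};
        System.out.println(Arrays.deepToString(MatrixPlus.semantics(a, b)));
        // prints [[6, 8], [10, 12]]
    }
}
```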
7. RELATED WORK
Many researchers are working on object-oriented modular specifications from which compilers or interpreters can be automatically produced. Java Compiler Compiler (JavaCC) [15] is a Java parser generator written in the Java programming language. JavaCC integrates lexical and grammar specifications into one file to make the specification easier to read and maintain. Combined with tree generators such as JJTree [16] or the Java Tree Builder [17], it can be used to generate object-oriented interpreters/compilers. JavaCC (together with the tree generator) uses the Visitor pattern [4] for tree traversal. However, JavaCC cannot handle left-recursive grammars, since it only generates recursive-descent parsers, which are less expressive than LALR(1) parsers. Another drawback of JavaCC is that the Visitor pattern is only applicable when the grammar rarely changes, because changing the grammar requires redefining the interface to all visitors, which is potentially costly [4]. This results in poor reusability of the specifications.
The ASF+SDF Meta-Environment [18] is an environment for the development of language definitions and tools. It combines the syntax definition formalism SDF with the term rewriting language ASF. SDF is supported with Generalized LR (GLR) parsing technology. ASF is a rather pure executable specification language that allows rewrite rules to be written in concrete syntax. However, though ASF is good for the prototyping of language processing systems, it lacks some features to build mature implementations. For instance, ASF does not come with a strong library mechanism, I/O capabilities, or support for generic term traversal [12]. As a major step to alleviate these drawbacks, JJForester [12] was implemented, which combined SDF with the general purpose programming language Java. However, again, it has the same drawback as JavaCC as it uses the Visitor pattern for tree traversal.
The LISA system [19] is a tool for automatic language implementation in Java. LISA uses well-known formal methods, such as regular expressions and BNF for lexical and syntax analysis, and attribute grammars to define semantics. LISA provides reusability and extendibility by integrating the key concepts of object-oriented programming, i.e. templates, multiple inheritance, and object-oriented implementation of semantic domains [3].
Our major distinction with all the above research is as follows:
· As we use TLG to encapsulate the lexical rules, syntax rules and semantics of each grammar symbol in a single class in an object-oriented manner, we provide good modularization for grammar components, making them easily extendible and reusable.
· We successfully separate the formal concerns and informal concerns in language implementation, and combine the feature of automatic code generation from formal specification with the massive library of classes in Java, to precisely specify syntax and semantics and minimize complexity in the use of formal methods.
· We separate the parsing from semantics analysis, realizing bottom-up parsing and top-down semantics analysis. The LALR(1) [10] parsing power and the natural property of recursive descent semantics analysis are combined together.
An additional benefit of our approach, not discussed in this paper, is the strong computational power of the TLG specification compared to other formal methods [20]; e.g. TLG can specify the semantics of a loop statement in programming languages while an attribute grammar cannot.
8. CONCLUSION AND FUTURE WORK
In this paper, aiming to provide good modularization and abstraction for formal specifications in programming language description, Two-Level Grammar is introduced as an object-oriented formal specification language for modeling language components and constructs. Software design patterns are applied to help organize the TLG classes and to separate the informal concerns from the formal concerns in language implementation. We thus provide good modularity, readability, reusability and extendibility for TLG specifications while leveraging mature programming language technology such as Java, thereby achieving our research objectives. Our approach therefore offers a means to exploit the synergy between formal methods and general-purpose programming languages. Its benefits have been demonstrated on a sample language.
Besides the interpreter pattern and chain-of-responsibility pattern described in this paper, other patterns could be applied in this framework. For example, the generated AST is actually an instance of the Composite pattern [4], with the terminal classes as leaves and the nonterminal classes as composites. Another pattern we are interested in using in the future is the Mediator pattern [4]. Once the grammar becomes large, it is quite common for non-local dependencies [21] to appear, meaning that the semantics of one AST node depends on another node contained in another sub-tree, as in the name analysis problem, where properties of an identifier's use site depend on properties of the identifier's declaration site. Attribute grammars use a propagation method to deliver the related attributes along the path of linked nodes. This is obviously inefficient, and Hedin has listed four drawbacks of this kind of approach in [22]. Our current practice is to forward the reference of one object to the others via the common ancestor of the two node objects, which is similar to Hedin's reference attribute grammars. However, our strategy is still not as efficient as desired and complicates the formal specification somewhat. We found that the Mediator pattern is well suited to this problem, as it applies when a set of objects must communicate in complex ways and the interdependencies are unstructured and difficult to understand. So, for AST nodes that need to communicate with distant nodes, we could create a mediator through which they communicate. The major challenge is that it is hard to design an algorithm for dynamically creating mediators for objects, since the AST is only built dynamically during parsing. We are still working on this.
REFERENCES
[1] F. G. Pagan. Formal Specification of Programming Languages: A Panoramic Primer. Prentice Hall, 1981.
[2] K. Slonneger, B. L. Kurtz. Formal Syntax and Semantics of Programming Languages. Addison-Wesley, 1995.
[3] M. Mernik, M. Lenič, E. Avdičaušević, V. Žumer. A Reusable Object-Oriented Approach to Formal Specifications of Programming Languages. L'Objet, Vol. 4, No. 3, pp. 273-306, 1998.
[4] E. Gamma, R. Helm, R. Johnson, J. Vlissides. Design Patterns, Elements of Reusable Object-Oriented Software. Addison-Wesley, 1995.
[5] B. R. Bryant, B.-S. Lee. Two-Level Grammar as an Object-Oriented Requirements Specification Language. Proceedings of 35th Hawaii Intl Conf. System Sciences, 2002.
http://www.hicss.hawaii.edu/HICSS_35/HICSSpapers/PDFdocuments/STDSL01.pdf
[6] A. van Wijngaarden. Revised Report on the Algorithmic Language ALGOL 68. Acta Informatica, Vol. 5, pp. 1-236, 1974.
[7] B. R. Bryant, A. Pan. Formal Specification of Software Systems Using Two-Level Grammar. Proc. COMPSAC ’91, 15th. Intl. Computer Software and Applications Conf., pp.155-160, 1991.
[8] JLex: Java Lexical Analyzer Generator. http://www.cs.princeton.edu/~appel/modern/java/JLex/
[9] CUP: Parser Generator for Java. http://www.cs.princeton.edu/~appel/modern/java/CUP/
[10] A. V. Aho, R. Sethi, J. D. Ullman. Compilers: Principles, Techniques, and Tools. Addison-Wesley, 1988.
[11] E. Gagnon. SableCC, An Object–Oriented Compiler Framework. Master’s thesis, McGill University, Montreal, Quebec, March 1998.
[12] T. Kuipers, J. Visser. Object-Oriented Tree Traversal with JJForester. Electronic Notes in Theoretical Computer Science, Vol. 44, 2001.
[13] B.-S. Lee, X. Wu, F. Cao, S.-H. Liu, W. Zhao, C. Yang, B. R. Bryant, and J. G. Gray. T-Clipse: An Integrated Development Environment for Two-Level Grammar. Proceedings of OOPSLA03 Workshop on Eclipse Technology eXchange, 2003.
[14] Object Technology International, Inc. Eclipse Platform Technical Overview, February 2003
[15] JavaCC: Java Compiler Compiler, Sun Microsystems, Inc. https://javacc.dev.java.net/
[16] Introduction to JJTree. http://www.j-paine.org/jjtree.html
[17] JTB: Java Tree Builder http://www.cs.purdue.edu/jtb/releasenotes.html
[18] M. G. J. van den Brand, J. Heering, P. Klint and P.A. Olivier. Compiling Language Definitions: The ASF+SDF Compiler. ACM Transactions on Programming Languages and Systems, 24(4): 334-368, 2002.
[19] M. Mernik, V. Žumer, M. Lenič, E. Avdičaušević. Implementation of Multiple Attribute Grammar Inheritance in the Tool LISA. ACM SIGPLAN Notices, Vol. 34, No. 6, June 1999, pp. 68-75.
[20] M. Sintzoff. Existence of van Wijngaarden's Syntax for Every Recursively Enumerable Set. Ann. Soc. Sci. Bruxelles, Vol. 2, pp. 115-118, 1967.
[21] J. T. Boyland. Analyzing Direct Non-Local Dependencies In Attribute Grammars. Proc. CC ’98, International Conference on Compiler Construction, Springer-Verlag Lecture Notes in Computer Science, Vol. 1383, pp. 31-49, 1998.
[22] G. Hedin. Reference Attributed Grammars. In D. Parigot and M. Mernik, eds., Second Workshop on Attribute Grammars and their Applications (WAGA '99), Amsterdam, The Netherlands, pp. 153-172, INRIA Rocquencourt, 1999.
BIOGRAPHY
Xiaoqing Wu is a Ph.D. student in the Computer and Information Sciences Department at the University of Alabama at Birmingham. His research focuses on compiler design, programming languages, formal specification, and software engineering.
Barrett R. Bryant is a Professor and the Associate Chair in the Department of Computer and Information Sciences at the University of Alabama at Birmingham (UAB). He joined UAB after completing his Ph.D. in computer science at Northwestern University in 1983, and has since held various visiting positions at universities and research laboratories. Barrett's primary research focus is the theory and implementation of programming languages, especially formal specification languages, and object-oriented and component-based software technology.
Marjan Mernik received his M.Sc. and Ph.D. degrees in computer science from the University of Maribor in 1994 and 1998, respectively. He is currently an associate professor at the University of Maribor, Faculty of Electrical Engineering and Computer Science. He was a visiting professor in the Department of Computer and Information Sciences at the University of Alabama at Birmingham in 2004. His research interests include principles, paradigms, design and implementation of programming languages, compilers, formal methods for programming language description, and evolutionary computation. He is a member of the IEEE, ACM and EAPLS.
POINTCUT DESIGNATORS IN AN ASPECT ORIENTED LANGUAGE
Ján KOLLÁR
Department of Computers and Informatics, Faculty of Electrical Engineering and Informatics,
Technical University of Košice, Letná 9, 042 00 Košice, Slovak Republic, tel. 055/602 2577, E-mail: [email protected]
SUMMARY
The strength of aspect-oriented languages is given by pointcut designators that pick out join points. In this paper we provide an overview of pointcut designators in AspectJ, classifying them with respect to different kinds of join points. Our aim for the future is to find a general and flexible way of adding a new aspect to an existing language system. The idea behind our approach is an integration of programming paradigms that prevents the occasional insufficiency of a programming language when mapping a problem to a corresponding program. Such integration, we believe, can be achieved without excluding either the abstract paradigmatic level or the practical programming-language level. From this point of view, PFL, a process functional language, is a suitable basis for studying the aspect phenomenon in a disciplined way as well as for practical experiments. In particular, when the aspect approach is considered, the goal is not to provide a complete set of defined primitive pointcut designators: we do not think this is possible, since the world of computation may change in the future. More promising, instead, is determining the different semantic levels of computation, their relations and hierarchy, their sources, and the style in which they can be reflected and affected by a programming language. In this respect, this paper is just a step in that research direction.
Keywords: programming paradigms, imperative functional programming, aspect oriented programming, implementation principles, programming environments, control driven dataflow, referential transparency, side effects
9. INTRODUCTION
PFL, an experimental process functional language [5,6,7,8,14,16], integrates the semantics of imperative and functional languages. A programmer is free to use functional programming methodology, including the monadic approach [12,18], but can also manipulate memory cells where appropriate. Imperativeness is then achieved by the application of processes "attached" to the cells through their arguments [8]. "Stateful" evaluation by process application in PFL is more relaxed and less restrictive for the programmer than exploiting side effects encapsulated by a monad.
Both the monadic and the process functional approach are the same if reduced to purely functional methodology; they differ in how they exploit imperative methodology, although both hide assignments from the programmer [2].
There are two main differences between them: the monadic approach uses the visible side-effecting functions unit and bind, hiding the memory cells from the programmer, while the process functional approach hides the two functions that perform the access and the update of memory cells, but makes all memory cells visible to the programmer.
The strength of the process functional approach is a paradigm that reflects the implementation of both imperative and functional languages, bringing it to the source form. Each PFL program is thus a highly abstracted expression, which allows source-to-source transformations to be performed instead of the machine-independent optimization techniques well known from imperative languages, based on directed acyclic graphs and quadruples [9,10,20]. Since semantic information, such as the binding of names to identifiers, is not missing from PFL expressions, this supports the requirement for source-to-source transformations as desired for the implementation of aspect-oriented languages [1,3,4,19].
On the other hand, less positive is the use of a process functional language as a "programming language". Its level of abstraction is seemingly higher than that of an imperative language, but the methodology of performing side effects by the application of processes is still less natural than using assignments. Using PFL, much useless control is eliminated, but the integration of just the functional and imperative paradigms is evidently insufficient to break the non-conformance between problems on one side and "programs" on the other. The weakness is that PFL's flexibility is hard to exploit in practice because of insufficient methodology.
A possible solution to this problem is an extension of the process functional paradigm to an aspect process functional paradigm. Aspect programming methodology (integrating logic and imperative programming) is more general than the object-oriented approach [7,16,20] as well as multi-paradigmatic approaches such as concurrent constraint programming [11,15], imperative functional programming [13], and others.
The crucial role in aspect-oriented programming languages is played by pointcut designators, which we discuss in this paper as used in the AspectJ programming language. The aim of this paper is to provide a systematic but still informal overview of pointcut designators as a basis for their future formal analysis and an extension using the process functional paradigm.
10. ASPECT PARADIGM AND ASPECT LANGUAGE
The motivation for aspect-oriented programming is the realization that there are issues or concerns that are not well captured by traditional programming methodologies.
For object-oriented programming languages, the natural unit of modularity is the class. But in object-oriented languages, crosscutting concerns are not easily turned into classes precisely because they cut across classes; consequently they aren't reusable, they can't be refined or inherited, and they are spread throughout the program in an undisciplined way. In short, they are difficult to work with.
Aspect-oriented programming is a way of modularizing crosscutting concerns much like object-oriented programming is a way of modularizing common concerns.
AspectJ is an implementation of aspect-oriented programming for Java. AspectJ adds to Java just one new concept, the join point, and a few new constructs: pointcuts, advice, introduction and aspects. Pointcuts and advice dynamically affect program flow, while introduction statically affects a program's class hierarchy.
A join point is a well-defined point in the program flow. Pointcuts select certain join points and values at those points. Advice defines code that is executed when a pointcut is reached. These are, then, the dynamic parts of AspectJ.
AspectJ also has a way of affecting a program statically. Introduction is how AspectJ modifies a program's static structure, namely, the members of its classes and the relationship between classes.
The last new construct in AspectJ is the aspect. Aspects are AspectJ's unit of modularity for crosscutting concerns. They are defined in terms of pointcuts, advice and introduction.
AspectJ advice consists of expressions (in most cases of unit type) that are executed before, after, or instead of other code parts, depending on pointcut designators.
An AspectJ advice would then be expressed in PFL style in the following form:
advice :: T1 → T2 → … → Tm → T
advice x1 x2 … xm = e[x1, x2, … , xm]
(before | after | around)
p[x1, x2, … , xm]
which designates the set of constant expressions e[x1, x2, … , xm] selected for join points picked out by pointcut designator
p[x1, x2, … , xm]
This pointcut uses the variables x1, x2, … , xm, which are substituted by values (usually differing for different join points) and used by the advice expression e[x1, x2, … , xm].
The crucial role of pointcut designators is evident, because once a join point and a set of values
[c1, c2, … , cm]
are selected, it is straightforward to insert before or after the join point, or to substitute for the expression forming the join point (in the case of around advice), the constant expression obtained by the application
(λ x1 x2 … xm. e[x1, x2, … , xm])
c1, c2, … , cm
performed at compile time.
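For illustration, such an advice instantiated in AspectJ might look as follows; the aspect, Point, and setX are hypothetical names, the pointcut playing the role of p[x] and the body the role of e[x]:

```aspectj
aspect SetterTracing {
    // p[x]: picks out calls to Point.setX and binds the int argument to x
    pointcut setter(int x): call(void Point.setX(int)) && args(x);

    // e[x]: the advice expression, executed before each selected join point
    before(int x): setter(x) {
        System.out.println("setX called with " + x);
    }
}
```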
The detailed analysis of pointcut designators in this paper is informal. We chose this approach for the following reasons: instead of a complete formal semantics, the AspectJ documentation is oriented toward the explanation of many examples, which sometimes blur more than they clarify; and although formal semantics is available for particular constructs, such as in [19], this is insufficient for our purposes. Before we introduce pointcut designators (also called pointcuts), we deal with join points as classified in the AspectJ system.
11. JOIN POINTS
While aspects do define crosscutting types, the AspectJ system does not allow completely arbitrary crosscutting. Rather, aspects define types that cut across principled points in a program's execution. These principled points are called join points. A join point is a well-defined point in the execution of a program. The join points defined by AspectJ are:
Method call
Method execution
When the body of code for an actual method executes.
Constructor call
When an object is built and a constructor is called, not including this or super constructor calls.
Constructor execution
When the body of code for an actual constructor executes, after its this or super constructor call.
Initializer execution
Static initializer execution
Object pre-initialization
Before the object initialization code for a particular class runs. This encompasses the time between the start of its first called constructor and the start of its parent's constructor. Thus, the execution of these join points encompasses the join points from the code found in this() and super() constructor calls.
Object initialization
When the object initialization code for a particular class runs. This encompasses the time between the return of its parent's constructor and the return of its first called constructor. It includes all the dynamic initializers and constructors used to create the object.
Field reference
Field assignment
Handler execution
12. BASIC PRIMITIVE POINTCUTS
Corresponding to join points introduced in preceding section, AspectJ primitive pointcut designators (primitive pointcuts) are classified as follows:
· Method-related pointcuts
· Object creation-related pointcuts
· Field-related pointcuts
· Exception handler execution-related pointcuts
One very important property of a join point is its signature, which is used by many of AspectJ's pointcut designators to select particular join points.
Method-related pointcuts
AspectJ provides two primitive pointcut designators designed to capture method call and execution join points.
call(Signature)
execution(Signature)
At a method call join point, the Signature is composed of the type used to access the method, the name of the method, and the types of the called method's formal parameters and return value (if any).
At a method execution join point, the signature is composed of the type defining the method, the name of the method, and the types of the executing method's formal parameters and return value (if any).
Formally, Signature is the method pattern MethodPat, in the form:
[ModifiersPat] TypePat [TypePat . ] IdPat ( TypePat | .. , … ) [ throws ThrowsPat ]
ModifiersPat (the modifiers pattern) may be a keyword such as private, public, or static. The wildcard ".." is used to designate any number of type patterns. Each TypePat is one of:
IdPat [ + ] [ [] … ]
! TypePat
Here "+" denotes all subtypes and "[]" denotes array patterns.
Further, operators "!", "&&", and "||" are boolean operators not, and, and or, respectively.
In IdPat (the identifier pattern), the "*" wildcard matches zero or more characters except for ".".
The second meaning of the ".." wildcard is that it matches any sequence of characters that starts and ends with ".", so it can be used to pick out all types in any subpackage, or all inner types.
ThrowsPat names the exception type thrown when a method fails its execution by yielding an exception.
Both pointcuts above also pick out constructor call and execution join points.
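As a sketch of such method patterns (the package and type names are purely illustrative):

```aspectj
// Calls to any public void method whose name starts with "set", declared in
// any type under the com.example package, whose first parameter is an int:
call(public void com.example..*.set*(int, ..))

// Executions of any non-static method of any type, with any parameters:
execution(!static * *.*(..))
```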
Object creation-related pointcuts
AspectJ provides three primitive pointcut designators designed to capture the initializer execution join points of objects.
call(Signature)
execution(Signature)
initialization(Signature)
At a constructor call join point, the signature is composed of the type of the object to be constructed and the types of the called constructor's formal parameters.
At a constructor execution join point, the signature is composed of the type defining the constructor and the types of the executing constructor's formal parameters.
At an object initialization join point, the signature is composed of the type being initialized and the types of the formal parameters of the first constructor entered during the initialization of this type.
Formally, Signature is the constructor pattern ConstructorPat, in the form:
[ModifiersPat] [TypePat . ] new ( TypePat | .. , … ) [ throws ThrowsPat ]
AspectJ provides one primitive pointcut designator to pick out static initializer execution join points.
staticinitialization(TypePat)
Field-related pointcuts
AspectJ provides two primitive pointcut designators designed to capture field reference and assignment join points:
get(Signature)
set(Signature)
At a field reference or assignment join point, the Signature is composed of the type used to access or assign to the field, the name of the field, and the type of the field.
Formally, the Signature is the field pattern FieldPat, in the form:
[ModifiersPat] TypePat [TypePat . ] IdPat
All set join points are treated as having one argument, the value the field is being set to, so at a set join point, that value can be accessed with an args pointcut.
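A hypothetical sketch of this: advice observing every assignment to an assumed Account.balance field, using args to bind the value being stored:

```aspectj
aspect FieldWatch {
    // The set join point carries one argument: the value being assigned.
    before(int newValue): set(int Account.balance) && args(newValue) {
        System.out.println("balance := " + newValue);
    }
}
```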
Exception handler execution-related
AspectJ provides one primitive pointcut designator to capture execution of exception handlers:
handler(TypePat)
At a handler execution join point, the signature is composed of the exception type that the handler handles.
All handler join points are treated as having one argument, the value of the exception being handled, so at a handler join point, that value can be accessed with an args pointcut, introduced in the next section.
Besides the pointcuts above, further primitive pointcuts are provided, as introduced in the next section.
13. OTHER PRIMITIVE POINTCUTS
· State-based pointcuts
· Control flow-based pointcuts
· Program text-based pointcuts
· Dynamic property-based pointcuts
State-based pointcuts
Many concerns cut across the dynamic times when an object of a particular type is executing, being operated on, or being passed around. AspectJ provides primitive pointcuts that capture join points at these times. These pointcuts use the dynamic types of their objects to discriminate, or pick out, join points. They may also be used to expose to advice the objects used for discrimination.
this(TypePat or Id)
target(TypePat or Id)
The this pointcut picks out all join points where the currently executing object (the object bound to this) is an instance of a particular type. The target pointcut picks out all join points where the target object (the object on which a method is called or a field is accessed) is an instance of a particular type.
args(TypePat or Id or "..", ...)
The args pointcut picks out all join points where the arguments are instances of some types. Each element in the comma-separated list is one of three things. If it is a type pattern, then the argument in that position must be an instance of a type matched by that pattern. If it is an identifier, then the argument in that position must be an instance of the type of the identifier (or of any type if the identifier is typed to Object). If it is the special wildcard "..", then any number of arguments will match, just as in signatures. So the pointcut
args(int, .., String)
will pick out all join points where the first argument is an int and the last is a String.
Control flow-based pointcuts
Some concerns cut across the control flow of the program. The cflow and cflowbelow primitive pointcut designators capture join points based on control flow.
cflow(Pointcut)
The cflow pointcut picks out all join points that occur between the start and the end of each of the pointcut's join points.
cflowbelow(Pointcut)
The cflowbelow pointcut picks out all join points that occur between the start and the end of each of the pointcut's join points, but not including the initial join point of the control flow itself.
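For instance, assuming hypothetical methods Server.service() and Server.log(), the following sketch advises calls to log() only when they occur in the control flow of an execution of service():

```aspectj
aspect FlowTrace {
    before(): call(void Server.log())
              && cflow(execution(void Server.service())) {
        System.out.println("log() reached from within service()");
    }
}
```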
Program text-based pointcuts
While many concerns cut across the runtime structure of the program, some must deal with the actual lexical structure. AspectJ allows aspects to pick out join points based on where their associated code is defined.
within(TypePat)
The within pointcut picks out all join points where the code executing is defined in the declaration of one of the types in TypePat. This includes the class initialization, object initialization, and method and constructor execution join points for the type, as well as any join points associated with the statements and expressions of the type. It also includes any join points that are associated with code within any of the type's inner types.
withincode(Signature)
The withincode pointcut picks out all join points where the code executing is defined in the declaration of a particular method or constructor. This includes the method or constructor execution join point as well as any join points associated with the statements and expressions of the method or constructor. It also includes any join points that are associated with code within any of the method or constructor's local or anonymous types.
Dynamic property-based pointcuts
if(BooleanExpression)
The if pointcut picks out join points based on a dynamic property. Its syntax takes an expression, which must evaluate to a boolean true or false. Within this expression, the thisJoinPoint object is available. So one (extremely inefficient) way of picking out all call join points would be to use the pointcut
if(thisJoinPoint.getKind().equals("call"))
! Pointcut
picks out all join points that are not picked out by the pointcut.
Pointcut0 && Pointcut1
picks out all join points that are picked out by both of the pointcuts.
Pointcut0 || Pointcut1
picks out all join points that are picked out by either of the pointcuts.
( Pointcut )
picks out all join points that are picked out by the parenthesized pointcut.
It can be noticed that here the boolean operators are used to combine pointcuts, not type patterns, as was the case in type pattern syntax.
15. POINTCUT NAMING AND USING
Pointcut naming
pointcut PointcutId(Type Id, …): Pointcut;
A named pointcut may be defined in either a class or aspect, and is treated as a member of the class or aspect where it is found. As a member, it may have an access modifier such as public or private.
class C {
    pointcut publicCall(): call(public * *(..));
}
Pointcuts that are not final may be declared abstract, and defined without a body. Abstract pointcuts may only be declared within abstract aspects.
abstract aspect A {
    abstract pointcut publicCall();
}
In such a case, an extending aspect may override the abstract pointcut.
aspect B extends A {
    pointcut publicCall(): call(public Foo.new(..));
}
For completeness, a pointcut declaration may also be declared final.
Though named pointcut declarations appear somewhat like method declarations, and can be overridden in subaspects, they cannot be overloaded. It is an error for two pointcuts with the same name to be declared in the same class or aspect declaration.
The scope of a named pointcut is the enclosing class declaration. This is different from the scope of other members, which is the enclosing class body.
Context exposure
Pointcuts have an interface: they expose some parts of the execution context of the join points they pick out. In the pointcut declaration above, the formula Pointcut exposes the arguments Id. This context is exposed by providing typed formal parameters to named pointcuts and advice, like the formal parameters of a Java method. These formal parameters are bound by name matching. On the right-hand side of advice or pointcut declarations, a regular Java identifier is allowed in certain pointcut designators in place of a type or collection of types. The primitive pointcut designators where this is allowed are this, target, and args. In all such cases, using an identifier rather than a type is as if the type selected were the type of the formal parameter, so that the pointcut
pointcut intArg(int i): args(i);
picks out join points where an int is being passed as an argument, but furthermore allows advice access to that argument. Values can be exposed from named pointcuts as well, so
pointcut publicCall(int x):
call(public *.*(int)) && intArg(x);
pointcut intArg(int i): args(i);
is a legal way to pick out all calls to public methods accepting an int argument, and exposing that argument.
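A sketch of how advice might consume the exposed value (the aspect name Audit is illustrative):

```aspectj
aspect Audit {
    pointcut intArg(int i): args(i);
    pointcut publicCall(int x): call(public *.*(int)) && intArg(x);

    // x is bound, per join point, to the actual int argument of the call
    before(int x): publicCall(x) {
        System.out.println("public call with int argument " + x);
    }
}
```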
There is one special case for this kind of exposure. Exposing an argument of type Object will also match primitive-typed arguments and expose a "boxed" version of the primitive. So,
pointcut publicCall(): call(public *.*(..)) && args(Object);
will pick out all unary methods that take, as their only argument, subtypes of Object (i.e., not primitive types like int), but
pointcut publicCall(Object o): call(public *.*(..)) && args(o);
will pick out all unary methods that take any argument; and if the argument was an int, then the value passed to advice will be of type java.lang.Integer.
Pointcut using
PointcutId(TypePattern or Id, ...)
picks out all join points that are picked out by the user-defined pointcut designator named by PointcutId.
16. EXAMPLES
The difference between call and execution join points is as follows. Firstly, the lexical pointcut declarations within and withincode match differently. At a call join point, the enclosing code is that of the call site. This means that
call(void m()) && withincode(void m())
will only capture directly recursive calls, for example. At an execution join point, however, the program is already executing the method, so the enclosing code is the method itself:
execution(void m()) && withincode(void m())
is equivalent to simply
execution(void m())
Secondly, the call join point does not capture super calls to non-static methods. This is because such super calls are different in Java, since they don't behave via dynamic dispatch like other calls to non-static methods.
The next example illustrates the use of the * wildcard and modifiers.
call(public final void *.*() throws ArrayOutOfBoundsException)
picks out all call join points to methods, regardless of their name or which class they are defined on, so long as they take no arguments, return no value, are both public and final, and are declared to throw ArrayOutOfBounds exceptions.
The defining type name, if not present, defaults to *, so another way of writing that pointcut would be
call(public final void *() throws ArrayOutOfBoundsException)
Formal parameter lists can use the wildcard .. to indicate zero or more arguments, so
execution(void m(..))
picks out execution join points for void methods named m, of any number of arguments, while
execution(void m(.., int))
picks out execution join points for void methods named m whose last parameter is of type int.
withincode(!public void foo())
picks out all join points associated with code in non-public void methods named foo, while
withincode(void foo())
picks out all join points associated with code in void methods named foo, regardless of access modifier.
call(int *())
picks out all call join points to int methods regardless of name.
call(int get*())
picks out all call join points to int methods where the method name starts with the characters "get".
call(Foo.new())
picks out all constructor call join points where an instance of exactly type Foo is constructed,
call(Foo+.new())
picks out all constructor call join points where an instance of any subtype of Foo (including Foo itself) is constructed, and the unlikely
call(*Handler+.new())
picks out all constructor call join points where an instance of any subtype of any type whose name ends in "Handler" is constructed.
Object[] is an array type pattern, and so is com.xerox..*[][], and so is Object+[].
staticinitialization(Foo || Bar)
picks out the static initializer execution join points of either Foo or Bar, and
call((Foo+ && ! Foo).new(..))
picks out the constructor call join points when a subtype of Foo, but not Foo itself, is constructed.
17. CONCLUSION
Apart from some inaccuracies in the AspectJ definition, such as the use of multiple modifiers like "public final", which does not correspond to the syntax in section 4, we may notice the ambiguity of the boolean operators (their operands may be either type patterns or pointcuts) and the ambiguity of the ".." wildcard, which designates any number of arguments but also any sequence of qualifiers (A.. designates A.B, A.B.C, etc.).
Using PFL we can exclude each initialization, provided that we initialize objects by default.
We are able to exclude the field-manipulation pointcuts set and get, because we manipulate environment variables indirectly.
Instead of call and execution, it would probably be better to think of application as a common pointcut.
A great simplification is omitting all modifiers, such as public, private, static, final, etc., which come from the imperative organization of memory cells. In fact, static cells are just those associated with architecture resources, but static without exact memory positions is still not sufficient.
Then, of course, it is substantial to deal not just with the user's organization of an application, but also with the time and space resources of computation. Hence, defining physical time and space aspects of computation may significantly affect the building of embedded systems in the future. In particular, it is clear that control-flow pointcuts are insufficient because of the existence of the second mirroring principle of computation, which is data flow [17].
We are not sure whether the combining of different pointcuts can be made clearer; we only recognize a purely experimental basis as the wrong one. That was the reason we decided to give attention to pointcuts in AspectJ as a basis for further detailed analysis and extension, based however on a process functional language. Its uniform concept of modules, polymorphic classes with multiple superclasses, instances, and objects as applications of classes to expressions provides us with a simple basis for performing such a task. This, however, is future work.
REFERENCES
[1] Avdičaušević, E., Lenič, M., Mernik, M., Žumer, V.: AspectCOOL: An experiment in design and implementation of aspect-oriented language. ACM SIGPLAN Notices, December 2001, Vol. 36, No. 12, pp. 84-94.
[2] Hudak, P.: Mutable abstract datatypes - or - How to have your state and munge it too. Yale University, Department of Computer Science, Research Report YALEU/DCS/RR-914, December 1992, revised May 1993.
[3] Kiczales, G. et al.: An overview of AspectJ. Lecture Notes in Computer Science, 2072:327-355, 2001.
[4] Kiczales, G. et al.: Aspect-oriented programming. In Mehmet Aksit and Satoshi Matsuoka, editors, 11th European Conf. Object-Oriented Programming, volume 1241 of LNCS, pp. 220-242. Springer-Verlag, 1997.
[5] Kollár, J.: Process Functional Programming. Proc. ISM'99, Rožnov pod Radhoštěm, Czech Republic, April 27-29, 1999, pp. 41-48.
[6] Kollár, J.: PFL Expressions for Imperative Control Structures. Proc. Scient. Conf. CEI'99, October 14-15, 1999, Herľany, Slovakia, pp. 23-28.
[7] Kollár, J.: Object Modelling using Process Functional Paradigm. Proc. ISM'2000, Rožnov pod Radhoštěm, Czech Republic, May 2-4, 2000, pp. 203-208.
[8] Kollár, J., Václavík, P., Porubän, J.: The Classification of Programming Environments. Acta Universitatis Matthiae Belii, 10, 2003, pp. 51-64, ISBN 80-8055-662-8.
[9] Mernik, M., Žumer, V.: Incremental language design. IEE Proc. Softw. Eng., April-June 1998, 145, pp. 85-91.
[10] Mernik, M., Lenič, M., Avdičaušević, E., Žumer, V.: A reusable object-oriented approach to formal specification of programming languages. L'Objet, 1998, Vol. 4, No. 3, pp. 273-306.
[11] Paralič, M.: Mobile Agents Based on Concurrent Constraint Programming. Joint Modular Languages Conference, JMLC 2000, September 6-8, 2000, Zurich, Switzerland. In: Lecture Notes in Computer Science, 1897, pp. 62-75.
[12] Peyton Jones, S. L., Wadler, P.: Imperative functional programming. In 20th Annual Symposium on Principles of Programming Languages, Charleston, South Carolina, January 1993, pp. 71-84.
[13] Peyton Jones, S. L., Hughes, J. (editors): Report on the Programming Language Haskell 98 - A Non-strict, Purely Functional Language. February 1999, 163 pp.
[14] Porubän, J.: Profiling process functional programs. Research report, DCI FEII TU Košice, 2002, 51 pp. (in Slovak)
[15] Smolka, G.: The Oz programming model. In Jan van Leeuwen, editor, Computer Science Today, Lecture Notes in Computer Science 1000, Springer-Verlag, Berlin, 1995, pp. 324-343.
[16] Václavík, P.: Abstract types and their implementation in a process functional programming language. Research report, DCI FEII TU Košice, 2002, 48 pp. (in Slovak)
[17] Vokorokos, L.: Data flow computing model: Application for parallel computer systems diagnosis. Computing and Informatics, 20, (2001), 411-428.
[18] Wadler, P.: The essence of functional programming. In 19th Annual Symposium on Principles of Programming Languages, Santa Fe, New Mexico, January 1992, draft, 23 pp.
[19] Wand, M.: A semantics for advice and dynamic join points in aspect-oriented programming. Lecture Notes in Computer Science, 2196:45-57, 2001.
[20] Žumer, V., Korbar, N., Mernik, M.: Automatic Implementation of Programming Languages using Object Oriented Approach. Journal of Systems Architecture, 1997, Vol. 43, No. 1-5, pp. 203-210.
BIOGRAPHY
Ján Kollár (Assoc. Prof.) was born in 1954. He received his MSc. summa cum laude in 1978 and his PhD. in Computing Science in 1991. From 1978 to 1981 he was with the Institute of Electrical Machines in Košice, and from 1982 to 1991 with the Institute of Computer Science at the University of P. J. Šafárik in Košice. Since 1992 he has been with the Department of Computers and Informatics at the Technical University of Košice. In 1985 he spent three months at the Joint Institute of Nuclear Research in Dubna, Soviet Union, and in 1990 two months at the Department of Computer Science at the University of Reading, Great Britain. He has been involved in research projects dealing with real-time systems, the design of (micro)programming languages, image processing and remote sensing, dataflow systems, educational systems, and the implementation of functional programming languages. The current subject of his research is the process functional paradigm and its extension to the aspect paradigm.
RENEWABLE ENERGY SOURCES IN THE CZECH REPUBLIC IN RELATION TO INTEGRATION INTO THE EU
Jan Mühlbacher, Milan Nechanický
West Bohemia University in Pilsen, Faculty of Electrical Engineering, Department of Electrical Power Engineering and Environmental Engineering, Univerzitni 8, 306 14 Pilsen, Czech Republic,
Phone: (+420) 377 634 300, Fax: (+420) 377 634 310, E-mail: [email protected], [email protected]
SUMMARY
This article deals with the future growth of renewable energy sources in the Czech Republic with respect to the power obligations towards the European Union. It describes the present legislative conditions and gives recommendations for their future changes. The main part is devoted to wind energy.
The question of utilizing renewable energy sources is highly topical in our country. With the entrance to the EU, we have signed up for a future increase of the renewable energy sources' share in overall power production.
Keywords: renewable energy sources, power system, wind energy potential, wind turbine, environmental impacts
1. INTRODUCTION
The "Czech power conception" plans the following growth of that share from the present-day level of approximately 2%:
by the year 2010 .......................... up to 8%
by the year 2030 .......................... up to 11% to 13%
It follows from these figures that between the years 2010 and 2030 RES will not belong to the basic power supplies of the Czech Republic (CR), but their utilization will make a significant regional contribution. According to expert evaluations, the aim of the first period can be fulfilled especially through higher utilization of small hydropower plants, of wind turbines (WT), and above all of biomass. An annual power production of about 930 GWh from WT is assumed in the year 2010.
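The assumed 930 GWh per year from WT implies an installed capacity that depends on the average capacity factor of the turbines. The following sketch shows the relation; the capacity factors tried are purely illustrative assumptions, not figures from this article.

```python
# Implied installed wind capacity for a 930 GWh/year target.
# The capacity factors tried below are illustrative assumptions.

HOURS_PER_YEAR = 8760

def required_capacity_mw(annual_gwh: float, capacity_factor: float) -> float:
    """Installed capacity in MW needed to produce annual_gwh per year
    at the given average capacity factor."""
    return annual_gwh * 1000 / (HOURS_PER_YEAR * capacity_factor)

for cf in (0.20, 0.25, 0.30):
    print(f"capacity factor {cf:.2f}: about {required_capacity_mw(930, cf):.0f} MW")
```

At an assumed capacity factor of 0.25, for example, the target corresponds to roughly 425 MW of installed wind capacity.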
It has been shown that the biggest development of wind power takes place in countries where a system of minimum redemption prices of electricity has been in force for a longer time. Such a system is used in most EU countries, and the CR has adopted it as well. For the fulfilment of the indicative aims it is also necessary to create a support system which, given the known high capital expenses, creates a favourable atmosphere for investors.
Further development of RES should generally take place as a consequence of the Kyoto Protocol, which commits its parties to reducing emissions of greenhouse gases.
2. EVALUATION OF LAWS IN FORCE
The question of support for RES utilization is so far regulated in Czech law by:
· The Power Law: the support proceeds as follows:
a) producers of electricity from RES have a priority right of connection to the power system
b) electricity produced from RES has a priority right of transmission and distribution
c) the power system operator is obliged, if it is technically possible, to buy electricity from RES
d) a price statement of the Czech power regulation office fixed the floor redemption prices for electricity supplied to the power system from RES (Tab. 1)
Type of RES          Price (exchange rate from 20.2.2004)
…                    0.093
…                    0.084
Biomass              0.078
Biogas               0.078

Tab. 1 Floor redemption prices of electricity from RES [1]
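As an illustration of how a floor redemption price acts, the sketch below computes the annual revenue guaranteed to a producer. The assumptions are ours, not the article's: that the prices in Tab. 1 are quoted in EUR/kWh (suggested by the exchange-rate note) and that the plant delivers 5 000 MWh per year.

```python
# Hypothetical example: guaranteed annual revenue at a floor
# redemption price. Price units (EUR/kWh) and the production
# volume are assumptions made for illustration only.

def annual_revenue_eur(production_mwh: float, price_eur_per_kwh: float) -> float:
    """Revenue from selling production_mwh at the given floor price."""
    return production_mwh * 1000 * price_eur_per_kwh

# e.g. a biomass plant delivering 5 000 MWh/year at 0.078 EUR/kWh
revenue = annual_revenue_eur(5000, 0.078)
print(f"{revenue:.0f} EUR per year")  # about 390 000 EUR
```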
· The Energy Conservation Law: the support is implemented as follows:
a) a regional power conception must contain an evaluation of RES efficiency
b) the government passes the "National program of economical energy treatment and exploitation of its renewable and secondary sources", valid for four years; for the realization of the program, subsidies may be granted from the national budget
Furthermore, under other enactments, RES producers of electricity are exempt from income tax for the five years following their commissioning.
These valid enactments, however, do not guarantee the fulfilment of the power aims to which we have signed up. New legislation is therefore inevitable.
3. POSSIBILITIES OF FUTURE DEVELOPMENT
According to experience from the EU, dynamic development of RES is still strongly dependent on the support of national governments, which must create a legislative framework and economic instruments for reaching the objectives. The policy under which the power companies themselves were responsible for the exploitation of RES was not widely successful. Effective supporting programs must be clearly established and durable in a long-term perspective. At the same time they must be motivating enough, but on the other side strict enough, designed only for serious investors.
Individual EU countries have chosen various support approaches. Mostly, the power companies are liable for supplying a certain share of electricity from RES, but they are not liable for providing the grid connection at their own expense.
Price setting mechanisms: A system of "green energy" allows consumers to pay higher prices for energy from RES. The power company then promises that the profit from this mark-up will be used for the development of ecological energy. Since last year this program has been established in the CR too.
Tax programs: One possible solution is to reduce the price discrimination between RES and fossil fuels by implementation of a "carbon tax". RES are free of this tax and at the same time have a reduced VAT (value added tax).
Investment grants: In the CR it is possible to gain a grant or a soft loan. Some countries, however, have given up investment grants and have instead begun to lay stress on tax programs.
Among the most successful belong the systems of price support combined with an unlimited duration of tax relief. Distribution companies are liable for paying up to 90 % of the average electricity price per consumer for electricity from RES. Furthermore, the law allows a reimbursement of the "power" and "carbon" taxes. These practices will sooner or later have to become valid in the CR. Every member state can choose the most suitable way of support; the EU does not determine it strictly.
4. POTENTIAL OF WIND ENERGY IN THE CZECH REPUBLIC
The possibilities of wind energy in the territory of the CR cannot in any case be compared with those of seaside countries. This is given by the continental position of the country and by complicated orographical conditions, which decrease wind speeds and cause turbulence. Even so, it cannot be said that it is impossible to exploit wind energy in the territory of the CR. Some locations, and not too few of them, have a potential easily comparable with the wind conditions in Denmark. The primary criterion of a location's suitability is of course the wind speed. The lowest value at which the exploitation of wind energy is still economic is considered to be 4.5 m/s. Locations where the average annual wind speed exceeds 6 m/s are excellent.
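The weight of these thresholds follows from the cubic dependence of wind power on speed: a site at 6 m/s offers well over twice the power per unit rotor area of one at the 4.5 m/s economic limit. A minimal sketch, where the sea-level air density of 1.225 kg/m³ is a standard assumption and not a value from the article:

```python
# Power density of the wind per unit rotor area: P/A = 0.5 * rho * v^3.
# Standard sea-level air density assumed.

AIR_DENSITY = 1.225  # kg/m^3

def power_density(v_mps: float) -> float:
    """Kinetic power of the wind per square metre of rotor area, in W/m^2."""
    return 0.5 * AIR_DENSITY * v_mps ** 3

economic_limit = power_density(4.5)   # ~55.8 W/m^2
excellent_site = power_density(6.0)   # ~132.3 W/m^2
print(f"ratio: {excellent_site / economic_limit:.2f}")  # (6/4.5)^3, about 2.37
```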
The power potential of wind energy in the CR has been studied for years by the Institute of Atmospheric Physics of the Academy of Sciences in Prague. According to its studies, the exploitable wind potential can be estimated at 700 MW to 1000 MW, with an annual production of 1.5 TWh to 2.5 TWh. From the above-mentioned studies comes the map in Fig. 1 [5], where the locations suitable for building wind turbines are marked. Consequently, the actual state of wind power utilization in the CR (0.01 TWh per year) does not match the technical potential of the territory.
Fig. 1 Wind map of CR
The largest and most suitable locations in the CR are situated in the extensive territory of the Krusne Hory Mountains in the northwest of the CR, near the German border. Long-term observations show an average annual wind speed of 7 m/s there.
5. WIND FARM "CHOMUTOV" IN THE KRUSNE HORY MOUNTAINS
One of the major current projects in the sphere of RES will be the build-up of a wind farm near the city of Chomutov [6]. The selected locations are situated on the plateaus of the Krusne Hory Mountains. These territories, up to 950 metres above sea level, are characterized by sufficient wind energy potential [6]. For 70 % to 80 % of the year a wind blows here which can be used for driving wind turbines. Wind speeds here vary in annual average between 6.0 m/s and 7.5 m/s. The project envisages up to 96 wind turbines. The following most considerable areas are analysed next:
1. wind turbines
2. distribution system
3. evaluation of environmental impacts
5.1 Wind turbines
Power production will be realized by WT with rated power from 1.5 MW to 2.0 MW. These are technologically advanced and efficient wind turbines of Danish (NEG Micon) and German-English (DeWind) provenance. The parameters of both types are given in Tab. 2; the power characteristics are shown in Fig. 2 [3, 4].
Tab. 2 Parameters of WT (NEG Micon NM72C/1500 and DeWind D8/2000). Among the listed parameters are the generator type (double-fed induction generator, IGBT inverter) and the nominal wind speed in m/s.
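The power characteristics in Fig. 2 have the shape typical of modern regulated turbines: no output below a cut-in speed, a roughly cubic rise up to the nominal wind speed, and rated power held constant above it up to a cut-out speed. A generic sketch of such a curve follows; the cut-in, nominal and cut-out speeds (3.5, 14 and 25 m/s) are illustrative assumptions, not manufacturer data, and only the 1500 kW rated power echoes a value from the article.

```python
def power_output_kw(v: float, rated_kw: float = 1500.0,
                    v_cut_in: float = 3.5, v_rated: float = 14.0,
                    v_cut_out: float = 25.0) -> float:
    """Piecewise approximation of a wind turbine power curve.

    Zero below cut-in and above cut-out; a cubic rise between cut-in
    and the nominal (rated) speed; rated power held constant above it.
    """
    if v < v_cut_in or v > v_cut_out:
        return 0.0
    if v >= v_rated:
        return rated_kw
    # cubic interpolation between cut-in and nominal speed
    return rated_kw * (v**3 - v_cut_in**3) / (v_rated**3 - v_cut_in**3)
```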
The safety and minimum risk of these modern WT result also from their protection against icing (by blade warming), against lightning effects (by sophisticated grounding) and against gusts (by automatic cut-off). A crucial issue is the warming of the blades, which removes the icing that occurs very often in the Krusne Hory Mountains. Optimal running of these wind turbines is provided by optimal orientation against the wind direction and by regulation of the optimal blade tip angle.
In total it will be installed:
58 pcs. NM 72C/1500 = 87 MW
38 pcs. D8/2000 = 76 MW
Total: 163 MW
Fig. 2 Power characteristics of WT
This installed capacity will yield an annual electricity production of 458 GWh.
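The project figures can be cross-checked: the turbine counts give the 163 MW total, and the quoted annual production implies an average capacity factor of roughly 0.32. The cross-check below is ours; the input numbers are from the article.

```python
# Cross-check of the wind farm figures quoted above.

HOURS_PER_YEAR = 8760

turbines = {
    "NM 72C/1500": (58, 1.5),  # (count, rated MW per unit)
    "D8/2000": (38, 2.0),
}

total_mw = sum(count * mw for count, mw in turbines.values())
annual_gwh = 458

capacity_factor = annual_gwh * 1000 / (total_mw * HOURS_PER_YEAR)
print(f"installed: {total_mw:.0f} MW, capacity factor: {capacity_factor:.2f}")
```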
5.2 Distribution system
- underground cable line of 110 kV backbone distribution - 22.3 km
- underground cable line of 30 kV - 98.5 km
Total - 120.8 km
- distribution substations (110/30 kV)
DS 1 – western circuit
DS 2 – eastern circuit
- substation DS 3 (400/110 kV), which is situated near the heat power plant "Prunéřov"
The build-up of the distribution system is in this case necessary, because the current system of the local distribution company is little developed in this area. It was designed only for the supply of mountain resorts, so its capacity is fully used. The project plans its own distribution structure with a 10 % to 15 % capacity reserve, which enables the connection of other participants. Smooth power distribution is assured by an agreement with the Czech power company, in harmony with the power law.
Each WT will be situated in locations which must be made accessible for traffic. The build-up of service roads is assumed; these will enable the transport of construction materials and WT technology, and during subsequent operation a smooth maintenance service.
5.3 Evaluation of environmental impacts
Within the scope of the preparatory works, an ecological evaluation was made under the law on the examination of environmental impacts. In terms of this process, all influences which will occur during the build-up and subsequently during operation were investigated. The most significant are the noise studies and the studies of impacts on the landscape and on fauna and flora.
Noise studies: All possible sources of noise were objectified, especially noise during build-up and during operation. In the build-up phase it can be expected that the noise emitted by goods traffic will demonstrably satisfy the hygienic limits. During operation, the dominant sources of noise, with markedly lower intensities, will be the individual WT. The sound pressure levels were calculated at the nearest sites, all of which likewise satisfy the hygienic requirements.
Impacts on the landscape and tangible properties: The definition of the landscape pattern is based on aesthetic qualities, which have a markedly subjective character, and on natural appreciation, which can be objectified easily. In the case of this project, the natural funds of the territory will not be touched. The WT will be scattered throughout the landscape so that they will not create any conglomerate. Neither tangible properties nor any cultural relics will be touched by the construction. The investor also takes into account financial compensations paid to the appropriate villages.
Impacts on flora and fauna: A dominant fact is that the whole built-up area is situated on agricultural land, most of which is permanent grassland. The construction impacts on flora will therefore be minimal.
The whole area has good conditions for the existence of numerous animal species. No immediate conflict with individuals is expected due to the character of the surrounding area, which forms a rather large zone. Experience from abroad shows that birds easily adapt to the movement and noise of WT. Everything indicates that none of the continual bird migration lanes in this territory will be affected by the construction.
In terms of the project it is assumed that new job opportunities will be created in the WT production plant in the city of Chomutov, both during the build-up and during operation of the whole system.
6. CONCLUSION
Power generation utilizing renewable energy sources already competes successfully with conventional sources today. The Czech Republic is also able to contribute to the constantly increasing percentage of green electricity generation. The necessary condition, however, is to ensure cohesion between the legislative and economic tools supporting the renewable sources.
REFERENCES
[1] Czech Republic Parliament: "Improvement suggestions for year 2004," Law of supporting renewable energy sources, 2003. (in Czech)
[2] Czechventi Inc.: "Wind farm Chomutov," project brochure, 2003.
[3] DeWind: "Wind turbine brochure D8," available at www.dewind.de.
[4] NEG Micon: "Wind turbine brochure NM72C/1500."
[5] Novák, P.: Use of wind energy in the Krusne Hory region. Proceedings of the Conference on the Effective Use of Physical Theories of Conversion of Energy, Pilsen, 2003, pp. 93-98. (in Czech)
[6] Velek, V.: Wind power and economic efficiency in the present state of the Czech Republic. Proceedings of the CIRED Conference, 2000, C-4, pp. 24-31. (in Czech)
BIOGRAPHY
Jan Mühlbacher graduated from the Electrotechnical Faculty of the Mechanical and Electrotechnical University in Plzeň in 1980, where in 1987 he finished his scientific postgraduate study (PhD.). In 1996 he was appointed Associate Professor of Electric Power Engineering. Since 1981 he has worked at the University of West Bohemia, first at the Department of Electrical Machines and then at the Department of Power Engineering and Ecology. He is the author of 1 monograph, 5 university textbooks and more than 50 scientific and professional papers, mainly in foreign journals and proceedings. His specialisation is transient phenomena in electric networks, stability of synchronous machines and modelling of electrical networks.
Milan Nechanický graduated from the Faculty of Electrical Engineering, University of West Bohemia in Pilsen, in 2001. He is now an internal postgraduate (PhD.) student. His thesis title is "Stability of renewable energy sources in the electricity supply system".
INTEROPERABLE COMPONENT-BASED GIS APPLICATION FRAMEWORK
Leonid STOIMENOV, Aleksandar STANIMIROVIĆ, Slobodanka DJORDJEVIĆ-KAJAN
CG&GIS Lab, Department of Computer Science, Faculty of Electronic Engineering, University of Niš, Serbia and Montenegro, E-mail: [email protected] , [email protected] , [email protected]
SUMMARY
In this paper we present research in Geographic Information Systems (GIS) interoperability. The paper describes an interoperability framework called GeoNis. GeoNis uses new technologies, proposed in this paper, to perform the integration task between GIS applications and legacy data sources over the Internet. Our approach provides integration of distributed GIS data sources and legacy information systems in a local community environment. The proposed framework uses technology based on mediators to allow communication between GIS applications over the Internet/Intranet. The problem of semantic heterogeneity is resolved by the concepts of mediation and ontology.
To provide integrated access to various distributed geo-information sources, we have developed components as an extension to an existing GIS application called Ginis. The component-based architecture of Ginis is also presented in this paper. We have developed a DataConsumer module that encapsulates the physical data access details from the rest of the application. In this way we have completely separated the GIS application from the details of data access.
We have also described our implementation of OpenGIS standards for uniform access to heterogeneous and distributed data sources. These standards are based on the Microsoft Universal Data Access specification and OLE DB technology. The existing GIS application has been extended with new components for data access. A crucial part of this implementation is the Ginis OLE DB Data Provider, which is responsible for providing spatial data. The basic architecture of the Ginis OLE DB Data Provider is also shown in this paper.
Keywords: interoperability framework, component-based development, mediation, geographic information systems, Ginis, OLE DB
1. INTRODUCTION
Geographic information systems (GIS) are computerized systems for managing data about spatially referenced objects. GIS data are typically used by various groups of users with different views and needs. Also, GIS applications are often built on different software platforms and execute on different hardware platforms. Nowadays, there is a strong trend towards integrating information systems into chains of systems over public information infrastructures such as the Internet. Another trend in GIS is publishing maps for the World Wide Web community and developing web-based GIS applications.
Research in information systems interoperability is motivated by the ever-increasing heterogeneity of the computer world. Interoperability means openness in the software industry, because open publication of internal data structures allows users to build applications that integrate software components from different developers [15].
Heterogeneity in GIS is not an exception, but the complexity and richness of geographic data and the difficulty of their representation raise specific issues for GIS interoperability. The popularity of GIS in government and municipal institutions induces an increasing amount of available information [20]. In a local community environment (city services, local offices, local telecom, public utilities, water and power supply services, etc.) different information systems deal with a huge amount of available information, a significant part of which is geo-referenced. Information that exists in diverse data sources may be useful for many other GIS applications. However, information communities find it difficult to locate and retrieve data from other sources in a reliable and acceptable form. Each of these user groups has a different view of the world, and the available information is always distributed and mostly heterogeneous.
The systems that own this data must be capable of interoperating with the systems around them in order to make access to this data feasible. These applications must also deal with issues unique to geospatial data, including translating data formats into a uniform transient data structure, consistent coordinate systems, cartographic projections and platform-dependent data representations, and retrieval of associated attributes and metadata [7][26].
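One way to picture such a "uniform transient data structure" is a neutral feature record into which source-specific adapters translate their native formats. The class and adapter below are a hypothetical sketch under that reading, not actual GeoNis or Ginis code; the record fields and the sample input are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class GeoFeature:
    """Neutral, source-independent representation of one spatial feature."""
    geometry_type: str      # e.g. "Point", "LineString", "Polygon"
    coordinates: list       # coordinate tuples in a common reference system
    srid: int               # spatial reference identifier, e.g. 4326 for WGS 84
    attributes: dict = field(default_factory=dict)

def from_vendor_point(record: dict) -> GeoFeature:
    """Hypothetical adapter: translate one vendor-specific point record
    into the neutral structure (projection handling omitted for brevity)."""
    return GeoFeature(
        geometry_type="Point",
        coordinates=[(record["x"], record["y"])],
        srid=record.get("srid", 4326),
        attributes={k: v for k, v in record.items() if k not in ("x", "y", "srid")},
    )

feature = from_vendor_point({"x": 21.90, "y": 43.32, "name": "Nis substation"})
```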
By joining the trend towards interoperation and openness, resource holders gain the ability both to better utilize their own information internally, and to become visible to an increasingly sophisticated user community, no longer satisfied with ringing up, writing to, physically visiting, or working on-line with the proprietary interfaces of a host of providers [9]. In this new environment, the interoperable organizations will be visible, usable and customer focused, whilst still maintaining their own unique branding within the Portals through which their content is available.
Component-oriented methodology is the predominant programming methodology today. It allows reusability, extensibility, and maintainability. To provide integrated access to various distributed geo-information sources, we have developed components as an extension to an existing GIS application called Ginis. This application is part of the GeoNis interoperability platform. In this paper we present GeoNis and the component-based architecture of Ginis.
The paper is structured as follows. In the second part, we briefly describe related work. The goals of our research activities, described in the third part of this paper, are to define a component-based GIS application development framework as part of the GeoNis interoperability framework. In the fourth part of this paper we describe the extension of the component GIS application with components for data access and our proposal for the basic architecture of the Ginis OLE DB Data Provider.
2. RELATED WORK
The need to share geographic information is well documented [26]. Recent reviews o