


Jornada de Seguimiento de Proyectos, 2010 Programa Nacional de Tecnologías Informáticas

ITADA: Interfaces Tangibles Avanzadas en Domótica Asistencia

(Advanced Tangible Interfaces in Assistive Domotics)

TIN2007-67993

Dr. Francisco José Perales López Departament de Matematiques e Informatica de la UIB

EPS, A. Turmeda, C/Valldemossa km 7.5 07122, Palma de Mallorca, Spain

email: [email protected] http://dmi.uib.es/~ugiv/itada/

Abstract

The project we present aims to advance another step in the definition of new multimodal interaction paradigms between the computer and the end user. Nowadays, electronic communication among people at different levels and in different environments (chat, GSM services and immersive virtual scenes) is a fact. In previous work, VBI (Visual-Based Interface) systems were developed, since vision is the main source of information and interaction with the environment. Conventional haptic systems have also been explored and implemented. This project intends to enlarge the multimodal degree of the tangible interface to include the study and application of senses such as hearing, taste and smell, and their combination in a new design. Likewise, distributed intelligent agents will allow proper modeling of the system's behavior and its interaction with the user. The project's final application is the implementation of a domotic home-assistance system for elderly and disabled people. Consequently, an important aspect to be considered is the implementation of reliable verification methods for the identification of the elderly or disabled users and their caregivers. To this effect, a multibiometric module is included in the proposed project. Finally, the implementation of the virtual assistance methods in a robot that helps an elderly person with certain chores is an interesting challenge that will demonstrate the feasibility of natural interfaces applied to mobile robotics. As a basic application of the previous theoretical results, we propose the development of a tangible avatar in a domotic environment for daily assistance of the elderly, with tele-assistance functionalities in chronic cases. Our experience in previous projects (TIN2004-07926, TIC2001-0931) includes VBI systems that allow movement capture and the interpretation of the user's actions using intelligent agents.
Therefore, this project's implementation builds on acquired experience in VBI and haptics in order to advance towards more portable and tangible systems within the paradigm of the natural interface. We will concentrate on the development of multibiometric authentication, intelligent virtual avatars, TTS/ASR tools, and their combination in intelligent models that handle this multimodality. The tools and software used must conform to current standards and seek the highest level of portability and compatibility (OpenGL, OpenCV, OpenAL, MPEG-4, ...).

Keywords: multimodal interfaces, computer vision, computer graphics, intelligent agents, virtual avatars, social and assistive applications.


1 Project Objectives

The project is motivated by the following reasons:

1. Development of a tangible interface that combines VBI basics with advanced TTS/ASR and, if necessary, with haptic systems and other senses (smell and taste).
2. Improvement of the models based on distributed intelligent agents for multimodal environment control with natural information from the user; formalization of the concept of agent and compatibility with current technologies and computational platforms.
3. Improvement of the biometric systems and, in particular, exploration of fusion systems in biometrics.
4. Definition of a multibiometric system that must be hybrid, with multisampling and multimodality.
5. Adaptation of the designed systems to mobile robotic elements for home aid (articulated arm and mobile robot).
6. Specification of all of these aspects in commercial products used in domotic environments and in routine medical e-Health, at low cost if possible.

With these reasons in mind, we try to satisfy the proposed objectives:

1. Development of a tangible interface.
2. Use of intelligent humanoid simulation techniques (3D agents).
3. Definition of new low-cost tangible interfaces and their application to automated environments.
4. Establishment of some concrete prototypes for automated environments and medical e-Health.

2 Level of Success

In order to assure the main objectives of the project, a full and strong collaboration between partners has been carried out. The main initial task was the UML specification of the application prototype; after that, we included the functionality needed in this prototype in coherence with the original project objectives defined at the beginning of the paper. We have organized periodical working sessions with all the partners in order to solve the main problems and establish the standard communication protocols between shared modules and applications. The project is organized in several subtasks and periods. The following table summarizes the evolution of the project.

| Activity                                                   | Partner                               | Year 1   | Year 2 | Year 3   | Actual State |
|------------------------------------------------------------|---------------------------------------|----------|--------|----------|--------------|
| T0: Web Site Maintenance and Project Coordination           | UIB (P. Palmer)                       | full     | full   | full     | On time      |
| T1: Development of Tangible Interface VBI + ASR/TTS + Haptic, Other Senses | UIB (F. J. Perales, R. Mas) | full     | full   |          | On time      |
| T2: Multibiometrics System                                  | UIB (A. Igelmo)                       | full     | full   |          | On time      |
| T3: Distributed Intelligent Agent                           | UIB (G. Fiol)                         | 2nd half | full   | 1st half | On time      |
| T4: UML Design                                              | UIB (G. Fontanet)                     | full     | full   | 1st half | On time      |
| T5: Modular Integration & Specific Commercial Prototypes    | UIB, EPTRON, C. Rotger (F. Perales)   |          |        | full     | On time      |


From the previous sections and the timetable presented, we can conclude that we are on time in all the main tasks proposed as objectives of the project. Tasks T0 & T5 are active during all three years, so continuous contributions are added in order to satisfy the requirements of the applications. Tasks T2 & T4 are reaching the basic objectives proposed, with some trial versions of an intelligent agent and some multimodal VBI systems; of course, the final integration is not yet done. The facial computer animation modules are developed. At the moment we are increasing the contacts with enterprises to carefully define the real validation of the final prototype.

T0: Web Site Maintenance and Project Coordination

Task 0 is active during all three years, so continuous contributions are added in order to satisfy the website requirements. The main work includes the dissemination of results in international papers (see bibliography) and conferences. Local coordination and truly collaborative papers and developments are also carried out with the other universities involved in the project (GIGA, Univ. Zaragoza) and with EPOS (C. Rotger, EPTRON). We plan to conclude this task at the end of the project, including all results and demos on the website: http://dmi.uib.es/~ugiv/itada/

T1: Development of Tangible Interface VBI + ASR/TTS + Haptic. Other Senses (taste and smell)

In the ITADA project, Task 1 is very important because we combine several models of inputs and outputs to generate multimodal user interfaces (MUI) and, in particular, perceptual information (perceptual user interfaces, PUI). The main subtasks are the infrastructure design and the enlargement of the interface by means of the inclusion of haptics and other senses. Exploring the taste and smell senses, we have not found commercial devices usable in a practical way within the main ITADA tasks, so we concentrate on visual, speech and haptic devices, and on practical applications with an articulated low-cost robotic arm.

Subtask 1.1: System’s Architecture, communication’s infrastructure design.

We have developed the prototype of a domotic and medical application in collaboration with EPOS. A system capable of recognizing the face and zones of the face has been used and improved (related SINA and PINES projects), and a repertoire of basic gestures has been incorporated that allows us to enrich the present number of available actions. One of the fundamental qualities of the system's functionality is its adaptability to the user. Evaluation and learning tasks for all symbols and recognizable gestures have been designed, guaranteeing the best adaptation to the patient's limitations. Our investigation is centered on the exploration of new combinations of inputs, especially voice and haptics. Since we are not speech specialists, we have carried out a thorough study of commercial ASR/TTS systems, arriving at the conclusion that Loquendo's software is one of the best in all aspects, besides having a quite extensive and functional SDK. The integration of visual signals will be carried out with commands delivered and/or recognized by Loquendo. Free software has also been evaluated using open ASR systems, but their quality is lower than that of the commercial one adopted. We are still working on the final medical application.
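As an illustration of how visual gestures and spoken commands might be combined, the following sketch fires a command only when both channels agree within a short time window (a simple late-fusion strategy). The command vocabulary and channel names are hypothetical; the actual Loquendo SDK integration is not shown.

```python
# Hypothetical command vocabulary shared by the gesture and speech channels.
COMMANDS = {"lights_on", "lights_off", "call_caregiver"}

class MultimodalFusion:
    """Fuse gesture and speech events: a command is executed only when
    both channels agree on it within a short time window."""

    def __init__(self, window_s=2.0):
        self.window_s = window_s
        self.pending = {}  # command -> (channel, timestamp) of last sighting

    def on_event(self, channel, command, timestamp):
        """channel is 'gesture' or 'speech'; returns the command if confirmed."""
        if command not in COMMANDS:
            return None
        prev = self.pending.get(command)
        if prev and prev[0] != channel and timestamp - prev[1] <= self.window_s:
            del self.pending[command]
            return command          # both modalities agree: execute
        self.pending[command] = (channel, timestamp)
        return None

fusion = MultimodalFusion()
fusion.on_event("gesture", "lights_on", 0.0)        # first sighting: no action
fusion.on_event("speech", "lights_on", 1.2)         # confirmed: "lights_on"
```

Requiring agreement between modalities trades a little latency for robustness against false positives in either recognizer.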

Subtask 1.2: Interface enlargement by means of the inclusion of Haptics and other Senses

Description: The main goal of this subtask is to design and implement both a user interface and an interaction mechanism especially conceived for people with reduced hand mobility who can carry out some tasks by means of a lever's displacement. We manipulate an articulated arm to allow the user a natural interaction with the objects of his or her environment. We have described and implemented a VBI-guided environment to control an articulated robot arm. Such a device can be used as a home aid for handicapped or old people with limited mobility, for simple tasks in a controlled environment. Experiments have been performed to prove the suitability of these tools using an Erik Robotnik mechanical arm with 6 degrees of freedom plus a gripping extremity, which was specifically acquired to test our environment. As a starting point, we first designed and tested a visual interface which allows the interactive guidance of the robotic arm by directly changing the values of the degrees of freedom. As this approach using forward kinematics is neither easy to use for the intended user nor intuitive, we have added higher-level control primitives that allow the direct control of the robot grip using inverse kinematics. The user can choose one of seven basic operations (up, down, forward, backward, open, close, stop) to guide the grip. The user interface also makes it possible to record several kinds of complex motions such as “take the glass” or “open the drawer”. At this moment, our research is centered on the vision-guided operation of the robotic arm using a webcam. This allows much more complex operations such as object detection and automatic grip. The system is divided into two steps. First, we detect the “goal” object using either user selection (i.e., to take an object that lies inside the field of view of the camera) or automatic selection of new objects in the scene (i.e., to take a glass someone has recently put on the table if we need to drink). The result of this first step is a grip point, which describes the position where the open grip has to be located.
Then, using image analysis, a reference height point and the vanishing points of the table plane obtained during the initial configuration of the system, we are able to convert the (x, y) coordinates of the grip point in the 2D image to a 3D (x, y, h) position in which to locate the grip. At this moment, tests are being performed using small objects, due to the reduced dimensions of the grip, in order to assess the validity of the system.
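The 2D-to-3D conversion above can be sketched as follows, under the simplifying assumption that the calibration step (reference height and vanishing points of the table plane) has been reduced to a single image-to-table-plane homography; the matrix values below are hypothetical, not the real system's calibration.

```python
import numpy as np

def image_to_table(H, px, py):
    """Map an image pixel (px, py) to table-plane coordinates (X, Y)
    using a 3x3 image-to-table homography H obtained at calibration."""
    x, y, w = H @ np.array([px, py, 1.0])
    return x / w, y / w

# Hypothetical calibration result: scales pixels to metres and shifts
# the origin to a table corner.
H = np.array([[0.01, 0.00, -1.0],
              [0.00, 0.01, -0.5],
              [0.00, 0.00,  1.0]])

X, Y = image_to_table(H, 300, 200)   # grip point detected in the image
h = 0.12                             # height (m) from the reference height point
grip_target = (X, Y, h)              # 3D position for the open grip
```

The inverse-kinematics controller would then drive the arm so that the open grip reaches `grip_target` before closing.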

T2: Multibiometrics System

Task 2 presents models for the design and implementation of multibiometric systems. In particular, our interest lies in the specification of how to build a basic biometric system that can later be extended to a multibiometric system. Basically, the different levels of biometric fusion are explored. We consider the following levels: sensor level, "feature" or characteristic level, and rank level. Several final degree projects have already been developed using fingerprint recognition systems, and some systems based on iris recognition have been tested and developed. Unfortunately, the iris software and commercial tools could not be integrated with other biometric signals; integration of both procedures has not been possible due to access problems with the SDK of the LG iris system. So finally we have implemented a fingerprint biometric system together with a facial recognition procedure. Both have been tested exhaustively, and we are now integrating the two biometric inputs. In this task, we are developing fusion criteria centered more on the feature fusion level and the rank level than on the device level. Normalization, characteristic selection and their transformation are studied at the "feature" level, and methods based on PCA, ICA and MDS have been used. In particular, we have developed an adaptive template system for facial biometric authentication. Features are a collection of measurable details, obtained from the biometric trait, that defines the identity of a certain person. This collection of data is known as a template, and it is stored in the database. The quality of the acquired biometrics must be controlled in order to model the identity of the individual in a unique and distinct way. The creation and update of templates is a critical task for the correct use of a biometric application.
We propose the implementation of a model that, using biometric-independent tools, intends to update, select and improve the templates stored in the database, in what we have called “adaptive biometric templates”. It has been tested with a fingerprint biometric database of 60 users. We have obtained an average improvement over traditional templates of 26% for FMR and of 53% for FNMR, results that we consider very successful. Our paradigm, which differs from the classical criteria, is shown in the figure below.
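As a generic illustration of rank-level fusion, the following sketch combines the identity rankings produced by two matchers using a Borda count; this is a standard technique, not necessarily the exact criterion adopted in the project, and the matcher outputs are hypothetical.

```python
def borda_fusion(rankings):
    """Rank-level fusion by Borda count: each matcher ranks the enrolled
    identities (best first); an identity earns (n - rank) points per
    matcher, and the highest total wins."""
    n = len(rankings[0])
    scores = {}
    for ranking in rankings:
        for rank, identity in enumerate(ranking):
            scores[identity] = scores.get(identity, 0) + (n - rank)
    return max(scores, key=scores.get)

# Hypothetical output of two matchers (fingerprint, face), best first.
fingerprint_ranking = ["alice", "bob", "carol"]
face_ranking = ["alice", "carol", "bob"]
winner = borda_fusion([fingerprint_ranking, face_ranking])   # "alice"
```

Rank-level fusion has the practical advantage that it needs no score normalization across heterogeneous matchers, which matches the task's focus away from the device level.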

Figure 1: Adaptive criteria vs. classical criteria

We have validated our proposal using different parameters for each template: a) similarity with the other users (SO, Similarity to Others); b) similarity with the user himself (SS, Self-Similarity). For real testing of the adaptive biometric template system, we have emulated a scenario similar to one that could be found in a small laboratory of a university or research centre, or in an office handling confidential information of some enterprise. The results can be seen in the next figure.
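A minimal sketch of how the SO and SS parameters above could drive an adaptive template update follows; the similarity function, the way the two scores are combined, and the gallery size are assumptions for illustration, not the paper's exact model.

```python
def template_quality(tpl, own_samples, other_samples, similarity):
    """Score a stored template: high self-similarity (SS) to the user's
    own samples, low similarity (SO) to other users' samples."""
    ss = sum(similarity(tpl, s) for s in own_samples) / len(own_samples)
    so = sum(similarity(tpl, s) for s in other_samples) / len(other_samples)
    return ss - so   # illustrative combination; the real weighting may differ

def update_gallery(gallery, new_sample, own_samples, other_samples,
                   similarity, k=5):
    """Adaptive update: add the new sample to the candidate pool and
    keep only the k best-scoring templates."""
    pool = gallery + [new_sample]
    pool.sort(key=lambda t: template_quality(t, own_samples, other_samples,
                                             similarity),
              reverse=True)
    return pool[:k]

# Toy 1-D "templates" with similarity = negative distance.
sim = lambda a, b: -abs(a - b)
gallery = update_gallery([1.05, 4.0], 0.9, [1.0, 1.1], [5.0], sim, k=2)
# The outlier template 4.0 is discarded in favour of the new sample.
```

Over time this keeps the gallery close to the user's recent samples while penalizing templates that also resemble other users, which is what drives the FMR/FNMR improvements reported above.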

Figure 2: Comparative ROC curves (FMR vs. FNMR) for normal and adaptive templates

We plan to finish this module by including equivalent and compatible criteria for faces, if possible, and comparing the curves for adaptive multibiometric templates. Finally, this module will be integrated in the commercial application to validate the access of end users. This is very important because elderly or handicapped users in some cases cannot type passwords, so the camera can validate access to the system in a non-invasive manner and configure the system with the specific parameters the user needs.


T3: Distributed Intelligent Agent

Subtask: Development of a computational model for the semantic description of the context in which the character is immersed.

The objective of this subtask is to have a proper description of the world that surrounds the agent, so that it can be aware of the events that are occurring. Also, a description of its inner world, which could be considered as its psychological aspects, provides the agent with the “knowledge” to react in a coherently affective manner. To achieve this, we proposed the use of ontologies. According to Tom Gruber, in the context of AI, we can describe the ontology of a program by defining a set of representational terms. In such an ontology, definitions associate the names of entities in the universe of discourse (e.g., classes, relations, functions, or other objects) with human-readable text describing what the names mean, and formal axioms that constrain the interpretation and well-formed use of these terms. Formally, an ontology is the statement of a logical theory. Using this concept, we have created classes, entities, and relations that represent the elements that form an event, based on four questions: what is the action of the event; where the event occurs; when it occurs; and who executes and/or receives the action (a person or animal). Figure 3 shows the diagram with the event ontology.

Figure 3: Ontology Structure of ITADA Project
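The four-question event representation can be illustrated with a small sketch; the class, field and category names below are illustrative only, not the project's actual ontology terms.

```python
from dataclasses import dataclass

@dataclass
class Event:
    """An event answers four questions: what, where, when, who."""
    what: str      # the action        ("falls", "receives a call", ...)
    where: str     # the place         ("kitchen", "bedroom", ...)
    when: str      # the time          ("morning", "18:40", ...)
    who: str       # the person or animal executing/receiving the action
    category: str  # emotional category the ontology assigns to the event

# Each category is translated into a set of elicited emotions
# (a toy table, not the project's mapping).
CATEGORY_EMOTIONS = {
    "dangerous": ["fear", "distress"],
    "pleasant":  ["joy"],
}

event = Event("falls", "kitchen", "morning", "Alice", "dangerous")
elicited = CATEGORY_EMOTIONS[event.category]   # ["fear", "distress"]
```

In the full model, every person, place and time would carry its own emotional classification, and the combined categories determine which emotions are triggered for each agent.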

To provide the context with emotional content, each event, person, place, time, and any other object needs to be classified emotionally. This is achieved by organizing each element into categories, which are then translated into a set of emotions. The second ontology we defined is the personality-emotion ontology, which defines the goals, preferences, personality, and emotions of the character, and relates them with emotional characteristics. As in the former ontology, each element belongs to categories which determine the emotions to be triggered. Personality is determined by the Big Five Model, which has the following traits: openness, conscientiousness, extraversion, agreeableness, and neuroticism. As a result of this computational model, the important semantic components in the description of an event in the domotic environment have an associated affective content. Therefore, a set of positive and negative emotions is produced for each agent in the system. The elicited emotions, together with mood and personality traits, are used as input by the affective model. The affective model is based on the PAD space, a 3D affective space where emotions and moods are distributed along the eight octants given by the axes of Pleasure, Arousal, and Dominance. It was proposed by the psychologist Albert Mehrabian, and the reason we chose it is that it allows us to mathematically combine personality, emotions, and mood in the same space. Dynamism in this model is given by the change of intensities of emotions and mood, which are represented as vectors that move in the space at different time instants. Personality is considered a fixed vector that influences the movement of the emotion vectors, and therefore the change of mood and its intensity. In this project, the affective output is represented through facial expressions of emotions associated with a mood.

Subtask: Facial Expressions

One of the main goals of the ITADA project is to have an agent that is visually realistic and empathic, and this objective has been developed in this subtask. Realism is given by the visual representation, which should include details that make the character look like a person, while avoiding the so-called “uncanny valley”. We have worked with a modified version of Xface, a software package developed by the Cognitive and Communication Technologies (TCC) division of the Fondazione Bruno Kessler. This software is a set of open-source tools for the creation of MPEG-4 and keyframe-based 3D talking heads. The head mesh we used was created with a tool named FaceGen. The mesh is MPEG-4 compatible, and using Xface we have been able to manipulate all the parameters according to the intensity of the emotions elicited by the computational model. As a result of the computational model, we can evaluate an event and trigger the corresponding facial expression.
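A minimal sketch of mapping a PAD mood vector to Mehrabian's octant labels follows; using the Euclidean norm as the intensity is one plausible choice for illustration, and the project's actual mood dynamics are not reproduced here.

```python
import math

# Mehrabian's octant labels in the PAD (Pleasure, Arousal, Dominance) space.
OCTANTS = {
    (+1, +1, +1): "Exuberant", (-1, -1, -1): "Bored",
    (+1, +1, -1): "Dependent", (-1, -1, +1): "Disdainful",
    (+1, -1, +1): "Relaxed",   (-1, +1, -1): "Anxious",
    (+1, -1, -1): "Docile",    (-1, +1, +1): "Hostile",
}

def mood(p, a, d):
    """Return (octant label, intensity) for a PAD mood vector; the
    intensity here is simply the vector's Euclidean norm."""
    octant = tuple(1 if v >= 0 else -1 for v in (p, a, d))
    return OCTANTS[octant], math.sqrt(p * p + a * a + d * d)

label, intensity = mood(-0.2, -0.1, -0.1)   # a mildly "Bored" mood
```

Emotion vectors pulling the mood point across an octant boundary is what produces mood changes such as the "disdainful" to "bored" transition described in the example below.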
That is the case in the following example. Our virtual agent, Alice, has a disagreeable and neurotic personality, which puts her in a “disdainful” mood (according to the PAD space). When she experiences the situation “She loves research. She got a rejection from a very important conference”, her mood changes to “bored” with an intensity of 0.26.

Figure 4: Different emotional states

And when the situation “She argues with her best friend and she is really concerned” happens after the former one, the intensity of her “bored” mood increases to 0.77. At the moment we are working on the evaluation of a platform named JADE, in which we can program the behavior of the intelligent agents according to the computational model proposed above. More examples can be found at http://dmi.uib.es/~ugiv/itada/Itada_caras/mayordomo.html


T4: UML Design

Starting from the previous results of the TIN2007-67993 project, the objective of this task is to plan and control the changes, adaptations and additions proposed in other tasks. The capture of the requirements requested by EPOS and Task 1 was done using the use-case technique. Towards the end of the project, a migration toward UML and its methods began. The project has been developed using a methodology based on UML, and all the new tasks are carried out with this method. It has been a very positive experience that has enabled teamwork and that, due to the magnitude of the project, is completely justified. The selection of UML is supported by the fact that it has become the standard for the analysis and design of object-oriented applications.

T5: Modular Integration & Specific Commercial Prototypes

The main objective of Task 5 is the integration of all the previous tasks, creating prototype applications. With all the resources of the previous tasks at our disposal, we are implementing a low-cost system. Our functionalities will also be integrated in an already extensively tested system: this is the case of the Legrand company (Spanish division), for which an IOBL client has been developed to control all domotic devices. The domotic house is actually installed and running, and a medium level of support is given. We are also currently establishing the real requirements of medical applications with the C. Rotger Hospital, in particular the cardiological monitoring of patients at home. This year we must reach a prototype of this application.

Figure 5: Elderly end user working with the ITADA prototype, monitoring heart rate and blood pressure

3 Results

During the last two years, several national and international publications have been presented by the partners of the project in relevant journals and conferences. In the reference section, we present the most relevant contributions, listing only the work published at the moment (many papers are currently under review). The UIB has also organized an international workshop (AMDO 2008, LNCS 5098) to promote the topics of the project; this event is today an international reference conference in the areas of the project, and this year we plan to organize AMDO 2010 as well. In total, ITADA has produced 36 papers, distributed over 8 journal articles and 28 conference proceedings papers or invited talks. We would like to highlight the local, national and international awards received by work directly related to the results of ITADA:


1) CES 2008 Award (Consejo Economico y Social del Govern Balear) for the technological project SINA I (Sistema de Interaccion Natural Avanzada), with a 6,000€ prize and book publication, ISBN 978-84-613-1740-0. Local Government of the Illes Balears.

2) European Award “Access-IT 2009 good practice label”, given by E-isotis.

3) e-Accessibility Award of the Consell de Mallorca for the project SINA II. Consell de Mallorca, 18/12/2009. Local Government of the Illes Balears.

The SINA project is a VBI system developed partially inside ITADA, using computer vision & graphics techniques to interact with the multimodal interface.

During this period, several Ph.D. theses have also been completed by members of the project, in particular by Dra. Cristina Manresa and Dr. Antoni Jaume from the UIB, and the Ph.D. defence of Diana Arellano is planned for July 2010, before the ITADA project finishes. Currently four more theses are in progress, and many student projects have been developed.

In the last year, we prepared an extension proposal of the ITADA project as a European project named AMUSINE (last call of the EU Seventh Framework Programme), with the UIB as coordinator. We also have to remark the strong relation with the projects TIN2007-67896, Prototipos de Interacción Natural (PINes), and TIN2007-63025, TANGIBLE: Humanos Virtuales Realistas e Interacción Natural y Tangible, which has increased the final synergies and the overall results.


4 References [1] A. Jaume-i-Capó, J. Varona, F. J. Perales. Representation of Human Postures for Vision-Based Gesture Recognition in Real-Time . Gesture-Based Human-Computer Interaction and Simulation. Lecture Notes in Computer Science, volume 5085/2009, pp. 102-107. [2] A. Jaume-i-Capó. Human gestures recognition for VR interaction. 2nd Franco-Spanish Meeting. Virtual Reality and Graphical Interaction (VRGI 2009), Rennes, France, 2009. [3] D. Arellano, I. Lera, J. Varona, F.J. Perales. Integration of a semantic and affective model for realistic generation of emotional states in virtual characters. International Conference on Affective Computing & Intelligent Interaction (ACII09), Amsterdam, Países Bajos. 2009. [4] D. Arellano, I. Lera, J. Varona, F.J. Perales. Generating Affective Characters for Assistive Applications. EMOTIONS & MACHINES Workshop, Geneva, Suiza. 2009. ( [5] D. Arellano, F.J. Perales. UGIVIA: ITADA Project. In 2nd Franco-Spanish Meeting. Virtual Reality and Graphical Interaction (VRGI 2009), Rennes, France, 2009. [6] F.J. Perales, S. Garcés. Una aplicación didáctica para la comunicación alternativa por ordenador. In Congreso de Comunicación Aumentativa y Tecnología, Zaragoza, Spain. 2009. [7] F. J. Perales, C. Manresa-Yee, J. Varona. Functional Rehabilitation using advanced HCI for children's with cerebral palsy. Congreso de la Sociedad Española de Neurociencia, SENC09. Tarragona, 16-19 Septiembre, 2009 (poster). [8] J.J. Muntaner, F. Negre, F. J. Perales, J. Varona, C. Manresa-Yee. SINA: acceso natural al ordenador para personas con PCI. IV Jornadas Iberoamericanas de Tecnologías de Apoyo a la Discapacidad "LAS TECNOLOGÍAS DE APOYO EN PARÁLISIS CEREBRAL", Madrid, 26-27 Octubre 2009. [9] J. Rossi, F. J. Perales, J. Varona, M. Roca. COL.diesis: transforming colour into melody and implementing the result in a colour sensor device. IV09 International Conference on Information Visualisation, 2009. ( [10] M. González, A. Mir, D. Ruiz-Aguilera and J. 
Torrens. Image analysis applications of morphological operators based on uninorms. IFSA-EUSFLAT-2009, Lisbon, Portugal, July 20th-24th, 2009. [11] M. González-Hidalgo, A. Mir. Noise Reduction Using Alternate Filters Generated by Fuzzy Mathematical Operators Using Uninorms (PhiMM-U Morphology). EUROFUSE 2009, Pamplona, Spain, 16-18 September, 2009. [12] M. González, A. Mir and D. Ruiz-Aguilera. A robust edge detection algorithm based on a fuzzy mathematical morphology using uninorms (FMM-U Morphology). VIPIMAGE-2009. Porto, Portugal, 14-16 October, 2009. [13] M. González-Hidalgo, A. Mir Torres, J. Torrens Sastre. Noisy Image Edge Detection Using an Uninorm Fuzzy Morphological Gradient. IEEE ISDA 2009, Pisa, Italy, November 30th - December 3rd, 2009. (To be published). [14] P. Ponsa, C. Manresa-Yee, D. Batlle, and J. Varona. Assessment of the use of a human-computer vision interaction framework. HSI09 2nd Human System Interaction , Catania, Italy. 2009. [15] A. Jaume-i-Capó, J. Varona, F.J. Perales. "Real-time recognition of human gestures for 3D interaction". In AMDO 2008, Mallorca (Spain), July 2008, ISSN 0302-9743 (Print) 1611-3349. [16] C. Manresa-Yee, J. Varona, R. Mas and F.J. Perales. "Hand Tracking and Gesture Recognition for Human-Computer Interaction", in World Scientific. [17] D. Arellano, J. Varona, F. J. Perales. "Generation and visualization of emotional states in virtual characters.". In Computer Animation and Virtual Worlds, 2008. [18] F.J. Perales. "Multimodal & Perceptual User Interfaces. Applications in Disable and Elderly People", Seminar in University of West Bohemia (45 min talk + 15 min discussion). [19] G. Fiol-Roig, D. Arellano, F. J. Perales, P. Bassa, M. Zanlongo. "The Intelligent Butler: a virtual agent for disabled and elderly people assistance.". International Symposium on Distributed Computing and Artificial Intelligence 2008 (DCAI 2008), Vol. 50/2009, Salamanca, Spain, pp. 375-384, 2008.

[20] J. Varona, A. Jaume-i-Capó, J. Gonzàlez, F. J. Perales. Toward natural interaction through visual recognition of body gestures in real-time. Interacting with Computers, ISSN 0953-5438, DOI: 10.1016/j.intcom.2008.10.001.
[21] I. Lera, D. Arellano, J. Varona, C. Juiz, R. Puigjaner. Semantic Model for Facial Emotion to improve the human computer interaction in AmI. In 3rd Symposium of Ubiquitous Computing and Ambient Intelligence 2008, vol. 51/2009, Salamanca, Spain, pp. 139-148, 2008.
[22] M. Clapés, M. González Hidalgo, A. Mir Torres, P. A. Palmer Rodríguez. Interactive Constrained Deformation of NURBS Surfaces: N-SCODEF. In Articulated Motion and Deformable Models, Lecture Notes in Computer Science No. 5098, Springer Verlag, ISBN: 978-3-540-70516, ISSN: 0302-9743, Berlin, Germany, pp. 359-369, 2008.
[23] M. González Hidalgo, A. Jaume Capó, A. Mir, G. Nicolau Bestard. Analytical Simulation of B-Spline Surfaces Deformation. In Articulated Motion and Deformable Models, Lecture Notes in Computer Science No. 5098, Springer Verlag, ISBN: 978-3-540-70516, ISSN: 0302-9743, Berlin, Germany, pp. 338-348, 2008.
[24] M. González Hidalgo, A. Mir, G. Nicolau Bestard. Dynamic parametric surface deformation using finite elements based on B-splines. International Journal for Computational Vision and Biomechanics, Vol. 1, No. 2, Serials Publications, New Delhi, India, ISSN: 0973-6778, pp. 151-161, July-December 2008.
[25] M. González-Hidalgo, A. Mir Torres, D. Ruiz-Aguilera, J. Torrens Sastre. Fuzzy Morphology based on Uninorms: Image Edge-detection, Opening and Closing. In Computational Vision and Medical Image Processing, Taylor & Francis Group, London, ISBN: 978-0-415-45777-4, pp. 127-133, 2008.
[26] M. González, A. Mir, D. Ruiz, J. Torrens. Aplicaciones de operadores morfológicos basados en uninormas. Presented at ESTYLF-08, Mieres, Asturias, Spain, 17-19 September 2008.
[27] R. García, F. J. Perales. Adaptive Templates in Biometrical Authentication. In WSCG 2008: 16th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision, Plzen, Czech Republic, February 2008.
[28] A. Jaume-i-Capó, J. Varona, F. J. Perales. Representation of human postures for vision-based gesture recognition in real-time. In The 7th International Workshop on Gesture in Human-Computer Interaction and Simulation 2007, Lisbon, Portugal, May 2007.
[29] C. Manresa-Yee, J. Varona, F. J. Perales. Reconocimiento robusto de gestos en tiempo real para su uso en interfaces basadas en visión. In Colloquium in Interacción 07 (AIPO - CEDI2007).
[30] D. Monllor-Satoca, R. Gómez, M. González-Hidalgo, P. Salvador. The "Direct-Indirect" model: An alternative kinetic approach in heterogeneous photocatalysis based on the degree of interaction of dissolved species with the semiconductor surface. Catalysis Today, available online at www.sciencedirect.com (Elsevier, ISSN: 0920-5861, 2007).
[31] E. Cerezo, I. Hupont, C. Manresa, J. Varona, S. Baldassarri, F. J. Perales, F. J. Seron. Real-Time Facial Expression Recognition for Natural Interaction. In IbPRIA 2007, Part II, LNCS 4478, pp. 40-47, 2007.
[32] E. Cerezo, S. Baldassarri, F. J. Seron. Interactive agents for multimodal emotional user interaction. In Proceedings of Interfaces and Human Computer Interaction 2007, Ed. A. Palma dos Reis, K. Blashki, Y. Xiao, ISBN: 978-972-8924-39-3, pp. 35-42, 2007.
[33] E. Cerezo, I. Hupont, C. Manresa, J. Varona, S. Baldassarri, F. J. Perales, F. J. Seron. Real-Time Facial Expression Recognition for Natural Interaction. In IbPRIA 2007, Part II, LNCS 4478, pp. 40-47, 2007.
[34] G. Montoro, M. García-Herranz, P. Haya, X. Alamán, D. Brande, S. Baldassarri, E. Cerezo, F. J. Serón. Integración de un agente virtual en un entorno de inteligencia ambiental. In UCAmI'07: II Simposio sobre Computación Ubicua e Inteligencia Ambiental, Ed. J. Bravo, X. Alaman, ISBN: 978-84-9732-605-6, pp. 135-141, 2007.
[35] M. González-Hidalgo, A. Mir, G. Nicolau-Bestard. An evolution model of B-spline parametric surface. In The 2007 European Simulation and Modelling Conference (EUROSIS-ETI, ISBN: 978-90-77381-36-6, 2007), pp. 74-80.
[36] M. González-Hidalgo, A. Mir, G. Nicolau. An evolution model of parametric surface deformation using finite elements based on B-splines. In Computational Modelling of Objects Represented in Images (Taylor & Francis Group, ISBN: 978-0-415-43349-5, 2007), pp. 205-210.
