Adaptable Space.
Matthew Hunter.
An investigation into analytic spaces with the capacity to adapt to the changing demands of human behaviour and social interactions.


Architectural Computing Graduation Project by Matthew Hunter



ACKNOWLEDGEMENTS

UNSW Faculty of Built Environment, Tam Nguyen, Client & Supervisor
UNSW Faculty of Built Environment, Stephen Peter, Lecturer


Table of Contents.


ACKNOWLEDGEMENTS

ABSTRACT

HYPOTHESIS

TABLE OF CONTENTS

1.0 INTRODUCTION
2.0 CONCEPTUAL FRAMEWORK
2.1 Background Information
2.2 Precedent Studies
 Building Around the Mind
 OpenFloor
 Tunable Sound Cloud
 Lost in Space
 Human Activity Analysis: A Review
 Enhancing Social Interaction in Elderly Communities
2.3 Theoretical Framework
 Perception of the Senses through Computer Vision
 Adaptable Environments
 Architecture Capable of Enhancing

3.0 PROTOTYPE
3.1 Timeline
3.2 Methodology
3.2.1 Interpreting Sensory Data
 Blob Tracking
 Kinect
3.2.2 Sorting Sensory Data
 Observation Analyses
 Calculating Velocity
 Calculating Permutation
 Calculating Proximity
 Calculating Point of Intersection
 Identifying Social Interactions
3.2.3 Prototype Responds to Enhance Social Interactions
 Prototype 1.0
 Prototype 2.0

4.0 DISCUSSION

5.0 CONCLUSION

REFERENCE LIST

APPENDIX


HYPOTHESIS

Spaces can be dynamically adjusted in real time to redefine and enhance a space for social interactions. Digitally perceiving human activities and social interactions is seemingly possible through the collection and analysis of sensory data. The experiments outlined within this report demonstrate a component of the desired system, which will be examined in further research and investigations.

ABSTRACT

The Adaptable Space project is an investigation into analytic environments with the capacity to adapt to the changing demands of their occupants.

Two mechanical prototypes were developed during the course of this project. These prototypes propose an Architecture capable of sensing and perceiving human movements and social interactions, with the objective of encouraging social dynamism.

The prototypes aim to recognise and anticipate social interactions between two and six occupants. The experiments and research conducted examine studies and methods for collecting and interpreting computer vision sensory data. The skeleton tracking capabilities of the Microsoft Kinect are employed to develop a system capable of analysing components of human movement, deriving data such as the occupants' centre of mass and velocity.

The architecture responds to prompt and stimulate social interactions through adjusting its spatial configuration to manipulate the circulation of its occupants.


Introduction.

The core purpose of Architecture has always been to facilitate human needs. As needs evolve, so do the demands on a space. The project aims to explore the possibilities of a dynamically adaptable architecture that evolves with the individual and social needs of its occupants. It aims to "unfreeze architecture, to make it a fluid, vibrating, changeable backdrop for the varied and constantly changing modes of life" (Zuk 1970).

The purpose of this project is to develop an Architectural system that reduces the environmental impact of a structure by increasing the life cycle of the building. The production of waste, use of energy, greenhouse gas emissions and depletion of natural resources associated with building construction and demolition have significant environmental implications (Australian Bureau of Statistics 2003). According to the Australian Bureau of Statistics, Australian construction and demolition of buildings contribute "eight million tonnes" of waste to landfill per year. This waste can be significantly reduced by increasing the architecture's longevity, expanding its capacity to adapt to evolving demands. In 1973, Norman Foster future-proofed the 'Willis and Faber headquarters' building by anticipating changes in technology. At the time everyone was using typewriters, and Foster designed the building to be "wired for change" by ensuring that it was flexible enough to accommodate the technologies of "a future which was essentially unknown" (Foster 2007). The 'Adaptable Space' project takes this approach to design, developing a flexible spatial configuration capable of completely transforming itself to facilitate adaptability.

The project proposes a mechanised prototype of a dynamic architecture that facilitates adaptation by enhancing and extending the activities of its occupants. The prototype aims to develop an environment capable of perceiving human movements and interactions through the collection and sorting of computer vision sensory data. The focus of the prototype is to examine opportunities for implementing calculated perceptions to develop an adaptable environment capable of enhancing its social dynamics.

The objective of the space is to enhance social dynamism by guiding and encouraging identified and anticipated interactions. The system also seeks to recognise and remove instances of social isolation by prompting isolated occupants to interact. It decomposes components of human movement to extract the parameters of position and velocity. These help the system establish each occupant's direction of movement, acceleration, forward-facing direction and centre of mass.

The intention of the prototypes is to develop a system capable of recognising the social interactions of two to six occupants. The prototypes aim to determine the intensities of these interactions, using time in conjunction with these parameters. The velocities and positions of all tracked occupants are used to project and anticipate potential intersections where occupants may meet and initiate an interaction.

The architecture responds by prompting users to interact and by enhancing interactions, adjusting its spatial configuration to manipulate the way in which people circulate throughout the space. The architecture links isolated occupants by generating passageways to others in isolation, or to those already taking part in an established interaction. The configuration is composed of a single transformable form capable of redefining the space by imitating established architectural mechanisms: walls, ceilings, openings and passageways.
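The velocity and anticipated-intersection calculations described above can be sketched in a few lines. This is an illustrative reconstruction, not the project's actual code: the function names, the constant-velocity assumption and the 1.2 m interaction radius are assumptions made for the example.

```python
import math

def velocity(p_prev, p_curr, dt):
    """Velocity vector from two successive 2D position samples, dt seconds apart."""
    return ((p_curr[0] - p_prev[0]) / dt, (p_curr[1] - p_prev[1]) / dt)

def time_to_closest_approach(p_a, v_a, p_b, v_b):
    """Time at which two occupants moving at constant velocity are closest
    to one another, clamped to t >= 0 (we only project forwards)."""
    # Relative position and velocity of B with respect to A.
    rx, ry = p_b[0] - p_a[0], p_b[1] - p_a[1]
    vx, vy = v_b[0] - v_a[0], v_b[1] - v_a[1]
    vv = vx * vx + vy * vy
    if vv == 0.0:  # moving in parallel: separation never changes
        return 0.0
    t = -(rx * vx + ry * vy) / vv
    return max(t, 0.0)

def anticipated_intersection(p_a, v_a, p_b, v_b, radius=1.2):
    """Return (t, midpoint) if the projected paths bring the two occupants
    within `radius` metres of each other, else None."""
    t = time_to_closest_approach(p_a, v_a, p_b, v_b)
    ax, ay = p_a[0] + v_a[0] * t, p_a[1] + v_a[1] * t
    bx, by = p_b[0] + v_b[0] * t, p_b[1] + v_b[1] * t
    if math.hypot(bx - ax, by - ay) <= radius:
        return t, ((ax + bx) / 2.0, (ay + by) / 2.0)
    return None
```

For two occupants walking towards each other, the returned midpoint suggests where the architecture might open a passageway to encourage the meeting.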

There is great potential for implementing analytic systems capable of perceiving human activities and interactions in many types of architectural application. By embedding perception within Architectural systems, architects can design for adaptability by sensing change rather than anticipating it. Perception of activities and interactions could be utilised to optimise thermal, visual, lighting and acoustic conditions, and to promote sharing and collaboration in space. The role of these intelligent systems will be paramount in anticipating change in the architectural design of the future.


Conceptual Framework

Figure 1.0 - Prototype 1.0 Rack & Pinion Gear


Background Information

This section presents research conducted to conceptualise the ideas and direction of this project. The framework examines a series of precedent studies closely aligned with this research, ranging from literature to art installations. These studies have informed the project on both technological and conceptual levels.

Precedent 01: "Building Around the Mind" provides an insight into how systems of architecture and the physical environment can influence the operation of the human mind (Anthes 2009). The article examines a series of observational analyses that present the latest findings in brain research. This study suggests ways in which architectural systems can be implemented to enhance social interactions.

Precedent 02: "OpenFloor" is an interactive art installation that examines the ways in which humans interact with their physical environment (Cazan 2011). The installation tracks human movement using blob tracking. This technique influenced the direction of the prototype experiments outlined within the methodology.

Precedent 03: "Tunable Sound Cloud" is a research project that focuses on performance-based architecture with the capacity to tune itself to enhance and manipulate acoustic performance (Mani 2009). The project presents some key conceptual ideas relevant to the 'Adaptable Space' objective of developing an architectural system with the ability to perceive and adapt.

Precedent 04: "Lost in Space" is a research project framed as an investigation into how spatial situations can be manipulated to affect the behaviour and movement of humans occupying the space. The project experiments with 'smart materials' and their ability to change in physical form and structure. The smart materials used and discussed in their experiments range from latex rubber to electro-active polymer. It examines the possibilities for implementing these 'smart materials' within dynamic spaces (Gassmann & Muxel 2011). This research informed experiments conducted in building the prototype outlined in the methodology.

Precedent 05: "Human Activity Analysis: A Review" is a literature review that presents theories and methods for digitally perceiving human social interactions and activities using sensory technologies (Aggarwal & Ryoo 2011).

Precedent 06: The paper "Enhancing Social Interaction in Elderly Communities" is based on a research project that examines a perceptive architecture targeted at maintaining the physical, mental and emotional wellbeing of elderly people. The project's approach to forming a perception of the space's social dynamics, to encourage and enhance social dynamism, has significantly informed the conceptual framework and methodology of the 'Adaptable Space' project.

These studies provide a structure supporting the underpinning theories developed in the following Theoretical Framework. This platform helps respond to the research question: can the human senses be perceived digitally through computer vision to create adaptable environments capable of enhancing social interactions?

Figure 1.1 - Prototype 1.0 Rack & Pinion Gear


Description:
'Building Around the Mind', by Emily Anthes, was published in the April/May 2009 edition of Scientific American. Emily Anthes is a freelance science and health writer whose writing has been published in a vast assortment of distinguished publications. She has a master's degree in science writing from MIT and a bachelor's degree in the history of science and medicine from Yale, where she also studied creative writing (Anthes 2011).

Problem/Concept/Issue:
The article examines the possibilities of brain research in helping us better understand how the human brain responds to its physical environment, suggesting ways to "craft spaces that relax, inspire, awaken, comfort and heal" (Anthes 2009).

The article presents a series of experiments that draw connections between the physical environment and the way in which the human brain functions. The experiments observe and compare the responses of people and their ability to process information under different environmental circumstances.

In 2007, Professor Joan Meyers-Levy from the University of Minnesota reported that "ceiling height affects the way you process information". She conducted an experiment in which she randomly assigned 100 people to rooms with either an eight- or a ten-foot ceiling, where they were subjected to a series of tests. Participants were given a list of 10 sports and asked to sort them into different categories. The results suggested that people in the rooms with higher ceilings experienced a more abstract thought process: they responded with much more abstract categories, such as "challenging" sports or "sports they would like to play", while those subjected to the lower ceilings responded with "more concrete groupings, such as the number of participants on a team" (Anthes 2009).

In 2000, environmental psychologist Nancy Wells, from Cornell University, conducted studies suggesting that views of a natural setting help improve mental focus. Wells and her colleagues followed 7- to 12-year-old children before and after a family move, and discovered a link between the children exposed to views of natural settings and gains on a standard test of attention (Anthes 2009). The findings suggest the following:
- Lighter, brighter spaces with full-spectrum lighting increase alertness and help guard against depression.
- Access to views of a natural setting helps improve mental focus.
- Rooms intended mainly for relaxation should feature darker colours, dimmer lighting, and fewer sharp edges on furniture and bookshelves (these activate the part of the brain that alerts us to danger).
- Lower ceilings improve performance in detail-oriented tasks, whereas high ceilings encourage abstract creative thought.

Relationship and Benefit:
The findings of the experiments and research presented in this paper demonstrate "how can we utilize the rigorous methods of neuroscience and a deeper understanding of the brain to inform how we design" (Edelstein, cited in Anthes 2009, p. 55). This paper influenced the ideas and direction of the project by demonstrating ways in which Architecture can manipulate our thoughts and social interactions. This provides a platform for developing an architectural system capable of perceiving and adjusting itself to manipulate and enhance social interactions.

Building Around the Mind

Precedent Study 01


Description:
The OpenFloor project is an interactive floor projection installation that tracks human movement using blob tracking. The floor projection interacts with its occupants by projecting animated dynamic elements that respond to their position. OpenFloor was created by Vlad Cazan, who at the time was a fourth-year undergraduate at Ryerson University (Toronto, Canada) studying Radio & Television Arts; he is now a research assistant at Ryerson University. The installation took place in April 2010, and he has since released the packaged source files in an open source repository (Cazan 2011).

Problem/Concept/Issue:
The installation is framed as a dynamic floor projection. The lack of documentation and available material suggests that the installation is conceptually unclear. The project seems to lack some substance, with the only stated objective being to understand "the ways people interact with the outside world, when they are not expecting it."

Vlad has successfully created an advanced system capable of measuring and interacting with human movements by integrating the advanced blob tracking algorithms of the openCV (Open Source Computer Vision) libraries into his own application.
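Blob tracking of this kind rests on labelling connected "blobs" of foreground pixels and following their centroids frame to frame. As a dependency-free illustration of that underlying idea (OpenFloor itself uses the openCV libraries), the following sketch labels 4-connected blobs in a binary occupancy grid, a stand-in for a thresholded camera frame; the grid format and function name are assumptions made for the example.

```python
def find_blobs(grid):
    """Label 4-connected blobs of 1s in a binary grid and return the
    centroid (row, col) and pixel count of each blob, largest first."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and not seen[r][c]:
                # Flood-fill one blob with an explicit stack.
                stack, pixels = [(r, c)], []
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and grid[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                cy = sum(p[0] for p in pixels) / len(pixels)
                cx = sum(p[1] for p in pixels) / len(pixels)
                blobs.append({"centroid": (cy, cx), "size": len(pixels)})
    return sorted(blobs, key=lambda b: -b["size"])
```

Each blob's centroid stands in for an occupant's position; the fragility of this approach under uncontrolled lighting is exactly the limitation discussed below.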

Relationship and Benefit:
An abundance of interactive projection installations have emerged in recent years. The significance of this particular project lies in the fact that Vlad has released it as an open source project, giving novice programmers such as myself access to the source code and technological methodology.

This methodology has provided an insight into the limitations of blob tracking algorithms for tracking people within a space. They are heavily reliant on controlled lighting conditions, which are not guaranteed in the exhibition space.

This persuaded the 'Adaptable Space' methodology to steer away from implementing blob tracking, inclining the project towards employing the skeleton tracking capabilities of the Microsoft Kinect.

OpenFloor

Precedent Study 02


Description:
The 'Tunable Sound Cloud' is a research project and installation in the form of a responsive ceiling element that adjusts its form to enhance the acoustic performance of the space. The structure is made of an array of triangulated hinged modules, with each point on the array controlled by a pulley system and servos. Controlled through Grasshopper and Rhino, the Tunable Sound Cloud is actuated with an Arduino micro-controller and servo motors. It is an ongoing collaborative research project led by Toronto-based designer and Architect Mani Mani, whose research and work has been published, reviewed and exhibited internationally. This particular project was exhibited in September 2009.

Problem/Concept/Issue:
The 'Tunable Sound Cloud' aims to examine the connection between music of the past and the architecture that facilitated its performance at the time. It focuses on responsive systems in Architecture that can be adjusted and optimised to meet the sound requirements of a specific musical performance. The project demonstrates a sophisticated and innovative approach to building a dynamic architectural system (Mani 2009).

Relationship and Benefit:
This study is technologically relevant to my research, as it presents an alternative approach to developing a dynamic ceiling structure. The system relies heavily on gravity to push points of the structure in a downwards direction, which illustrates the limits of the dynamics achievable from such a system. The aim for my project is to develop a much more dynamic structure, which persuaded my project to steer away from such a system.

The project presents some key conceptual ideas relevant to the 'Adaptable Space' objective of developing an architectural system with the ability to perceive and adapt.

Precedent Study 03

Tunable Sound Cloud


Description:
"Lost in Space" is a research study and series of experiments that attempts to find new possibilities for dynamic spaces with the ability to change their physical form and structure. The project is still a work in progress and is currently in the very early stages of its research, which will eventually lead to uncovering more sophisticated systems of dynamically changing architecture. It was exhibited in Germany in 2010.

The project was conducted by Florian Gassmann and Andreas Muxel, both proven academics teaching as visiting professors at various universities across Germany. Florian is a qualified architect and works at the Institute of Design at the Faculty of Architecture at the University of Applied Sciences in Cologne, Germany. His research focus deals with 'the basics of physical space and its influence on human behaviour'. Andreas is an interaction and interface designer for the MARS-Exploratory Media Lab at the Fraunhofer Institute for Media Communication. His work is concerned with the mixture of digital code and physical material, and the man-machine interface.

Problem/Concept/Issue:
The project is an investigation into how spatial situations can be manipulated to affect the behaviour and movement of humans occupying the space. It looks particularly at the parameters within an environment that stimulate human behaviours, and the ways in which people respond to and interact with these (Gassmann & Muxel 2011).

The research experiments with smart materials, and the parameters explored seem to be more focused on materiality than on spatial situations. The smart materials experimented with and discussed range from latex rubber to electro-active polymer. The project is still very much unresolved and has a long way to go before it achieves the outlined scope of works.

Relationship and Benefit:
The scope of this project is very much in line with the research I am conducting: investigating systems of Architecture with the ability to analyse and facilitate the changing needs of their occupants. This project provides an insight into theories of how spatial configurations can be manipulated to affect the behaviour and movement of occupants.

The implemented workflow for operating the mechanical prototype is the same as the one proposed by the 'Adaptable Space' project, which proposes a physical prototype controlled by a parametric Grasshopper model and driven by stepper motors. They have provided a detailed, documented methodology for this workflow, which has significantly influenced the methodology of the 'Adaptable Space' project.

Precedent Study 04

Lost in Space


Description:
This precedent study is based on an essay that examines various research papers on human activity recognition. The study was conducted by academics J. Aggarwal and M. Ryoo, of The University of Texas at Austin and the 'Electronics and Telecommunications Research Institute'. The paper was published in the Volume 43, Issue 3 (April 2011) edition of the renowned journal 'ACM Computing Surveys (CSUR)'.

Problem/Concept/Issue:
The objective of this paper is to provide a complete overview of state-of-the-art human activity recognition techniques for computer vision. It discusses various types of approaches designed for the recognition of different levels of activities.

The paper describes the hierarchical approach the human mind uses to measure human activity: analysing gestures to understand actions, then using actions to perceive interactions and group activities. These sub-activities can be processed and interpreted through the same hierarchical approach of modelling a high-level activity as a string of atomic-level sub-activities, using techniques of computer vision (Aggarwal & Ryoo 2011).

This is very significant to my project, as it provides a useful insight into how human position tracking and body posture analysis can be used to reconstruct the interactions and activities occurring in a space.

Relationship and Benefit:
The ideas presented in the paper support developing theories and methods for digitally perceiving human social interactions and activities through sensory technologies. The objective of my research is to create a system capable of constructing a "high-level" human activity, such as a social interaction. The paper suggests that achieving this sophisticated level of construction requires a string of atomic-level sub-activities. These activities are categorised within the paper into "four different levels: gestures, actions, interactions, and group activities". Body position and gesture help describe actions or activities, which allow us to understand interactions involving two or more persons and/or objects, which in turn lead into group activities. This hierarchical logic can be directly applied to measuring human activities within the 'Adaptable Space' project.

Precedent Study 05

Human Activity Analysis: A Review


Description:
The paper 'Enhancing Social Interaction in Elderly Communities' is based on a research project that is currently in its early stages of development. The paper was written as a collaboration by academics Joshua J. Estelle (Computer Science and Engineering), Ned L. Kirsch (Dept. of Physical Medicine and Rehabilitation) and Martha E. Pollack (Computer Science and Engineering) of the University of Michigan, USA. It was written in April 2006.

Problem/Concept/Issue:
The paper examines a perceptive architecture targeted at maintaining the physical, mental and emotional wellbeing of elderly people. Social participation and relationships are identified as important contributing factors in achieving this. The objective of the project is to reduce social isolation by enhancing the social interactions of elderly people.

The project presents an innovative approach to maintaining social participation by using sensory data to identify occurring cases of social isolation. It uses wireless sensor networks to track the location and co-location of elderly residents in an Assisted Living Facility, and uses this data to construct a model of the facility's social network.

This model of the social network allows the system to perceive occurring social isolation and prompt 'users' to participate in activities and interact with other occupants.

Relationship and Benefit:
This project embodies the relevant idea of using sensory data on occupants' movements to form a perception of the space's social dynamics, and to encourage and enhance social dynamism.

The system identifies instances of social isolation and provides these occupants with suggestive prompts to encourage and enhance social interactions. This approach to enhancing social dynamics, by recognising and removing social isolation, has considerably informed the methodology.

The described method for gaining a perception of occurring social dynamics, by constructing a model of the space's social network, has also significantly informed the conceptual framework and methodology of the 'Adaptable Space' project.

Precedent Study 06

Enhancing Social Interaction in Elderly Communities


Theoretical Framework.

CAN THE HUMAN SENSES BE PERCEIVED THROUGH COMPUTER VISION TO CREATE ADAPTABLE ENVIRONMENTS CAPABLE OF ENHANCING SOCIAL INTERACTIONS?

PERCEPTION OF THE SENSES THROUGH COMPUTER VISION

Human beings have a multitude of senses. Although we are consciously unaware of it, the human mind is constantly organising and interpreting sound, speech, touch, smell, taste and sight sensory information to construct a perception of our surroundings. The primary sensory device humans employ for measuring human activity and interactions is sight (Marr 1982). Through vision we are able to interpret social interactions and activities by reading the body language and actions of individuals. Using a hierarchical approach, the mind is able to model a high-level activity as a string of atomic-level sub-activities. This hierarchy of sub-activities is discussed in the paper 'Human Activity Analysis: A Review' (Aggarwal & Ryoo 2011):

- Gestures/Body Position
Gestures are elementary movements of a person's body part, and are the atomic components describing the meaningful motion of a person. 'Stretching an arm' and 'raising a leg' are good examples of gestures (Aggarwal & Ryoo 2011).

- Actions
Actions are single-person activities that may be composed of multiple gestures organised temporally, such as 'walking', 'waving', and 'punching' (Aggarwal & Ryoo 2011).

- Interactions
Interactions are human activities that involve two or more persons and/or objects. For example, 'two persons fighting' is an interaction between two humans, and 'a person stealing a suitcase from another' is a human-object interaction involving two humans and one object (Aggarwal & Ryoo 2011).

- Group Activities
Finally, group activities are the activities performed by conceptual groups composed of multiple persons and/or objects. 'A group of persons marching', 'a group having a meeting', and 'two groups fighting' are typical examples (Aggarwal & Ryoo 2011).

These sub-activities can be processed and interpreted through this same hierarchical approach using techniques of computer vision (Aggarwal & Ryoo 2011). Blob tracking algorithms, and more recent forms of gesture recognition technology such as the Microsoft Kinect, allow components of human movement and the interactions amongst subjects to be analysed and decomposed. By linking this decomposed data to real-world studies and observational analyses of human activities, perceptions of the occurring activities and interactions can be formed.



These perceptions provide an insight into the nature of the occurring individual activities and social interactions.
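The gesture-to-action-to-interaction hierarchy can be caricatured as a lookup from gesture sequences to actions, and from co-occurring actions to interactions. This is a deliberately crude sketch of the modelling idea, not a working recogniser: the gesture vocabulary, the 2 m proximity rule and the function names are assumptions made for illustration.

```python
# Illustrative gesture-sequence vocabulary; real systems learn this from data.
ACTIONS = {
    ("step", "step", "step"): "walking",
    ("raise_arm", "swing_arm"): "waving",
}

def recognise_action(gestures):
    """Map a sequence of atomic gestures to a named single-person action."""
    return ACTIONS.get(tuple(gestures), "unknown")

def recognise_interaction(action_a, action_b, distance):
    """Very crude interaction rule: two people performing recognised
    actions within 2 metres of each other are treated as interacting."""
    if "unknown" in (action_a, action_b):
        return None
    if distance <= 2.0:
        return f"{action_a}+{action_b} interaction"
    return None
```

The point of the sketch is the layering: interactions are never detected directly, only composed from already-recognised lower-level units, mirroring the paper's hierarchy.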

ADAPTABLE ENVIRONMENTS

Adaptability in Architecture generally refers to the ability of a space to be flexible enough to accommodate the changing demands of its occupants (Fox & Kemp 2009, p. 96). This traditionally refers to adaptable systems that are performance based and focused on optimising the requirements of a building, for example an adjustable louver system to encourage ventilation.

Another example of this is Building Automation Systems (BAS). BAS are computerised intelligent control systems designed to control everything from the "lighting, climate, security and entertainment" of a building (Fox & Kemp 2009, p. 98). These systems are typically motivated by convenience and energy-use optimisation. Although these types of systems have the ability to adapt to changing demands, they generally rely on a schedule (Daintree Networks 2010) or on occupants manually manipulating the system controls. They are not interactive in the sense of being able to sense and form a perception shaped by the "learning, memory and expectation" (Gregory 1987, pp. 598-601) of their occupants and environment (Fox & Kemp 2009, p. 96). They seek to detect and react, but do not perceive.

Recent forms of adaptable environments are emerging that are capable of forming perceptions of occupant presence and environmental conditions. The 'Nitrogen Logic Automation Controller' project is a home automation and lighting control system using the Microsoft Kinect: "a power-saving automation controller that can run standalone in simple automation systems" (Nitrogen 2011). Unlike motion or occupation sensors, the system uses the skeleton tracking capabilities of the Microsoft Kinect, which is able to detect presence rather than motion. The project proposes a system capable of using gesture to perceive the activities and interactions of its occupants (Nitrogen 2011). This application demonstrates an environment facilitating adaptability by constructing a perception to enhance and extend the activities of its occupants.

ARCHITECTURE CAPABLE OF ENHANCING

The Oxford Dictionary defines the notion of enhancing as an act of "intensifying, increasing, or further improving the quality, value, or extent of" a state of being (Oxford Dictionary Online 2011). Recent forms of perceptive architecture are emerging to address the need to enhance and extend the activities of their occupants. The primary targets for enhancing and extending activities in the past have been 'the military, the elderly, and the handicapped' (Fox & Kemp 2009, p. 122).

Perceptive architectures targeted at reducing the social isolation of elderly people through the enhancement of social interactions are discussed in the paper 'Enhancing Social Interaction in Elderly Communities' (Estelle, Kirsch & Pollack 2006). The paper explores technologies aimed at measuring the social networks of aged care facilities to identify occurring instances of social isolation. The described system uses wireless sensor networks to track the location and co-location of elderly residents. It uses this sensory data to construct a model of the social network, which allows the system to perceive occurring social isolation and prompt 'users' to participate in activities and interact with other occupants. This system proposes an architecture with the ability to sense and perceive information, allowing it to adapt by prompting and enhancing social interactions. This enhancement is achieved by manipulating the behaviour of occupants through an architectural system that prompts users to respond in certain ways (Estelle, Kirsch & Pollack 2006).
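The co-location approach can be sketched as building an undirected graph from shared-time records and flagging occupants with no ties. This is an illustration of the idea only: the record format, the 10-minute tie threshold and the function names are assumptions made for the example, not the paper's actual sensor model.

```python
from collections import defaultdict

def build_social_network(colocation_events, min_minutes=10):
    """Build an undirected social-network graph from (person_a, person_b,
    minutes_together) co-location records, keeping only ties whose total
    shared time meets a minimum threshold."""
    minutes = defaultdict(float)
    people = set()
    for a, b, m in colocation_events:
        people.update((a, b))
        minutes[frozenset((a, b))] += m
    graph = defaultdict(set)
    for pair, total in minutes.items():
        if total >= min_minutes:
            a, b = tuple(pair)
            graph[a].add(b)
            graph[b].add(a)
    return people, graph

def socially_isolated(people, graph):
    """Occupants with no ties in the network are the candidates for
    prompting towards activities and interaction."""
    return sorted(p for p in people if not graph[p])
```

In the 'Adaptable Space' context, the isolated list would drive the spatial response (generating passageways towards others), rather than the suggestive prompts used in the elderly-communities system.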

Manipulative environments are discussed in the article “Building Around the Mind” (Anthes 2009), published in the April/May 2009 edition of Scientific American. The article explores the possibilities of brain research in helping us better understand how the human mind responds to its physical environment. It examines a series of observation analyses and experiments that draw connections between the physical environment and the way in which the human brain functions. The experiments observe and compare the responses of people and their ability to process information under different environmental circumstances. The studies provide an insight into how architecture can influence the way in which the human mind functions and enhance particular activities and social interactions.

The studies examined in this article suggest the following:
- Spaces intended for maintaining focus should feature lighter colours, brighter lighting and sharp edges. These activate the part of the brain that alerts us to danger (Anthes 2009).
- High ceilings encourage abstract creative thought (Anthes 2009).
- Lower ceilings improve performance in detail-oriented tasks (Anthes 2009).
- Spaces intended to calm and relax their occupants should feature darker colours, dimmer lighting and fewer sharp edges on furniture (Anthes 2009).


Prototype


Timeline

Week 01: Commence research; conduct precedent studies; Arduino & Grasshopper experiments; interim prototype design.
Week 02: Finalise proposals; design and build progress blog.
Week 03: Continue research; conduct precedent studies; Arduino, Firefly, Grasshopper experiments; research; refined project proposal.
Week 04: Continue research; conduct precedent studies; conduct observation analysis; refined project proposal; Arduino, Firefly, Grasshopper & stepper motor experiments; research on human behaviour and interactions.
Week 05: Continue research; conduct precedent studies; conduct observation analysis; observation analysis / research applied to matrix; construction of prototype; *order all necessary parts for final model.
Week 06: Continue research; conduct precedent studies; prototype fabrication – build and present interim model in seminars; human behaviour and interaction matrix complete; *presentation.
Week 07: Prototype fabrication – develop interim model; documentation for report; commence report – abstract, hypothesis, introduction and methodology.
Week 08: Prototype fabrication – develop interim model; documentation; report – conceptual framework and methodology.
Week 09: Prototype – prepare for fabrication; documentation for report; report – methodology; start compiling showreel.
Week 10: *1st report draft due; prototype fabrication – finalise laser cutting; documentation for report; report – methodology; refine showreel; gather all catalogue material.
Week 11: Prototype fabrication – finalise electronics; documentation for report; report – methodology; refine catalogue material; prepare showreel.
Week 12: Prototype fabrication – build model; gather catalogue materials; put together portfolio; refine showreel; report – methodology.
Week 13: Prototype complete – hardware; refine showreel; report – conclusion / discussion; website catalogue materials due.
Week 14: Report sent off to print; *printed catalogue materials due; film prototype for showreel.
Week 15: *Report due; *showreel due; *portfolio due; prototype complete – programmed matrix.
Week 16: Prepare for exhibition.
Week 17: Exhibition.

Methodology.

The following experiments propose to develop an architecture capable of sensing and perceiving human movement, with the objective of enhancing the social dynamics of its space.

The prototype aims to recognise and anticipate social interactions between two and six occupants. The system analyses components of human movement, utilising the centre of mass and velocity of occupants to project and anticipate potential social interactions. This is determined by calculating the paths of intersection where occupants are likely to meet.

Social interactions are determined using the velocity, position and time parameters, which allow the system to establish the proximities and the angles in which occupants are facing. This allows the system to reconstruct and test whether occupants are facing one another and are within proximity of one another. This is measured against previously identified interactions and time, which allows the system to construct a model of the space's social network. The system is then able to recognise the intensities of interactions and instances of social isolation occurring amongst occupants.

The architecture responds to nurture and stimulate identified interactions, and to prompt occupants experiencing social isolation to interact with others. This is achieved by manipulating the circulation amongst occupants. The structure's form morphs to mimic established architectural elements, linking occupants by creating openings and passageways.

The architecture attempts to recognise links between interacting occupants. It responds to enhance the social dynamics by encouraging interaction amongst those who have participated in interactions the least.

[Diagram: Input, Process, Output pipeline: Microsoft Kinect skeleton tracking → OSCeleton via OpenNI → Firefly OSC listener component → Grasshopper / Rhinoceros (data sorted and constructed) → Firefly Serial Port Write → Arduino micro-controller → EasyDriver board → 12V stepper motors]

1. Interpreting Sensory Data

To develop a system capable of understanding components of human movement and social interactions, a method of collecting and analysing appropriate sensory data must first be established.

Based on the research conducted in ‘Perception of the Senses through Computer Vision’, it is apparent that the reconstruction of a high-level activity, such as a social interaction, requires a string of atomic-level sub-activities. As stated previously, these activities are categorised into “four different levels: gestures, actions, interactions, and group activities” (Aggarwal & Ryoo). Body position and gesture help describe actions or activities, which allows us to understand interactions involving two or more persons and/or objects.

The focus of the prototype is to utilise components of body movement to construct a model of the analysed space's social network. The system should be capable of extracting data for the centre of mass and velocity of occupants circulating throughout a space. This data would enable the system to further extract parameters such as acceleration, direction of travel and forward facing direction. The following experiments examine methods for collecting and analysing this data through computer vision.

Experiment: Blob Tracking

Computer vision technologies can be applied to the hierarchical approach to sense and perceive these sub-activities. The most established method of digitally perceiving vision is through blob detection, which is achieved through the detection of changes in differential regions of an image. Blob detection is capable of measuring approximate human movements within a space, but is limited when it comes to measuring gesture.

Figure 1.3 - Microsoft Kinect


Through utilising open source software, a number of blob tracking programs were experimented with. The main two were OpenTSPS (Toolkit for Sensing People in Spaces) and OpenFloor (which tracks humans on the floor). Both integrate the OpenCV (Open Source Computer Vision) library, which uses a ‘blob’ tracking algorithm to sense people moving around a space. Both were fairly similar with regard to the reliability of the collected data, but OpenTSPS was more user-friendly as it allows the user to manually adjust thresholds to suit the applied space.

OpenTSPS communicates data such as the persistent id, age, centre of mass, contours (the shape of the blob) and velocity through an OSC/TUIO server. I was able to access this data inside Grasshopper (a parametric modelling plugin for Rhinoceros) via gHowl (a set of components which extend Grasshopper's ability to communicate and exchange information with other applications). This method was fairly reliable for measuring human movements within a relatively simple scene, but became a problem when people clustered together and when new objects were introduced to the scene.

I was able to communicate the persistent id, age and centre of mass to Grasshopper via the UDP Receive component in gHowl and visualise the movements of people within Rhinoceros. The potential of utilising the parameters of velocity, age and contours to create a much more sophisticated analytic system is promising.

Experiment: Kinect

Microsoft released the Kinect in late November 2010. The Kinect technology enables advanced gesture recognition, facial recognition and voice recognition.

I was immediately intrigued by the device and its potential for measuring activities within a space.

Prior to the release of the Microsoft Kinect SDK, OpenNI was the primary framework for digitally perceiving data from the Kinect. OpenNI provides a platform for voice and voice-command recognition, hand gestures, and body motion tracking. This framework provides very thorough data streams for tracking human movements.

Following the release of the Kinect, numerous open source applications tailored to access the OpenNI data streams emerged, the most prominent being OSCeleton. OSCeleton takes Kinect skeleton data from the OpenNI framework and outputs the coordinates of the skeleton's joints via OSC messages. I was able to access this data using the UDP Receive component from gHowl in Grasshopper.

Although the data streams from the Kinect are much more thorough and reliable than traditional blob tracking algorithms, the standard indirect skeleton tracking of the Kinect (Microsoft SDK) only supports the tracking of up to six people, which is not ideal for the scenario of the exhibition.


2. Sorting Sensory Data

The gathered sensory data containing the position of occupants is fed into a matrix, which sorts and constructs the incoming information. Using this data, the matrix is able to extract parameters such as velocity and proximity, which give an indication of the occupants' forward facing angles, acceleration and direction of travel. This allows the system to draw relationships and links to the nature of the interactions occurring amongst occupants. The objective is to use this information to manipulate the space to encourage and enhance social dynamism.

The following observation analyses aim to decompose the body movements of interacting subjects, with the objective of linking parameters of movement, including positions, forward facing angles, acceleration, direction of travel and time, to occurring social interactions.

The scope of this project aims to develop a system capable of:
- Recognising and determining the intensities of occurring social interactions
- Projecting and anticipating potential social interactions
- Recognising instances of social isolation occurring amongst occupants

The system then constructs a model of the space's social network through the recording of identified interactions to a database. The database records information such as the number of identified interactions and information about these interactions, including the amount of time over which they occurred. The database allows the system to form a perception of the space's social interactions that is shaped by “learning, memory and expectation” (Gregory 1987, pp. 598–601).
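A minimal sketch of such a database follows; the class and method names (`SocialNetworkModel`, `record`, `intensity`, `most_isolated`) are illustrative assumptions, not part of the original system:

```python
from collections import defaultdict

class SocialNetworkModel:
    """Minimal in-memory model of the interaction database described above."""

    def __init__(self):
        self.interactions = []            # list of (pair, duration in seconds)
        self.totals = defaultdict(float)  # occupant -> accumulated interaction time

    def record(self, pair, duration):
        """Record an identified interaction and credit both participants."""
        self.interactions.append((pair, duration))
        for occupant in pair:
            self.totals[occupant] += duration

    def intensity(self, pair):
        """Total time a given pair has spent interacting."""
        return sum(d for p, d in self.interactions if p == pair)

    def most_isolated(self, occupants):
        """Occupant with the least accumulated interaction time."""
        return min(occupants, key=lambda o: self.totals[o])
```

Recording each identified interaction against both participants lets the system rank occupants by accumulated interaction time, which is one simple way to flag instances of social isolation.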

Experiment: Observation Analyses

All observation analyses were conducted on the grounds of the University of New South Wales for ethical reasons. These analyses were conducted on both participating and non-participating subjects. Precautions were taken to ensure that the analyses were conducted in a non-biased manner and that the privacy of participants was respected at all times.

The focus of the following analyses is to decompose the human movements and body language which influence the perception of the human mind when determining the occurrence of a social interaction. The results of the observation analyses suggest that there are a number of contributing factors which determine an occurring social interaction, the primary being the proximity and forward facing direction of subjects.

Observation 01
Type: Non-participating subjects
Location: Lawn opposite Red Centre, University of New South Wales
Time of Day: 13:05
Environment: Outdoors, warm sunny day, lots of passing pedestrian traffic
Number of Occupants: 3 in total
Occupant descriptions:
[i] Male, Asian, late teens to early twenties
[ii] Male, Caucasian, late teens to early twenties
[iii] Female, Asian, late teens to early twenties
Observation:
Subjects [i] and [ii] sitting on the ledge of the lawn engaging in conversation. Subject [iii], walking along the university mall in the direction of Anzac Parade, approaches subjects [i] and [ii]. Subject [iii] remains standing whilst conversation continues for 30 seconds. Conversation ends and subject [ii] stands. Subjects [ii] and [iii] depart. Subject [i] remains sitting, and [ii] and [iii] continue to walk along University Mall towards Anzac Parade.
Observation assumptions:
Scenario 01: Subject [ii] was meeting [iii] for a pre-arranged engagement.
Scenario 02: Subject [iii] is friends with subject [ii] and encountered [ii] by coincidence. They could have made plans or just have been travelling in the same direction.

Observation 02
Type: Non-participating subjects
Location: Physics lawn, University of New South Wales
Time of Day: 14:30
Environment: Outdoors, clear sunny day, minimal pedestrian traffic

Figure 1.4 - Extracted parameters of human movement diagram


Number of Occupants: 4 in total
Occupant descriptions:
[i] Male, Caucasian, early twenties
[ii] Male, Caucasian, early twenties
[iii] Female, Caucasian, early twenties
[iv] Female, Caucasian, early twenties
Observation:
All subjects sitting on the lawn beside a tree next to Science Rd, surrounded by bags and engaged in conversation. Subject [ii] is eating a sandwich. Subjects [iii] and [iv] are laughing and seem very happy. Subject [i] is very quiet and timid, whilst [ii] seems to be talking a lot and is the centre of attention. Conversation continues for 16 minutes. Subjects rise to their feet. Subject [iv] brushes grass off her pants. Subjects walk towards University Mall, maintaining walking distance until they reach University Mall, where they all depart in separate ways.
Observation assumptions:
Subjects were engaged in friendly conversation, sitting on the lawn eating lunch. It was not clear whether any of the subjects were engaged in relationships; their general body language and their departing in separate ways suggest that they were not.

Observation 03
Type: Non-participating subjects
Location: Library Level 8, University of New South Wales
Time of Day: 11:30
Environment: Indoors, congested, surrounded by lots of people
Number of Occupants: 8 in total
Occupant descriptions:
[i] Male, Middle Eastern, late teens to early twenties
[ii] Male, Middle Eastern, late teens to early twenties
[iii] Male, Caucasian, late teens to early twenties
[iv] Male, Caucasian, late teens to early twenties
[v] Male, Caucasian, late teens to early twenties
[vi] Female, Middle Eastern, late teens to early twenties
[vii] Female, Middle Eastern, late teens to early twenties
[viii] Female, Middle Eastern, late teens to early twenties
Observation:
A group of three subjects, [i], [vi] and [vii], sitting at a library study station, engaged in conversation. Subjects [vi] and [vii] are both staring at subject [i]'s laptop in deep concentration. Occupants [ii], [iii], [iv], [v] and [viii] arrive from the lift lobby area. These occupants are very loud and disruptive within the library environment. Subjects engage in a 15-minute conversation, where [i] and [iv] continue to act in a loud, obnoxious manner. Male subjects [ii], [iii], [iv] and [v] depart in the opposite direction, whilst female subject [viii] remains standing, talking to subjects [vi] and [vii].
Observation assumptions:
I was able to listen in on this conversation, which may have affected the following assumptions. The initial subjects [vi] and [vii] seem to be critiquing a piece of work of subject [i]. This is interrupted as the predominantly male group approaches. All subjects are engaged in conversation, where the body language and dialogue of male subjects [i] and [iv] suggest that they are trying to impress female subjects [viii], [vi] and [vii]. The females are engaged in a sub-conversation and do not seem interested in the male subjects. The males depart whilst the females and subject [i] continue to discuss what appears to be a piece of work by subject [i].



Calculating Velocity

Both of the tracking methods outlined in ‘Interpreting Sensory Data’ are able to track the centre of mass for each of the occupants. The centre of mass is communicated to Grasshopper as X, Y and Z coordinates (GitHub Sensebloom Repository 2011).

The system is able to determine an occupant's rate of change in position (both speed and direction of travel), also known as velocity. Velocity is calculated by subtracting a person's position x(t) at time t from their position x(t+∆t) after the time increment ∆t, then dividing by the increment of time ∆t.

Velocity = (x(t+∆t) – x(t) ) / ∆t (The Physics Classroom 2011).

Assuming that occupants move around the space in a forwards direction, the angle in which an occupant is facing can then be calculated by connecting the initial point x(t) with the terminal point x(t+∆t).

The data contains the following set of parameters:
- centre of mass
- acceleration
- direction of travel
- forward facing direction
- time
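The finite-difference velocity calculation above, together with the derived facing angle and acceleration, can be sketched in Python (a stand-in for the Grasshopper definition; the function names are illustrative):

```python
import math

def velocity(p0, p1, dt):
    """Finite-difference velocity between two centre-of-mass samples.

    p0, p1: (x, y) positions at time t and t + dt; dt: time increment in seconds.
    Returns (vx, vy) in units per second, i.e. (x(t+dt) - x(t)) / dt.
    """
    return ((p1[0] - p0[0]) / dt, (p1[1] - p0[1]) / dt)

def facing_angle(p0, p1):
    """Angle (radians) of the vector from the initial point x(t) to the
    terminal point x(t+dt), assuming the occupant is moving forwards."""
    return math.atan2(p1[1] - p0[1], p1[0] - p0[0])

def acceleration(v0, v1, dt):
    """Rate of change of velocity over the increment dt."""
    return ((v1[0] - v0[0]) / dt, (v1[1] - v0[1]) / dt)
```

For example, an occupant moving from (0, 0) to (2, 0) over half a second has `velocity((0, 0), (2, 0), 0.5)` of (4.0, 0.0) and a facing angle of 0 radians (along the positive x axis).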

Calculating the Permutation

To develop a system capable of calculating the interrelationships amongst occupants, some form of calculation for determining all possible combinations of interactions needs to be established.

This was resolved by applying a permutation to the set of identified occupants. For example, given the following set of occupants moving through the space:
{A,B,C}, n = 3
We want to test every possible intersection:
{A,B}; {A,C}; {B,A}; {B,C}; {C,A}; {C,B}
The number of ordered arrangements is calculated by n!/(n−2)! = 3!/(3−2)! = (3×2×1)/1 = 6, so there are 6 possible arrangements. At first I thought this was correct, but then I discovered that every combination was being calculated twice (for example {A,B} and {B,A}), and there should in fact be only three pairs in the set: {B,A}; {C,A}; {C,B}. A script was then developed (see Appendix) to output the following culling pattern, which can be applied to the data set:
True, True, False, True, False, False
{A,B}; {A,C}; {B,A}; {B,C}; {C,A}; {C,B}
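The culling described above can be reproduced with Python's itertools (a sketch, not the original Grasshopper script):

```python
from itertools import combinations, permutations

occupants = ["A", "B", "C"]

# All ordered arrangements: n!/(n-2)! = 3!/1! = 6
ordered = list(permutations(occupants, 2))

# Each unordered pair appears twice in the ordered set, so culling the
# duplicates leaves n!/(2!(n-2)!) = 3 unique pairs to test.
unique_pairs = list(combinations(occupants, 2))

# Equivalent culling pattern over the ordered set: keep a pair only
# the first time its two members are seen together.
seen = set()
pattern = []
for a, b in ordered:
    key = frozenset((a, b))
    pattern.append(key not in seen)
    seen.add(key)
# pattern == [True, True, False, True, False, False]
```

The resulting pattern matches the report's culling pattern, and `combinations` gives the three unique pairs directly without needing the cull at all.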

Proximity:

As people move throughout the space, the system continually calculates their proximity using the above permutation calculation. Proximity is calculated by measuring the distance between each pair of occupants.
For example, if A = {10,2,0}, B = {20,5,0} and C = {20,15,0}, the distances between the pairs are:
{B,A} = 10.4
{C,A} = 16.4
{C,B} = 10
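The proximity measure amounts to pairwise Euclidean distances over the culled set of pairs; a minimal sketch reproducing the worked example above:

```python
import math
from itertools import combinations

def distance(p, q):
    """Euclidean distance between two centre-of-mass points."""
    return math.dist(p, q)

positions = {"A": (10, 2, 0), "B": (20, 5, 0), "C": (20, 15, 0)}

# Proximity for every unique pair of occupants
proximity = {pair: distance(positions[pair[0]], positions[pair[1]])
             for pair in combinations(positions, 2)}
```

Evaluating this for the example positions gives approximately 10.44 for {A,B}, 16.40 for {A,C} and exactly 10 for {B,C}, matching the figures quoted above.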


Point of Intersection:

As occupants move throughout the space, the system calculates and projects the anticipated points of intersection, or shared focal points, amongst occupants. It uses their velocity (both position and speed) to calculate the projected paths where occupants are likely to intersect. This is determined by calculating the angles between the vectors and then using the law of sines:

a / sin(A) = b / sin(B) = c / sin(C) = 2R

where R is the radius of the circumscribed circle of the triangle.

The system calculates this for every possible combination by applying a permutation to the set of vectors. This allows the system to anticipate the likely intersections of people moving around the space, and to determine the opportunities for social interactions.

The system then calculates whether or not occupants will meet at their projected point of intersection using their distance to the intersection point and their velocity (speed). This is calculated by dividing the projected distance of travel by the speed in metres per time interval, which returns the time it would take each occupant to reach the projected intersection. These values are compared, and if the difference is within a specified threshold, the system determines that the occupants will intersect.

Time = Distance / Rate of Movement

For example, two occupants are on a path to intersect. Occupant 01 needs to travel 4 metres, whilst Occupant 02 needs to travel just 3 metres. Occupant 01 is travelling at a rate of 1.2 metres per second, whilst Occupant 02 is travelling at 1 metre per second.
Occupant 01: 4 metres / 1.2 metres per second = 3.33 seconds
Occupant 02: 3 metres / 1 metre per second = 3 seconds
IF difference between occupants < nominated threshold THEN Intersection = True ELSE Intersection = False
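The arrival-time comparison can be sketched directly; the 0.5-second default threshold here is an assumed value for illustration, not the project's nominated threshold:

```python
def will_intersect(d1, s1, d2, s2, threshold=0.5):
    """Decide whether two occupants meet at their projected intersection.

    d1, d2: distance (m) each occupant must travel to the intersection point;
    s1, s2: their current speeds (m/s);
    threshold: maximum allowed difference in arrival times (s), an assumed
    tunable value.
    """
    t1 = d1 / s1  # time = distance / rate of movement
    t2 = d2 / s2
    return abs(t1 - t2) < threshold
```

For the worked example above, `will_intersect(4, 1.2, 3, 1.0)` compares 3.33 s against 3 s; the 0.33 s difference falls inside the threshold, so the occupants are projected to meet.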

Social Interaction Identified

The system performs numerous calculations to determine whether occupants are engaged in social interactions. A number of contributing factors influence this calculation, the primary being the proximity and forward facing direction of occupants.

These parameters are compared against time and previously identified interactions, which assists the system in forming a perception.

For example:
IF occupants remain within proximity of one another AND are facing within a positive view range THEN
  Start Counter
  IF counter > predetermined time THEN
    Interaction Identified
  END
END

The identified interactions are recorded to a database to assist the system in understanding the space's social network as well as the intensities of occurring interactions. These calculations examine whether occupants purposely moved towards each other, by comparing the initial projection of time with the actual time taken to reach the point of intersection. This gives some indication as to whether occupants are engaging in an interaction.
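The proximity and facing test can be sketched as a small detector. The thresholds (`max_distance`, `view_half_angle`, `min_frames`) and all names here are assumed values for illustration, not the project's actual parameters:

```python
import math

class InteractionDetector:
    """Hedged sketch of the proximity + facing counter described above."""

    def __init__(self, max_distance=1.5, view_half_angle=math.radians(60),
                 min_frames=30):
        self.max_distance = max_distance      # metres (assumed)
        self.view_half_angle = view_half_angle  # positive view range (assumed)
        self.min_frames = min_frames          # predetermined time, in frames
        self.counters = {}    # pair -> consecutive frames in contact
        self.identified = []  # recorded interactions (the 'database')

    def _facing(self, pos_a, angle_a, pos_b):
        """Is occupant A facing occupant B within the positive view range?"""
        to_b = math.atan2(pos_b[1] - pos_a[1], pos_b[0] - pos_a[0])
        diff = abs((to_b - angle_a + math.pi) % (2 * math.pi) - math.pi)
        return diff < self.view_half_angle

    def update(self, pair, pos_a, angle_a, pos_b, angle_b):
        """Advance the counter for one frame of tracking data."""
        close = math.dist(pos_a, pos_b) < self.max_distance
        facing = (self._facing(pos_a, angle_a, pos_b)
                  and self._facing(pos_b, angle_b, pos_a))
        if close and facing:
            self.counters[pair] = self.counters.get(pair, 0) + 1
            if self.counters[pair] == self.min_frames:
                self.identified.append(pair)  # interaction identified
        else:
            self.counters[pair] = 0           # contact broken, restart counter
```

Resetting the counter when either condition fails means only sustained face-to-face proximity, rather than a brief crossing of paths, is recorded as an interaction.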

Figure 1.5 - Point of intersection Diagram


3. Prototype Responds to Enhance Social Interactions

These experiments propose an architecture capable of utilising perceived sensory data to generate an architectural configuration that encourages and enhances social dynamism. Through manipulating the way in which occupants circulate throughout the space, the space nurtures and stimulates occurring interactions, whilst encouraging interactions amongst those experiencing social isolation.

To achieve such a system, the architecture itself must be able to manipulate its configuration. The first step was to devise a dynamic architectural form capable of morphing to accommodate the desired spatial configurations of the occupants. Figure 1.6 shows the first attempt at designing such a system. As the research developed, this design became unfeasible due to the form being impracticable to manipulate as a dynamic structure.

A system for controlling a much simpler form was then developed. Figures 1.7 and 1.8 demonstrate a conducted experiment where a grid of points is projected onto a surface using the generative modelling plugin ‘Grasshopper’ for Rhinoceros. Once the points are projected onto the surface, the distance between the projected points and their origins can be calculated. These measurements can then be actuated to physical devices using components of Firefly and the Arduino micro-controller.
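The projection step can be sketched as follows: for each point in the motor grid, the distance from its origin down to the target surface is measured and converted into a motor displacement. `surface_height` and `steps_per_mm` are assumed stand-ins for the Grasshopper surface and the stepper calibration, not values from the actual prototype:

```python
def grid_displacements(nx, ny, spacing, surface_height, steps_per_mm=5):
    """For each point of an nx-by-ny grid (spacing in mm), measure the
    distance from the point's origin plane (z = 0) to the target surface
    and convert it to stepper-motor steps.

    surface_height(x, y) and steps_per_mm are illustrative assumptions.
    """
    steps = {}
    for i in range(nx):
        for j in range(ny):
            x, y = i * spacing, j * spacing
            displacement = surface_height(x, y)  # projected distance (mm)
            steps[(i, j)] = round(displacement * steps_per_mm)
    return steps
```

A 2 x 3 grid over a flat surface 10 mm below the origin plane, for instance, yields six identical displacements, which would then be streamed to the motors via Firefly's serial components.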

Firefly is a set of software tools developed to bridge the gap between Grasshopper, the Arduino, and other data sources. “It allows near real-time data flow between the digital and physical worlds”.

The Arduino is an “open-source electronics prototyping platform based on flexible, easy-to-use hardware and software”. The Arduino is able to control devices such as servos and stepper motors, which are capable of driving the desired system. Figure 1.7 demonstrates Firefly and the Arduino controlling three servos, which are synced to a Grasshopper model.

Figure 1.6 - Early concept prototype paper model

Figure 1.7 - Surface being trans-lated to physical prototype from Grasshopper


Figure 1.8 - Early concept prototype render



PROTOTYPE 1.0

The next stage was to develop a system to drive the grid of control points using motors. Different methods of controlling various pulley systems were experimented with, but they did not perform as desired, as they relied on gravity to push the points of the structure downwards. A rack and pinion gear system was then devised (see Fig 3.6), composed of a laser cut frame of Perspex to guide the shafts and hold the motors in place. Before the full scale model was built, a scaled-down interim model was developed to test the devised system.

Due to the nature of the system, a set of custom parametric pinion and shaft parts were designed within the parametric programming environment of Grasshopper. The first set, as seen in Fig 3.7, was a failure, but after further research into the mathematics behind the gear, a system was developed that worked very effectively (see Fig 3.8 & Fig 3.9). The gear system was driven by a set of four stepper motors, controlled through the Firefly components developed for Grasshopper.

A custom Firefly firmata was adapted and developed from the “Modified Firefly Firmata to Control a Stepper Motor with a Potentiometer” by Jason K Johnson, March 18, 2011 (see Appendix). The firmata was edited to receive data sent from Firefly to drive multiple stepper motors. A custom Grasshopper definition was also developed to send data for the multiple motors through the Firefly ‘Serial Write’ component.

When the model was set up to be laser cut, the width of the 0.15mm laser beam was not accounted for. This meant that some of the junctions were not as tight as desired, and as a result they had to be filled with plastic strips to compensate for the 0.3mm (2 × 0.15mm) gap. Standard acrylic was used in the prototype for demonstration purposes. The acrylic proved to be too fragile, which was especially evident in the bending of the shafts.

The materiality of the morphing ceiling surface itself is fundamental to the execution of this project. This was proving to be a difficult task, with the system requiring the surface material to stretch and retract by about 200–300%. The most promising material discovered during the research was the Super-Elastic Plastic from ‘Inventables’. The plastic is incredibly soft and stretches to about eight times its original size without ripping. Unfortunately, this material is too expensive ($140 per square metre) for this project. A spandex material was tested as a less expensive alternative. This material performed well when being stretched diagonally against the course of the thread, but failed for the purpose of the prototype when being stretched along the course of the thread. A latex rubber material was purchased towards the end of these experiments. This material was initially avoided due to some misleading advice from a local retail outlet, who suggested that precast thin sheets of latex were rare to find and would have to be manually cast. They also suggested that it would tear quite easily and was not suitable for this project. Some time was spent further researching the properties of the material, and it was discovered that rubber is rated to be much stretchier than the spandex material previously experimented with. A sheet of 1000 x 1000 x 0.25 mm latex was sourced internationally from a latex fashion outlet in the UK.

The intention of this interim prototype was to develop and test the materials and mechanisms for a component of the desired system, which is to be investigated in Prototype 2.0.


Methodology (continued).

PROTOTYPE 2.0

This experiment proposes the components of Prototype 1.0 at a larger scale. The prototype was initially designed as a module for a system of units. The aim was to develop two modules for the exhibition, with each module's frame being 600 x 800 mm and containing a grid of 2 by 3 motors. Due to the uncertainties of possible complications that might arise, the prototype was designed in stages: Stage 1 was to develop the initial module unit, whilst Stage 2 aimed for two units. Each module was to be suspended from the ceiling in an array. The extent of this prototype is very much reliant on the exhibition space and its capacity to suspend these modules from the ceiling.

During the inspection of the gallery space, it became clear that there might be complications with suspending multiple modules from the plasterboard ceiling. The weight of each module's frame would likely exceed the limitations of the plasterboard. I came to the realisation that the final prototype would likely be just one module. The module was re-evaluated to see how well it would demonstrate the desired system as a single entity. I came to the conclusion that 2 x 3 motors would not be sufficient to articulate this, and the frame was re-designed to mount a grid of 3 x 3 motors.

After testing Prototype 1.0, it was clear that the single pinion shafts of 6mm acrylic were very flimsy and would not be strong enough to morph the prototype's form. To resolve this issue, the frame and gear system were redesigned using 10mm acrylic instead of 6mm. The quotes received for laser cutting this system in 10mm acrylic were almost triple those for the 6mm, and using 10mm acrylic meant that the frame was going to nearly double in weight.

After much thought and consideration, I devised a way of strengthening the pinion shafts using 6mm acrylic. The shafts were reinforced with support strips of acrylic, which were attached using custom laser cut dowel pins and an acrylic binding agent. This seemed like the most suitable solution, as it meant that the system would be cheaper and lighter. It also allowed the shafts to be 18mm and therefore much stronger than the outlined 10mm system.

The frame and gear system were then sent to be laser cut. All precautions were taken to ensure that the tight fitting junctions of the frame and pinion shafts were offset to account for the width of the laser beam. Upon collecting the laser cut sheets of acrylic, it was discovered that the junctions should have been offset a little less than the advised 0.15mm. This meant that I had to manually file back all 44 slots in the frame, along with all of the dowel connecting tabs. I unfortunately managed to snap one of the motor supports while trying to force it into the frame. Luckily, acrylic glue is a miraculous binding agent and I was able to resolve this with ease. I glued the dowels and supports to the shafts, and right away it was evident that these were going to be more than strong enough for the purpose of this prototype. These were left to dry overnight, and in the morning I discovered that the glue had melted the shafts to the supports so tightly that the teeth of the rack gears would not even fit into the pinion slots. This was resolved by slightly sanding back the teeth of the rack gear, which left the teeth an opaque white colour.

The next stage was to fabricate a more reliable and permanent system for the electronics. The system was devised using a prototyping board rather than the breadboard, which was proving to be unreliable and a pain to set up. I needed to plug in nine EasyDrivers to drive the nine stepper motors of the final prototype. There are a total of nine inputs/outputs required for each of the EasyDrivers, and only three of these could be shared (power, ground, and ground for the direction/step). The system needed to be portable in the sense that it could be easily assembled within the exhibition space. Male headers and female plugs seemed to be the most appropriate way of achieving this. The pins of the male headers were soldered into the EasyDriver boards, and the female plugs were attached to the stepper motors and Arduino leads. I was initially using a ‘vero’ prototyping board, which has parallel strips of copper cladding running in one direction all the way across one side of the board. Due to all the intersecting pins, using this would have required me to drill out 90+ breaks in the tracks, which I was not prepared to do. I then purchased a perfboard, which, similar to the vero board, contains a grid of predrilled holes spaced 2.54 mm apart, but does not have tracks of copper. Instead, this board has singular pads of copper that can be joined using solder or wires. This board also proved unsuitable, as it would have been far too messy, with leads going everywhere. I was then able to source a prototype board with a conductive trace layout similar to the breadboard. I was able to share both the main power and ground by linking the tracks using solid core wire, which greatly reduced the number of wires being plugged into the board. The adapted Firefly firmata and Grasshopper definition were re-devised to drive the nine stepper motors instead of four.

Page 35: Adaptable Space

Discussion

Perceiving Sensory Data

The technologies investigated for tracking human movements were the Open Toolkit for Sensing People in Spaces (OpenTSPS) and the Microsoft Kinect. OpenTSPS was able to track large groups of people, but was unable to provide accurate data for individual occupants, which made it unsuitable for developing an advanced system capable of perceiving social interactions. Whilst the skeleton tracking capabilities of the Kinect only made it possible to perform indirect, centre-of-mass tracking on up to 6 occupants, this technology provided very accurate data, which was more appropriate to the objective of this project.

Through tracking the centre of mass using the Kinect, we were able to extract each occupant's proximity, forward-facing direction, acceleration and direction of travel. This allowed the system to construct a model of the space's social network and form a level of perception, shaped by "learning, memory and expectation" (Richard 1987). This perception could have been enhanced by utilising more communicative forms of sensory information such as gesture, facial expressions and speech. These will be examined in further investigations.
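The derivation of these parameters from successive centre-of-mass samples can be sketched as follows (an illustrative Python sketch, not the project's Grasshopper definition; the sample interval, units and function names are assumptions):

```python
import math

def motion_params(p_prev, p_curr, v_prev, dt):
    """Derive velocity, acceleration and direction of travel from two
    successive centre-of-mass samples taken dt seconds apart.
    Illustrative only: frame layout and units are assumed."""
    vx = (p_curr[0] - p_prev[0]) / dt          # finite-difference velocity
    vy = (p_curr[1] - p_prev[1]) / dt
    ax = (vx - v_prev[0]) / dt                 # change in velocity per second
    ay = (vy - v_prev[1]) / dt
    heading = math.atan2(vy, vx)               # direction of travel, radians
    return (vx, vy), (ax, ay), heading

def proximity(a, b):
    """Straight-line distance between two occupants' centres of mass."""
    return math.hypot(b[0] - a[0], b[1] - a[1])

# One occupant moving from the origin; two occupants 5 units apart:
v, acc, heading = motion_params((0.0, 0.0), (0.3, 0.4), (0.0, 0.0), dt=1.0)
print(v, acc, heading)
print(proximity((0, 0), (3, 4)))  # 5.0
```

In practice the forward-facing direction would come from the Kinect's shoulder joints rather than the travel heading, but the finite-difference pattern above is the core of the velocity and acceleration extraction.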

The calculated perception was computed in the Grasshopper programming environment, which at the time seemed the most logical choice, seeing that the physical model was being actuated using the Firefly components of Grasshopper. In retrospect, this perception could have been developed externally in a more appropriate programming environment, as Grasshopper did not perform ideally with these heavy calculations in real time.

Dynamic Architectural Prototype

The mechanised prototype embodied the calculated perception to develop an adjustable spatial configuration that was capable of manipulating the circulation and interactions of its occupants. The objective of this manipulation was to encourage and enhance social dynamism. The extent of this enhancement was restricted by the limited awareness of the gained perception. The current system was capable of anticipating and identifying social interactions, as well as recognising instances of social isolation. This could be expanded to better facilitate enhancement with a more insightful understanding of the occupants' activities and social interactions.

The digital prototype was developed within Grasshopper, which was able to actuate the physical prototype using Firefly and the Arduino microcontroller. Grasshopper was the most appropriate platform, with the Firefly components providing extensive capabilities for communicating with the Arduino and controlling motors. Translating the dynamic Grasshopper form to the prototype was achieved by rationalising the surface as a grid of segments, which allowed it to be broken down and actuated through a physical model. This method of projecting points to translate a physical form was an effective way of averaging out a pre-generated form, but was not always a true representation of the desired form. In hindsight, the system could have benefitted from a slightly different approach: instead of averaging out the form, it could perform calculations that attempt to simplify and replicate it. This could be achieved by taking the highest and lowest points that define the form and syncing these with the closest control points.
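The averaging translation described above can be sketched as sampling the target surface within each grid segment and taking the mean as that segment's actuator height (an illustrative Python sketch; the surface function, grid size and extents are assumptions, not the project's definition):

```python
def surface(x, y):
    # Stand-in for the pre-generated Grasshopper form (assumed for illustration).
    return x * y

def actuator_heights(surface, grid=3, samples=10, extent=1.0):
    """Rationalise a surface over [0, extent]^2 into a grid x grid array of
    actuator heights by averaging sample points inside each segment."""
    heights = [[0.0] * grid for _ in range(grid)]
    cell = extent / grid
    for gy in range(grid):
        for gx in range(grid):
            total = 0.0
            for sy in range(samples):
                for sx in range(samples):
                    # midpoint sampling within the segment
                    x = gx * cell + (sx + 0.5) * cell / samples
                    y = gy * cell + (sy + 0.5) * cell / samples
                    total += surface(x, y)
            heights[gy][gx] = total / samples ** 2
    return heights

h = actuator_heights(surface)
print(h[0][0])  # mean of the surface over the first segment
```

This is exactly the "averaging" behaviour criticised above: sharp peaks inside a segment are flattened into the mean, which is why snapping the form's extreme points to the nearest control points could be a truer translation.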

The segments of the physical prototype's form were controlled using a custom laser-cut acrylic rack-and-pinion gear system. This was effective in demonstrating the operation of the prototype, but would not be viable at an architectural scale. The form was driven by a grid of 9 motors, which ideally should have been 16 to 25 to achieve the desired spatial configuration. Nevertheless, it was more than adequate as a proof of concept demonstrating a component of the desired system.
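For reference, converting a desired segment displacement into stepper steps for a rack-and-pinion drive might look like this (a hedged sketch: the pinion radius is an assumed figure, while the 200-step motor and 1/8 microstepping match the firmata in the appendix):

```python
import math

# Convert a desired rack travel into stepper steps for a rack-and-pinion
# drive. The 10 mm pinion radius is an assumption for illustration; the
# 200 steps/rev motor and 1/8 microstepping (MS1/MS2 high on the
# EasyDriver) follow the firmata settings in the appendix.
STEPS_PER_REV = 200
MICROSTEP = 8

def travel_to_steps(travel_mm, pinion_radius_mm):
    circumference = 2 * math.pi * pinion_radius_mm  # rack travel per revolution
    return round(travel_mm / circumference * STEPS_PER_REV * MICROSTEP)

print(travel_to_steps(31.4159, 10.0))  # ~half a revolution -> ~800 steps
```

The 0-to-1600 step range noted in the firmata's comments corresponds to one full revolution at this microstepping setting.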

The next stage in developing this conception would be investigating the materials and mechanics required for operating the proposed model as a full-scale architectural system. These investigations would experiment with using DC gear motors or industrial brushless motors instead of stepper motors. Further investigations would look into implementing light and sound to further develop the manipulative environment.


Conclusion

The results of the conducted experiments suggest that it is feasible to develop an analytic environment capable of sensing and perceiving human social interactions. The experiments were able to achieve perception to the extent of identifying occurring interactions and instances of social isolation amongst 2 to 6 occupants. The system was capable of analysing components of human movement, including the subjects' centre of mass and velocity, which allowed for the further extraction of parameters such as acceleration, direction of travel and forward-facing direction.

Using this data, the project was able to develop a system capable of: recognising and determining the intensities of occurring social interactions; projecting and anticipating potential social interactions; and recognising instances of social isolation occurring amongst occupants. This allowed the system to construct a model of the space's social network and form a level of perception, shaped by "learning, memory and expectation" (Richard 1987).

The prototypes developed throughout this investigation demonstrated an architectural system capable of manipulating its spatial configuration to redefine the conventions of space. The spatial configuration was able to harness the gained perception of its occupants' interactions to encourage and enhance social dynamism by guiding and manipulating the occupants' circulation. Removing social isolation proved to play an important role in enhancing these dynamics.
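The anticipation of potential interactions can be sketched as a closest-approach test on the occupants' current trajectories (an illustrative Python sketch under a constant-velocity assumption, not the project's Grasshopper definition):

```python
def closest_approach(p1, v1, p2, v2):
    """Time and separation at closest approach for two occupants moving
    at constant velocity -- a sketch of how a potential interaction might
    be anticipated from tracked positions and velocities (illustrative)."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]      # relative position
    dvx, dvy = v2[0] - v1[0], v2[1] - v1[1]    # relative velocity
    dv2 = dvx * dvx + dvy * dvy
    if dv2 == 0:
        t = 0.0                                 # parallel paths: gap is constant
    else:
        t = max(0.0, -(dx * dvx + dy * dvy) / dv2)  # only look forward in time
    cx, cy = dx + dvx * t, dy + dvy * t
    return t, (cx * cx + cy * cy) ** 0.5

# Two occupants walking towards the same point:
t, d = closest_approach((0, 0), (1, 0), (4, 4), (0, -1))
print(t, d)  # closest after 4 time units, at distance 0
```

A small projected separation within a short horizon would flag a likely interaction for the system to anticipate; a persistently large separation from all others would flag isolation.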

The perception attained within these experiments was limited in the sense that it did not consider more communicative forms of human movement such as gesture or facial expression. The calculated perception was based entirely on vision, and would be enriched by integrating additional sensory data such as sound or dialogue in conversation. The next stage for this project is to employ this additional sensory data to develop a much more intelligent understanding of the occupants' activities and interactions. A more comprehensive perception of the occupants' activities and interactions would increase the currently restricted architecture's ability to enhance.

This prototype is intended as a conception of the desired system, which will be examined in further research and investigations. These would entail developing a system capable of enhancing space by addressing desires for public or private space, optimising thermal, visual, lighting and acoustic conditions, and promoting sharing or collaboration in space. There would be an investigation into the materials and mechanics required for operating the proposed model as a full-scale architectural system. This would look beyond form and into implementing light and sound to further develop the manipulative environment.


Reference List

Aggarwal, J & Ryoo, M 2011, "Stochastic Representation and Recognition of High-level Group Activities", International Journal of Computer Vision (IJCV), 93(2):183-200, June 2011.

Aggarwal, J, Ryoo, M, Chen, C & Roy-Chowdhury, A 2010, "An Overview of Contest on Semantic Description of Human Activities (SDHA) 2010", International Conference on Pattern Recognition (ICPR) Contests, August 2010.

Aggarwal, J & Ryoo, M 2011, “Human Activity Analysis: A Review”, ACM Computing Surveys (CSUR), 43(3), April 2011.

Anthes, E 2009, “Building around the Mind”, Scientific American, April 2009.

Anthes, E 2011, accessed 20 October 2011 <http://emily-anthes.com/about>

Arduino 2011, accessed 22 September 2011, <http://www.arduino.cc/>

Australian Bureau of Statistics 2003, Construction and the environment, accessed 20 October 2011, <http://www.abs.gov.au/ausstats/abs@.nsf/Previousproducts/1301.0Feature%20Article282003?opendocument&tabname=Summary&prodno=1301.0&issue=2003&num=&view=>

Cazan, V 2011, accessed 28 August 2011, <http://www.vladcazan.com/projects/openfloor/>

Daintree Networks 2010, ‘The Value of Wireless Lighting Control’, accessed 20 October 2011

Foster, N 2007, 'Norman Foster's Green Agenda', 2007 DLD Conference, Munich.

Fox, M and Kemp, M 2009, “Interactive Architecture”, Princeton Architectural Press, 2009.

Gassmann, F & Muxel, A 2011, accessed 18 August 2011, <http://space.andreasmuxel.com/>

GitHub Sensebloom Repository, accessed 28 September 2011, <https://github.com/Sensebloom/OSCeleton>

Mani, M 2009, accessed 18 August 2011, <http://www.fishtnk.com/2009/09/28/tunable-sound-cloud/>

Marr, D 1982, Vision: A Computational Investigation into the Human Representation and Processing of Visual Information, pp. 3-7.

Nitrogen Posterous 2011, accessed 20 October 2011 <http://nitrogen.posterous.com/>

Nitrogen Logic 2011, accessed 20 October 2011 <http://www.nitrogenlogic.com/products/automation_controller.html>

Payne, A & Johnson, JK 2011, accessed 22 September 2011, <http://www.fireflyexperiments.com/>

Physics Classroom 2011, accessed 28 September 2011, <http://www.physicsclassroom.com/Class/1DKin/U1L1d.cfm>

Richard, G 1987, “Perception” in Gregory, Zangwill 1987, p. 598–601

Xia, L, Chen, C & Aggarwal, J 2011, "Human Detection Using Depth Information by Kinect", International Workshop on Human Activity Understanding from 3D Data in conjunction with CVPR (HAU3D), Colorado Springs, CO, June 2011.

Zuk, W & Clark, R 1970, Kinetic Architecture, Van Nostrand Reinhold, New York.


Custom Arduino Firefly Firmata:

/* [ARDUINO + FIREFLY]
   THIS IS A MODIFIED FIREFLY FIRMATA TO CONTROL A STEPPER MOTOR WITH A POTENTIOMETER
   By Jason K Johnson, March 18, 2011. Visit: www.fireflyexperiments.com for more info
   Drive one Stepper Motor with a Potentiometer using the EasyDriver v4.3 board by Sparkfun _ info: http://schmalzhaus.com/EasyDriver/
   This uses the Arduino EasyDriver.h library. In the example I am using a 1.8 degree stepper with both the MS1 and MS2 pins set to HIGH (for 1/8 step resolution).
   Clockwise around EasyDriver v4.3: MOTOR Input [A1-Yellow, A2-White, B1-Blue, B2-Red; connected to a 4 wire Stepper Motor]; MS2 to 5V; GND; M+ to 5V; DIR to Pin 3; STEP to Pin 2; GND; MS1 to 5V
   Potentiometer: Black to GND; Middle to AnalogIn 1; Red to 5V */

#include <EasyDriver.h> // download the library here: http://www.arduino.cc/cgi-bin/yabb2/YaBB.pl?num=1251509480
                        // copy txt to your Arduino > Libraries folder; then rename them EasyDriver.h and EasyDriver.cpp
                        // Built upon the MotorKnob example: http://www.arduino.cc/en/Reference/Stepper

#define BAUDRATE 9600 // Set the Baud Rate to an appropriate speed, 9600 is recommended
#define BUFFSIZE 256  // buffer one command at a time, 12 bytes is longer than the max length

////// ED_v4 Step Mode Chart //////
// http://danthompsonsblog.blogspot.com/2010/05/easydriver-42-tutorial.html
// MS1 MS2 Resolution
// L   L   Full step (2 phase)
// H   L   Half step
// L   H   Quarter step
// H   H   Eighth step
// 5V jumpers into MS1 and MS2 to set them as "H"
///////////////////////////////////

int DIR  = 3;  int STEP  = 2;
int DIR2 = 5;  int STEP2 = 4;
int DIR3 = 7;  int STEP3 = 6;
int DIR4 = 9;  int STEP4 = 8;
int DIR5 = 11; int STEP5 = 10;
int DIR6 = 25; int STEP6 = 24;
int DIR7 = 27; int STEP7 = 26;
int DIR8 = 27; int STEP8 = 26; // as printed in the source; note these duplicate DIR7/STEP7
int DIR9 = 29; int STEP9 = 28;

Stepper stepper  (200, DIR,  STEP);  // Stepper(int number_of_steps, int dir_pin, int step_pin), 200 steps per revolution
Stepper stepper2 (200, DIR2, STEP2);
Stepper stepper3 (200, DIR3, STEP3);
Stepper stepper4 (200, DIR4, STEP4);
Stepper stepper5 (200, DIR5, STEP5);
Stepper stepper6 (200, DIR6, STEP6);
Stepper stepper7 (200, DIR7, STEP7);
Stepper stepper8 (200, DIR8, STEP8);
Stepper stepper9 (200, DIR9, STEP9);

char buffer[BUFFSIZE];  // this is the double buffer
uint16_t bufferidx = 0; // a type of unsigned integer of length 8, 16, or 32 bits
uint16_t p1, s1, p2, s2, p3, s3, p4, s4, p5, s5, p6, s6, p7, s7, p8, s8, p9, s9;

int readCounter = 0;
int prev1 = 0;
int prev2 = 0;
int prev3 = 0;
int prev4 = 0;
int prev5 = 0;
int prev6 = 0;
int prev7 = 0;
int prev8 = 0;
int prev9 = 0;

/*==============================================================================
 * GLOBAL VARIABLES
 *============================================================================*/

char *parseptr;
char buffidx;

int APin0 = 0; // declare all Analog In pins
int APin1 = 0;
int APin2 = 0;
int APin3 = 0;
int APin4 = 0;
int APin5 = 0;

// /*
int DPin2 = 0; // declare all Digital In/Out pins
int DPin3 = 0;
int DPin4 = 0;
int DPin5 = 0;
int DPin6 = 0;
int DPin7 = 0;
int DPin8 = 0;
int DPin9 = 0;
int DPin10 = 0;
int DPin11 = 0;

int DPin24 = 0;
int DPin25 = 0;
int DPin26 = 0;
int DPin27 = 0;
int DPin28 = 0;
int DPin29 = 0;
// */

int writecounter = 0; // declare the write counter

/*==============================================================================
 * SETUP()  This code runs once
 *============================================================================*/

void setup()
{
  pinMode(2, OUTPUT);  // set each pin for digital output
  pinMode(3, OUTPUT);
  pinMode(4, OUTPUT);
  pinMode(5, OUTPUT);
  pinMode(6, OUTPUT);
  pinMode(7, OUTPUT);
  pinMode(8, OUTPUT);
  pinMode(9, OUTPUT);
  pinMode(10, OUTPUT);
  pinMode(11, OUTPUT);
  pinMode(12, OUTPUT);
  pinMode(13, OUTPUT);
  pinMode(24, OUTPUT);
  pinMode(25, OUTPUT);
  pinMode(26, OUTPUT);
  pinMode(27, OUTPUT);
  pinMode(28, OUTPUT);
  pinMode(29, OUTPUT);
  Serial.begin(BAUDRATE); // Start serial communication
}

/*==============================================================================
 * LOOP()  This code loops
 *============================================================================*/

void loop()
{
  serialread();               // Call the Serial Read function
  if (writecounter == 1500) { // Every 1500th loop, call the Serial Write function
    writecounter = 0;
    serialwrite();
  }
  writecounter = writecounter + 1;
}

/*==============================================================================
 * WRITE FUNCTION()
 *============================================================================*/

void serialwrite()
{
  // READ SENSORS + BUTTONS FROM ARDUINO

  APin0 = analogRead(0); // Read the analog input pins
  APin1 = analogRead(1);
  APin2 = analogRead(2);
  APin3 = analogRead(3);
  APin4 = analogRead(4);
  APin5 = analogRead(5);

Appendix - Code


  DPin4 = digitalRead(4); // Read the digital input pins
  DPin7 = digitalRead(7); // (the source read pins 4, 7, 8 into mismatched variables;
  DPin8 = digitalRead(8); //  realigned here to match the values printed below)

  // Sending Sensor Data (comma separated) to Serial / GH
  Serial.print(APin0); Serial.print(","); // Send each value followed by a comma
  Serial.print(APin1); Serial.print(",");
  Serial.print(APin2); Serial.print(",");
  Serial.print(APin3); Serial.print(",");
  Serial.print(APin4); Serial.print(",");
  Serial.print(APin5); Serial.print(",");
  Serial.print(DPin4); Serial.print(",");
  Serial.print(DPin7); Serial.print(",");
  Serial.print(DPin8); Serial.print(",");
  Serial.println("eol");
}

/*==============================================================================
 * SERIAL READ FUNCTION()
 *============================================================================*/

void serialread()
{
  char c; // holds one character from the serial port
  if (Serial.available()) {
    c = Serial.read();     // read one character
    buffer[bufferidx] = c; // add to buffer

    if (c == '\n') {
      buffer[bufferidx + 1] = 0; // terminate it
      parseptr = buffer;         // offload the buffer into temp variable

//********************************************************

      p1 = parsedecimal(parseptr);          // parse each number in turn
      parseptr = strchr(parseptr, ',') + 1; // move past the ","
      s1 = parsedecimal(parseptr);
      parseptr = strchr(parseptr, ',') + 1;
      p2 = parsedecimal(parseptr);
      parseptr = strchr(parseptr, ',') + 1;
      s2 = parsedecimal(parseptr);
      parseptr = strchr(parseptr, ',') + 1;
      p3 = parsedecimal(parseptr);
      parseptr = strchr(parseptr, ',') + 1;
      s3 = parsedecimal(parseptr);
      parseptr = strchr(parseptr, ',') + 1;
      p4 = parsedecimal(parseptr);
      parseptr = strchr(parseptr, ',') + 1;
      s4 = parsedecimal(parseptr);
      parseptr = strchr(parseptr, ',') + 1;
      p5 = parsedecimal(parseptr);
      parseptr = strchr(parseptr, ',') + 1;
      s5 = parsedecimal(parseptr);
      parseptr = strchr(parseptr, ',') + 1;
      p6 = parsedecimal(parseptr);
      parseptr = strchr(parseptr, ',') + 1;
      s6 = parsedecimal(parseptr);
      parseptr = strchr(parseptr, ',') + 1;
      p7 = parsedecimal(parseptr);
      parseptr = strchr(parseptr, ',') + 1;
      s7 = parsedecimal(parseptr);
      parseptr = strchr(parseptr, ',') + 1;
      p8 = parsedecimal(parseptr);
      parseptr = strchr(parseptr, ',') + 1;
      s8 = parsedecimal(parseptr);
      parseptr = strchr(parseptr, ',') + 1;
      p9 = parsedecimal(parseptr);
      parseptr = strchr(parseptr, ',') + 1;
      s9 = parsedecimal(parseptr);

//********************************************************

      stepper.setSpeed(s1);     // set speed to x rpms (200 to 600?)
      stepper.step(p1 - prev1); // move a # of steps, 0 to 1600 for 360
      prev1 = p1;               // remember the previous val of the sensor
      // (the stepper 1 block above is restored to match the parsed p1/s1;
      //  the printed source began at stepper2)
      stepper2.setSpeed(s2);
      stepper2.step(p2 - prev2);
      prev2 = p2;
      stepper3.setSpeed(s3);
      stepper3.step(p3 - prev3);
      prev3 = p3;
      stepper4.setSpeed(s4);
      stepper4.step(p4 - prev4);
      prev4 = p4;
      stepper5.setSpeed(s5);
      stepper5.step(p5 - prev5);
      prev5 = p5;
      stepper6.setSpeed(s6);
      stepper6.step(p6 - prev6);
      prev6 = p6;
      stepper7.setSpeed(s7);
      stepper7.step(p7 - prev7);
      prev7 = p7;
      stepper8.setSpeed(s8);
      stepper8.step(p8 - prev8);
      prev8 = p8;
      stepper9.setSpeed(s9);
      stepper9.step(p9 - prev9);
      prev9 = p9;

//********************************************************

      bufferidx = 0; // reset the buffer for the next read
      return;        // return so that we don't trigger the index increment below
    }
    // didn't get newline, need to read more from the buffer
    bufferidx++; // increment the index for the next character
    if (bufferidx == BUFFSIZE - 1) { // if we get to the end of the buffer, reset for safety
      bufferidx = 0;
    }
  }
}

uint32_t parsedecimal(char *str)
{
  uint32_t d = 0;

  while (str[0] != 0) {
    if ((str[0] > '9') || (str[0] < '0')) // stop at the first non-digit ('50' in the source was a typo for '9')
      return d;
    d *= 10;
    d += str[0] - '0';
    str++;
  }
  return d;
}

Permutation Code:

Dim k As Integer = x - 1
Dim i As Integer
Dim ki As Integer
Dim ji As Integer
Dim j As Integer = 1

For i = 0 To y Step 1
  If k > 0 Then
    For ki = 1 To k
      print(True)
    Next
    For ji = 1 To j
      print(False)
    Next
    k = k - 1
    j = j + 1
    i = i + 1
  End If
Next
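Read against the loop structure, the snippet emits k Trues followed by j Falses per row, with k decreasing and j increasing each pass — which matches the upper-triangular culling pattern used to keep each unordered pair of occupants once when a list is cross-referenced with itself. A Python re-expression of that interpretation (an assumption about intent, not a verified translation):

```python
def pair_mask(n):
    """True/False pattern: n-1 Trues then 1 False, n-2 Trues then 2 Falses,
    and so on -- the upper-triangular mask that keeps each unordered pair
    once in an n-item cross-reference. An interpretation of the VB snippet
    above, not a verified translation of it."""
    mask = []
    for row in range(n - 1):  # the VB loop stops once k reaches 0
        trues = n - 1 - row
        mask.extend([True] * trues + [False] * (row + 1))
    return mask

m = pair_mask(3)
print(m)       # [True, True, False, True, False, False]
print(sum(m))  # 3 == number of unique pairs among 3 occupants
```

The count of Trues is always n(n-1)/2, the number of distinct occupant pairs whose proximity and interaction intensity the system needs to evaluate.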


Prototype Frame >

Translating Form to Physical Model >

Validating Projected intersection >

Projecting intersection

Appendix - Grasshopper Definitions


Communicating Data to Arduino / Stepper Motors ^

Initiating Arduino Communication ^

Communicating Data to Arduino / Stepper Motors >


Incoming Kinect data >

Incoming Kinect data >

Calculating Permutation >


Calculating Proximity ^