
A design space for Trust-enabling Interaction Design Sónia Sousa

[email protected]

Ilya Shmorgun [email protected]

David Lamas [email protected]

Arman Arakelyan [email protected]

Tallinn University Narva road, 25. Estonia

ABSTRACT The purpose of this article is to introduce and assess the expressiveness of a design space for trust-enabling interaction design; in other words, it aims to assess the extent to which the design space can explain/describe trust-enabling interactions. It starts by situating trust in the domains of Human-Computer Interaction and Computer-Mediated Interactions.

It then presents the proposed design space as an analytical tool, one which serves to move away from artifact-centered design toward an intentional creation of value, i.e. to support interaction designers in further reflecting on the values of trust-enabling interaction design.

The design space's dimensions are rationally derived from the model of Human-Computer Trust, which builds on users' perceived trust. This model has been previously validated and used as a research lens for providing a greater understanding of how individuals interact with systems in interaction processes such as openness, knowledge sharing, users' privacy awareness, and collaboration.

The article concludes with two examples of application through a comparative inspection of two peer-production platforms, Wikipedia and Wordpress, and with a reflection on how the proposed analytical tool can facilitate trust-enabling interaction design processes.

Author Keywords Human-computer trust; Interaction design; Design space; Engagement; Value-based design ACM Classification Keywords H.5.3. Group and Organization Interfaces: Design, Reliability, Human Factors

INTRODUCTION Living in an increasingly technology-reliant world has made people more dependent on technology for carrying out everyday activities. This dependence has direct implications for how technology influences society's fundamental values. This is reflected in the attempt to develop strategies to provide a more trustworthy digital society. [16]

While we agree that we have become part of a digital crowd and clearly support the development of trust and security technology, we see a missing focus on how this development influences the domains of Human-Computer Interaction and Computer-Mediated Interactions.

At the same time, within both the domains of Human-Computer Interaction and Computer-Mediated Interactions we see clear efforts to move beyond tackling the pragmatic challenges of interactive systems (such as effectiveness and efficiency) to address less tangible and mostly hedonic qualities. Human-computer trust (or perceived trustworthiness) can be considered one such hedonic quality. Further, evidence shows that focusing only on a purely operational definition of trust does not lead to a more trustworthy digital society, and in fact results in systems that end up being designed for computers rather than humans. One such example is Google Buzz, which failed because users were not informed that their list of contacts was being made public by default; the resulting criticism undermined users' trust and eventually led to the service being shut down. [5][29]

This tendency illustrates that designing for trust requires not only a focus on understanding its nuances, such as security, privacy, and reputation, but above all an understanding of the subtleties of the perception of trust. We believe that trust-fostering systems must be built upon trust-enabling qualities drawn from the social sciences [4] and that embedding trust requires acknowledgement of the psychological and social dimensions that come attached to it. This eventually leads to a greater understanding of how individuals interact with systems and of the extensive impact of trust on those interactions.

This article begins by theoretically contextualizing trust and then situates the effects of trust in the field of computer science. It continues by presenting the model of Human-Computer Trust, from which the design space's dimensions are derived. Subsequently, the design space's analytical aptness is assessed and examples are provided on how to conduct assessment with the proposed space through a comparative inspection of two peer-production platforms, Wordpress and Wikipedia. The article ends with a reflection on the potential of the presented design space as an interaction design facilitator.

TRUST AS A SOCIAL PHENOMENON

Open Access: The author(s) wish to pay for the work to be open access. The additional fee must be paid to ACM.

In: Proceedings of the International Conference on Multimedia, Interaction, Design and Innovation MIDI 2014. ACM (2014)

Trust is referred to in a relatively broad set of constructs. In social science it is a topic addressed in many fields of knowledge, such as sociology, economics, philosophy, psychology, and social psychology. Trust as a social phenomenon involves a complex two-way relationship between individuals who are constituents of a society. It emerges from three main social contexts: [42, 13, 24, 25, 20, 10]

• An interpersonal or organizational context – we may trust a particular individual more than others;

• A specific social situation – we may put our trust in a work colleague, sharing work-related sensitive information, but not our personal thoughts;

• A specific social-cultural context – we are required to trust our family doctor with our health problems in spite of not knowing him or her personally.

Trust's cross-disciplinary nature has prompted considerable debate about what trust is, how it is influenced, and how it is represented, making it difficult to define narrowly or as a single static concept. As such, trust carries many meanings and plays a role in divergent contexts. For example, sociologists tend to see trust as structural in nature, [15] [20] in terms of behavior, [14] or even as a moral choice. [13] [42] Psychologists examine it as a personal attribute and analyze trust as behavioral intention (related to the predicted acceptance of behaviors by others). [12] [34] Social psychologists tend to view trust as an interpersonal phenomenon, [9] [27] [45] a social structure for providing interpersonal relationships, known as institution-based trust or willingness to trust within a more social-psychological perspective. Economists are more inclined to view trust as a rational choice mechanism, as in game theory. The philosophical perspective sees trust and distrust as attitudes that affect our feelings and the way we think and act. [3]

HUMAN-COMPUTER TRUST The computer science concept of trust emerged from the above notions but tended to separate them into two distinct definitions: operational and internal. The first reflects the tendency to examine trust from a more operational standpoint, observing it as rational choices versus measurable risks. This operational approach requires rational decisions based on knowledge of the possible rewards for trust or the lack of it (by trusting we enable higher gains, and by not trusting we avoid potential losses). Examples can be found in the literature addressing issues like reputation-based trust, policy-based trust, or general models of trust in information systems. It can also be connected with the notions of security, reliance, and privacy. [11] [1]

The other approach to conceptualizing the influence of trust in computer science tends to aggregate its notions as an internal state. [41, 12] This state reflects an individual's belief in the motivation of others; such a view often comes associated with the notions of willingness, motivation, and cooperation. Examples can be found in the literature addressing issues like computer-supported collaborative work, communities of practice, design for trustful interactions, social capital, organizational trust, and technology-mediated social participation. [2, 26, 25, 7, 45, 10]

These perspectives diverge in nature, from a computer-based approach to a user-based approach. The more operational perspective sees trust as a statistical or deterministic measure, i.e. as a measurable risk, reflecting a willingness to risk (a positive measure of trust) or not to risk (a negative measure of trust). Trust in this context can be translated as the willingness to share or not to share information (privacy and security).

Subject          | Reflect                             | Observed
Sociology        | Social structure vs. a moral choice | Behaviours vs. intention
Psychology       | Intention vs. perception            | Attitude, willingness
Socio-psychology | Decision vs. belief                 | Willingness, choice to cooperate
Computer science | Risk vs. belief vs. intention       | Choice to risk, predisposition to interact

Table 1. Trust conceptualisation framework

Our argument is that this view narrows trust down to a single variable: the risk of others using information for a malevolent purpose. This can be useful if our desire is to design reliable systems that aim to prevent risks, but it is not sufficient for designing interactive systems, like peer-production platforms, where a group's or individual's trust comes across many dimensions (including time). [22, 4]

Moreover, we argue that solely relying on such technical trust-enabling features to leverage trustworthy systems is not enough to induce users' full trust in a computer system, as recent research efforts in various areas of knowledge clearly demonstrate that humans do not fully rely on rational decisions where trust is concerned. Systems should also be designed to clearly provide trust-enabling hints to users on the socio-technical subtleties embedded in the system design.

Our contribution is a proposed design space analytical tool, one which serves to move away from artifact-centered design toward an intentional creation of value, i.e. to support interaction designers in further reflecting on trust-enabling interaction values. The design space's dimensions are rationally derived from the model of Human-Computer Trust, which builds on the perceived trustworthiness of a system.

A Model of Human-Computer Trust The model of Human-Computer Trust depicts trust as a construct informed by seven individually observed qualities (motivation, willingness, reciprocity, predictability, honesty, benevolence, and competence) and determines the extent to which one relates to one's social and technical environment. [36, 39] This model was validated and used as a research lens to establish relations linking trust with qualities of online interactive systems, like openness, [35] willingness to cooperate, [39] sharing information (privacy), [21] and collaboration. [37]

The model was built on an extensive literature review on trust as a social phenomenon and was complemented by a participatory design procedure that resulted in:

• The identification of the most common trust notions (design of a concept map);

• A unified view of possible trust implications in today's online community structures (participatory design sessions with experts and users). [38]

The design of the model was also complemented by the unification of Davis’s and Venkatesh’s Technology Acceptance Models [8, 44].

The resulting model takes into consideration the 7 observable qualities of trust. These qualities support users' expectations and rational and emotional beliefs, and provide insights into what leads users to construct a set of intentions (predisposition to trust) that result in more or less engaging experiences; interactive experiences that are reflected through users' behaviours (commitment to a relationship or to cooperation). This is an iterative process that evolves through time and can be seen in Figure 1. [41]

TRUST-ENABLING INTERACTION DESIGN SPACE

The main purpose of this tool is to move away from artifact- or user-centered design approaches toward an intentional creation of value [6, 23]. We envision two main applications: one serves to help interaction designers better understand the potential design options and the reasons for choosing them; the other serves to help interaction designers assess existing design solutions for their intentional creation of value.

Figure 1. A model of Human-Computer Trust

It mainly aims to serve as a measuring tape for assessing trust-enabling interaction features. The dimensions of this tool are rationally derived from the model described above.

This section starts by describing the proposed design rationale and then illustrates through examples how to assess system features through the comparative inspection of two peer-production platforms: Wikipedia and Wordpress.

The rationale behind the choice of these peer-production platforms was the authors' ongoing work on the LearnMix project, [18] which aims to re-conceptualize the e-textbook as a collection of professional and user-contributed content available on a wide variety of devices. In this case Wikipedia and Wordpress could represent potential solutions or at least could influence design decisions.

Procedure Our attempt to rationalize the design space was supported by what MacLean et al. [23] proposed as elements of design space analysis. The basic building blocks of Design Space Analysis (DSA) are Questions, Options, and Criteria (the QOC notation). The Questions identify the design issue, the Options provide possible answers to the questions, and the Criteria are the means to assess and compare the options.
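The QOC notation described above can be sketched as plain data structures. The following is a minimal illustration in Python; the class names, fields, and the scoring heuristic in best_option are ours, for illustration only, not part of MacLean et al.'s notation:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Option:
    """A candidate answer to a design Question."""
    description: str
    supports: List[str] = field(default_factory=list)   # criteria the option satisfies
    violates: List[str] = field(default_factory=list)   # criteria the option works against

@dataclass
class Question:
    """A design issue under analysis."""
    text: str
    options: List[Option] = field(default_factory=list)

    def best_option(self) -> Option:
        # Toy comparison: criteria satisfied minus criteria violated.
        return max(self.options,
                   key=lambda o: len(o.supports) - len(o.violates))

# Example: a toy QOC analysis of a trust-related design issue.
q = Question("How should the system disclose default sharing settings?")
q.options.append(Option("opt-in sharing, explained up front",
                        supports=["honesty", "predictability"]))
q.options.append(Option("public-by-default contact list",
                        supports=["engagement"],
                        violates=["honesty", "benevolence"]))
print(q.best_option().description)  # prints the opt-in option
```

In a real DSA session the Criteria would be weighed and argued over rather than simply counted; the sketch only shows how Questions, Options, and Criteria relate structurally.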

The dimensions defining the analytical lens of the proposed design space include two distinct components.

• A static component – The static component of the design space includes the main driving question, the sub-set of questions, and the set of analytical dimensions, as described in Figure 2; and

• A dynamic component – The dynamic component of the design space is dependent on the assessment context; in this case it is represented by the features of Wikipedia and Wordpress.

The main driving question of this value-based design space process is: "What influences the user's predisposition to trust a particular system?" This question aims at a better understanding of the potential design options which leverage the user's predisposition to trust.

The subsequently derived sub-set of questions aims to provide a more granular view of the problem; each of the three sub-questions focuses on one particular belief that, according to the model above (see Figure 1), contributes to leveraging the user's predisposition to trust. The sub-questions are represented as follows:

• SQ1: “What features support the users' belief that the system's features will benefit them?”

• SQ2: “What features support the users' confidence in someone or something to perform a particular desired action?”

• SQ3: “What features support the users' belief in the integrity of the system and its users?”

Finally, the analytical dimensions that define the analytic lens of the design space are expressed by the 7 observable qualities of trust, represented in the model.

Besides supporting users' expectations and rational and emotional beliefs, these observable qualities also provide insights into users' intentions towards trust, reflected in the model above as users' predisposition to trust.

The observable qualities are:

[Motivation] – disposition to believe (even under conditions of vulnerability and dependence) that others' actions will benefit them; [19]

[Willingness] – represents the extent to which one is willing to participate in a given action while considering the risks and incentives involved. [22, 33, 30, 17]

[Predictability] - is represented by a subjective probability of the system or its users completing an expected function during an expected interval of time;

[Competency] – is represented by user's confidence that another user will perform a desired action in accordance with what is expected.

[Reciprocity] – is based on the mutual responsibility and obligation that both parties will act and perform a particular expected social action. [13] It also implies dependable action and cooperating with others in order to achieve a particular expected result. [22, 31, 32]

[Honesty] – the ability to perceive the nature of the intentions of others; more open and transparent attitudes assure users that something or someone is not deceiving them and will act accordingly. [7]

[Benevolence] – both parties have the responsibility to act beneficently (a proactive stance). Good will, sympathy, and kindness raise empathy, which results in the perception of safety and increased confidence. [7, 31, 32]
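The seven qualities above form the analytic dimensions of the design space, and the three sub-questions each probe one underlying belief. The sketch below makes that structure explicit; note that the grouping of dimensions under SQ1-SQ3 is our illustrative reading of the model, not a mapping stated explicitly in the text:

```python
# Analytic dimensions of the design space (the seven observable
# qualities of the Human-Computer Trust model), grouped under the
# three sub-questions. The grouping is an assumption for illustration.
DIMENSIONS_BY_SUBQUESTION = {
    "SQ1 (benefit)":     ["motivation", "willingness"],
    "SQ2 (performance)": ["competency", "predictability", "reciprocity"],
    "SQ3 (integrity)":   ["honesty", "benevolence"],
}

# Flatten the grouping back into the full checklist of dimensions.
ALL_DIMENSIONS = [d for group in DIMENSIONS_BY_SUBQUESTION.values()
                  for d in group]
assert len(ALL_DIMENSIONS) == 7  # the model's seven observable qualities
```

Treating the dimensions as a flat checklist is what the assessment matrix in the next section does; the grouping only records which belief each dimension speaks to.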

Examples of Use The following examples illustrate how to perform assessment of existing systems with the proposed design space through comparative inspection. The analysis is conducted on the two previously mentioned peer-production platforms.

Analytical Assessment of Wikipedia

Static component includes:

A driving question, Q1: What influences the user's predisposition to trust Wikipedia?

The analytic dimensions: motivation, willingness, competency, predictability, reciprocity, benevolence and honesty. The dynamic component includes: Wikipedia features and Wordpress features.

The assessment tool was presented as a matrix where the static elements of the proposed design space were positioned in the top row and the dynamic elements were positioned in the first left column, see Tables 2 and 3.

Each feature was assessed within each analytical dimension. A three-value rating scale was used as the measurement tool: 1 means "it contributes to the intended value", -1 means "it diminishes the intended value", and 0 means not applicable.

Then, as a final step, the results were sorted in ascending order. This procedure was tested with interaction design experts. Examples of the matrix described above are provided in Tables 2 and 3. A brief description of the comparative inspection results is provided in the following section.
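The scoring procedure described above (rate each feature on each dimension with 1, -1, or 0, sum per feature, and sort in ascending order) can be sketched as follows; the function name and the toy feature ratings are ours, for illustration:

```python
DIMENSIONS = ["motivation", "willingness", "predictability",
              "competency", "reciprocity", "honesty", "benevolence"]

def rank_features(ratings: dict) -> list:
    """ratings maps feature name -> {dimension: 1 | 0 | -1}.
    Returns (feature, total) pairs sorted in ascending order of total,
    mirroring the final sorting step of the procedure."""
    totals = {
        feature: sum(scores.get(d, 0) for d in DIMENSIONS)
        for feature, scores in ratings.items()
    }
    return sorted(totals.items(), key=lambda kv: kv[1])

# Toy assessment of two hypothetical features (not the paper's data):
ratings = {
    "Notifications": {d: 1 for d in DIMENSIONS},            # contributes on all dimensions
    "Hidden defaults": {"honesty": -1, "predictability": -1},  # diminishes two dimensions
}
for feature, total in rank_features(ratings):
    print(feature, total)
# "Hidden defaults" totals -2 and ranks first; "Notifications" totals 7
```

With this scheme a feature that contributes on every dimension reaches the maximum ranking of 7, which is exactly the score pattern reported for the Wikipedia and Wordpress features in Tables 2 and 3.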

RESULTS OF THE COMPARATIVE INSPECTION

As mentioned before, the main goal of the work reported in this article was to assess the extent to which the proposed design space can be used to identify trust-enabling features in existing or prototyped systems. The above results enable us to conclude that both Wikipedia and Wordpress include a considerable number of trust-enabling interaction design features.

This is the list of Wikipedia’s features that can explain/describe trust-enabling interactions:

• Administration pages
• Patrolled pages
• Notifications
• Page curation
• User access levels
• Viewing and restoring deleted pages
• Article feedback
• Editor engagement
• Help namespace

This is the list of Wordpress features that can explain/describe trust-enabling interactions:

• Built-in comments
• Publishing with ease
• User management
• Freedom of distribution of the platform
• Own your data
• Community
• Contributions from users

By using this tool we were able to perceive that to provide better trust-enabling interactions we need:

First, to support users with clear norms, policies, and guidelines on how to behave and act in these peer-production spaces. These facilitate the creation of trust bonds, reflected in a form of consensus and clear behavioral rules on how to contribute and on what others' contributions in those spaces will be.

Second, to complement this process we should also offer online moderating roles; these support the processes of sharing information and mediating communication, ensure the credibility of the interaction, and create empathy.

Third, to ensure that the editing process happens peacefully and in accordance with the policies. The use of collaborative tagging or similar features, like rating or comments to evaluate content according to its appropriateness, can ensure quality (competency) and create reciprocity.

Another trust-fostering feature can take the form of "Notifications", "Page Curation", or "Editor engagement". These serve to highlight interactions and encourage new users to participate.

Finally, the results also revealed that explicit statements of privacy increase users' privacy awareness, which helps to establish more transparent and honest policies. This, complemented with clear and supportive communities (forums, activity guides, tutorials), reflects honesty, competency, and predictability.

Thus, in this context we see clearly how the proposed design space can effectively serve as a criteria-based assessment lens.

CLOSING REMARKS As addressed in the article, we envision a growing need to clearly provide trust-enabling hints to users. We propose that this can be achieved by treating trust as value creation; our contribution is a design space analytical tool which serves to move away from artifact-centered design toward an intentional creation of value, i.e. to support interaction designers in further reflecting on trust-enabling interaction design values.

The proposed design space analysis can serve as a criteria or assessment lens. In this sense we envision two main applications for the proposed analytical tool: on the one hand it can help understand the potential design options and the reasons for choosing them; on the other it serves to help assess existing design solutions for their intentional creation of value.

In this regard, we see the design space more as a supportive tool for a design process predicated on a humanistic approach based on Human-Centered Design (HCD) and Participatory Design (PD) than as a substitute for logical or engineering-based design.

As possible future perspectives for the proposed design space, we see the tool being used by experts and serving as a complement to foster users' trust needs. It could also serve as a heuristic set for Human-Computer Trust, as suggested by Vaananen. [43]

What leads users to be predisposed to trust Wikipedia?

Feature                             | Motivation | Willingness | Predictability | Competency | Reciprocity | Honesty | Benevolence | Ranking
Administration                      | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 7
Editor engagement/Team              | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 7
Getting Started                     | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 7
Guided tours                        | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 7
Help namespace                      | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 7
New pages patrol/patrolled pages    | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 7
Notifications                       | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 7
Page Curation                       | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 7
User access levels                  | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 7
Viewing and restoring deleted pages | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 7
Article feedback                    | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 7
Editor engagement                   | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 7
Growth (team)                       | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 7

Table 2. Wikipedia features, which can explain/describe trust-enabling interactions.

What leads users to be predisposed to trust Wordpress?

Feature               | Motivation | Willingness | Predictability | Competency | Reciprocity | Honesty | Benevolence | Ranking
Built-in Comments     | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 7
Publish with Ease     | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 7
User Management       | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 7
Own Your Data         | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 7
Freedom               | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 7
Community             | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 7
Contribute            | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 7
Theme System          | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 7
Application Framework | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 7

Table 3. Wordpress features, which can explain/describe trust-enabling interactions.

Figure 2: The proposed trust-enabling design space. (Diagram: the driving question "What influences the user's predisposition to trust a particular system?" branches into the three sub-questions on benefit, performance of desired actions, and integrity, each assessed along the analytic dimensions of motivation, willingness, competency, predictability, reciprocity, benevolence, and honesty. A set of trust-enabling interaction features contributes to users' expectations and rational and emotional perceptions, which feed trust predisposition, intentions (predisposition to cooperate and to relate), behaviours, and engagement.)

REFERENCES 1. Abdul-Rahman, A., and Hailes, S. Relying on trust to find reliable information. In Proceedings of the 1999 International Symposium on Database, Web and Cooperative Systems (DWACOS'99), 1999.

2. Bachrach, M., and Gambetta, D. Trust as Type Detection. In Trust and deception in virtual societies, C. Castelfranchi, Ed. Kluwer Academic Publishers, 2001, 1–22.

3. Baier, A. Trust and antitrust. Ethics 96, 2 (1986), 231–260.

4. Camp, L. J. Designing for trust. In Trust, Reputation, and Security (2002), 15–29.

5. CNET. What Google needs to learn from Buzz backlash.

6. http://www.cnet.com/news/what-google-needs-to-learn-from-buzz-backlash/ [Accessed: March 2014].

7. Cockton, G. A development framework for value-centred design. In CHI ’05 extended abstracts on Human factors in computing systems - CHI ’05, ACM Press (New York, New York, USA, 2005), 1292.

8. Constantine, L. L. Trusted interaction: User control and system responsibilities in interaction design for information systems. In CAiSE (2006), 20–30.

9. Davis, F. D. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly 13, 3 (1989), 319–340.

10. Deutsch, M., Ed. The resolution of conflict: Constructive and destructive processes. Yale University Press, New Haven, CN, 1973.

11. Dong, Y. The role of trust in social life. In Trust Modeling and Management in Digital Environments: From Social Concept to System Development, Z. Yan, Ed. IGI Global, Finland, 2010, 421–440.

12. Kini, A., and Choobineh, J. Trust in electronic commerce: Definition and theoretical considerations. In HICSS (4) (1998), 51–61.

13. Erikson, E. H., Ed. Identity, youth and crisis. Free Press, New York, 1968.

14. Fukuyama, F., Ed. Trust: The social virtues and the creation of prosperity. Free Press, New York, 1995.

15. Gabbay, S., and Leenders, R. A perceptional view of the coleman model of trust. Research report, University of Groningen, Research Institute SOM (Systems, Organisations and Management), 2002.

16. Garfinkel, H. A conception of, and experiments with, "trust" as a condition of stable concerted actions. In Motivation and Social Interaction. Ronald Press, New York, 1963.

17. Digital Agenda for Europe 2010-2020. http://ec.europa.eu/digital-agenda/en.

18. Kramer, R. Trust and distrust in organizations: Emerging perspectives, enduring questions, vol. 50. Academy of Management, California, 1999.

19. Lamas, D., Valjataga, T., Laanpere, M., Rogalevits, V., Arakelyan, A., Sousa, S., and Shmorgun, I. Foundations for the Reconceptualization of the e-Textbook. In Proceedings of the International Conference on e-Learning ICEL 2013 (Aug. 2013), 510.

20. Lewicki, R. J., Tomlinson, E. C., and Gillespie, N. Models of interpersonal trust development: Theoretical approaches, empirical evidence, and future directions. Journal of Management 32, 6 (2006), 991–1022.

21. Lewis, J. D., and Weigert, A. J. Trust as a social reality. Social Forces 63, 4 (1985), 967–985.

22. Lorenz, B., Sousa, S., and Tomberg, V. Privacy awareness of students and its impact on online learning participation - a case study. In Open and Social Technologies for Networked Learning: IFIP WG 3.4 International Conference, OST 2012, Springer Verlag (2013), 189 – 192.

23. Luhmann, N. Familiarity, confidence, trust: Problems and alternatives. In Trust: Making and Breaking Cooperative Relations, D. Gambetta, Ed. Basil Blackwell, Oxford, 1988.

24. MacLean, A., Young, R. M., Bellotti, V., and Moran, T. P. Questions, Options, and Criteria: Elements of Design Space Analysis. Human–Computer Interaction 6, 3-4 (1991), 201–250.

25. Mayer, R. C., Davis, J. H., and Schoorman, F. D. An integrative model of organizational trust. The Academy of Management Review 20, 3 (1995), 709–734.

26. McKnight, D., and Chervany, N. Trust and distrust definitions: One bite at a time. In Trust in cyber-societies: integrating the human and artificial perspectives, R. Falcone, M. P. Singh, and Y. Tan, Eds. Springer, Berlin, 2002, 27–54.

27. McKnight, D. H., and Chervany, N. L. The meaning of trust. Tech. rep., University of Minnesota - MIS Research Center, 1996.

28. Mishra, A. K. Organizational responses to crisis: The centrality of trust. In Trust in organizations: frontiers of theory and research, R. Kramer and T. Tyler, Eds. SAGE publications Inc., California, 1996, 261–287.

29. Norman, D. A. Emotional design. Ubiquity 2004, January (Jan. 2004), 1–1.

30. PCWorld. Google Buzz Criticized for Disclosing Gmail Contacts. http://www.pcworld.com/article/189081/google_buzz_criticized_for_disclosing_gmail_contacts.html [Accessed: March 2014].

31. Preece, J. Online Communities: Designing Usability and Supporting Sociability. John Wiley & Sons, Inc., New York, NY, USA, 2000.

32. Preece, J. Etiquette, empathy and trust in communities of practice: Stepping-stones to social capital. Journal of Universal Computer Science 10, 3 (2001), 194–202.

33. Preece, J., and Shneiderman, B. The reader-to-leader framework: Motivating technology-mediated social participation. AIS Transactions on Human-Computer Interaction 1, 1 (2009), 13–32.

34. Ries, S. Certain trust: a trust model for users and agents. In SAC ’07: Proceedings of the 2007 ACM symposium on Applied computing, ACM (New York, NY, USA, 2007), 1599–1604.

35. Rotter, J. B. Generalized expectancies for interpersonal trust. The American Psychologist 26 (1971), 443–452.

36. Sousa, S., Laanpere, M., Lamas, D., and Tomberg, V. Interrelation between trust and sharing attitudes in distributed personal learning environments: The case study of LePress PLE. In Advances in Web-based Learning, Lecture Notes in Computer Science, H. Leung, E. Popescu, Y. Cao, R. Lau, and W. Nejdl, Eds., Springer Verlag (2011), 72–81.

37. Sousa, S., and Lamas, D. Emerging trust patterns in online communities. In CPSCom 2011: The 4th IEEE International Conference on Cyber, Physical and Social Computing, IEEE Computer Society (2011).

38. Sousa, S., and Lamas, D. Trust as a leverage for supporting online learning creativity – a case study. eLearning Papers, 30 (2012), 1–10.

39. Sousa, S., Lamas, D., and Dias, P. A framework for understanding online learning communities. In ECEL 2011 - 7th International Conference on e-Learning, H. Leung, E. Popescu, Y. Cao, R. Lau, and W. Nejdl, Eds., Academic Publishers (2011), 1000–1004.

40. Sousa, S., Lamas, D., and Dias, P. The interrelation between communities, trust and their online social patterns. In SCA2011 - International conference on Social Computing and its Applications, F. Xia, Z. Chen, G. Pan, L. T. Yang, and J. Ma, Eds., IEEE Computer Society (2011), 980–986.

41. Sousa, S., Lamas, D., and Dias, P. The implications of trust on moderating learner's online interactions - a socio-technical model of trust. In CSEDU 2012 - Proceedings of the 4th International Conference on Computer Supported Education, M. Helfert, M. J. Martins, and J. Cordeiro, Eds., vol. 2, SciTePress (2012), 258–264.

42. Sousa, S., Lamas, D., and Dias, P. A model of human-computer trust: A key contribution for leveraging trustful interactions. In CISTI'2014 - 9ª Conferencia Ibérica de Sistemas y Tecnologías de Información, IEEE Computer Society (2014).

43. Tyler, T. R., and Degoey, P. Beyond distrust: "Getting even" and the need for revenge. In Trust in organisations: frontiers of theory and research, R. Kramer and T. Tyler, Eds. SAGE publications Inc., California, 1996, 429.

44. Väänänen-Vainio-Mattila, K., and Wäljas, M. Developing an expert evaluation method for user experience of cross-platform web services. In Proceedings of the 13th International MindTrek Conference: Everyday Life in the Ubiquitous Era, MindTrek '09, ACM (New York, NY, USA, 2009), 162–169.

45. Venkatesh, V., Morris, M. G., Davis, G. B., and Davis, F. D. User acceptance of information technology: Toward a unified view. MIS Quarterly 27, 3 (2003), 425–478.

46. Weber, L. R., and Carter, A. The social construction of trust, vol. 33. Springer, 2003.