
Fundamental Inequities in Algorithmic Space: What do games teach us about power in a software mediated society?

James Allen-Robertson

Sociology, University of Essex, Colchester, United Kingdom

Game-space is the ultimate software mediated space, where all activity is known by the space, and all activity is mediated by the space. Outside of these virtual spaces we occupy a reality that is itself increasingly mediated and made by algorithmic processes, driven by the expansion of surveillance capacity enabled by big data. By unpacking games through a framework of Object Oriented Programming, this paper draws on game-space as indicative of the power relations that occur when an individual must operate within an algorithmic construct. The paper focuses on three key aspects: the necessity of calculability, inequity in knowledge and inequity in power, to examine how algorithmic constructs in both game-space and software mediated society govern those individuals that must operate within them.

Keywords: Surveillance, Big Data, Games

In the Game Space

As the loading screen fades away the player is made aware of her character's immediate surroundings and situation. Via manipulation of the peripherals she scans around, spinning the camera located in the game-space that acts as a proxy for her character's point of view. The mesh of algorithmic mechanisms responsible for the continued generation of the world processes data quietly in the background. The mesh's surveillance of the game-space is total. It knows every movement that occurs within it because it has generated that movement, either through direct intervention or through establishing rules and parameters for the objects in the space. The mechanisms also know the player and what the player can see: the input from peripherals, the position of the player's character in the world, the content of the player's view, and the accumulated history of her actions in the game-space thus far. The mesh can only know the player via these variables that act as a proxy for her. However, for the algorithmic mesh to know, plan and pre-empt the player, this is enough.

The player's knowledge of the game-space, relative to the mesh, is infinitely smaller. Though the overall geometry of the game-space is fixed before play begins, the mesh has the liberty of altering the space: placing barriers, blocking routes, moving ammunition caches and distributing enemies. The player only sees a small part of the world, the part designed for her consumption. Invisibly layered on top of the visible space are objects that trigger further algorithmic subroutines, and layers that rationalise the geometric space, making it calculable and therefore knowable to the mesh.

As the player begins to move and act in the space, the mesh monitors, records and adjusts the world in response to the variables known as 'player'. The enemies in the world are spawned in and out of the game-space just outside of her view. They move through the space, navigating using the layers unseen by the player and making decisions based on knowledge known only to the mesh. The distribution of enemies will be adjusted depending on the player's performance in the space and on the level of stress the player is experiencing. The actual stress level of the player is not known by the mesh, but derived from the data it does have: the history of the game experience so far, the game-space's current state and the current input from the player's peripherals. If her stress level drops too low, the mesh may trigger the arrival of large waves of enemies, inclement weather conditions that hinder player performance, or ready a particularly difficult scenario up ahead. If her stress level is derived as too high, enemies will dissipate, resources will be moved onto the player's projected route and she will be given breathing space for a short while. This undulating pattern of calm and intensity moves the player forward as the shifting space subtly directs her through particular alleys and buildings indistinguishable from others. The player, impoverished of knowledge about the game-space, is always reacting, whilst the algorithmic mesh is always planning.
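The pacing logic of this vignette can be sketched in code. The following is a minimal illustration, in which all names, weights and thresholds are invented assumptions rather than anything from an actual commercial AI director: stress is derived from proxy variables the mesh already holds, and the world is adjusted in response.

```python
class Director:
    """A minimal, hypothetical sketch of an AI-director-style pacing loop."""

    def __init__(self):
        self.history = []  # accumulated record of the player's session

    def derive_stress(self, player):
        # The player's "actual" stress is unknowable; the mesh derives a
        # proxy from variables it does hold: recent damage taken, nearby
        # enemies, and how frantically the peripherals are being used.
        stress = (player["recent_damage"] * 0.5
                  + player["nearby_enemies"] * 0.3
                  + player["input_rate"] * 0.2)
        self.history.append(stress)
        return stress

    def adjust_world(self, player, world):
        stress = self.derive_stress(player)
        if stress < 20:       # too calm: escalate
            world["spawn_queue"].append("horde")
            world["weather"] = "storm"
        elif stress > 80:     # too intense: grant relief
            world["spawn_queue"].clear()
            world["resources"].append(("medkit", player["projected_route"]))
        # Between the thresholds the director simply keeps watching.

world = {"spawn_queue": [], "weather": "clear", "resources": []}
player = {"recent_damage": 5, "nearby_enemies": 1, "input_rate": 10,
          "projected_route": "main street"}
Director().adjust_world(player, world)  # stress 5*0.5 + 1*0.3 + 10*0.2 = 4.8 -> escalate
```

The player never sees this loop run; she only sees its consequences arrive on the surface of the game.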

After playing the game for a while she may pick up on these patterns, recognising tell-tale signals in the narrow slice of insight afforded to her by the game's surface. Using these signals she will attempt to pre-empt the algorithmic mesh as it attempts to pre-empt her. She will change her strategies, keeping in mind both the world of the game-space and the algorithmic director that generates it, attempting to plan as much as she reacts. However, she can only ever know the game-space through the data presented to her by the algorithmic mesh itself. She will fail many times before she is able to discern the actual rules by which the world is run. Good thing it's only a game.

Introduction

Game-space is the ultimate software mediated space, where all activity is known by the space, and all activity is mediated by the space. Outside of these virtual spaces we occupy a reality that itself is increasingly mediated and made by algorithmic processes driven by the expansion of surveillance capacity enabled by big data. By unpacking games through a framework of Object Oriented Programming, this paper draws on game-space as indicative of the power relations that occur when an individual must operate within an algorithmic construct.

This is not to say that reality has become a game in a literal sense. Games may be defined as spaces without consequence, and that is certainly not the case here. However, as algorithmically generated and governed spaces, games are increasingly emblematic of our software-sorted society and its mode of algorithmic governance. The unseen construction of algorithmic space, the quantification of the individual and the subtleties of governance by algorithmic means take place in both game-space and code/space. If we want to understand the power relations between individuals and society mediated by algorithmic constructs, then we should look at the power relations between the player and the game.

There are three fundamental qualities of games as algorithmic space I want to highlight. The first is that games are inherently spaces of procedure and calculability: as assemblages of algorithms these spaces operate through calculability and logical rules, fundamentally defined by the mechanics of the space. The second is that there is an inequity of knowledge between the player and the algorithmic assemblage, and the third is that this leads to an inequity of power that ultimately results in algorithmic governance.

Theorising Game Space

Calculation

Caillois' (1961) 'Sociology of play' is foundational to the field of game studies, emphasising play as an embodied act always in a state of tension along a continuum between structured rule-based activities and unstructured playfulness. Suits (1978) expanded this idea, asserting that a game is fundamentally a set of rules, because without such rules the activity itself would not be possible. Suits illustrates his argument with the example of a running race. The race rules force competitors to take an inefficient route around the length of the track, rather than taking the more efficient route straight across the middle. In the unrestricted world, competitors have many options to get from the beginning to the end of the track, but by restricting those options, the actions of competitors become framed as a specific game activity. Salen and Zimmerman's influential computer game design text 'Rules of Play' (2003) extends this classic ludological field into the sphere of computational games, and also discusses the design of games in terms of the restrictive rules that define the game space. This focus on rules indicates why games have translated so well onto computational platforms, which fundamentally operate on calculation and logical operations.

Algorithms are essentially rulesets: a series of operations that will be enacted upon an input in order to achieve an output (Beer, 2013). The agency of algorithms arises from their performance rather than from their inert state as code. Each operation is important, but the agency of algorithms lies in the passing of data from one operation to the next (Introna, 2015). Every aspect of a computer game arises from this performance of interacting algorithms. To the machine performing the code and generating the world, the code is an incredibly fast, complex stream of linear on/off commands that manipulate the physical switches of the hardware. From the intense speed and complexity of these calculations arises a coherent virtual world.

However, the algorithms are not written as a linear series of on/off commands. Though programming began at the level of machine code, with early computers wholly designed for linear calculative tasks, the complexity of programming has led to the development of higher-level programming languages that rely on software to translate human-readable source code into a machine-readable stream. The later development of object-oriented programming (OOP) completely deconstructed the notion of a programme as a linear set of calculations (Sicart, 2008; Alt, 2011). In OOP, the programme is designed as a set of interdependent algorithmic objects, each responsible for particular operations. Objects within a programme can message each other, requesting data or instructing objects to trigger one of their predefined operations, and they have their own properties that indicate how other objects should interact with them. Game space is fundamentally comprised of many digital objects, each messaging and processing individually and as part of the program whole. For Alt (2011), it was this conceptual shift from understanding code as a linear stream to an encapsulated set of algorithmic objects that opened up the opportunity for user interactivity with the looping stream of machine instruction. Within this multidimensional space, the user can be conceptualised as just another object, another process with its own variables, state and ability to message other objects to trigger activity.

For Sicart (2008), drawing on the language and framework of OOP highlights the ontology of games as a collective of discrete interacting subsystems. Rule-focused ontologies tend to conflate game rules and game mechanics, whilst Sicart's OOP framework divides them. In Sicart's framework, mechanics are the methods available to alter the current game state, triggered through messaging between objects. They are the methods for agency within a game space, available to the programming objects and to the player via the proxy of their own input object. The rules of a game are the space of possibility that gets modelled after the mechanics have been determined. When encountered by the player, this rule space is pre-defined by the web of pre-established mechanics and the properties of the objects. For example, a player may be able to press an input to initiate their character's entry into a car object. If the car has the property of 'unlocked', one mechanic will trigger a series of actions that results in the character entering the car object and the player's input being remapped to controlling the car object. If the car has the property of 'locked', a different series of mechanics will initiate. The rules of the game that arise from the interrelation of objects are that the player can access certain cars and that in this game space cars are driveable (in games this is not a necessary truth). 'In this sense, rules are modelled after agency, while mechanics are modelled for agency' (Sicart, 2008, 'Defining game mechanics', para. 11).
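The car example can be rendered directly in object-oriented code. The sketch below is illustrative only, assuming hypothetical class and method names; it shows mechanics as methods triggered by messaging between objects, with the rule that some cars are driveable emerging from their interrelation rather than being stated anywhere.

```python
class Car:
    def __init__(self, locked: bool):
        self.locked = locked          # an object property other objects inspect
        self.driver = None

class Player:
    def __init__(self):
        self.controls = "character"   # what the input object currently maps to

    def try_enter(self, car: Car):
        # A mechanic: a method available to alter the game state, triggered
        # by messaging between the player's input object and the car object.
        if car.locked:
            return "rattle handle"    # the 'locked' branch of the mechanic
        car.driver = self
        self.controls = "car"         # input remapped to the car object
        return "driving"

# The *rule* "some cars are driveable" is nowhere declared; it arises from
# the interrelation of these pre-established mechanics and properties.
player, unlocked, locked = Player(), Car(locked=False), Car(locked=True)
assert player.try_enter(locked) == "rattle handle"
assert player.try_enter(unlocked) == "driving"
```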

Sicart's ontological framing of games does two things. First, it makes games, as algorithmic constructs, comparable to other software systems by reducing them down to their materiality at the level of code and mechanism. Second, the OOP framework highlights important qualities of algorithmic environments, particularly in relation to their role within society. An algorithmic environment, the rules of conduct within it and the rules of its operation arise from the interaction between different programmed objects, its mechanics. The player's ability to act and be known within this space is dependent on being represented by one of these objects, with its own variables, properties, state and bindings, as a proxy for the player. To exist within and affect the world, users must be reduced to calculable, operationalised variables. Finally, agency within an algorithmic space is reliant upon the pre-established mechanics of the space. This is not to say that agency is restricted by these mechanics, but that it is selectively facilitated, or to put it in a more familiar term, that it is governed by the mechanics of the algorithmic space (for an extended discussion see Tulloch, 2014).

Inequity of Agency and Knowledge

Games are popularly understood as a predominantly interactive medium, a quality that differentiates them from other media like literature or cinema. Players can determine the narrative, define their own experience in an open world and choose who they want to be. The game audience is empowered by the agency the medium affords (Williams, 2010). However, in many cases interactivity, such as that of Web 2.0, is about selecting from predefined options: rating products from one to five; male or female; like, or do not. The user is able to trigger actions, to initiate the messaging of an object in the code, but they lack the authorship to redefine the system that they are interacting with (Murray, 1998). The player is instead operating within, and governed by, a pre-existing system. They operate within it, but cannot alter it, at least not without leaving the world of the game.

Foucault's concept of governance has been applied to algorithmic spaces by a number of theorists (Galloway, 2006; Introna, 2015; Tulloch, 2014), and for good reason. In society, Foucault argues, our agency is not simply restricted by governance, but also constituted by it as 'a productive network which runs through the whole social body' (Foucault, 1978, p. 119). There is no 'natural' human state that governance deprives us of, no action outside of the productive network. In a game, the mechanisms are the productive network, constituting the very possibility of agency. There is no player state outside of the game. In shaping our social conduct, governance does not simply deny us agency; it facilitates some potential acts at the expense of others. Games are more effective when the player internalises the logic of the space to such an extent that they perform the role expected of them. Agency within an algorithmic space is governed by the algorithmic construct itself through selective facilitation. The player is made to feel empowered when their choices are facilitated rather than denied, yet to achieve this the player must have already internalised the rules of the system, to choose in a way consonant with the design of the space.

Governance is also appropriate from a technical point of view. Governance does not root power in a single sovereign, instead governance ‘implies a plurality of interdependent actors… that lack the ability to unilaterally act directly, but together can manage the conduct of conduct’ (Introna, 2015, p. 3). In society, the plurality of interdependent actors might be various intersecting state and corporate actors. In an algorithmic construct, they are the individual lines of code in an interdependent relational structure of programmed objects.

Furthermore, much like the process of governance in society, the operation and influence of the algorithmic mesh is not always apparent. Wardrip-Fruin (2009) argues that games do not reveal their inner workings to players outright. Though Salen and Zimmerman's (2003) design manual states that the 'rules' of a game are explicit and unambiguous, this may only be the case for the author, not the user. To the user, the foundation of the game, the processes that simulate the world and the data that it collates and draws from, are disguised by an opaque surface. To the user, at least initially, the interface is the entire algorithmic construct. The user can interact with the surface, which in turn interacts with the foundations; however, that translational process, and what happens once that interaction has occurred, is obscured. What is presented on the surface is entirely at the discretion of the design of the foundation. The foundation may be drawing in external data from outside of the programme, making network connections and utilising external processes without necessarily expressing that on the surface (Wardrip-Fruin, 2009).

The degree to which a system's surface will reveal the operation of the underlying model depends on the design of the system. Some systems may rely on appearing to be much more complex beneath the surface than they actually are, relying on a veneer of 'algorithmic objectivity' (Gillespie, 2014). Others may present a simplistic surface, disguising great complexity beneath (for further discussion of both see Wardrip-Fruin, 2009). This discretion in how much to share with the user, and therefore the inequity in knowledge, is foundational to any algorithmic system, whether it is a game disguising the decision making of AI opponents, your phone quietly sharing locational and sensor data with your service provider, or a piece of malware encrypting your computer without your knowledge. Fundamental to any algorithmic construct is an inequity in knowledge: whilst the construct is aware of every operation and data point that feeds into it, expressing any or all of that knowledge to the user is a design decision, and for most processes it will not be expressed. By default, knowledge is obscured until it is revealed.
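The surface/foundation split can be made concrete in a few lines of code. In this minimal sketch (the class, endpoint and behaviour are invented for illustration, and do not describe any real system), the surface method returns a simple result whilst the foundation quietly accumulates data; nothing obliges the foundation to express its activity.

```python
class WeatherApp:
    """Hypothetical app: a simple surface over an unexpressed foundation."""

    def __init__(self, telemetry_endpoint: str):
        self.endpoint = telemetry_endpoint
        self._log = []

    def _foundation(self, location):
        # Foundational activity: collect data with no surface trace. A real
        # system might ship self._log to self.endpoint; omitting any notice
        # of this is a single line of design discretion.
        self._log.append({"loc": location, "event": "forecast_request"})

    def forecast(self, location):
        # Surface activity: the only thing the user ever sees.
        self._foundation(location)
        return f"Sunny in {location}"

app = WeatherApp("https://analytics.example.com/ingest")
print(app.forecast("Colchester"))  # surface: a forecast
# foundation: app._log now holds locational data the surface never mentioned
```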

Procedural Rhetoric and Governance

Though it is not done explicitly, aspects of an algorithmic system obscured from view can be revealed by the user through their interactions with it. According to Will Wright, creator of SimCity and The Sims, 'playing the game is the process of discovering how the model works' (Starr, 1994, 'Inside SimCity', para. 13). As the player engages with a model (at the surface level), such as SimCity's model of urban systems, they will receive feedback on how their changes to the model's values influence other components. Players will often begin from a position of ignorance about the model, ready to implement past experience of playing similar games to begin teasing out the shape of the systems beneath. Wardrip-Fruin (2009) refers to this as the 'SimCity Effect', whereby an incredibly complex model slowly becomes visible to the player through play. In SimCity, and many other games, teaching the user to play the model is part of the design and an indicator of player mastery. Players that reach professional levels of competitive play are often characterised by an approach to game systems known as 'min-maxing': a style that optimises achievement through manipulating the model. Min-maxing only occurs after the model has been thoroughly discovered through a long process of experimentation and failure (Juul, 2013). Crucially, this process of model discovery is not a necessary quality of all algorithmic systems. In games the player receives sufficient feedback about the model's responses because the game is ultimately designed not to defeat the player, but to facilitate the player's experience. The opportunity to uncover the complexity beneath must still be designed in from the start.

However, uncovering the model is also a process of internalising the model. Whether or not the player fully understands the complexity running beneath the surface, this model can be used to influence player conduct and knowledge. For theorists who approach games as systems, such as Wardrip-Fruin (2009) and Bogost (2007), games are interesting because they are able to express meaning in the rule space generated by the game's mechanics, through the way they model reality. Bogost (2007) refers to this as 'procedural rhetoric': a message that can only be received by being required to act and make decisions within the confines of a system. By operating within the model presented, even if that model is not explicit, the player is encouraged to internalise the logic of that system, leading them to a particular conclusion or to adopt a particular behaviour.

Bogost cites The McDonalds Game as one example of procedural rhetoric. The game places the player in charge of managing production, retail and PR for a global fast-food chain. There is a variety of options available to the player to increase profits, such as over-farming third-world land, using growth hormones on cattle, running a child-focused marketing campaign, and buying the support of a medical professional. By balancing the game in such a way that the player will catastrophically fail unless they engage in these immoral practices, the player internalises the view that the industry is unsustainable without them. Opting not to use these tactics will result in a swift game over. The player may begin with good intentions, but as they internalise the model, they will make small concessions in the interests of improving performance. The game presents an 'operationalised model' that expresses a position through the shape of its process (Wardrip-Fruin, 2009, p. 4). Changing the model, such as making it possible to win without engaging in any immoral practices, is entirely possible, but would express a different procedural rhetoric.
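A toy version of such an operationalised model, with numbers invented purely for illustration, shows how the rhetoric lives in the parameters rather than in the diegetic content: with these values the ethical strategy always loses money, and changing them would make the game argue something else.

```python
# A hypothetical operationalised model in the spirit of Bogost's example.
COSTS_PER_TURN = 100

def profit(turns, use_immoral_practices):
    # The balance of the model, not its fast-food theming, carries the
    # rhetoric: only the immoral strategy covers the per-turn costs.
    revenue_per_turn = 140 if use_immoral_practices else 80
    return turns * (revenue_per_turn - COSTS_PER_TURN)

print(profit(10, use_immoral_practices=True))   #  400: the model rewards it
print(profit(10, use_immoral_practices=False))  # -200: a swift game over
```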

Procedural rhetoric highlights the way that games are able to establish knowledge regimes through their active processing. These knowledge regimes subtly govern user behaviour as users uncover and internalise the mechanics and models in order to achieve the goals set out by the game. The diegetic content of the game space is secondary to the model that underlies it, because it is the model that determines the range of player agency, the best means by which the player should operate in the space, and the goals that the player should achieve.

World as game

The rapidly increasing role of algorithmic systems in our society is ever more apparent, and of interest to a number of scholars (Beer, 2009, 2015; Chun, 2011; Kitchin, 2014; Kitchin & Dodge, 2011; Mackenzie, 2006; Manovich, 2013). These often-interconnected systems inhabit the most mundane moments of our individual daily lives, and equally operate at the highest levels, performing complex social sorting of populations and decision making in government and financial markets. The ubiquity of computing is ever increasing, with the much-hyped 'Internet of things' promising further integration of material objects with networked computational process. With increasing permeation of devices, increasing opportunities for data collection, and increasing integration of the algorithmic into practice and process, the 'technological unconscious' (Thrift & French, 2005) that both simulates and structures life becomes ever more intrinsic to the assemblage of our society.

Compared to the relatively neat simplicity of game-space, the assemblage of reality is much more diverse and complex. However, with algorithms increasingly deployed between objects and subjects, we move closer to the totality of algorithmic mediation experienced by the player within a game. If we want to understand the operation of power in an increasingly algorithmic society, we can draw from the fundamentals of power and agency that operate in a totalising algorithmic space such as a game, and consider whether they might provide us with a better framework for understanding the operation of algorithms within the assemblage.

Rendering calculable

A game environment does not need to be rendered calculable; it arises from algorithmic process and so is, a priori, calculable. However, anything that does not arise from the construct but needs to have a role in it, like the player, must be made calculable through the generation of data. This is equally the case for any computational algorithmic construct in the world. In many sectors of society, this process of knowing reality for algorithmic systems has been transformed by a variety of developments known as 'Big Data' (Burrows & Savage, 2014; Couldry & Powell, 2014; Kitchin, 2014; Lyon, 2014). The big data environment differs from 'small data' infrastructure in a variety of ways. A small data project may be of great volume, but it is directed and designed with a specific purpose, often relying on sampling much larger populations and particular periods of time. Big data inverts these principles, relying predominantly on the automated generation of data as an aside to the normal function of computational devices, 'before determining the full range of their actual and potential uses' (Lyon, 2014, p. 4).

This transformation is supported by ubiquitous computing projects that integrate networking and data collection facilities into a seemingly endless stream of previously simple objects. Computation generates data as a by-product, meaning that the inclusion of computation into an object is equally the inclusion of the capacity to generate data (Schneier, 2015). Automated data generation operates in tandem with, and is linked to, volunteered data, providing further depth and diversity to the data archives. Reviews, filled forms, practices of self-quantification (Whitson, 2014) and, most importantly, social media have opened up a whole other realm of reality to calculability. Add to this assemblage the lesser-known worlds of distributed environmental sensors, state and private surveillance, logistics, and the spatial mapping projects commonly known as GIS, and we begin to see quite how pervasive the generation of data about the world is (Kitchin, 2014). Wherever there is computation there is the capacity to generate data.

Despite the name, it is not simply the amount of data that has made big data so influential. It is the speed with which it can be processed, the flexibility that allows the combination of diverse data sets, and the automation that instigates action dependent on its outputs (Couldry & Turow, 2014). The flexibility of big data infrastructure supports the joining up of disparate sources as long as there are common identifiers between the data sets that can be matched. These systems are designed for extendibility, ready for new future sources to integrate flexibly on the fly (Kitchin, 2014). Crucially, this modelling of reality is not a snapshot; it can occur in real-time, essentially creating a concurrent model of reality. As the sources of data rapidly increase, so does the breadth and resolution of the concurrent virtual model. It is not difficult to imagine that the scope of data generative practices is set to increase. The rationale for integrating software and data collection has swiftly become hegemonic amongst those with the greatest vested interest, and the underlying logic of measurement, of rendering the world calculable, has become seductive at the individual level (Kitchin & Dodge, 2011, p. 106).
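The mechanics of this relationality are simple to sketch. In the minimal illustration below, where the sources, identifiers and fields are all invented, disparate records are matched on a shared identifier into a single richer profile, and extending the model is just a matter of passing in the next source.

```python
# Hypothetical, disparate data sources, each indexed by a common identifier.
purchases = {"u42": {"last_purchase": "prenatal vitamins"}}
location  = {"u42": {"home_cell": "cell-0193", "work_cell": "cell-2210"}}
social    = {"u42": {"declared_interests": ["running", "parenting forums"]}}

def join_on_identifier(uid, *sources):
    """Merge every record indexed by the same identifier into one profile."""
    profile = {"id": uid}
    for source in sources:
        profile.update(source.get(uid, {}))
    return profile

# A system designed for extendibility just adds the next source to the call;
# no redesign is needed for data whose uses were not determined in advance.
print(join_on_identifier("u42", purchases, location, social))
```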

The diversity of sources, linked through the data's relationality, means there is little space to operate outside the virtualisation, as any moment of data generation may feed in. Unlike in game-space, an individual can act in the world without predefined computational mechanisms allowing it. However, if they wish to act within a realm that is reliant upon the operation of software, a 'Code/Space' (Kitchin & Dodge, 2011), they must operate within the context defined by that algorithmic construct. An individual could, for example, vow to avoid financial surveillance, submitting only to a cash-based economy. However, the moment they need services, employment or goods there is a significant chance they must interact with some form of computational process. The model is generated, not by some centralised Orwellian authority, but by a mesh of public/private models which regularly intersect, blurring the boundaries between different systems and contexts. Private companies may rely on open access to state data, whilst the state may purchase data sets from the private sector when regulation forbids it from collecting the information itself. Health data can be re-contextualised to assess insurance risk, marketing profiles utilised to make decisions about national security (Pasquale, 2015). This sea of algorithmic systems forms a 'mesh' of mechanisms, a 'variable geometry' (Deleuze, 1992) of merging private and public systems that is increasingly inescapable.

Unlike the geometry and spatial elements of game space, material reality is not an algorithmic construct. However, our social reality is becoming indivisible from the algorithmic constructs that occupy it. The model that concurrently mirrors our social reality, the 'technological unconscious' (Thrift & French, 2005, p. 223) virtualised from the stream of diverse and disparate data sources, is increasingly becoming a major actor in, and constitutive of, our reality (Fuller, 2003). Software's integration into our sociomaterial assemblage moves our software sorted society ever closer to game space. The individual in a software sorted society is increasingly analogous to a player in a game space and is subject, as is intrinsic to any algorithmic construct, to the same inequity in knowledge and inequity in power.

Inequity of knowledge

In game-space there is a disparity in transparency: the game knows all about the player, but the player knows only what is represented on the surface by the game. In a society interpenetrated by software, the individual is transparent to the data analytics, but the analytics are hidden from the individual.

Under big data, 'contemporary surveillance expands exponentially – it renders ordinary everyday lives increasingly transparent to large organisations. The corollary, however, is that organizations engaged in surveillance are increasingly invisible to those whose data are garnered and used. This "paradox" is deepened by the advent of big data' (Lyon, 2014, p. 4).

We are often shocked to learn about even the simplest aspects of surveillance, such as that our phones log our location, that our cars monitor our speed, or that our browsers record our browsing habits. Our shock stems, for the most part, from the realisation that these activities often occur for the benefit of those other than ourselves. We are unaware of these operations because the algorithmic constructs are not designed to make them clear. No surface activity occurs, only foundational, because there are no mechanisms set to express that activities are occurring. In some cases, we can rely on those with the expertise to understand programming to point out when untoward processing might be happening. The best way to interrogate software is to view its source code, the human-readable object-oriented scripts that are yet to be rendered into machine code. However, in proprietary systems these files are not always forthcoming. What is predominantly delivered to our devices is an inscrutable series of machine operations that must elect to inform us of their operation. As such, these algorithmic constructs are often inscrutable yet automatic in their execution (Introna, 2015).

In any algorithmic construct it is not necessary that the surface reflect the processing that occurs in the foundation. In some constructs, the surface disguises the operations beneath, providing seemingly simple results after incredibly complex operations. In other cases, the surface may express great complexity when actually very little is occurring beneath (Wardrip-Fruin, 2009). From an ethical point of view, both these scenarios matter when the algorithmic construct is integrated into our social world, and if we only have access to the surface, we cannot be certain which of the scenarios may be in play. For example, credit scoring assesses an individual's financial risk based on a vast array of data sources about their financial life, producing an assessment for companies considering offering them credit. The scoring process draws on a vast array of processed data; however, the surface result, presented as an objective statement on the individual, is a singular credit score. With no further rationale provided, the score appeals to its algorithmic foundation for its authority and objectivity. With little insight into how the scores are produced, individuals are often left to guess what behaviour may or may not result in a poor score, or must engage in arduous work uncovering whether an incorrect piece of data may be resulting in 'cascading disadvantages' (Pasquale, 2015, p. 32) as it sets off a chain reaction in the scoring processes. The opacity of the credit scoring process might be considered acceptable if it appeared to be accurate; however, an individual's score can vary wildly depending on the credit bureau queried, indicating the process is far from objective (Citron & Pasquale, 2014). Credit scoring, and all other predictive analytics, are meant to provide a 'line of sight' (Amoore, 2009), a way to know and mitigate future risks. In credit scoring, and many other algorithmic systems that seek to generate line of sight, the operation of the system is opaque, hidden beneath surfaces used to disguise the foundation. Just as in game space, these algorithmic constructs, and the knowledge at their disposal, are hidden from the view of those that are so closely managed by them.

The indexicality and relationality of big data has rapidly improved the ability of algorithmic systems to accurately associate disparate pieces of data with a single individual. The players in the algorithmic space can be known, often in real-time, their movements and actions easily discerned through metadata as they move through the modelled environment (Lyon, 2014). Context-aware adverts, delivered based on a mix of geographical location, recent browsing history and expressed preferences, seek not to know audiences but to know discrete individuals (Couldry & Turow, 2014). However, they are not known holistically. Just as with the player in a game space, the individual within a software mediated society can only be known by the proxy variables indexed to them. In a game, the player is the sum of the sources of data about them: peripheral input and historical data. It is a thin image of the individual, but it is enough for the context of play. In the assemblage of software and society, the individual is known by similar means, through various sources of data either given freely or generated through their actions. They become 'dividual' (Deleuze, 1992), known by many different measurements, divided into many parts. It is not even necessary for the individual to divulge data on an aspect of themselves for it to be known: with enough relevant variables, new variables can be derived through processing, without a corresponding trigger phenomenon. Pregnancies can be derived from shopping habits, sexualities derived from social ties, travel plans derived from historical locational data (Pasquale, 2015). Multiple models of the individual, drawn from different sets of parts, exist in different systems, yet none of these 'data doubles' (Lyon, 2014, p. 6) is sufficient to be a holistically accurate model. It is a thin image of the individual, but it is enough for the context at play.
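A derived variable of this kind is trivial to produce once the dividual components are indexed together. The sketch below is purely illustrative (the proxy variables, weights and threshold are invented, and real systems are opaque about all three): no one has disclosed the attribute, yet the construct emits it as a new data point.

```python
def derive_attribute(profile, weights, threshold=0.5):
    """Score a profile's proxy variables; emit a new data point when the
    score crosses the threshold, with no disclosure ever having occurred."""
    score = sum(weights.get(k, 0.0) for k, v in profile.items() if v)
    return score >= threshold

# Hypothetical proxies in the spirit of the shopping-habits example.
proxy_weights = {"bought_unscented_lotion": 0.3,
                 "bought_prenatal_vitamins": 0.4,
                 "stopped_buying_wine": 0.2}

profile = {"bought_unscented_lotion": True,
           "bought_prenatal_vitamins": True,
           "stopped_buying_wine": False}

# The derived data point now exists in the model whether or not it is true
# of the individual, and decisions can be keyed to it.
print(derive_attribute(profile, proxy_weights))  # True (0.7 >= 0.5)
```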

The data double exists as a representation of the player in a software sorted world, constantly shifting and altering without the player's knowledge. Yet in this space, 'the data body not only claims to have ontological privilege, but actually has it' (Critical Art Ensemble, 1995, quoted in Kitchin, 2014, p. 177). The complexity of the self becomes rendered into something that is calculable, and in that rendering given objectivity (Gillespie, 2014) and moral authority (Introna, 2015). Quantification, particularly in data-driven efforts to renew the 'care of the self' (Foucault, 1990), appeals to us as a way of uncovering hidden truths about ourselves. By tracking measurements such as weight, calorie intake, or productive hours we hope to know ourselves and be better for it (Whitson, 2014). However, this ontological privilege is claimed by a data body often constructed outside of our purview, in the interests of others. As in the game space, the data double can only express variables predefined by the algorithmic construct, and the individual does not have authorship over that construct. The constructs only see the dividual components, yet these have come to mean more than the individual themselves. The individual is only known by a predefined set of variables and data points, but for the purposes of the construct, though not necessarily for the purposes of the player, this is enough.

Inequity of Power

What does it mean for an individual to be known, and to act, within an algorithmically saturated space? What is the balance of power between the individual and the construct that they operate within? In game-space, we saw that agency within the construct is not simply restricted. It is a matter of constitutive facilitation, whereby the possibility of agency must already be pre-empted, and pre-allowed, by the design of the algorithmic mechanisms. There is resonance between this framing and Deleuze's (1992) concept of the control society. The concept, derived from and extending Foucault's work on governance and power, proposes that contemporary power now operates not through great institutions but via a dispersed mesh of control mechanisms. These mechanisms continuously exert normalising pressure on us through the monitoring and processing of us as data. A key aspect of Deleuze's concept is that the process of normalisation is not restrictive but selectively facilitative, which he likens to the building of highways: infrastructure that simultaneously facilitates travel to certain locations whilst dissuading travel to others. Deleuze's framing of contemporary governance expresses its Foucauldian roots, echoing the idea that modes of power can only facilitate our agency, as the possibility of action outside them is a fallacy.

Deleuze's mechanisms echo those delineated in an OOP framework: each a mechanism in its own right, but also messaging between each other to form the construct as a whole. Power arises in the mechanisms of the control society not through the collating of data, but through its processing (Introna, 2015). The individual may act within the construct, but only in ways facilitated by the construct in line with how that individual is known through their data double. The kinds of cultural content that are recommended to them (Beer, 2013), the kinds of financial resources they may rely on (Pasquale, 2015), or even the extent of their mobility between and within national boundaries (Amoore, 2009), may be significantly determined by the way their data is processed within the construct. This 'entity then acts back on those with whom the data are associated, informing us who we are, what we should desire or hope for, including who we should become. The algorithms grip us even as they follow us…' (Lyon, 2014, p. 7).

The ontological privilege claimed by the data body ultimately means that the decisions made about us based on our data become decisions about us. The data points are not objectively indicative of anything in themselves; through processing and decision making they become indicative. Amoore (2009) demonstrates how the war against terror has become a hunt for the 'unknown terrorist' using the 'residue' data of daily life. Joining the dots of individuated pieces of data eventually results in a new derived data point, a positive identification of terrorist intent. The individual determined by these processes is a possible future, an individual that has not yet acted, but could act, and thus must be managed to ensure they do not. The processing that occurs in this amalgamation of data is, as we have seen above, unseen, and yet can drastically impact the lives of individuals. The dots are not always joined up in the right way, and false identifications can be made (see Pasquale, 2015 for a range of examples). The derived identifier, based on the processing of individuated data, actively performs the world, shaping it as we choose to imbue it with objectivity. As such, just as in game-space, the world is both monitored and constructed by the processing of an algorithmic construct.

Crucially for many writers, it is the predictive, pre-emptive capacity, the seeking to draw from past data and pre-empt future action, that is problematic. In game-space, the player is known by two data sets: the variables assigned to their input, and the aggregated data of their past actions. This data is used to place the player in the space, but also to determine the most effective approach in engaging with them. Transparent to the construct, the player is constantly pre-empted by the space, and the world altered around them. However, the player, at least until they discern the rules of the space, is always reacting to the space. The processes at play, the way data will be utilised, or even the existence of such data, are not yet clear to the player. The player is always reacting.

In Deleuze's control society, the power relations between construct and individual are similar in that individuals often neither know how data about them will be processed, nor necessarily that there is data about them at all. Lyon (2014), drawing on Kerr and Earle's (2013) work on marketing analytics, outlines three intents that may drive an organisation to subject individuals to pre-emptive analytics. The first, 'consequential' prediction, seeks to help an organisation act in a way that will benefit those being surveilled, such as providing data that will help customers make better informed choices. The second, 'preferential' prediction, seeks to second-guess our preferences so that organisations may be more efficient in their use of resources, such as targeted advertising through tracking cookies. The final intent is perhaps the most problematic because, unlike the former two, the subject in question is unlikely to know it is occurring, due to the foundational problem of inequity of knowledge. 'Pre-emptive' prediction operates to reduce the range of options for the individual in question, funnelling them towards a particular conclusion or behaviour before they reach the point of making a decision. Pre-emptive prediction can already be seen most strikingly deployed in the virtual assistant market, the most popular assistants being Apple's Siri, Google 'Now' and Microsoft Cortana. These systems, drawing on our vast and complex data doubles, seek to answer our questions before we ask them and tell us what we should do next (Jenkins, 2010). These systems are pre-emptive analytics with interfaces, where the surface expresses to us some degree of the foundational process. However, the key issue with such systems is that they do not necessarily divulge their operation to us. The range of flight prices, the options or speed through a customer service line, or the existence of financial products can all alter depending on the pre-emptive analytics at play within their own particular assemblage of the algorithmic construct (Pasquale, 2015).

We are not ignorant of the existence of these practices and processes but, as with the black box of credit scoring, we do not have insight into the processes themselves. We are aware that we are being measured and we seek to act in a way beneficial to the measurement. The act of measurement becomes productive in that, by being aware of the surveillance process, we act in accordance with the expectations of the process, perhaps under the auspices of self-improvement, or to ease our movement through the system (Beer, 2015). However, we do not necessarily know how our actions will be interpreted by the processes beneath. As in game-space, we must discover the rules, often through a process of directed failure that might allow us to tease out the edges and contours of the foundation's operation. Those seeking to improve their credit scores have no end of purported methods that may or may not solve their problem (Citron & Pasquale, 2014). Those who find their PVR has incorrectly determined their cultural preferences may have to be careful in what consumption choices they make if they want the algorithms to get back on track. More gravely, individuals needing to disguise their political or sexual identities in less hospitable regimes may need to think incredibly carefully about what any number of measurable choices may mean once processed by the unseen construct. The process of rule discovery in games is heavily reliant on the embrace of failure. However, when the same logics are applied in a software mediated society, the consequences are much more than a game.

Conclusion

The power relations inherent in the interaction between an individual and an algorithmic construct are demonstrably similar whether that construct is a highly circumscribed virtual space or a part of Deleuze's algorithmic mesh of control. Both spaces are fundamentally driven by their calculability and the rendering of phenomena to algorithmic process. The necessity to make phenomena machine-readable and machine-executable requires reduction. This rendering calculable opens up the first inequity, the inequity of knowledge. With all data and process rendered machine-readable, human readability becomes a design decision rather than an inevitability. Access to knowledge about the processes occurring and the data produced must be granted through the production of surface phenomena. The result is an inherent inequity in the knowledge held between the algorithmic construct and the individuals within the construct. Finally, this inequity of knowledge cascades into an inequity in power. The rapid shift towards utilising data for predictive purposes, alongside the vast institutional, governmental and private celebration of these technologies, has imbued the opaque algorithmic process with almost infallible objectivity. The models made by the processing take ontological privilege over the subjects they model and become truth. Yet with these modelling processes being so opaque, the population governed by them has little way of knowing precisely what their actions might say about them once processed.

In a contemporary surveillance society, we know that we are being surveilled, we may even significantly contribute to it ourselves, but we do not know what that surveillance means. We must engage in the process of rule discovery, epitomised for Wardrip-Fruin (2009) in the game SimCity, which slowly but surely reveals its hidden model through the act of play. In 1994 Starr considered how SimCity exemplified the ways that urban policy decisions were increasingly reliant on often incomplete, inaccurate, low-resolution models. His concern was that the rationale that allowed the model to take ontological privilege over the urban realities of the city streets might become more pervasive within policy for many years to come. Starr was quite prescient when he said, inflected with a degree of concern, 'we shall be working and thinking in SimCity for a long time' (Starr, 1994, 'Simulation in reality', para. 11).

Notes

This vignette is predominantly based on the game series Left 4 Dead by Valve. An in-depth exploration of the game's 'AI director' can be seen at https://www.youtube.com/watch?v=WbHMxo11HcU

References

Alt, C. (2011). Objects of our affection: How object orientation made computers a medium. In E. Huhtamo & J. Parikka (Eds.), Media archaeology: Approaches, applications, and implications. Berkeley, CA: University of California Press.

Amoore, L. (2009). Lines of sight: On the visualization of unknown futures. Citizenship Studies, 13(1), 17-30. doi:10.1080/13621020802586628

Beer, D. (2013). Popular culture and new media: The politics of circulation. New York: Palgrave Macmillan.

Beer, D. (2015). Productive measures: Culture and measurement in the context of everyday neoliberalism. Big Data & Society, 2(1).

Burrows, R., & Savage, M. (2014). After the crisis? Big Data and the methodological challenges of empirical sociology. Big Data & Society, 1(1).

Citron, D. K., & Pasquale, F. (2014). The scored society. Washington Law Review, 89(1).

Couldry, N., & Powell, A. (2014). Big Data from the bottom up. Big Data & Society, 1(2). doi:10.1177/2053951714539277

Couldry, N., & Turow, J. (2014). Advertising, big data, and the clearance of the public realm: Marketers' new approaches to the content subsidy. International Journal of Communication, 8, 1710-1726.

Deleuze, G. (1992). Postscript on the societies of control. October, 59, 3-7. doi:10.2307/778828

Foucault, M. (1990). The history of sexuality, volume three: The care of the self. London: Allen Lane/Penguin.

Fuller, M. (2003). Behind the blip: Essays on the culture of software. New York: Autonomedia.

Galloway, A. R. (2006). Gaming: Essays on algorithmic culture. Minneapolis: University of Minnesota Press.

Introna, L. D. (2015). Algorithms, governance, and governmentality: On governing academic writing. Science, Technology & Human Values. doi:10.1177/0162243915587360

Jenkins, H. W. (2010). Google and the search for the future. The Wall Street Journal. Retrieved from http://www.wsj.com/articles/SB10001424052748704901104575423294099527212

Kerr, I., & Earle, J. (2013). Prediction, preemption, presumption: How Big Data threatens big picture privacy. Stanford Law Review Online, 66, 65.

Kitchin, R. (2014). The data revolution: Big data, open data, data infrastructures & their consequences. London: Sage.

Kitchin, R., & Dodge, M. (2011). Code/Space: Software and everyday life. Cambridge, MA: MIT Press.

Lyon, D. (2014). Surveillance, Snowden, and Big Data: Capacities, consequences, critique. Big Data & Society, 1(2).

Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Cambridge, MA: Harvard University Press.

Sicart, M. (2008). Defining game mechanics. Game Studies, 8(2). Retrieved from http://gamestudies.org/0802/articles/sicart

Starr, P. (1994). Seductions of Sim: Policy as a simulation game. The American Prospect, 17(Spring), 19-29. Retrieved from http://www.princeton.edu/~starr/17star.html

Thrift, N., & French, S. (2005). The automatic production of space. In N. Thrift (Ed.), Knowing capitalism. London: Sage.

Tulloch, R. (2014). The construction of play: Rules, restrictions, and the repressive hypothesis. Games and Culture, 9(5), 335-350. doi:10.1177/1555412014542807

Wardrip-Fruin, N. (2009). Expressive processing: Digital fictions, computer games, and software studies. Cambridge, MA: MIT Press.

Whitson, J. R. (2014). Foucault's Fitbit: Governance and gamification. In S. Walz & S. Deterding (Eds.), The gameful world: Approaches, issues, applications. Cambridge, MA: MIT Press.