Value-Inspired Testing - Renovating Risk-Based Testing, & Innovating with Emergence (2012 paper)



EuroSTAR conference Nov 2012, Amsterdam


Value-Inspired Testing: Renovating Risk-Based Testing, and Innovating with Emergence v1.0

Neil Thompson [email protected] @neilttweet @neiltskype

Thompson information Systems Consulting Ltd www.TiSCL.com +44 (0)7000 NeilTh (634584)

23 Oast House Crescent, Farnham, Surrey, GU9 0NP, England, UK

Abstract

Is testing “dead”? Some parts are declining, but evolution can inspire survival. To renovate the use of risk: collate current variants, eg “Risk-Based”, “Risk-Driven”; use a context-driven mix of principles; prioritise testing from high to low (not zero); consider value as benefits minus risks; remember risk applies throughout testing, from static testing through execution, bug-fixing and beyond; integrate risk into Value Flow ScoreCards to manage across complementary views of quality. To innovate: consider evolution in nature: periods of ecosystems in stability, punctuated by innovative disturbances; as genes evolve in biology, “memes” evolve in thinking; testing’s history suggests some specific memeplexes; natural innovation seems to emerge on a path between “excess order” and “excess chaos”; could testing evolve similarly? Try Johnson’s “Where good ideas come from”.

So: VIVVAT, Value-Inspired Verification, Validation And Testing! Please join me in exploring our future.


Contents

Abstract
0. Introduction
1. Renovating the use of Risk in testing
1.1 Current variants of Risk-Based Testing etc
1.2 Context-driven mix of available principles
1.3 Risk-Graded Testing
1.4 Value-Graded Testing
1.5 Value-Inspired Testing
1.6 Value Flow ScoreCards
Through the lifecycle
Integrating risk
2. Innovating in testing, using Emergence concepts
2.1 Evolution in Nature
Biology
Relationship with other sciences
2.2 Evolution of Software Testing
The view ahead
The story so far
2.3 Genes to Memes
2.4 Memeplexes in the History of Testing
2.5 Emergence between “Too Much Chaos” and “Too Much Order”
2.6 Innovation and Ideas for Testing
3. VIVVAT Value-Inspired Verification, Validation And Testing
References and Acknowledgements


0. Introduction

The theme of this EuroSTAR 2012 conference is "Innovate & Renovate: Evolving Testing". In his call for submissions, programme chair Zeger van Hese included a quotation from William Edwards Deming: "Learning is not compulsory... neither is survival." This is presumably a veiled threat – if we don’t learn, we may not survive. But is it already too late? Several speakers have recently alleged that testing is dead, or some very similar message:

Tim Rosenblatt (Cloudspace blog, 22 Jun 2011), “Testing Is Dead – A Continuous Integration Story For Business People”;

James Whittaker (STARWest, 05 Oct 2011), “All That Testing Is Getting In The Way Of Quality”; and

Alberto Savoia (Google Test Automation Conference, 26 Oct 2011), “Test Is Dead”.

There may be others. But I suggest that at least some of these commentators seem to be talking mainly about “the testing phase”, with an emphasis on functional testing, “independent” of the developers. They mean in particular purveyors of standard, manual testing, which is increasingly offshored or automated. No-one seems to think that performance, security or privacy testing is dead. No-one seems to be suggesting that developers have stopped testing, nor that everyone else should stop. It is more a question of who tests, and how.

So in this paper, when I talk about testing, I mean all of testing. I include: not just dynamic testing (executing software), but various kinds of static testing, eg reviews; not just functional testing but all the non-functional (or para-functional) types – and this list itself may evolve.

I consider what we can learn from the history of testing and its place in the “ecosystems” of IT products and projects. Testing has been called many things: an art, a craft, and more recently some people (including myself) have been trying to make it more of a science – even if that means it is a social science (as Cem Kaner argues).

I think of testing as “value flow management” – we should be facilitating / assisting / monitoring / measuring / improving / optimising (according to your taste, context and role) the flow of value all the way from ideas in people’s heads (initial requirements) through to not only implemented but also service-managed, supported and maintained systems and services, in their human context. To do this, in today’s environment of increasingly-rapid, innovative and pervasive change, we do need to renovate and innovate. When holistic and evolving, testing will not die (and must not be allowed to die). I choose to focus on:

renovating the increasingly fragmented and apparently-neglected subject of risk-based testing; and

using analogies from science and evolution to inspire ideas for innovation in testing generally.


1. Renovating the use of Risk in testing

1.1 Current variants of Risk-Based Testing etc

The first step in renovation is to collate what variants of “Risk-Based Testing” (or related terms) are around, and how we arrived at this situation. The diagram below shows a simplified flow over time, from left to right.

The early books by Hetzel, Myers and Beizer all contained some notions of testing as depending on principles of risk, but this was mostly implicit. Then in the later 1980s and through the 1990s, basing testing on risk became explicit as a statement of theory. But you wait ages for guidance on how to do risk-based testing in practice, then in 2002 three books came along at once!

Paul Gerrard, drawing on the earlier work of James Bach and others, published Risk-Based E-Business Testing, the theme of which was imagining what could go wrong with a system, then designing tests to address those risks. I was co-author of that book.

Craig & Jaskiel described in Systematic Software Testing a somewhat different view of risk-based testing, which prioritised software features and attributes according to risk (its current version is called risk-driven testing, and has no doubt evolved since then);

Kaner, Bach & Pettichord published Lessons Learned in Software Testing, which included context-driven versions of both of the above variants, but distinguished them as risk-based test design and risk-based test management respectively.

Since then, I have seen a variety of approaches, published in books, papers or as proprietary methods. I meet many people who tell me they know what risk-based testing is, it’s quite easy to do, and it’s “not that stuff over there, that’s not risk-based testing”. I think these are all useful to some degree, but I believe they are all partial views (either focussing on the prioritisation side or on the risks-as-test-entities side), some seem to be too prescriptive / too simplistic / too complex, and I do not believe that risk-based testing is easy. Not good risk-based testing, anyway.

The field seems to be fragmented; and it no longer seems to receive the attention it used to. Fashion has moved on to other subjects. Are some people just paying lip-service to risk-based testing? How many people are doing it well? How does it relate to / merge into safety-critical methods? In 2007 I integrated the two main aspects of risk-based testing into my Holistic Test Analysis & Design method, but that is only part of the story (and does not yet have tool support).


I think it is time for a broad re-appraisal of the whole subject – away from one-size-fits-all, to be more inclusive of various approaches, more responsive to context.


1.2 Context-driven mix of available principles

I would like to see more cross-fertilisation and unification between the “upper and lower halves”, sometimes called risk-based test management and risk-based test design. On some projects these are done by different people of course, but not always. And anyhow, the two halves should fit together. One way (and it is only one choice) is to mirror-image James Bach’s Heuristic Test Strategy Model (HTSM), as illustrated below.

The lower half is borrowed straight from the HTSM, and the upper half is modified to show similar usage for prioritisation of work. I do not mean simply “do this first, then that...” – decisions need to be made on what to prioritise, and how. The message here is that we should be ready to mix and match methods and techniques from the variety available, depending on context factors.

1.3 Risk-Graded Testing

One thing I feel compromises the respectability of risk-based testing in some situations is the notion that having prioritised things, we can set a cut-off threshold below which things are not tested. A better way, I believe, is to “grade” coverage and/or effort, from low (not zero) to high, according to the selected risk factors.
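
To make the grading idea concrete, here is a minimal sketch (my own illustration, not from the paper) in which a simple likelihood-times-impact score maps each feature to a coverage grade that is never zero. The factor names, scales and thresholds are all assumptions for demonstration:

```python
# Hypothetical risk-graded effort allocation. Scales (1-5) and
# thresholds are illustrative assumptions, not taken from the paper.

def risk_score(likelihood, impact):
    """Combine likelihood and impact (each 1-5) into a score of 1-25."""
    return likelihood * impact

def effort_grade(score):
    """Map a risk score to a coverage grade: low (never zero) to high."""
    if score >= 16:
        return "high"    # eg full technique-based coverage plus exploration
    if score >= 9:
        return "medium"  # eg main paths plus key negative tests
    return "low"         # eg smoke-level checks - but never untested

# Hypothetical features with (likelihood, impact) estimates:
features = {
    "payment processing": (4, 5),
    "report styling":     (2, 2),
    "user login":         (3, 4),
}

for name, (likelihood, impact) in features.items():
    print(name, "->", effort_grade(risk_score(likelihood, impact)))
```

The key design point is that the lowest grade still carries some testing, in contrast to a cut-off threshold below which features receive none.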


I think “Risk-Graded Testing” might be a better term here than Risk Based Testing. One reason is that Risk-Based Testing aligns with the term Test Basis, often used to mean a document or other oracle against which tests are designed. Another reason is that it distances itself from cruder notions of prioritisation, and from cut-off thresholds.

1.4 Value-Graded Testing

Taking this a step further: we should grade testing coverage / effort not only by risk, but also by the varying benefits of the features being tested. There is a partial correlation, because features which have high benefits will also tend to have high business impact if they go wrong, but it is worth making the distinction because considering the benefits may generate specific test ideas and inform the selection of test techniques. Particularly in agile methods, if a feature is exhibiting serious bugs in testing and is not of critical benefit, it is more likely to be descoped from a release.

We may think of value in terms of expected benefits minus residual risk after an amount of testing.
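
As a hedged illustration of this idea (the paper gives no formula, and the numbers below are invented), residual risk can be modelled as an expected loss, so that testing raises net value by shrinking the subtracted term:

```python
# Illustrative sketch only: value = expected benefits minus residual risk.
# All figures are invented for demonstration.

def residual_risk(probability_of_failure, business_impact):
    """Expected loss remaining after a given amount of testing."""
    return probability_of_failure * business_impact

def feature_value(expected_benefit, probability_of_failure, business_impact):
    """Net value of a feature, in the same (arbitrary) units as benefit."""
    return expected_benefit - residual_risk(probability_of_failure, business_impact)

# Testing reduces the estimated probability of failure, raising net value:
before = feature_value(100.0, 0.30, 200.0)  # ie 100 - (0.30 * 200)
after  = feature_value(100.0, 0.05, 200.0)  # ie 100 - (0.05 * 200)
print(before, after)
```

The shape of the trade-off is the point: test effort is spent to reduce the probability term, and grading that effort by both benefit and risk follows naturally.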

1.5 Value-Inspired Testing

Risk is relevant at all levels of testing, but the risks differ by level. The diagram below illustrates several principles:

all the way through the lifecycle, different risks accumulate; the quality information a test provides depends on comparison of the software’s behaviour with the test model, the development model (verification testing) and also real-world desired behaviour (validation testing).


Although this is shown in the format of a V-model, it is not necessarily advocating “the” V-model in its traditional sense. I argue that all lifecycles have some kind of levels of stakeholders & participants, levels of specifications / other oracles and levels of integration of the developed system. Iterative lifecycles can be considered as repeatedly descending then ascending through some or all of these levels in various ways.

Looking at this in more detail: requirements are necessarily a simplification of the way the software will behave in use; no requirements can be perfect. When functional and non-functional specifications are written, there are risks of distorting / omitting requirements, or adding functionality that is not really wanted. And so on through design and coding – all of these are different risks with their own set of risk factors (each with their probability and consequence components). This chain (or rather, network) of risks corresponds to the various definitions of mistake, defect, fault, failure etc.

To manage these various risks, we need a variety of techniques. The traditional view is that the earlier we mitigate risks, the less the knock-on effect (diagram below), although in agile methods some more tactical risk management is used, eg making some decisions as late as possible, allowing technical debt to build, then refactoring at suitable times.

Looking more closely at validation: it includes all the decisions that cannot be made by simply “checking” behaviour against a specification:


Even if good specifications exist, are they 100% up to date? Are they still what is wanted, or is a change request needed?

No specification is perfectly detailed or specifies every possible thing which the software should and should not do (expressible as risks), therefore some behaviour will be implicit / assumed, and judgement will be needed;

in some contexts, traditional specifications may not exist at all; testers may therefore need “oracles” other than specifications – for example:

o consistency with product / system purpose, history, image, claims, comparable products / systems etc
o familiar failure patterns.

So in summary, risk-related principles apply throughout testing, from reviews to test specification through execution to retesting, regression testing, go-live and beyond.

1.6 Value Flow ScoreCards

Now, how can we manage risk throughout the system development lifecycle and throughout testing? I propose in this paper a framework to do this, but in order to get there, for a few moments let us take a step back from risk.

Through the lifecycle

In the introduction I suggested we think of testing as value flow management. One approach to this is to start with the concept of a balanced scorecard. On the left half of the diagram below is a version of Kaplan & Norton’s original. On the right side is a modified version, tailored for software quality after a variety of authors.

The basic principle is that for each different view of quality, we may set a structure of objectives, measures, targets and initiatives. Kaplan & Norton’s original purpose was “translating strategy into action”. In IT project terms, we may ask:

what are our objectives? (for example, we may want to adhere to a particular process standard, or achieve a certain degree of product quality, or a degree of customer satisfaction);

by what measures will we gauge success – in colloquial terms, “what does good look like?”;

what targets shall we set for a particular stage, eg the next software release? This could be in terms of bug frequencies and severities after go-live, but measures and targets need not be quantitative; for example, rubrics could be used for customer satisfaction surveys;

then what initiatives shall we take to make this happen?

Four of the quality viewpoints may be thought of as applying to the current project; the fifth is about improvement, for future projects.


In the following diagram, I develop this structure to fit conveniently within the software lifecycle. First I add two more viewpoints, supplier and infrastructure. Then I arrange the viewpoints in a kind of “value flow unit”.

To use this practically, the scorecard becomes a table of seven columns and four rows. There is a rough logical flow from left to right. In earlier papers I have outlined several applications in and around testing, but there is not space here to describe those.

In the following diagram (next page), I illustrate how the value flow items which can be defined for an individual team or role can be cascaded to control value flow through the whole lifecycle, both down and up the levels and from left to right (corresponding to static then dynamic testing).


Integrating risk

Now, we are ready to integrate risk into the scorecard. Risks may be seen as threats to the success of the objectives for each view of quality, so we can insert a new row between objectives and the way we measure, target and define the way forward. When we know the risks, we can build in appropriate management measures and tactics.
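
One possible way to represent such a scorecard in code is sketched below. This is under my own assumptions: the seven viewpoint names are extrapolated from the surrounding text (Kaplan & Norton’s perspectives plus supplier, infrastructure and improvement), since the paper does not enumerate them here, and the example entries are invented:

```python
# Hedged sketch of a Value Flow ScoreCard as a simple data structure.
# Viewpoint names and example entries are illustrative assumptions.

VIEWPOINTS = [
    "supplier", "financial", "customer", "internal process",
    "infrastructure", "product quality", "improvement",
]

# The "risks" row is the new addition, inserted between objectives
# and the measure/target/initiative rows.
ROWS = ["objectives", "risks", "measures", "targets", "initiatives"]

scorecard = {vp: {row: [] for row in ROWS} for vp in VIEWPOINTS}

# Example: a product-quality objective, the risk that threatens it,
# and a measure for gauging success.
scorecard["product quality"]["objectives"].append("meet agreed reliability level")
scorecard["product quality"]["risks"].append("critical defects found late")
scorecard["product quality"]["measures"].append("post-go-live bug frequency and severity")
```

Placing risks directly beneath objectives keeps each threat visibly tied to the objective it endangers, which is the integration point the text describes.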

Next, let’s look at different types of risk. Many authors distinguish product risks (ie threats to the quality of the software) from project risks (ie threats to the conduct of project activities).

Some authors also distinguish a third type, process risk, which is a kind of specialism of project risk connected with methodology.

The following diagram (next page) illustrates these, some examples, and relationships between the risks.


Finally, we can now be more specific about the risks in the scorecard – because there is a strong correlation between the quality viewpoints and the risk types.

So to summarise up to now: we have arrived at a structure for setting out, balancing and measuring the full range of quality viewpoints, and for associating with them the risks which threaten them. This is a complete, integrated quality and risk management framework. To continue the renovation, future work should now build together, using this framework:

a more holistic context-driven approach to risk, putting together the “two halves” of test design and test management and refining guidance on how to mix and match methods and techniques from the fragmented variety on offer;

firming up into practical advice how to balance benefits against risks; and

clarifying how risk management activities can be pragmatically controlled throughout the software lifecycle and throughout the testing process.

The challenge is to achieve an appropriate balance between a robust approach (which risks being too complex) and an achievable approach (which risks being too simplistic to be useful); this balance varies of course with context.


Now to move towards the second half of this paper, which focuses on the rightmost column of the Value Flow ScoreCard, ie improvement for future projects.

The above diagram illustrates the relationship between the Value Flow ScoreCard and a “toolbox” structure I developed recently to fit around it, to embrace scientific thinking and a structure for thinking about innovation.

2. Innovating in testing, using Emergence concepts

This toolbox structure is not a primary focus of this paper, but just to position the risk renovation and testing-innovation parts of this paper within that structure for reference:

This second part of this paper moves to consider innovation in testing, via analogies with how innovation occurs in nature.


2.1 Evolution in Nature

The outer layer of the toolbox consists of this triangle:

There is evidence that innovation in nature includes a phenomenon called emergence, which is associated with the concepts of systems thinking and complexity theory. One way of looking at emergence is to see how different sciences build progressively on top of each other, according to scale:

When human society is established, the resulting further innovation no longer depends on scale but becomes explosive in its information content.

The explosion of human innovation is shown in more detail in the diagram on the next page (which also takes the opportunity to invert the image to a more satisfying view).


The reference to Kurzweil epochs may not be appreciated by all readers. This is a rather extreme view of how explosive human innovation may continue in the surprisingly-near future. Many people are very sceptical of these predictions, but I would argue that bearing in mind the effects of Moore’s Law and the exponential innovation we have seen in recent years, even if progress is not as fast as Kurzweil expects, software is headed for some big new territory, and testing should be ready to boldly go there.

Biology

Leaving aside the particular technicalities of physics and chemistry, the most obvious part of the evolutionary saga is the biological.

A way of appreciating evolution (admittedly not shared by everyone) is to consider it in two related dimensions: over time, diversity has increased (though not regularly, as we will see); and also, broadly, the sophistication of organisms has increased (with humankind being a spectacular recent example).

This concept is illustrated in the following diagram (next page).


But it seems that evolution has not been smooth. Instead, there seem to be long periods of relative stability, interrupted by sudden upheavals such as mass extinctions or explosions of new species:

It is outside the scope of this paper to go into details, but there are examples in other sciences (eg physics, chemistry) of sudden emergences, eg those transformations known as phase changes.

The diagram on the next page illustrates this idea. The point of mentioning this in a paper about software testing is that many people (including myself) see this kind of behaviour as a universal phenomenon. We could, and maybe should, learn from it.


Relationship with other sciences

The theory of such sudden advances was likened by Per Bak to the avalanches that occur unpredictably when a pile of sand is continually added to from above – suddenly a stable or metastable state gives way to widespread change.

2.2 Evolution of Software Testing

The view ahead

Again you may ask: what has this to do with software testing? Well, if you accept the idea of software testing as a social science, you should be aware that the social sciences (which cover much of human history) are, like other sciences, subject to punctuated equilibria. Another way of looking at the (Per Bak) avalanches is in terms of Gladwell’s “tipping points”.

Software testing has admittedly failed to keep up with advances in IT generally, and there are various ways out of this situation. It could, as some have claimed, “die” – but what would that do for the quality of life of all those people who depend on software? I would prefer to see us rise to the challenge, and help make the world not only a more complex place but really a better place.


As IT has innovated explosively, it is worth the testing discipline taking a look ahead. For example, are we ready to test artificial intelligence? (admittedly some lower forms of AI have been around and in use for a while, but when did you last hear about them at a testing conference?).

The story so far

The table below represents my extrapolation of Gelperin & Hetzel’s historical analysis plus my recent interpretation of the “schools of software testing” situation.

But what can my proposed analogies with science and nature contribute to this picture?


2.3 Genes to Memes

One way of understanding the explosive transition from slow biological evolution to rapid human cultural evolution is to consider replicating units of human knowledge and habits as analogous to the genes of DNA. These cultural units were named “memes” by Richard Dawkins, and many authors since have argued about the accuracy and usefulness (or not) of this analogy. The illustration immediately below is of genes as media of biological evolution.

The next diagram illustrates the analogy with memes. Memes are not so well-defined, but like genes they replicate (though not as precisely) and they mutate (more often and more extravagantly?).

2.4 Memeplexes in the History of Testing

I am not the first author to claim a role for memes in software testing; the idea is already widespread on the internet. But in the meme literature there is a concept termed a “memeplex” – a collection of related and readily-coexisting memes. It seems to me that memeplexes are a useful concept for understanding software development ecosystems and schools of software testing.

Below (next page) are two examples of what might be called software testing memeplexes.


The first is an old attempt by myself to represent what was then known as “software testing best practice”:

The second is an entirely different representation (though also by myself) – and this attempts to represent the antithesis of software testing “best practices”, namely a context-driven thought structure:

So, do memeplexes really help in understanding the evolution of software testing overall? I think they do, but even more illuminating I believe are the ideas of platforms, cranes and tipping points. A memeplex codifies an ecosystem which has become established on a platform. The driving forces are arguably:

what are the cranes that get us to a new level, and the tipping points that make that lift respectable and respected?

is this a single stream of evolution or are there multiple streams?

In the following diagram I take the Gelperin-Hetzel-based view of software testing history and attempt to express it in the language of platforms, cranes and tipping points.


And another worry... here is a different view of the history (so far) of software testing.

Over the most recent few years, has innovation really almost stopped, or is there another explanation?

The diagram below (next page) shows a different view of testing innovation: cause-effect-chained rather than mere reportage. The bullet points on the right of the picture are closely related to the material I am about to present regarding innovation. But how do those factors and aids really operate?


2.5 Emergence between “Too Much Chaos” and “Too Much Order”

Now here is a new perspective on the initial ideas about evolution and emergence I expressed above. There are some suggestions from the scientific literature that life evolves best on “the edge of chaos”:

2.6 Innovation and Ideas for Testing

A way of looking at testing (bearing in mind things I have said above) is to consider that it is part of an ecosystem with development, but it lags slightly behind (or far behind, depending on your experience / opinion).

Development continually carves a path towards the “chaotic” end of the spectrum, because of market forces and the typical personality mixes and cultures of programming groups. Conversely, testing tries to keep in step but is drawn towards the “ordered” end of the spectrum by its typical tester psyche and the conservatism and risk-aversion of its management.

I have tried to project the suspected tipping points I described above (psychology to method, method to art, art to engineering etc) onto a swerving path between too much chaos and too much order.

There is communication between development and testing/quality disciplines, though development is in the lead.

In the platforms, cranes and tipping points illustration a few pages above, I questioned whether anything was wrong with that picture. Hmm... I think there may be. My perception is that there have been essentially “two cultures” at work here so far, not understanding each other well enough (see CP Snow, 1956, 1959 etc). The idea of “schools” of software testing was introduced and publicised as part of the foundation of the Context-Driven School.

I suggest that, rather as testing lags behind development, traditional testing has been lagging behind context-driven testing. But I think that is at least partly because the client business communities in finance and other traditional markets have lagged behind the more modern business sectors. The main point, however, is that the two factions do not communicate enough – too often they fail to understand each other, merely agree to differ, or argue violently and unproductively.

So, have I any suggestions to address this concern? Well, maybe...

Author Steven Johnson tells numerous stories of creativity and other innovation in some areas of commonality he has identified (see diagram next page).

Johnson’s innovations are expressed as seven themes, introduced by the “reef-city-web” concepts and wrapped up by a survey of the most significant human inventions in recent centuries.

The next diagram shows the specific innovation facilitators that aid innovation from platform to platform.

The conclusion of the book is that over recent centuries the pattern of innovative environments has changed markedly (as illustrated below).

So, what are the lessons of all this for software testing? The table below gives some examples.

3. VIVVAT: Value-Inspired Verification, Validation And Testing

To renovate, adapt the Latin for “long may it live”: VIVVAT, a Value-Inspired evolution of Verification, Validation and Testing. We still need all three: if we go to the trouble of writing specifications and developing them from higher-level documents, we need verification. And in this increasingly agile world, we need validation more and more. Testing suffers from a “two cultures” difficulty, but I hope that science can turn out to be a unifying factor, enabling us all to work most effectively in our various contexts.

References and Acknowledgements

The sources below have been the primary inputs to this work. This is not a full bibliography, and may be expanded in future versions of this paper.

I am particularly grateful to colleagues with whom some of these ideas have been developed, both within and outside client project work – in particular:

Chris Comey of Testing Solutions Group, whose structure for risk-based testing made a useful and complementary counterpart to the method which Paul Gerrard and I published in the 2002 book Risk-Based E-Business Testing.

The Software Testing Retreat – a small informal semi-regular gathering started in the UK by EuroSTAR regulars. In recent years this has grown to include some international friends. The original stimulus for the Value Flow ScoreCards idea came from Mike Smith who was interested in testing’s role in IT projects’ “governance”, and the governance of testing itself. Isabel Evans was a major inspiration for my subsequent scorecard ideas which integrated well with her views of quality. My joint presentation with Mike Smith “Holistic Test Analysis & Design” at STARWest 2007 laid the foundations for the ScoreCard idea.

Stuart Reid has published material on Risk-Based Testing and on innovation in software testing which contains some similar messages to those in this paper, and to which I have referred:

o The Five Major Challenges to Risk-Based Testing; and
o Lines of Innovation in Software Testing.

Scott Barber blogged some persuasive material in response to the “testing is dead” blogs, and now has a scheme of mission-driven measurements which are aligned to value and risk (similar themes to this paper); and

Thanks to the Association for Software Testing, its members, and the authors and teachers of the Black Box Software Testing series of courses, with whom I have had many fruitful conversations. These have given me a deeper insight into the principles and practices of the Context-Driven school of testing, and how those may be used (where context demands) to interpret more thoughtfully, and apply selectively, testing methodologies of various degrees of formality and ceremony.
