Doing Analytics Right: Building the Analytics Environment


Building the Analytics Environment

Look Who's Talking

@tasktop

• Nicole Bryan, VP of Product Management, Tasktop
– Passionate about improving the experience of how software is delivered
– Former Director at Borland Software
– nicole.bryan@tasktop.com | @nicolebryan

• Dr. Murray Cantor, Senior Consultant, Cutter Consortium
– Working to improve our industry with metrics
– Former IBM Distinguished Engineer
– mcantor@cutter.com | @murraycantor

What we’ve learned so far….

• Webinar 1: There is no “one size fits all” metric nirvana

• Webinar 2: Use GQM to design the metrics that are right for your mix of development

Today…

It’s all about the execution! Let’s get practical!


Choosing metrics: the big picture

Agree on goals

– Depends on the levels and mixture of work

Agree on how they fit into the sense-and-respond loop

1. "How would we know we are achieving the goal?"

2. "What response should we take?"

Determine the measures needed to answer the questions

– Apply the Einstein test (as simple as possible, but no simpler)

Specify the data needed to answer the questions

Automate collection and staging of the data


From Goals to Measures to Data (GQM-ish)

1. Identify a set of corporate, division and project business goals and associated measurement goals.

2. Specify a sense-and-respond loop to steer to the goal.

3. Generate questions based on the goal that, if answered:

• Let you know whether you have achieved, or are trending toward, the goal

• Provide the level of detail necessary to take action

– Where is the problem or bottleneck?

• Communicate progress to stakeholders

– Summaries, rollups

4. Select or specify data needed to answer the questions in terms of state transitions of the relevant artifacts

5. Study the data to specify the data set and the statistics that need to be collected to answer those questions and to track process and product conformance to the goals.

6. Develop automated mechanisms for data collection.

7. Collect, validate, and analyze the data in real time to identify patterns, diagnose the organization's situation, and provide suggestions for corrective actions.

8. Analyze the data in a post-mortem fashion to assess conformance to the goals and to make recommendations for future improvements.
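To make steps 1–5 concrete, here is a minimal sketch of one GQM chain captured as plain data in Python. The goal, measure names, and targets are hypothetical illustrations, not from the webinar:

```python
# A minimal, hypothetical GQM specification: goal -> questions -> measures -> data.
# All names and thresholds below are illustrative assumptions.
gqm = {
    "goal": "Reduce defect cycle time for the current release stream",
    "questions": [
        "How would we know we are achieving the goal?",
        "What response should we take when we are not?",
    ],
    "measures": [
        # Einstein test: as simple as possible, but no simpler.
        {"name": "median_defect_cycle_time_days", "target": 10},
        {"name": "cycle_time_trend", "target": "decreasing month over month"},
    ],
    "data": [
        # Step 4: state transitions of the relevant artifacts.
        "defect state-change events (Open -> In Progress -> Shipped), with timestamps",
    ],
}
```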


The “Last Mile Problem”

A phrase used in the telecommunications and technology industries to describe the technologies and processes used to connect the end customer to a communications network. The last mile is often stated in terms of the "last-mile problem", because the end link between consumers and connectivity has proved to be disproportionately expensive to solve.

Read more: http://www.investopedia.com/terms/l/lastmile.asp#ixzz3dAdJpzAQ

The Last Mile Problem

Aspiration without execution is useless!

No wait … It’s actually worse than useless…

– If execution for your analytics solution is difficult, it can quickly lead to the "Light is Brighter Here" anti-pattern

Danger!!!!

How Do I Unlock All This Goodness?

[Diagram: the software lifecycle tool landscape – Portfolio Mgmt, Agile PM, Requirements, Dev, Test, Operations]

Why So Difficult?

– Tool Reality

• You have lots of them! So it's not one ETL, it's many ETLs. That gets very hard to maintain.

• You've got disparate tools, but your GQM needs a single source fed by a variety of tools

– I've got defects in HP QC, Rally, and JIRA – how do I calculate cycle time?

• Yes, tool vendors have analytics solutions… and those solutions are focused on their particular areas of specialization

Why So Difficult?

– Logistics problems

• SaaS problem – sometimes data is only available for a limited time

• Transaction-based data vs. reporting-based data

• Many of the smaller, more purpose-built tools were never designed with the idea that the transactional data they produce needs to participate in a larger analytics strategy

• You say tomato, I say tomato – every tool names and models the same concepts differently

Remember – you want your point tools to stay focused on their domain expertise

What is the solution?

– Collated data across tools

– Abstraction away from specific tool representations of artifacts

– Near-real-time access

– A mix of simplicity, so you can just "get going", combined with the ability to "get sophisticated" when you need to or are ready to

Powering software lifecycle analytics

[Architecture diagram: point tools feed, via ETL, "raw" data storage in the customer's database and a customer data warehouse, which in turn power charts and reports]

Remember what Murray taught us?


Kinds of Development Efforts: What is your mix?

1. Low innovation / high certainty

• Detailed understanding of the requirements
• Well-understood code

2. Some innovation / some uncertainty

• Architecture/design in place
• Some discovery required to have confidence in the requirements
• Some refactoring/evolution of the design might be required

3. High innovation / high uncertainty

• Requirements not fully understood; some experimentation might be required
• May be alternatives in the choice of technology
• No initial design/architecture


Descriptive example: Cycle times


Let's Bring Cycle Time to Life!

First, some key concepts of Tasktop Data

Artifacts – a tangible by-product produced during the development of software: defects, requirements, test cases, timesheets.

Collections – a set of artifacts from your repository.
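As a rough sketch of these two concepts in Python (the class and field names are my own illustration, not Tasktop's API):

```python
from dataclasses import dataclass, field

# Illustrative sketch only; not Tasktop's actual data model.

@dataclass
class Artifact:
    """A tangible by-product of software development,
    e.g. a defect, requirement, test case, or timesheet."""
    artifact_id: str
    artifact_type: str               # "Defect", "Requirement", ...
    fields: dict = field(default_factory=dict)

@dataclass
class Collection:
    """A set of artifacts from one repository."""
    name: str
    repository: str                  # e.g. "JIRA", "HP QC"
    artifacts: list = field(default_factory=list)
```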

[Diagram: three collections, each mapped to the same normalized model (Description; Priority: High / Medium / Low; Released In; Tags)]

– JIRA Defects Collection (Projects #1, #2, #3): Summary, Description, Fix Version, Priority (High / Medium / Low / Trivial)

– HP Defects Collection (Projects #A, #B, #C): Description, Release, Priority (1 / 2 / 3 / 4)

– Event Collection: Description, Released In, Priority (High / Medium / Low)

* Raw database collections are a little bit special

Reporting Integration

Flow Specification

Another way of looking at it… use this model, feeding defects from JIRA, HP, etc., and it will result in a database table like the one below.

Artifact ID | Project   | Type   | Created | Modified | Severity | Priority | Status      | Release | Assignee
DEF-1       | Project A | Defect | 1/1/15  | 1/1/15   | 1        | High     | Open        |         |
DEF-1       | Project A | Defect | 1/1/15  | 1/2/15   | 1        | High     | In Progress |         | John
DEF-1       | Project A | Defect | 1/1/15  | 1/5/15   | 1        | Med      | In Progress |         | John
DEF-1       | Project A | Defect | 1/1/15  | 1/7/15   | 1        | Med      | Shipped     | 1.0.0.1 | John

1 Artifact, 4 Rows in Database

Event Log Concept

And once you’ve got that, you can easily get things like this….
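For instance, here is a minimal Python/pandas sketch that computes cycle time from an event log shaped like the table above. The definition used here (first "In Progress" event to first "Shipped" event) is an assumption for illustration, not Tasktop's definition:

```python
import pandas as pd

# Event-log rows: one row per state change of the artifact (see table above).
events = pd.DataFrame(
    [
        ("DEF-1", "1/1/15", "Open"),
        ("DEF-1", "1/2/15", "In Progress"),
        ("DEF-1", "1/5/15", "In Progress"),
        ("DEF-1", "1/7/15", "Shipped"),
    ],
    columns=["artifact_id", "modified", "status"],
)
events["modified"] = pd.to_datetime(events["modified"], format="%m/%d/%y")

# Cycle time: first entry into "In Progress" until first "Shipped".
started = events[events.status == "In Progress"].groupby("artifact_id").modified.min()
shipped = events[events.status == "Shipped"].groupby("artifact_id").modified.min()
cycle_time_days = (shipped - started).dt.days
print(cycle_time_days)  # DEF-1: 5 (Jan 2 -> Jan 7)
```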

Demo

Four Easy Steps

(1) Connect to your system

(2) Create or reuse a model

(3) Create collections (and map each collection to the model)

(4) Create an integration

(1) Connect To Your System

(2) Create or reuse a model

• Identify the fields to flow
• Configure to normalize the data

(3) Create Collections (and map them)

• One core artifact type
• Sourced from one repository
• Many projects
• Mapped to one model

• Configure fields and field values to conform to the normalized model values

• Transform values

Mapping Artifact to Model
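To illustrate what value mapping means in practice, here is a small Python sketch that normalizes the JIRA and HP priority schemes from the earlier diagram onto the model's High / Medium / Low values. The specific mappings (e.g. JIRA "Trivial" to "Low", HP "1" to "High") are assumptions for illustration, not Tasktop defaults:

```python
# Illustrative value mappings; in the product these are configured per collection.
JIRA_PRIORITY = {"High": "High", "Medium": "Medium", "Low": "Low", "Trivial": "Low"}
HP_PRIORITY = {"1": "High", "2": "High", "3": "Medium", "4": "Low"}

def to_model_priority(repository: str, value: str) -> str:
    """Map a repository-specific priority value onto the normalized model."""
    mapping = {"JIRA": JIRA_PRIORITY, "HP": HP_PRIORITY}[repository]
    return mapping[value]

assert to_model_priority("HP", "1") == "High"
assert to_model_priority("JIRA", "Trivial") == "Low"
```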

(4) Create an Integration

Solves the Last Mile Problem

– Collated data across tools

– Abstraction away from specific tool representations of artifacts

– Near-real-time access

– A mix of simplicity, so you can just "get going", combined with the ability to "get sophisticated" when you need to or are ready to

Stay in touch

@tasktop

nicole.bryan@tasktop.com | @nicolebryan

mcantor@cutter.com | @murraycantor

@tasktop | @cuttertweets
