
Shaheen Makandar: “An ounce of prevention is worth a pound of cure”

Session 2877

Why a Steady State approach saves you time, money, and maybe… your job!

You do not know what you don’t know. The good news: you can find out!

Best practices on how to implement a steady state environment

LEARNING POINTS

Navigating complexity through instability

GPI – A Customer Story

Graphic Packaging International, Inc. Quick Facts

• Locations: Corporate headquarters in Atlanta, GA; 20+ worldwide locations

• Industry: Consumer Packaged Goods (CPG)

• Products: Packaging products

• Revenue: $4.3 billion

• Employees: ~14,000

• Solution: Steady State Support (SSS)

Business Challenges

• Reporting delays due to process failures

• User mistrust in data, due to jobs failing mid-process and leaving incorrect data behind

• Constant re-runs of critical reports

• Maintenance of systems on top of regular enhancements

• Lack of expertise across Reporting, EIM applications, and Data Quality support

Company Overview

Leading maker of laminated, coated, and printed packaging such as beverage carriers, cereal boxes, microwavable food packaging, and detergent cartons.

Some of its customers are Kraft Foods, MillerCoors, Anheuser-Busch, General Mills, and various Coca-Cola and Pepsi bottlers.

Landscape for GPI

BW environment

Data Services for data integration jobs (e.g., Salesforce)

Information Steward for data governance (Materials, Customers, Vendors)

BOBJ and LUMIRA: the users’ view

Obstacles ahead

Hardware Failures

Network Issues

Inconsistent Data

Knowledge Gap

Changes

Several Systems

New requests

Politics

Talent Availability

Maintenance Windows

Overall visibility means “steady behavior”

Perform a full systems check on every critical system (a sketch follows this list):

Databases, Data Integration systems, Presentation layer

Revise problem report protocols

Establish standard procedures for response

The 80/20 rule applies here: a small share of recurring issues causes most of the failures.
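A minimal sketch of such a full systems check, assuming a scheduled Python job and using only the standard library; the hostnames, ports, and URL below are placeholders, not GPI’s actual landscape:

    import socket
    import urllib.request

    # Placeholder endpoints -- substitute your own BW database host,
    # Data Services server, and BI launch pad URL.
    CHECKS = {
        "BW database": ("bw-db.example.com", 30015),
        "Data Services": ("ds.example.com", 8080),
    }
    PRESENTATION_URL = "http://bobj.example.com:8080/BOE/BI"

    def port_is_open(host, port, timeout=5):
        # A TCP connect is a cheap "is anything listening?" probe.
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    def url_is_up(url, timeout=10):
        # The presentation layer should answer HTTP with a non-error status.
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.status < 500
        except Exception:
            return False

    for name, (host, port) in CHECKS.items():
        print(name, "OK" if port_is_open(host, port) else "FAILED")
    print("Presentation layer", "OK" if url_is_up(PRESENTATION_URL) else "FAILED")

Run it from a scheduler before business hours; anything printed as FAILED becomes the first item on the morning checklist.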

Creating a Steady State Support (SSS) framework – Where to begin?

What are your Dependencies?

How do you engage them?

Do they have the tools or access they need?

Proactive vs. Reactive approach

Have experts in BO (BusinessObjects), but own the landscape as well.

SSS Framework – What to look for?

GPI signed a year-long contract with a support partner providing:

Multiple (and interchangeable) resources available

A team with experts from all aspects of the SAP BI platform and EIM applications

No waiting! The team is engaged as soon as issues occur

Production System Monitoring and support

Application Upgrades and Data Migrations

GPI signed a year-long contract with a support partner:

SSS is treated separately from other projects

Domain expertise – it takes time to understand complex systems

Every change is documented and communicated in weekly meetings

The bigger the problem, the bigger the engagement from multiple resources

Development and post-implementation support is easily transitioned according to established standards

Your users will revolt if problems are pervasive

It saves your team time, because they probably should be doing something else

Effective organizations constantly fight unreliability in order to decrease uncertainty

Planning can only be effective if you can trust your architecture

RETURN ON INVESTMENT

Reduction of Remedy tickets by 76%

Immediate response when issues do occur

If needed, issues can be escalated sooner

Increase in the number of users trusting the BI system and using the reports

Increase in enhancement requests

Projects completing on time

RETURN ON INVESTMENT

Understand the impact of not having reliable information – or any information at all – and communicate it to your organization!

Change is required if what is in place is not working

Take control over possible points of failure

Establish standards that can be replicated elsewhere and everywhere

When problems occur (and they will occur) – evaluate, document, and act!

BEST PRACTICES

Bring down systems before each maintenance window and bring them up after (gracefully)

Make sure your message/reporting by exception is working

Always check for system integrity (are all services running?) – a sketch follows this list

Communicate, communicate, communicate – especially when errors are fixed.

Standardize whenever and wherever possible
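A sketch of the service-integrity check above, combined with reporting by exception: assuming a Linux host managed by systemd, and with placeholder service names, a script can stay silent when everything is running and speak up only when something is down:

    import subprocess

    # Hypothetical service names -- substitute the services your landscape runs.
    EXPECTED_SERVICES = ["postgresql", "tomcat"]

    def service_active(name):
        # "systemctl is-active --quiet" exits 0 only when the unit is running.
        result = subprocess.run(["systemctl", "is-active", "--quiet", name])
        return result.returncode == 0

    down = [s for s in EXPECTED_SERVICES if not service_active(s)]
    if down:
        # Report by exception: a message goes out only when something is wrong.
        print("ATTENTION: services down: " + ", ".join(down))

Wiring that print into an email or chat notification gives you the message-by-exception channel the list above asks you to verify.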

BEST PRACTICES

Sometimes issues have simple explanations (e.g., insufficient memory on a BOBJ server)

Inconsistent data will cause inconsistent results – look for ways to guarantee that ONLY vetted data goes through your checkpoints (before-and-after checksums will help here; a sketch follows this list)

Have an early alert system in place (even if manual)

Check and apply recommended patches and notes monthly

Create a team whose members can support each other – avoid siloed knowledge.
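A sketch of the before-and-after checksum idea, with hypothetical file names: hash the extract where it is produced, hash it again where it is consumed, and refuse to load anything that does not match:

    import hashlib

    def file_checksum(path, algo="sha256", chunk_size=1 << 20):
        # Stream the file in chunks so large extracts do not exhaust memory.
        h = hashlib.new(algo)
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    # Hypothetical paths: the same extract as written by the source system
    # and as received at the checkpoint.
    before = file_checksum("extract_source.csv")
    after = file_checksum("extract_received.csv")
    if before != after:
        raise RuntimeError("Checksum mismatch: do not load this extract")

Failing loudly at the checkpoint is the point: inconsistent data never reaches the reports, so it cannot erode user trust.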

BEST PRACTICES

Everybody is busy, but there is a lot you can do!

The price of fixing something is higher than the price of preventing it

Fix recurrent issues regardless of the cost (80/20 rule!)

Document, document, document (Issue tracking system)

Understand that the fault may NOT be in your system, but in its interaction with another.

KEY LEARNINGS

Blue sky should be the norm, not the exception…

Which one is your story?

STAY INFORMED

Follow the ASUGNews team:

Tom Wailgum: @twailgum

Chris Kanaracus: @chriskanaracus

Craig Powers: @Powers_ASUG

SESSION CODE

2877