Putting Together the Pieces: Supply Chain Analytics Insights on Technology Options
09/07/2016
By Lora Cecere
Founder and CEO Supply Chain Insights LLC
Contents
Research Methodology
Disclosure
Executive Summary
What Defines a Supply Chain That Works Well?
The Steps to Define a Holistic Supply Chain Analytics Strategy
The Role of Analytics in Developing Strategies for Supply Chain 2030
Making a Technology Selection
Recommendations
Conclusion
Terms to Know
Research Summary of Quantitative Data
Additional Reports of Interest
About Supply Chain Insights LLC
About Lora Cecere
Research Methodology
We are committed to delivering thought-leading content for the supply chain leader. The goal of our
research is to be the first place that supply chain visionaries turn to in order to gain unique insights on
technology evolution.
This report is based on qualitative research completed during the period of January-July 2016. In this
research effort, we interviewed 35 analytics technology providers to understand their solutions. This
was followed by interviews with 30 innovative supply chain leaders. Here we share the findings.
To support this research, and take it one step further, we augment these qualitative insights with
quantitative survey analysis collected in preparation for the 2016 Supply Chain Insights Global
Summit. In this research we share insights on the importance of supply chain analytics in Supply
Chain 2030 strategies. (Details on this quantitative survey analysis are shared in the Appendix.)
Disclosure
Your trust is important to us. In our business we are open and transparent about our financial
relationships. In this research process we never share the names of respondents, or give attribution
to open comments collected in the research.
Our philosophy is “You give to us, and we give to you.” We collect data from a private network of
qualified participants and openly share the results. The participants of our research always receive
the final reports; and, if interested, we share insights from the studies with the respondents of our
quantitative surveys and qualitative interviews in a complimentary one-hour phone call with supply
chain teams.
This report is written and shared using the principles of Open Content research. It is intended for you
to read and share freely with your colleagues and through social channels like LinkedIn, Facebook
and Twitter. When you use the report all we ask for in return is attribution. We publish under the
Creative Commons License Attribution-Noncommercial-Share Alike 3.0 United States and our citation
policy is outlined on the Supply Chain Insights Website.
Executive Summary
Supply chains are drowning in data, but are low on insights. While the cost of computing memory was
once a barrier to executing an analytics strategy, this is no longer the case. The largest barrier is now
understanding the new forms of analytics.
Historically, the term supply chain analytics was used to describe reporting. This is no longer the
case. Today there are more options and capabilities for supply chain analytics. There is a proliferation
of new technologies flooding the market.
Ironically, despite the explosion of options as shown in Figure 1, the supply chain operating team is
more conservative. It is a skewed distribution. When it comes to decision support, late adopters
outnumber early adopters three to one. The lack of early adopters, the rapid rate of
change, and the conventional architectural definitions (primarily focused on Enterprise Resource
Planning or ERP-based architectures) are barriers to the adoption of new forms of supply chain
analytics.
Figure 1. Risk Profile of Companies Adopting Decision Support Analytics
Driving change requires education, testing and experimentation. The technology market is shifting:
there is a market evolution of technology offering promising capabilities. The goal of this report is to
share insights on how business users can drive new levels of competitive advantage through supply
chain analytics.
What Defines a Supply Chain That Works Well?
Data access and usability are prerequisites for a supply chain to perform well. Today both are issues.
As shown in Figure 2, 35% of supply chain leaders list the ability to get to data, and use analytics to
drive insights, as a top business pain.
Figure 2. Relative Supply Chain Business Pain
One of the issues is the lack of a comprehensive analytics strategy. The historic focus was on
building analytics within pockets—or functions—of the organization, centered around the applications
of advanced planning (APS), order management (ERP), transportation planning (TMS), sourcing
(SRM), and warehouse management (WMS).
Why was this a problem? This focus is not holistic. Most companies have only invested in reporting,
without a data analytics strategy. As a result, as shown in Figure 2, data access and usability of
analytics rate as the fourth-largest business pain.
As demand and supply volatility increases, the need for data and insights grows. Great analytics also
improves visibility and organizational functional alignment. It is also needed to drive outside-in
business processes. A holistic analytics strategy is the answer to many of the pains ailing the supply
chain today.
Today only one in three companies believe their supply chains are working well. While companies
would like to have supply chains that are more proactive, as shown in Figure 3, the majority are
reactive.
Figure 3. Current Characteristics of Supply Chains
When we examine this data more closely, companies that believe their supply chains are working
well differ significantly (at a 90% confidence level) from those that do not in their ability to access
data and use supply chain analytics well.
Other significant factors include executive understanding of the supply chain, and cross-functional
alignment. Companies should tackle these change management issues up front (and often) in the
development and execution of a holistic data strategy. Most of the terms and concepts in this report
will be new to the supply chain leadership team. As a result, companies will need to recognize the
need for change management and human capital development along with the testing and trial of new
forms of technology. Data analytics for the supply chain should therefore not be approached
as a technology project. There needs to be a clear charter that includes aggressive management of
the change management issues.
Figure 4. Characteristics of Supply Chains That Are Working Well Versus Supply Chains That Have
Room for Improvement
Another issue is that the new technologies for supply chain analytics don’t come from traditional providers
of supply chain technology. This sets up nine major issues.
1. New Forms of Supply Chain Analytics Providers Have the Goods, but Not the Knowledge.
New supply chain analytics vendors are struggling to learn supply chain. They do not have
supply chain backgrounds and each manufacturing/distribution organization is different. As a
result, the requests coming to them from companies are confusing.
2. Traditional Supply Chain Vendors Have the Knowledge, but Are Slow to Change/Adapt.
Traditional supply chain technology solution vendors are slow to adopt new forms of analytics.
Why? It requires a rewrite of traditional architectures. As a result, the adoption of supply chain
analytics requires companies to redefine current standards for technology purchases.
3. New Capabilities Redefine the Fundamentals. Supply Chain processes are changing. The
shifts are many. New forms of analytics are redefining decision support. The changes are
anything but subtle. They are redefining supply chain process definitions. Advanced Planning, as we
know it, with concurrent optimization using simplistic algorithms, will be obsolete in five years.
This includes moving from inside-out to outside-in processes (use of channel and supplier data).
The shift from one-to-one to many-to-many architectures will redefine supply chain visibility.
4. Not Always an Evolution. We Are Experiencing a Redefinition of Decision Support.
Traditionally, supply chain planning evolved to complement the decision support tools of finance,
marketing and sales. With the evolution of cognitive and prescriptive analytics, the traditional
enterprise architectural definitions of CRM, SRM, APS and ERP are not sufficient. These
traditional taxonomies will become outside-in and cross-functional using structured, and
unstructured, data to feed deeper analytic engines for prescriptive and cognitive analytics.
5. Talent and Support Is the Missing Gap. While traditional analytics approaches were selected
and maintained by information technologists, the new options for analytics are designed to
empower line-of-business users. The roles of IT and business process owners change profoundly
to drive self-administration, maintenance, and support by line-of-business leaders.
6. Standards and Master Data No Longer the Formidable Barrier. The Barriers Shift. While
supply chain automation has been hampered by master data issues and the lack of standards,
this is no longer the obstacle it was a decade ago. The use of machine learning and cognitive
computing to detect data anomalies, clean up data, and structure learning from disparate data
sets will make these current issues disappear. Gaining an understanding of this basic shift is
critical to unleashing new capabilities.
7. Supply Chains Do Not Play by the Rules. Should They Have to? With ever-changing
business requirements, it is hard for a supply chain to live within tight and inflexible rules.
Traditional supply chain rules—allocation, inventory matching, deduction and order matching,
Available to Promise (ATP)—were driven by “single if to single then” logic. This limitation made
the rules inflexible and tough to manage. New forms of analytics unleash the opportunities to
have flexible and learning rule sets for “multiple ifs connected to multiple thens.”
8. Buckle Your Seatbelt. Get Ready for the Rate of Change. While traditional architectures are
deployed in major releases, with software built from as-is and to-be documentation, this is not the
case with supply chain analytics. Instead, these projects are small with a focus on test and
learn. The technologies will change at a rate of 10 times that of conventional supply chain
applications. There will no longer be a need for maintenance upgrades and ongoing
maintenance support contracts. A major mistake that companies will make is treating new forms
of analytics as a traditional technology project.
9. Don’t Limit the Technology. Answer Questions That You Do Not Know to Ask. To fully use
supply chain analytics, we must be open to the use of new forms of decision support in the form
of prescriptive and cognitive analytics. Companies must divorce themselves from their
spreadsheets and allow analytic engines to manage the heavy lifting. This is an issue for many
companies that are deeply wedded to spreadsheet-based deployments and spreadsheet labor
ghettos. Working with machine learning and cognitive computing enables companies to answer
the questions that they don’t know to ask.
As companies drive competitive advantage through supply chain analytics, the first step is
recognizing that they are no longer hamstrung by the issues of the past.
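The "multiple ifs connected to multiple thens" rule sets described in issue 7 can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation; the rule conditions, action names, and order fields are all invented for the example.

```python
# Sketch: a flexible rule set where multiple conditions ("ifs") map to
# multiple actions ("thens"), in contrast to rigid single-if/single-then
# logic. Every rule, field, and action name here is hypothetical.

def evaluate(order, rules):
    """Return every action whose conditions all hold for this order."""
    actions = []
    for conditions, rule_actions in rules:
        if all(cond(order) for cond in conditions):
            actions.extend(rule_actions)
    return actions

# Two illustrative allocation rules, each with multiple ifs and thens.
rules = [
    ([lambda o: o["qty"] > 100, lambda o: o["priority"] == "high"],
     ["split_shipment", "expedite"]),
    ([lambda o: o["stock"] < o["qty"]],
     ["partial_allocate", "flag_for_review"]),
]

order = {"qty": 150, "priority": "high", "stock": 120}
print(evaluate(order, rules))
# -> ['split_shipment', 'expedite', 'partial_allocate', 'flag_for_review']
```

Because the rules are data rather than hard-coded branches, they can be added, removed, or learned over time without rewriting the engine.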
The Steps to Define a Holistic Supply Chain Analytics Strategy
Supply chain analytics requires defining a layer that lies between operational applications—Advanced
Planning, Enterprise Resource Planning, Product Lifecycle Management (PLM), Supplier
Relationship Management, Supply Chain Execution (SCE), Transportation Management Solutions, and
Warehouse Management Solutions—and workforce productivity technologies: web browsers such
as Firefox, Chrome, and Internet Explorer; collaborative applications like Yammer and
Socialcast; and Microsoft Office technologies like Word, Excel, and Access. Traditionally, the focus
of supply chain leaders was on functional/operational technologies. As a result, there has not been a
holistic focus like the one which is outlined in Figure 5.
Figure 5. Defining a Holistic Analytics Framework
To define this holistic analytics strategy, start by defining the problem to be answered. Then sort
through the alternatives. This section is designed to be a guide:
A) What Data Types Are Needed to Answer the Problem? When solving business problems and
driving new insights, the first question to ask is “Which data set is required to yield the greatest
insights?” Data comes in two types: structured and unstructured data. Most of the automation in the
supply chain today is based on structured data. (Structured data fits well into rows and columns.
Examples of structured data include transactional and planning data.) Unstructured data does not fit
into relational tables. These unstructured data sets include, but are not limited to, data types like
social sentiment, call-center data, commercial drawings, contracts/partner agreements, maps,
ratings/review data, pictures, warranty information, and weather data.
1. Structured Data. If the data is structured, build analytics on top of relational databases like
Microsoft, Oracle, SAP HANA and Teradata.
2. Unstructured Data. If the data is unstructured, then the system requires nonrelational,
open source database technologies such as Cassandra, Cloudera, and MongoDB.
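A minimal sketch of the distinction above: structured data parses directly into rows and columns, while unstructured data needs extraction logic before it yields a usable fact. The field names, transactions, and call-center note are invented for illustration.

```python
# Sketch of the two data types the report distinguishes. Structured data
# fits rows and columns; unstructured data (here, a call-center note)
# must be mined before it can join an analysis. All values are invented.
import csv
import io
import re

# Structured: transactional rows parse directly into typed columns.
structured = "order_id,qty\n1001,25\n1002,40\n"
rows = list(csv.DictReader(io.StringIO(structured)))
total_qty = sum(int(r["qty"]) for r in rows)

# Unstructured: a free-text note needs extraction logic to yield a fact.
note = "Customer called about order 1002; carton arrived damaged."
order_ref = re.search(r"order (\d+)", note).group(1)

print(total_qty)   # -> 65
print(order_ref)   # -> 1002
```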
B) How Fast and When Does the Data Need to Move? (Not All Data Needs to Move at the Same
Rate or First Class.) What Are the Required Data Structures? The second step in the definition
of data infrastructure starts with mapping data flows. Some questions are, “Where does data need
to pool into lakes for discovery? When does data need to stream in real-time, based on sensor data
outputs?” While streaming data needs to move in real-time flows, most data in the enterprise can
move at a slower rate to support batch processes. Columnar database structures help to move
transactional, and frequently used data, more quickly. (Columnar architectures like SAP HANA
bring frequently used data, based on columns, into memory.) However, columnar database
structures do not work as well for hierarchical and time-phased data architectures of supply chain
planning. Here are some details:
1. Streaming Data Architectures. Stream processing is designed to analyze and act to drive
real-time “continuous queries” (i.e. SQL-type queries that operate over time and buffer
windows). Streaming data analytics are defined by the ability to continuously calculate
mathematical or statistical analytics, on the fly, within the stream. These streaming-data
processing solutions are designed to handle high volume, in real-time, with scalable, highly
available and fault-tolerant architectures. This enables the analysis of data in motion. This is
in contrast to the traditional database model where data is first stored and indexed with
subsequent query processing. Stream processing takes the inbound data while it is in flight,
as it moves through the server. Stream processing also connects to external data sources,
enabling applications to incorporate selected data into the application flow, or to update an
external database with processed information. There is no standard definition of streaming
data architectures for supply chain. These architectures are evolving. The most advanced
companies are using the open source code defined by Apache Spark.
2. Data Lakes. A data lake is a storage repository holding large amounts of raw data in a
native format—structured, semi-structured and unstructured data. The data is formatted
when called for use, and is formatted for analytics/visualization based on business
requirements. Data lakes are primarily used in supply chain for discovery and insights on
commercial, quality or supplier data. Nonrelational database technologies based on Hadoop
are being used for discovery on structured and unstructured data.
3. Data Warehouse. A data warehouse is used to normalize, harmonize and structure data for
reuse and enrichment. The most common use case is harmonization and synchronization of
structural data.
4. Hierarchical Data Structures. The organization of data into a one-to-many
representation/schema for analysis. In supply chain, the most common use of hierarchical
data structures is for forecasting and demand management.
5. Transactional Data Formats. Transactions, or small events, are recorded into relational
database structures. Relational data structures in Enterprise Resource Planning are
architectures that are designed to record and retrieve business transactions within the
organization to a common system of record.
6. Time-Phased Data Structures. These data structures organize data over time for analysis. Time-
phased data is the basis of supply chain planning. In the design of supply chain planning,
the focus is on the reuse of rows. In contrast, the management of transactional data is more
focused on column reuse.
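The "continuous query" idea behind the streaming architectures described above can be sketched with a rolling window: a statistic is recomputed on the fly as each reading arrives, rather than stored first and queried later. This is a toy illustration, not Apache Spark; the window size and sensor readings are invented.

```python
# Sketch of a continuous query over streaming sensor data: the mean is
# recalculated inside a fixed buffer window as each reading arrives,
# analyzing data in motion instead of data at rest.
from collections import deque

def windowed_mean(stream, window=3):
    """Yield the rolling mean over the last `window` readings."""
    buf = deque(maxlen=window)   # old readings fall out automatically
    for reading in stream:
        buf.append(reading)
        yield sum(buf) / len(buf)

temps = [20.0, 22.0, 21.0, 25.0]   # e.g. a cold-chain sensor feed
print([round(m, 2) for m in windowed_mean(temps)])
# -> [20.0, 21.0, 21.0, 22.67]
```

A production stream processor adds scalability and fault tolerance around this same core idea of windowed, in-flight computation.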
C. What Is Required to Improve Insights? Which Types of Intelligence/Data Models Yield the
Best Results? The goal is to drive insights by adding intelligence to better understand data patterns.
The progression of supply chain analytics moves through four steps. While descriptive analytics
enables the visualization of data, predictive, prescriptive and cognitive analytics bring progressive use
of optimization and machine learning to drive insights from data. These technologies are packaged into
engines requiring data inputs, data model definitions and data outputs. The use of analytical engines
enables the sharing of new insights. The evolution of capabilities is shown in Figure 6.
Figure 6. Evolution of Progressive Levels of Analytics in Supply Chain Engines for Analytics
1) Descriptive. Visualization of data is the first step. With the use of descriptive analytics,
companies can easily gain insights into data relationships without the deployment of engines.
2) Predictive. The use of predictive analytics enables the use of different forms of algorithms
to drive new insights. This includes linear optimization, genetic algorithms and stochastic
optimization. In each case, there is a well-defined set of inputs, engines and outputs. These
run as batch processes within the supply chain. Predictive analytics is most
commonly applied to historic data.
3) Prescriptive. Prescriptive analytics not only gives an output, but also gives
recommendations for exceptions. The most commonly used forms of prescriptive analytics
are in transportation routing based on road construction, traffic delays, and
4) Cognitive Learning. The use of machine learning to learn from data. Cognitive computing
starts with a rules-based ontology that structures the learning and then drives insights over
time based on hypothesis engines.
While predictive analytics is limited to structured data, prescriptive and cognitive analytics are
evolving to be used on nonrelational databases based on structured and unstructured data.
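The first two levels of this progression can be illustrated on one historic series: descriptive analytics summarizes what happened, while a predictive step fits a trend and projects the next period. The demand figures are invented, and a least-squares line stands in for the richer algorithms named above.

```python
# Sketch of descriptive vs. predictive analytics on the same series.
# The demand values (units shipped, periods 1-5) are invented.
from statistics import mean

demand = [100, 104, 110, 113, 121]

# Descriptive: summarize the data as-is.
print(f"avg={mean(demand)}, min={min(demand)}, max={max(demand)}")

# Predictive: least-squares trend line, projected to period 6.
xs = range(len(demand))
x_bar, y_bar = mean(xs), mean(demand)
slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, demand))
         / sum((x - x_bar) ** 2 for x in xs))
forecast = y_bar + slope * (len(demand) - x_bar)
print(round(forecast, 1))   # -> 124.9
```

Prescriptive and cognitive layers would go further: recommending an action on the forecast exception, or revising the model itself as new data arrives.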
D. Delivery Mechanisms. Parallel processing and cloud-based delivery increase the options for the
buyer of analytics. While the traditional approach was licensed software with a maintenance agreement
of 18-22%, cloud-based deployments increase the options and reduce the costs.
1) License. Purchase of applications and implementation of the analytics through a license
deployment model including maintenance and implementation fees.
2) Hosted. Licensed software is run and managed by a third party and deployed via cloud-
based architectures. In the selection of hosted software, clearly understand the instance
architectural definitions and the limitations due to server sharing.
3) Software as a Service. Solutions deployed natively over the internet. These cloud-based
architectures redefine maintenance and software upgrades. These applications are usually
purchased based on the number of users over a predetermined period of time.
4) Business Process Outsourcing. In Business Process Outsourcing (BPO) services, a third
party drives the business process on behalf of their client, using analytics through
cloud-based deployments to design, manage and maintain the analytics software. The third
party provides not only the software, but also the management and support team to run it.
Cloud-based services also enable the use of BPO as an option.
F. Organizational Design. Determine how the solution will be used and who will maintain the data
models and inputs.
1) IT-Based Deployments. In traditional deployments, the software is maintained and
upgraded by information technology teams. By definition, the use of relational databases
and reporting based on Enterprise Resource Planning (ERP) are IT intensive.
2) Self-service. To maximize the value of new analytics solutions, focus on self-service
deployments. These designs allow business leaders to drive their own reports and queries.
While the traditional analytics deployments were very dependent on DBAs (Data Base
Administrators), newer forms of analytics allow business users to own their applications.
The Role of Analytics in Developing Strategies for Supply Chain 2030
As companies plan for Supply Chain 2030, and build strategic plans, analytics plays a major part in
the development of new capabilities. The relative focus of disruptive analytics is shown in Figure 7.
Figure 7. Role of Analytics in the Development of Supply Chain 2030 Strategies
This evolution of Supply Chain 2030 strategies will be built on multiple analytics techniques, not one.
The portfolio of analytics defining Supply Chain 2030 is shown in Figure 8.
Figure 8. Most Important Analytics Techniques in the Evolution of Progressive Levels of Analytics for
Supply Chain 2030
Making a Technology Selection
Today there are more options. The traditional world of analytics, with its well-established terms and
technologies, is being redefined. The first step is formulating the business questions. Here we share
answers to frequently asked questions:
Which vendors should I consider for data visualization? Consider FusionOps, Qlik, Spotfire, or
Tableau Software. (Trifacta enables data wrangling from Hadoop into a Tableau visualization layer.)
In mining unstructured text and making it usable in supply chain processes, which vendors
should be short-listed? Clarabridge and SAS Institute.
In choosing a vendor for streaming data, which vendors should I evaluate? Savi Technology,
and SAS Institute.
When getting data out of SAP tables to drive insights, which vendors should be considered?
Data wrangling and data extracts from SAP are easier when using Every Angle and Trufa.
In supply chain planning, which vendors have a natively cloud-based environment? This
includes o9 Solutions, Kinaxis, and Steelwedge. Look for Oracle to release a cloud-based planning
solution in Q1 2017.
I am working with complexity and would like to move to attribute-based planning. Which
solutions can help to plan products at an abstracted level of attributes to see beyond an
item-at-a-location view (SKU) level of planning? Consider the use of attribute-based planning
technologies from Adexa or Logility.
What technologies can I use to build a discrete event simulator of the supply chain to test
the feasibility of supply chain design? Consider the solutions from AnyLogic, Arena or
LLamasoft.
My goal is a self-correcting and learning supply chain. Which artificial intelligence (AI)
solutions can help? One of the most exciting technologies is the use of machine learning to sense
anomalies and redefine master data management through rules-based ontologies and outlier
detection. Consider the use of technologies from Enterra Solutions, IBM Watson, Rulex and
TransVoyant.
I am looking for a simple, easy-to-use solution for planning based on spreadsheets, but I
would like to have the spreadsheets anchored to a system of record. What do you suggest?
Consider using Boardwalktech or John Galt.
I would like to build a customized set of optimizers to solve a supply chain problem.
What do you suggest? Use the enterprise-class platforms from Opalytics and SAS Institute.
Recommendations
Build analytic strategies with the goal in mind. Think past pockets of analytics surrounding
conventional applications and build a holistic vision on how data should pool, stream and transform
the supply chain. Here are seven recommendations to consider in building your strategy:
1. Ask the Right Questions. Think Beyond Existing Paradigms. A frequent mistake companies make
is to focus only on technology. Start with the goal and work backwards. Train employees to ask the right
questions and use analytical approaches to drive root cause analysis. Use new forms of analytics to
unleash new insights.
2. Define the Metrics That Matter. Define a balanced scorecard and use analytics to drive frequent
updates of cross-functional metrics. In selecting the metrics for a balanced scorecard, work with the
financial teams to set realistic targets. (Use of the Supply Chain to Admire report analysis can help you
to understand what is feasible based on industry-specific outcomes.)
3. Get Good at Managing Data. Data volumes are increasing and cleanliness issues abound. Get good
at using new forms of analytics to clean and parse data. Use cognitive and prescriptive analytics to
redefine master data management.
4. Focus on Self-Service. Take Ownership of Supply Chain Analytics. To maximize the value of
analytics, implement solutions that can be administered by line-of-business users. Partner with
Information Technology (IT) teams to make self-service a goal. To implement this strategy, focus on
solutions like Anaplan, Halo, Qlik and Tableau which can be managed by business teams.
5. Not All Data Needs to Fly First Class! Move data at the speed that the business requires. Costs
escalate as data speed increases. While many supply chain strategies state that the data needs to
move in real time, the use of streaming data architectures (which are expensive) should be reserved
for sensor and real-time event data.
6. Rethink Old Problems and Apply New Solutions. New forms of analytics offer opportunities to solve
old problems. Use technologies like Enterra Solutions and Rulex to manage data anomalies and
redefine master data management. Work with Cloudera, Infosys and Hortonworks to use Hadoop and
open source standards to combine structured and unstructured data.
7. Think Beyond Three- and Four-Letter Acronyms. While traditional data architectures center on
reporting on SCM and SCE architectures, think beyond APS, CRM, ERP, SRM, TMS, and WMS
requirements. Connect traditional architectures to workforce automation by thinking holistically about
data requirements and analytics.
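The data-cleaning step in recommendation 3 can be sketched with a simple statistical check that flags anomalies before they pollute master data. Real deployments would use the richer machine-learning approaches named above; the lead-time values (in days) here are invented.

```python
# Sketch of flagging data anomalies for master data cleanup with a
# z-score check. Production systems would use richer ML models; the
# supplier lead times below are invented, with one mis-keyed record.
from statistics import mean, stdev

def flag_outliers(values, z=2.0):
    """Return values more than z standard deviations from the mean."""
    mu, sigma = mean(values), stdev(values)
    return [v for v in values if abs(v - mu) > z * sigma]

lead_times = [5, 6, 5, 7, 6, 5, 48, 6]   # days; 48 is a likely typo
print(flag_outliers(lead_times))
# -> [48]
```

Flagged records can then be routed for review or corrected by learned rules rather than hand-edited in spreadsheets.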
Conclusion
New forms of analytics offer great opportunities for companies that can think beyond traditional
definitions. However, it is not about technology for the sake of technology. Business value is
maximized when companies build with the goal in mind with a focus on answering the right business
questions.
Terms to Know
Getting clear on terms is the first step to driving a supply chain transformation. To help teams gain
greater value from the report, here we provide definitions of the terms used within:
Artificial Intelligence. The use of computer-based learning that includes cognitive learning
analytics and machine-to-machine learning using sensor data. Artificial intelligence is the backbone
of the autonomous supply chain.
Apache. Open source code development through the Apache Software Foundation. The Apache
Group was formed in 1995 by eight developers, and the foundation was incorporated in 1999. Over
the last two decades the group has managed a governance structure to release code to support
improvements in analytics including, but not limited to, Hadoop, Hive and Spark.
Apache Spark. An open-source cluster computing framework to enable extreme computer
parallelism and streaming data architectures with fault tolerance.
Autonomous Supply Chain. The use of analytics to enable robotics and self-driving vehicles.
Cloud-based Software. Software delivered through cloud-based deployments. This can include hosted
applications and natively configured Software-as-a-Service (SaaS) architectures.
Cognitive Learning. Analytics that learn over time, simulating the human brain. Cognitive
computing is a form of Artificial Intelligence. Within the engines, the data relationships are often
represented by a rules-based ontology to test hypothesis engines to drive learning on structured
and unstructured data.
Data Lake. The pooling of data of different forms into a repository to enable discovery. The use of
prescriptive and cognitive learning on top of a data lake can answer the questions that we do not
know to ask.
Data Wrangling. The handling, cleaning and formatting of large amounts of data to submit for
processing.
Discrete Event Simulation. Modeling of discrete events in a simulation environment.
Decision Support Technology. The use of mathematics and optimization to improve decisions.
Common forms of decision support technologies are supply chain planning, price management and
trade promotion management.
ETL. A technology that Extract(s), Transform(s) and Load(s) data into data warehousing
technologies. The data can be homogeneous or heterogeneous.
Hadoop. An Open Source software framework for distributed storage and processing of very large
data sets on computer clusters. The approach is built with the assumption that hardware failures are
common. It is defined by the Hadoop Distributed File System (HDFS) and a processing layer
termed MapReduce. Hadoop splits files into large blocks and distributes them across nodes in a
cluster.
Hosted Solutions. Centralized software clusters supporting remote users over the cloud.
The solutions can be hosted internally or externally, but the architectures are fundamentally different
than Software as a Service (SaaS). Software as a Service is a form of cloud-based software that is
designed for the cloud and enabled by cloud-based services. The fee structures are subscription—
typically with a three-year license agreement—and maintenance is automatically downloaded.
Impala. Impala is a modern, open source MPP SQL engine architected from the ground up for the
Hadoop data processing environment. Impala provides low latency and high concurrency for
BI/analytic read-mostly queries on Hadoop, not delivered by batch frameworks such as Apache Hive.
Hive. Open source analytics for query on large data sets in nonrelational database structures.
Machine Learning. A subcomponent of cognitive computing, machine learning enables learning
from pattern recognition.
Nonrelational Database. While traditional supply chain applications are written using relational
databases with referential integrity between rows and columns based on a data model,
nonrelational databases are being increasingly used for large volume and real-time web
applications. These are termed NoSQL. Nonrelational database architectures are 30-40% less
costly than relational database structures.
Open Source Software. Supporters/programmers voluntarily write and exchange software. Apache
is a type of open source for big data supply chain analytics. Governance of the code is based on a
formalized system of software committers.
Relational Database. The representation of data in rows and columns into tables with referential
integrity. The majority of supply chain analytics are based on relational database infrastructures.
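To make "rows and columns with referential integrity" concrete, the sketch below uses Python's built-in SQLite engine with hypothetical order and shipment tables: the database rejects a shipment row that references an order which does not exist.

```python
import sqlite3

# Two tables with referential integrity: every shipment row must
# reference an existing order row.
con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")
con.execute("CREATE TABLE orders (order_id INTEGER PRIMARY KEY, customer TEXT)")
con.execute("""CREATE TABLE shipments (
                 shipment_id INTEGER PRIMARY KEY,
                 order_id INTEGER NOT NULL REFERENCES orders(order_id),
                 status TEXT)""")
con.execute("INSERT INTO orders VALUES (1, 'Acme')")
con.execute("INSERT INTO shipments VALUES (10, 1, 'shipped')")
try:
    # Violates referential integrity: order 99 does not exist.
    con.execute("INSERT INTO shipments VALUES (11, 99, 'shipped')")
except sqlite3.IntegrityError as err:
    print("rejected:", err)
row = con.execute("""SELECT o.customer, s.status FROM orders o
                     JOIN shipments s ON s.order_id = o.order_id""").fetchone()
print(row)  # ('Acme', 'shipped')
```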
Rules-Based Ontology. In cognitive learning, a rules-based ontology is a representation of the
relationships of data elements and rules/beliefs to initialize cognitive computing.
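A rules-based ontology can be approximated, at its very simplest, as a set of "is-a" relationships between data elements plus a rule that walks them. The terms below are hypothetical, and real cognitive platforms use far richer representations.

```python
# Minimal is-a ontology: each term points to its broader category.
ontology = {"truck": "vehicle", "vehicle": "asset", "warehouse": "asset"}

def is_a(term, kind):
    # Rule/belief: a term belongs to a kind if following the is-a
    # chain from the term eventually reaches that kind.
    while term is not None:
        if term == kind:
            return True
        term = ontology.get(term)
    return False

print(is_a("truck", "asset"))      # True (truck -> vehicle -> asset)
print(is_a("truck", "warehouse"))  # False
```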
Self-Service. The design of analytics to enable the management and configuration of analytics by
line-of-business users to minimize the involvement of Information Technology teams in day-to-day
business support.
Sentiment Data. The mining of unstructured data to visualize patterns. Data includes social,
customer service, and warranty data.
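A minimal sketch of mining unstructured text for sentiment patterns is shown below: counting positive and negative cue words in customer comments. The comments and word lists are hypothetical; real sentiment mining uses trained language models rather than keyword lists.

```python
# Score free-text comments by counting sentiment cue words.
POSITIVE = {"fast", "great", "reliable"}
NEGATIVE = {"late", "damaged", "missing"}

comments = ["delivery was fast and reliable",
            "order arrived late and damaged",
            "great service but one item missing"]

def score(text):
    # Positive cue words add one; negative cue words subtract one.
    words = set(text.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

scores = [score(c) for c in comments]
print(scores)  # [2, -2, 0]
```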
Simulation. The use of analytics to understand the feasibility of a network design through discrete
event simulation, in which events and outcomes are simulated minute-by-minute and hour-by-hour
to see relationships in complex data.
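The discrete event idea can be sketched with a hypothetical single loading dock: trucks arrive at known times, each takes 45 minutes to unload, and the clock jumps from event to event rather than ticking through every minute.

```python
import heapq

# Minimal discrete event simulation of one loading dock.
arrivals = [0, 30, 60]   # truck arrival times in minutes (hypothetical)
UNLOAD_MINUTES = 45

events = [(t, "arrive") for t in arrivals]
heapq.heapify(events)    # event queue ordered by time
dock_free_at = 0
waits = []
while events:
    time, kind = heapq.heappop(events)
    if kind == "arrive":
        start = max(time, dock_free_at)  # wait if the dock is busy
        waits.append(start - time)
        dock_free_at = start + UNLOAD_MINUTES
print(waits)  # [0, 15, 30]
```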
Streaming Data. Data that moves in real time in streams, requiring an architecture designed for
streaming data. Streaming data is generated by sources such as sensors and telematics devices.
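The defining trait of a streaming architecture is that each reading is processed as it arrives, keeping only a small window of state rather than the full history. The generator-based sketch below illustrates this with hypothetical sensor values.

```python
from collections import deque

def rolling_average(stream, window=3):
    # Process each reading as it arrives, retaining only the last
    # `window` values instead of the whole stream.
    recent = deque(maxlen=window)
    for reading in stream:
        recent.append(reading)
        yield sum(recent) / len(recent)

def sensor_feed():
    # Stand-in for a live sensor/telematics feed (values hypothetical).
    for value in [3, 6, 9, 12]:
        yield value

averages = list(rolling_average(sensor_feed()))
print(averages[-1])  # 9.0, the average of the last three readings
```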
Unstructured Data. Data that does not fit neatly into rows and columns. Unstructured data includes
GPS data, maps, pictures, quality data, social sentiment, weather, and warranty data.
Visualization. The use of analytics to display the patterns and status of operational data to enable
action.
“What-If” Analysis. The design of an optimizer in decision support to run multiple scenarios—
changing inputs and tweaking data models—to determine the best output.
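A what-if loop can be reduced to a few lines: hold one cost model fixed, vary the inputs across named scenarios, and select the best output. The fleet sizes and cost figures below are hypothetical.

```python
# Evaluate total cost under alternative fleet scenarios, then pick
# the scenario with the best (lowest-cost) output.

def total_cost(trucks, cost_per_truck, late_penalty, demand):
    capacity = trucks * 10                 # pallets per truck
    shortfall = max(0, demand - capacity)  # unshipped pallets
    return trucks * cost_per_truck + shortfall * late_penalty

scenarios = {
    "lean fleet":  dict(trucks=4, cost_per_truck=500, late_penalty=120, demand=60),
    "base fleet":  dict(trucks=6, cost_per_truck=500, late_penalty=120, demand=60),
    "large fleet": dict(trucks=8, cost_per_truck=500, late_penalty=120, demand=60),
}
results = {name: total_cost(**inputs) for name, inputs in scenarios.items()}
best = min(results, key=results.get)
print(best, results[best])  # base fleet 3000
```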
Research Summary of Quantitative Data
As a part of our research method, we source survey respondents through open research
techniques: sharing links with conference attendees, on LinkedIn, and through direct mail
campaigns. This research was collected from the pre-event survey for the Supply Chain Insights
Global Summit held on September 6-9, 2016.
Consistent with our philosophy that "respondents give to us and we give to them," all respondents
participating in this survey were given the results of the study at the conference and invited to share
in a roundtable discussion with other survey participants to gain additional insights.
In our research, the names of both individual respondents and participating companies are held in
confidence. Here we share the demographics to help the reader of this report gain a better
perspective on the results. The demographics and additional charts are found in Figures A and B. At
the bottom of each image are the specific questions asked in the survey along with the survey
demographics.
Figure A. Summary Overview of Pre-event Study
Figure B. Demographics of Line-of-Business Respondents
Additional Reports of Interest
Prior Reports in This Series
In the Putting Together the Pieces report series, we try to help organizations better understand
technology markets. This is our third report in this series. To gain additional insights on the topic of
procurement and sourcing analytic solutions to build Market-Driven Value Networks, consider these
additional reports:
Building B2B Networks: Who are the Players?
Inventory Optimization in a Market-Driven World
Putting Together the Pieces: Sales and Operations Planning 2015
About Supply Chain Insights LLC
Founded in February 2012 by Lora Cecere, Supply Chain Insights LLC is beginning its fifth year of
operation. The Company's mission is to deliver independent, actionable, and objective advice for
supply chain leaders. If you need to know which practices and technologies make the biggest
difference to corporate performance, we want you to turn to us. We are a company dedicated to this
research. Our goal is to help leaders understand supply chain trends, evolving technologies, and
which metrics matter.
About Lora Cecere
Lora Cecere (Twitter ID @lcecere) is the Founder of Supply Chain Insights LLC and
the author of the popular enterprise software blog Supply Chain Shaman, currently read
by 5,000 supply chain professionals. She also writes as a LinkedIn Influencer and
is a contributor for Forbes. She has written four books. The first book, Bricks
Matter (co-authored with Charlie Chase), was published in 2012. The second book, The
Shaman's Journal 2014, was published in September 2014; the third book, Supply
Chain Metrics That Matter, was published in December 2014; and the fourth book, The
Shaman's Journal 2015, was published in September 2015.
With over 12 years as a research analyst with AMR Research, Altimeter Group, and Gartner
Group, and now as the Founder of Supply Chain Insights, Lora understands supply chains. She has
worked with over 600 companies on their supply chain strategy and speaks at over 50 conferences a
year on the evolution of supply chain processes and technologies. Her research is designed for the
early adopter seeking first-mover advantage.