"Every year over the past several years, at least one company in twenty has suffered a supply-chain disruption costing at least $100 million."
The 2020 crisis exposed numerous supply-chain vulnerabilities and made companies realize they need new approaches to managing supply-chain risk, and supply chain data analytics is a key part of those approaches. This article delves into the major supply chain vulnerabilities identified by McKinsey: planning and supplier networks, transportation and logistics systems, financial resiliency, product complexity, and organizational maturity. To address these challenges effectively, data analytics services can provide the tools and insights needed for robust supply chain risk management.
And it is critical to reveal such weaknesses before exposure (e.g., a pandemic) occurs.
Exposure refers to unforeseen events that disrupt a supply chain by exploiting a vulnerability. There are four main sources of exposure:
- force-majeure shocks (natural disasters),
- macropolitical (economic shocks),
- malicious actors (cyberattacks),
- counterparties (fragile suppliers).
As the COVID-19 crisis has revealed, these shocks can have a negative impact on supply and demand in many different ways.
Key approaches to supply chain risk management
Stress-testing the supply chain (the way banks are stress-tested)
According to Harvard Business Review, the methodology is based on two key metrics. The first is time to recover (TTR): the period of time a specific supply chain node (e.g., a distribution center or a transportation hub) would need to be restored to full operation after it is disrupted. The second is time to survive (TTS): the maximum period of time the supply chain can match supply with demand after a specific facility is disrupted. By quantifying both measures under different scenarios, a business can assess its ability to restore operations after a disaster. For instance, if the TTR for a specific facility exceeds its TTS, the supply chain won't be able to match supply with demand unless there is a plan B.
This methodology allows companies to calculate the cost of disruptions and draw up risk mitigation plans, applicable in different scenarios, for the most critical parts of the supply chain.
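The TTR/TTS comparison can be sketched in a few lines of code. The node names and durations below are invented for illustration, not real figures; the point is simply that any node whose TTR exceeds its TTS needs a mitigation plan.

```python
# Hypothetical TTR/TTS stress test over supply chain nodes.
# All node names and durations are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    ttr_days: int  # time to recover: days to restore the node after a disruption
    tts_days: int  # time to survive: days the chain can match supply with demand without it

def find_critical_nodes(nodes):
    """Return nodes where recovery takes longer than the chain can survive."""
    return [n for n in nodes if n.ttr_days > n.tts_days]

nodes = [
    Node("distribution-center-east", ttr_days=21, tts_days=30),
    Node("port-hub",                 ttr_days=45, tts_days=14),  # TTR > TTS: critical
    Node("assembly-plant",           ttr_days=10, tts_days=60),
]

for n in find_critical_nodes(nodes):
    print(f"{n.name}: needs a plan B (TTR {n.ttr_days}d > TTS {n.tts_days}d)")
```

In a real assessment, each scenario (facility fire, port closure, supplier bankruptcy) would produce its own TTR and TTS estimates per node.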
Supply chain network analytics and mapping
Supply chain network analytics and mapping let you trace the entire supply chain and monitor production units, routes, and nodes. This will allow you to:
- Identify routes that have bottlenecks that result in delays.
- Track how your warehouses are performing based on metrics such as shelf time of goods and demand within 1-2 km, and take measures to either boost their performance or eliminate them.
- Get a clear understanding of your inventory (spare parts, parts in transit, after-sales stock, finished goods).
- Spot demand patterns for different products at certain periods of time and respond to customers' buying behavior using metrics like order type, density, and frequency, as well as the number of returns.
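The demand-pattern metrics in the last point can be derived from raw order records. The record fields below (product, qty, returned) are assumed for illustration, not a specific client schema:

```python
# Sketch: demand-pattern metrics (order share, average units, return rate)
# computed from raw order records. Field names are illustrative assumptions.
from collections import defaultdict

orders = [
    {"product": "A", "qty": 5, "returned": False},
    {"product": "A", "qty": 2, "returned": True},
    {"product": "B", "qty": 7, "returned": False},
]

def demand_metrics(orders):
    stats = defaultdict(lambda: {"orders": 0, "units": 0, "returns": 0})
    for o in orders:
        s = stats[o["product"]]
        s["orders"] += 1
        s["units"] += o["qty"]
        s["returns"] += int(o["returned"])
    total = len(orders)
    # Per product: share of all orders, average order size, and return rate.
    return {
        p: {
            "order_share": s["orders"] / total,
            "avg_units": s["units"] / s["orders"],
            "return_rate": s["returns"] / s["orders"],
        }
        for p, s in stats.items()
    }

print(demand_metrics(orders))
```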
Supply chains that lack transparency or have a high level of interconnectivity, concentration, and codependence are usually the most vulnerable.
To spot vulnerabilities, it is important to ensure visibility into all supply chain nodes. To obtain it, you can combine publicly available data with network analytics algorithms. Network analytics gives you a clear picture of your supply chain, reveals its fragilities and vulnerabilities, and enables meaningful comparisons with peers and industry benchmarks.
The landscape of supply chain analytics opportunities is broad: it ranges from cost modeling and demand/supply modeling to credit rating and fraud detection.
Typically, in the process of sales, inventory, and operations planning, inputs from Enterprise Resource Planning (ERP) and SCM planning tools are taken into account. However, to make the planning process truly effective, it is important to use new internal and external data sources.
How to effectively adopt supply chain data analytics:
1. Establish clear business KPIs and estimate ROI
First of all, it is critical to establish clear KPIs and calculate ROI. To validate the feasibility and profitability of your supply chain data analytics system, you can undertake a Business Strategy Discovery Phase and, based on rigorous calculations for different scenarios, either integrate third-party solutions or build your own supply chain analytics system. The Product Discovery phase will provide you with all the deliverables needed to kick off the implementation phase efficiently while mitigating risks and optimizing costs.
2. Ensure effective Big Data analytics
The success of any Big Data analytics project is about:
- choosing the right data sources;
- building an orchestrated ecosystem of platforms that collect siloed data from hundreds of sources;
- cleaning, aggregating, and preprocessing the data to make it fit for a specific business case;
- in some cases, applying Data Science or Machine Learning models;
- visualizing the insights.
Note: Before applying any algorithms or visualizing data, you need to have the data properly structured and cleaned. Only then can you turn that data into insights. In fact, ETL (extracting, transforming, and loading) and subsequent cleaning of the data account for around 80% of the time of any Big Data analytics project.
That is especially critical for supply chain analytics: only around 20 percent of all supply chain data is structured and can be easily analyzed, while the other 80 percent is unstructured (dark data). To harness the power of this dark data, you need to clean and preprocess it properly. Otherwise, it will bring about misleading results.
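A toy transform step illustrates why cleaning dominates the effort: raw supply chain records typically arrive with inconsistent casing, stray whitespace, and missing values that must be normalized (or dropped) before any model sees them. The field names below are illustrative assumptions:

```python
# Toy extract-transform step: normalize messy supply chain records.
# Field names (sku, qty, warehouse) are illustrative assumptions.

raw_rows = [
    {"sku": " A-100 ", "qty": "5",  "warehouse": "EAST"},
    {"sku": "A-100",   "qty": "",   "warehouse": "east"},   # missing qty: dropped
    {"sku": "b-200",   "qty": "12", "warehouse": " West "},
]

def clean(rows):
    out = []
    for r in rows:
        qty = r["qty"].strip()
        if not qty.isdigit():        # drop rows with unusable quantities
            continue
        out.append({
            "sku": r["sku"].strip().upper(),       # canonical SKU casing
            "qty": int(qty),                       # string -> integer
            "warehouse": r["warehouse"].strip().lower(),
        })
    return out

print(clean(raw_rows))
```

Real pipelines do the same kind of normalization at scale, across hundreds of siloed sources, before aggregation and modeling.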
Consider building a Data Lake for supply chain data analytics
Many companies have built data warehouses to store their data. However, warehouses are the best fit for companies dealing with well-structured information, or data that can be easily structured.
For businesses that deal with a lot of unstructured data, building a data lake and integrating their legacy data warehouse into it allows them both to take advantage of their legacy systems and to unlock the power of the myriad unstructured data generated by their supply chains.
A data lake, unlike a data warehouse, is unconstrained by the schema of a relational database: it allows you to aggregate all of an organization's available data sources, not just structured data but also unstructured data such as documents, emails, and social media engagements. This way, it provides more comprehensive input for analysis and supports near-real-time reporting.
Also, a data lake provides support for advanced algorithms and is an excellent choice for integration with Machine Learning and IoT solutions.
For example, here is how our partner, Gogo, benefited from leveraging a data warehouse and a data lake.
As part of the cooperation with the client, the N-iX team built a data warehouse system for storing and processing significant amounts of data, which allows the company to receive timely reports. N-iX developers also created an AWS-based data platform and built a data lake that collects data from more than 20 different sources in one place. The data lake has a separate layer that feeds the company's data warehouse, improving the reporting process, and it also provides historical data for Data Science and ML applications.
Thanks to the data analytics system, the client was able to streamline the prediction of failures and the replacement of devices, reducing the number of no-fault-found cases by 8 times.
Leveraging predictive analytics and machine learning for supply chain risk management
Reducing freight costs, improving supplier delivery performance, and minimizing supplier risk are three of the many benefits machine learning brings to collaborative supply chain networks. Here is how we helped a global manufacturing company with 400+ warehouses mitigate risks with Machine Learning.
The company partnered with N-iX to deliver a Computer Vision (CV) solution for docks, based on industrial optic sensors and lenses and Nvidia Jetson devices, that allows them to manage and track goods in a touchless way. Thanks to the solution, the client will be able to:
- predict and manage the delivery status of the box;
- enable package damage detection, thus eliminating the defective packages;
- improve inventory management at the warehouse;
- predict warehouse load, etc.
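To make the "predict warehouse load" idea concrete, here is a deliberately minimal forecasting sketch: a moving average over daily inbound pallet counts. Production systems (including the CV solution described above) use far richer models; the data and window size here are invented for illustration:

```python
# Minimal warehouse-load forecast: moving average over recent daily
# inbound pallet counts. Data and window size are illustrative.

def moving_average_forecast(history, window=3):
    """Forecast the next value as the mean of the last `window` points."""
    if len(history) < window:
        raise ValueError("not enough history for the chosen window")
    return sum(history[-window:]) / window

daily_pallets = [120, 130, 125, 140, 135]
print(moving_average_forecast(daily_pallets))
```

Even a baseline like this is useful: any ML model adopted later should at least beat it on held-out data before it earns a place in the pipeline.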
Why choose N-iX for implementing supply chain data analytics:
- A pool of 1,000+ experts that have experience working with business cases of different shapes and sizes.
- Expertise in the most relevant tech stack for implementing Big Data engineering, BI, Data Science, AI/Machine Learning solutions.
- N-iX has delivery offices across Eastern Europe.
- 10+ years of experience migrating existing data solutions to the cloud; a certified AWS partner.
- We partner with Fortune 500 companies helping them launch Big Data directions and migrate to the cloud.
- A team of 130+ data analytics specialists.
- Long-lasting expertise in Cloud computing, DevOps, High-load computing, and more.
- Compliance with international regulations and security norms.