Over the past few years, most businesses have come to recognize that the ability to collect and analyze the data they generate has become a key source of competitive advantage.
ZF, a global automotive supplier based in Germany, was no exception. Digital startups had begun producing virtual products that ZF did not know how to compete against, and engineers in logistics, operations, and other functions were finding that their traditional approaches couldn’t handle the complex issues they faced. Some company executives had begun to fear they were in for their own “Kodak moment” – a fatal disruption that could redefine their business and wipe out, almost overnight, advantages accumulated over decades. With automotive analysts forecasting major changes ahead in mobility, they began to think that the firm needed a dedicated lab focused entirely on data challenges.
But how?
At the time, one of us, Niklas, a data scientist at ZF, was pursuing a PhD part-time at the University of Freiburg. Niklas took the first step and recruited his advisors at the university, Dirk Neumann and Tobias Brandt, to help set up a lab for the company. This gave ZF access to top-notch expertise in data analytics and the management of information systems.
The hardest part was figuring out how the lab would work. After all, industrial data laboratories are a fairly new phenomenon – you can’t just download a blueprint. But after some stumbles, we won acceptance for the lab and arrived at a number of best practices that we think are broadly applicable to almost any data lab.
Focus on the Right Internal Customers
ZF had dozens of departments filled with potentially high-impact data-related projects. Although we were tempted to tackle many projects across the entire company, we realized that to create visibility within a 146,000-employee firm, we had to focus on the most promising departments and projects first.
But how would we define “most promising”? As the goal of the data lab is to create value by analyzing data, we initially focused on the departments that generate the most data. Unfortunately, this didn’t narrow it down a whole lot. Finance, Logistics, Marketing, Sales, as well as Production and Quality all produced large amounts of data that could be interesting for data science pilot projects.
However, we knew from experience that the lowest-hanging fruit for high-impact projects in a manufacturing company like ZF would be in Production and Quality. For years, ZF’s production lines had been connected and controlled by manufacturing execution systems (MES) and enterprise resource planning (ERP) systems, but the data they generated had yet to be deeply tapped. We decided, therefore, to begin by concentrating on production issues, such as interruptions, rework rates, and throughput speed, where we could have an immediate impact.
Identify High-Impact Problems
Next, we selected those projects within Production and Quality that promised the highest-value outcomes. Our experience with the first few projects provided the basis for a project evaluation model that we have continued to refine. The model contained a set of criteria along three dimensions that helped us rank projects; a simple illustration of how such a scoring rubric might work follows the list.
- The problem to be solved had to be clearly defined. We could not adopt an abstract aim such as “improve production.” We needed a clear idea of how the analysis would create business value.
- Hard data had to play a major role in the solution. And the data had to be available, accessible, and of good quality. We needed to shield the team from being flooded by business intelligence reporting projects.
- The team had to be motivated. We gave project teams independence in choosing how they solved the problems they took on. And while we made the budget tight enough to enforce focus, we made sure that it was not so tight that the team couldn’t make basic allocation decisions on its own. To sustain motivation and enthusiasm, we prioritized projects that could be subdivided into smaller but more easily achieved goals.
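To make this concrete, here is a minimal sketch of how a rubric along these lines can be turned into a ranking score. The criterion names, weights, and candidate projects are hypothetical illustrations, not ZF’s actual evaluation model.

```python
from dataclasses import dataclass

@dataclass
class ProjectCandidate:
    """A candidate project scored against the three dimensions above (0-5 each)."""
    name: str
    problem_clarity: int    # is the problem and its business value well defined?
    data_readiness: int     # is hard data available, accessible, and of good quality?
    team_motivation: int    # can the team own the approach and split it into milestones?

    def score(self, weights=(0.4, 0.4, 0.2)) -> float:
        # Weighted sum used only to rank the backlog; the weights are illustrative.
        w_clarity, w_data, w_motivation = weights
        return (w_clarity * self.problem_clarity
                + w_data * self.data_readiness
                + w_motivation * self.team_motivation)

candidates = [
    ProjectCandidate("Reduce rework rate on one production line", 5, 4, 4),
    ProjectCandidate("'Improve production' (no clear question)", 1, 3, 3),
    ProjectCandidate("Predict grinding-ring breakdowns", 4, 3, 5),
]

# Vague asks with weak data sink to the bottom of the ranking.
for p in sorted(candidates, key=lambda c: c.score(), reverse=True):
    print(f"{p.score():.1f}  {p.name}")
```

However the scores are weighted, the point is that the ranking forces an explicit conversation about clarity, data, and motivation before any analysis begins.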
While we eventually found it useful to assign a particular person to manage relations with the rest of the company, we kept the whole lab involved in project selection as the number of people working in the lab grew. This kept everyone informed, gave them a greater sense of personal responsibility, and implicitly expressed management’s appreciation for their professional judgment.
Execution
The key risk was that the team would get lost in optimizing minor nuances of models and methods instead of solving the major problem. To avoid this, we usually limited the execution phase to three months, and gave the team the right to cancel its engagement.
This power turned out to be a game changer. Giving the team (including the domain expert) a “nuclear option” made them much more focused and goal-oriented. Once we put this rule in place, the number of change requests from the internal client dropped and the information initially provided tended to be more accurate and complete than before.
Of course, a team couldn’t cancel a project for arbitrary reasons. It needed to justify its decision, specifying conditions under which the project could be reopened. And while cancellations are contentious, they are sometimes necessary to free resources and to enforce progress toward a meaningful goal. In fact, introducing the ability to cancel projects actually increased the number of successfully completed projects.
Although a single team can work on multiple projects concurrently – and waiting for responses from the client department sometimes made that tempting – we generally found it best for the team to work on a single project at a time. We found that downtimes were better used by team members to learn new analytics methods and techniques, which continued to advance at a rapid pace.
We kept our internal customer up to date on our progress through regular reports and, when possible, by including their domain expert in the project team. If we could not do so, we looked for an arrangement – such as a weekly meeting – that allowed us to contact the domain expert directly without having to pass through gatekeepers.
Key Success Factors
Beyond gaining a general understanding of the data lab’s work as a three-stage process, we learned other lessons too. In particular, we found three more ingredients to be crucial to the data lab’s success:
- Executive support. The confidence that the technology executive team placed in us was crucial to our success. Fortunately, they don’t seem to regret it: “Giving the data lab a great freedom to act independently, to try ideas and also to accept failures as part of a learning process, required trust. But the momentum it created is something we do not want to miss”, said Dr. Jürgen Sturm, Chief Information Officer.
- The perspective of an outside authority. In this case, data scientists from the University of Freiburg made a huge difference to the lab’s success. As Andreas Romer, ZF’s Vice President for IT Innovation, put it, “We no longer consider innovation to be an internal process at ZF. To safeguard our future success, we must look beyond the confines of our company, build up partnerships to learn and also to share knowledge and experiences.”
- Domain experts. While data scientists brought knowledge of analytic methods and approaches to the project, their access to domain experts was essential. Such experts needed to be closely involved in answering domain-related questions that came up once the team was deeply engaged with the problem. In our experience, the capacity and availability of domain experts is the most common bottleneck blocking a data analytics project’s progress.
Problems Solved
Three years on, we can say with confidence that the ZF Data Lab is a valuable addition to the company. With this dedicated resource, ZF has been able to solve problems that had stumped the company’s engineers for years. Here are two examples:
- Broken grinding rings. A key source of stoppages in production-line machinery, a broken ring can create a mess that may take hours to clean up. An internal client wanted an early warning system that could indicate the probability of a future ring breakdown, but they had messy data, a weak signal (unclear data), and a highly unbalanced ground truth (because breakdowns happen only occasionally). Despite those limitations, we were able to create an algorithm that could detect imminent breaks 72% of the time – a far cry from five-decimal perfection but still enough to save the company thousands. A simplified sketch of this kind of early-warning model follows these examples.
- High power demand charges. Managing energy-consuming units to regulate demand at times of peak use is an effective way to reduce costs. Our goal was to develop an automated, data-driven decision-making agent that recommends actions to lower load peaks. Working closely with the energy department, we were able to develop a working prediction model that helps avoid those high-demand surcharges. Following the model’s recommendations should reduce peak load by 1–2 megawatts, worth roughly $100k–$200k per year.
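For readers who want a feel for what a project like the grinding-ring one involves technically, here is a minimal, hypothetical sketch of the general approach: a classifier trained with class weighting so that rare breakdowns are not drowned out by the overwhelming majority of normal cycles. The synthetic data, feature count, and choice of a random forest are our illustration, not ZF’s production model.

```python
# Minimal sketch of an early-warning classifier for rare breakdowns.
# The data here is synthetic; in practice the features would come from
# production sensor readings (vibration, torque, temperature, etc.).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score, precision_score

rng = np.random.default_rng(0)

# A few thousand machine cycles, only a small fraction of which end in a
# ring breakdown -- the "highly unbalanced ground truth" described above.
n = 5000
X = rng.normal(size=(n, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=2.0, size=n) > 4.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

# class_weight="balanced" makes missed breakdowns costly during training;
# catching most real breakdowns matters more here than raw accuracy.
model = RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=0)
model.fit(X_train, y_train)

pred = model.predict(X_test)
print("recall (share of breakdowns caught):", round(recall_score(y_test, pred), 2))
print("precision (share of alarms that were real):",
      round(precision_score(y_test, pred, zero_division=0), 2))
```

Even in this toy version, the design choice is the same one the real project faced: when the event of interest is rare, it is the share of breakdowns caught, not raw accuracy, that determines whether the model is worth deploying.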
After growing for three years, the ZF Data Lab has become a kind of specialized R&D function within the company. It is a melting pot of ideas and technologies, producing and evaluating proofs-of-concept, and discarding approaches that don’t quite work. In the last analysis, the data lab is not only there to solve problems, but to help answer the biggest Big Data question of all: how will our company compete in this increasingly digital world?