Three Principles of Leveraging Predictive Analytics for Safety

Ask anyone who is looking for more insight into their operations and they will say they need more analytics. But do they know what they really want? Analytics has become such a generic, loosely defined term that many who ask for it don’t really know what they are asking for. At its most basic, analytics is the use of data to make informed decisions.

How can a safety professional benefit from implementing an analytics program? Think of the sea of information your teams have entered into multiple tools and systems. Hidden in all those disparate data sources is a lot of valuable information. Now, consider the time, effort, and resources already invested to measure and report risk across your organization’s environment, and to make sense of that data. That’s where analytics comes into play.

Predictive modeling is a form of advanced analytics: data is used to predict an outcome, and the toolset used to do so is machine learning. Machine learning uses the sea of data you are already gathering to predict where risk is going to be higher, taking your analytics to the next level.

How does machine learning accomplish the “next level” of analytics? Studies show that the human brain can handle at most four pieces of information at a given time. To assess risk well, it is typically necessary to leverage many more than four pieces of information, which puts safety professionals between the proverbial rock and a hard place. Additionally, when trying to mitigate risk, organizations tend to increase the amount of data gathered and metrics tracked, which only adds to the already unmanageable sea of data. How can all this data be leveraged together, at the same time, to understand where risk is likely to occur? The answer is predictive modeling.

One of the many benefits of predictive modeling is the capability to feed tens, hundreds, or even thousands of disparate data points into software to identify patterns. The hope is that the patterns found can be used to make informed decisions to sway the outcome favorably (i.e. mitigate risk). But what is a model? At its core, a predictive model is just an algorithm. The algorithm is given data, such as leading indicators, told what events to look for (i.e. risk or lagging indicators), and then finds the patterns that correlate between the two. There are no emotions. No biases. Just 1s and 0s. These algorithms have been proven to work with up to 97 percent accuracy. Empirically, these algorithms can forecast risk with better than random chance. Another way to think of it is that predictive modeling ensures you get the most value out of the sea of data you gather by clearly identifying areas that need further investigation. This is the magic bullet of predictive modeling. It is a tool in the safety toolbox specific to helping make sense of the data that’s been collected.
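
To make this concrete, here is a minimal sketch of what such an algorithm can look like in practice, using Python and scikit-learn. This is an illustration only, not any particular platform’s implementation; the file name and indicator columns are hypothetical placeholders for an organization’s own leading and lagging indicators.

import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Hypothetical export of leading-indicator data; all names are placeholders.
data = pd.read_csv("safety_observations.csv")
feature_cols = ["near_misses", "audit_score", "overtime_hours", "training_gap_days"]
features = data[feature_cols]
labels = data["incident_occurred"]  # lagging indicator: 1 if an incident followed

# The algorithm is given the leading indicators, told what event to look for,
# and finds the patterns that correlate between the two.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(features, labels)

# Predicted probability of an incident for each row (a site, crew, or period)
risk_scores = model.predict_proba(features)[:, 1]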

This predictive modeling technology centers on three foundational principles, each of which encompasses technical and ideological concepts. These foundations are key for any organization seeking to gain the utmost value from predictive analytics. The three pillars are:

Trusting the Model

If the goal is to mitigate risk by leveraging data and analytics, the foundation for that goal is trust in the model. Because every decision made afterward will be based on the model’s output, there needs to be faith that this foundation is strong and valid. To establish trust in a model, three items must be addressed:

Accuracy. Simply put, when a model makes a prediction, that prediction should be accurate. This is typically verified by testing the model against your historic data: did the incidents the system flagged actually occur? Accuracy can then be translated back into terms meaningful to your organization by working with your internal data team or with external consultants.

Stability. The next metric used to gain trust in the model is stability. An accurate model is necessary, but if the model is accurate only once, the effort is wasted. The goal is a model that is both accurate and stable, which is where machine learning comes into play. Machine learning refers to the process by which the computer (i.e. the machine) is not told what to look for; rather, it is given a large quantity of data and, in this instance, finds the patterns that predict risk.

The intent is for the machine to learn from the data. This trains your organization’s unique algorithm while validating that the model has learned rather than memorized. For example, if three years of data were acquired to build the model, the first two years would be used to train the model. Over this partition, the model is learning to find patterns that predict risk. Once a suitable model is found, it is then evaluated across the third year of data that was held out of the training process. Here, we are looking for consistency of the model’s performance across both the training partition and the holdout partition. If accuracy remains consistent between the two, we can conclude that we have a stable model, and that this model can be trusted to perform accurately on future unseen data.
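
As an illustration of the partitioning described above, a minimal sketch in Python might look like the following. The three-year window, file name, and column names are all hypothetical.

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

data = pd.read_csv("safety_observations.csv", parse_dates=["date"])
feature_cols = ["near_misses", "audit_score", "overtime_hours", "training_gap_days"]

train = data[data["date"] < "2018-01-01"]     # first two years: learn the patterns
holdout = data[data["date"] >= "2018-01-01"]  # third year: held out of training

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(train[feature_cols], train["incident_occurred"])

train_acc = accuracy_score(train["incident_occurred"], model.predict(train[feature_cols]))
holdout_acc = accuracy_score(holdout["incident_occurred"], model.predict(holdout[feature_cols]))

# Comparable accuracy on both partitions suggests the model learned rather
# than memorized, and can be trusted on future unseen data.
print(f"training partition accuracy: {train_acc:.2f}")
print(f"holdout partition accuracy:  {holdout_acc:.2f}")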

Segregation. The third component of trust is risk segregation. Whereas accuracy and stability are metrics that data scientists use to verify the validity of a model, segregation is the “proof is in the pudding” view of trust: does the model reliably separate high-risk areas from low-risk ones? In short, there is an understanding that a model is not perfect. The model is predicting the future, not reading the future, so there can and will be errors in individual predictions. The final part of trusting a model is to accept those errors, provided the model still clearly segregates where risk is concentrated.
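
Segregation can be sanity-checked with a similarly simple sketch: bucket the holdout predictions into risk tiers and compare the observed incident rates across tiers. The file and column names below are hypothetical.

import pandas as pd

# Hypothetical file: one predicted risk score and one observed outcome per row.
results = pd.read_csv("holdout_predictions.csv")  # columns: risk_score, incident_occurred

# Bucket predictions into quartile risk tiers, lowest to highest.
results["tier"] = pd.qcut(results["risk_score"], q=4,
                          labels=["low", "guarded", "elevated", "high"])

# Even an imperfect model earns trust if the "high" tier sees markedly more
# incidents than the "low" tier; the individual misses are the accepted errors.
print(results.groupby("tier", observed=True)["incident_occurred"].mean())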

Easy Hazard Identification

Once the model is trusted, organizations must use it to identify risks. Many times, companies struggle because they have too many sites/areas/departments to oversee and not enough resources to support them. The goal of this pillar is therefore to allow organizations to proactively focus their limited resources on the areas where they are needed most.

For example, a predictive platform can share a high-level snapshot of risk that depicts (1) the current state, (2) the trend toward that state, and (3) the top areas for focus. The snapshot is meant to support the high-level decisions happening at the executive level. This is not about getting into the nitty-gritty details of managing risk. At this point, it is about trying to understand, at a broad level, the organization’s risk and where that risk exists.
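
As a rough illustration of the data behind such a snapshot, per-site risk scores could be aggregated as follows; again, the file and column names are hypothetical.

import pandas as pd

# Hypothetical file of per-site model scores; columns: site, period, risk_score.
scores = pd.read_csv("site_risk_scores.csv")
periods = sorted(scores["period"].unique())
latest, prior = periods[-1], periods[-2]

current = scores[scores["period"] == latest].set_index("site")["risk_score"]
previous = scores[scores["period"] == prior].set_index("site")["risk_score"]

snapshot = pd.DataFrame({"current_risk": current, "trend": current - previous})

# (1) current state, (2) trend toward it, (3) top areas for focus
print(snapshot.sort_values("current_risk", ascending=False).head(3))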

Effective Risk Mitigation

Once there is an understanding of where risk is present, and before actions are taken to mitigate it, it is necessary to first understand why the risk is present, which can be identified through the many charts within any predictive platform. Once your team has reviewed and identified where the risk is occurring, it’s important to gather the right stakeholders together to determine an action plan to mitigate it. After the right team has been assembled, go out into the field and ask questions to understand why the identified risk may be happening. Is the team on a tight deadline? Do they lack the right tools to be successful? By asking questions of the people actually conducting the work, you are proving or disproving any initial assumptions.

From there, discuss and strategize with your team to come up with the best plan to ensure the risk is actually mitigated. If it’s a tight deadline, what resources could you bring in to support the team? If it’s a lack of proper tools, how can you get the right tools to them? Once this strategy is defined, communicate the findings to the organization and set clear metrics that track progress, to ensure the plan implemented is having a positive impact.

This article originally appeared in the September 2019 issue of Occupational Health & Safety.
