
Incident prevention with Big Data in Manufacturing

In this special guest feature, Edwin Elmendorp, Information Architect at Kinsmen Group, points out that many opportunities exist for using BIG data technologies in manufacturing: while some are still in a research phase, others are usable products that offer cost-effective approaches for small and large organizations on modern platforms. Edwin lives in Houston, Texas and has close to 20 years of consulting experience in Engineering Information Management. After initially graduating as an electrical instrumentation engineer, he added a Computer Science degree and recently graduated cum laude with a Master’s in Business Process Management and IT. Aside from a solid academic background, Edwin has worked with many owner-operators to digitally transform how companies manage their information assets, spanning many different software solutions.

Process manufacturing companies are constantly scrutinized to ensure they adhere to the highest workplace safety standards. The Occupational Safety and Health Administration (OSHA) plays a key role in enforcing safety guidelines, and when companies have incidents, the fines can run to millions of dollars. OSHA has defined standards such as Process Safety Management (PSM) that help companies identify, control, and mitigate safety risks.

PSM data to predict risk

Process safety data is locked in many different sources and is a combination of structured, unstructured, static, and dynamic information. Goel et al. [1] provide the overview shown in Figure 1. Looking at this overview, it is easy to see the various attributes of BIG data, also called the 7 V’s.

Figure 1 – Types and classification of process safety data [1]
  • Volume – Process historians produce equipment and sensor data 24×7 for a very large number of sensors in a facility.
  • Variety – The type of data varies between very dynamic, static, structured as well as unstructured information.
  • Velocity – Historical data is constantly being produced from facility sensors, but equally so are work reports, drawings updates and other pieces of information.
  • Value – Data ranges from historical values to documents that reflect the current “As-Built” status of a facility.
  • Veracity – Much of the data originates from written (often handwritten) documents, and the quality is not always consistent or accurate. Information is also stored in different systems and is often not aligned.
  • Variability – Information is inconsistent, and the same information can be in different formats with a high level of dependency on human creation.
  • Valence – A lot of the data is interconnected, a work instruction will refer to equipment information that is logged by a historian, which is also referred to in an incident report.

With process data qualifying as “BIG” data, models combining these sources can be built for predictive analysis. One area where this can be used is risk assessment. The researchers in this case describe different layers of risk, all interconnected and reflecting the facility. Each layer uses different input sources, which results in a risk classification. Considering the frequency of events happening at the facility, a formula was established to determine the risk profile more accurately for a given scenario – in this case the risk of a dust explosion. The developed risk framework for Big Data is displayed in Figure 2.
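The exact formula from [1] is beyond the scope of this article, but the general idea of a frequency-adjusted risk score can be sketched as follows. This is an illustrative simplification, not the framework from the paper; the function name, inputs, and numbers are all assumptions:

```python
# Illustrative sketch of a frequency-adjusted risk score.
# Assumed inputs: a baseline likelihood for a scenario (e.g. dust explosion),
# a count of observed precursor events (e.g. near-misses) over a time window,
# and a consequence severity rating for the scenario.

def dynamic_risk(baseline_likelihood: float,
                 event_count: int,
                 window_years: float,
                 consequence_severity: float) -> float:
    """Return a risk score as frequency-adjusted likelihood x consequence."""
    # Scale the baseline likelihood by the observed event rate, so facilities
    # with more frequent precursor events get a higher risk profile.
    observed_rate = event_count / window_years
    adjusted_likelihood = min(1.0, baseline_likelihood * (1.0 + observed_rate))
    return adjusted_likelihood * consequence_severity

# Example: a scenario with 3 near-misses observed over 2 years.
score = dynamic_risk(baseline_likelihood=0.05, event_count=3,
                     window_years=2.0, consequence_severity=8.0)
print(round(score, 3))  # 0.05 * (1 + 1.5) = 0.125 likelihood, x 8 severity = 1.0
```

The point of such a formula is that the risk profile is dynamic: as new events stream in from the underlying data sources, the score updates rather than remaining a static assessment.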

Figure 2 – Big Data Dynamic Risk Framework [1]

Machine Learning to achieve greater situational awareness

Next to predictive analytics, the large data set can also be used to give operators much greater situational awareness. It is easy to envision that the many different data sources can produce conflicting information for facility operators, who in case of an emergency need to respond in split seconds. Providing facility operators with all the accurate information available in the underlying sources can be critical to preventing incidents in the facility.

Software solutions exist that analyze the underlying data sources and find hidden patterns, allowing information to be linked in ways not usually available. Utilizing complex document extraction methods, combined with pattern matching and machine learning, the software can draw on all data sources to link related information. When facility operators open a document, they have immediate visibility into the historic data, referenced equipment manuals, current and historic work orders and much more, see Figure 3.
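As a rough illustration of the pattern-matching step, the sketch below links documents to equipment by extracting tag numbers with a regular expression. The tag format, document names, and function are hypothetical and not taken from any particular product:

```python
import re
from collections import defaultdict

# Hypothetical equipment tag convention: a short letter prefix, a dash, and
# a number (e.g. "P-101" for a pump, "VM-20" for a vendor manual).
TAG_PATTERN = re.compile(r"\b[A-Z]{1,3}-\d{2,4}\b")

def link_documents(documents: dict[str, str]) -> dict[str, list[str]]:
    """Map each equipment tag found in the text to the documents that mention it."""
    links = defaultdict(list)
    for doc_id, text in documents.items():
        for tag in set(TAG_PATTERN.findall(text)):
            links[tag].append(doc_id)
    return dict(links)

docs = {
    "work_order_77": "Replace seal on pump P-101 per manual VM-20.",
    "incident_2021_04": "Overpressure alarm on P-101; see historian trend.",
}
print(sorted(link_documents(docs)["P-101"]))
```

A production system would combine extraction like this with machine learning to handle inconsistent naming and scanned documents, but the underlying idea is the same: surface every record tied to a piece of equipment the moment an operator opens a related document.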

Figure 3 – Sample software linking operational data

Many opportunities exist for using BIG data technologies in manufacturing. While some are still in a research phase, others come in the form of usable products that offer cost-effective approaches for small and large organizations.

References

[1] P. Goel, A. Datta, and M. Sam Mannan, “Application of big data analytics in process safety and risk management,” Proc. – 2017 IEEE Int. Conf. Big Data, Big Data 2017, vol. 2018-January, pp. 1143–1152, 2017, doi: 10.1109/BigData.2017.8258040.
