Industrial IoT: From Monitoring Things to Monitoring a Process

Published On: 27th September 2017

Industrial IoT is a natural extension of historical process control systems. In the past, each machine or system had its own monitoring and alerting function to signal when conditions were out of tolerance for that particular machine. Monitoring a process across multiple stages, however, often required the person responsible for process control and quality assurance to watch multiple systems with differing interfaces and alerting mechanisms. These monitoring and alerting systems were often built on customized platforms specific to that component of the process. Controlling a process literally meant walking from one system to the next to watch for warning lights, read and respond to meters in the “red zone,” and listen for alarm bells.

The ability to consolidate data from multiple components or stages of a process, however, has fundamentally changed the way systems are designed and processes are monitored and controlled. Modern monitoring and process-control applications are built on traditional IT systems, using industry-standard IT operating systems, hypervisors, and application-development tools. These applications collect, filter, clean, transform, consolidate, and analyze data from multiple systems in real time to deliver a monitoring and process-control dashboard. The applications enable a process control expert to observe and respond to alerts from multiple systems in a process, rather than walking the factory floor looking for out-of-bound conditions. These systems also maintain historical data for compliance reporting, and can extract data for higher-level analysis to build predictive-failure algorithms.
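
To make that pipeline concrete, here is a minimal sketch in Python of the consolidate-store-alert loop. The names used here (Reading, TOLERANCES, machine IDs like press-01) are illustrative assumptions, not any vendor's API: each incoming reading is appended to a compliance history and checked against a tolerance band, and out-of-bound values trigger an alert.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Callable

# Hypothetical normalized reading; field names are illustrative,
# not taken from any specific process-control product.
@dataclass
class Reading:
    machine_id: str
    metric: str          # e.g. "temperature_c", "pressure_kpa"
    value: float
    timestamp: datetime

# Tolerance bands per (machine, metric); in a real plant these would
# come from the process-control configuration, not be hard-coded.
TOLERANCES = {
    ("press-01", "temperature_c"): (10.0, 85.0),
    ("press-01", "pressure_kpa"): (90.0, 110.0),
}

history: list[Reading] = []  # retained for compliance reporting and later analysis

def process(reading: Reading, alert: Callable[[str], None]) -> None:
    """Consolidate one reading: store it, then check it against tolerances."""
    history.append(reading)
    band = TOLERANCES.get((reading.machine_id, reading.metric))
    if band is None:
        return  # metric not monitored
    low, high = band
    if not (low <= reading.value <= high):
        alert(f"{reading.machine_id} {reading.metric}={reading.value} "
              f"outside [{low}, {high}] at {reading.timestamp.isoformat()}")

# Example: one out-of-tolerance reading from a hypothetical press.
process(
    Reading("press-01", "temperature_c", 92.5, datetime.now(timezone.utc)),
    alert=print,  # a real dashboard would route this to its alerting channel
)
```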

The consolidation of data and alerts into a dashboard allows for greater efficiency, improved quality, and enhanced reporting, but it also drives the need for high availability from the supporting infrastructure. The dashboard that monitors and possibly controls all systems in a process can’t go down. Fortunately, the cost of enabling highly available IT infrastructure has dramatically decreased.

Leveraging Branch-Office IT Technology on the Factory Floor

Much of the innovation in information technology has first been adopted in the branch offices of large enterprises. Server virtualization enabled the consolidation of multiple applications onto fewer servers, reducing the IT footprint. Monitoring tools from major software companies such as VMware and Microsoft enabled centralized monitoring of remote-site IT infrastructure, reducing or eliminating the need for on-premises IT staff in remote locations. Clustering and automated failover tools from the same companies enabled non-disruptive maintenance and upgrades of both software and hardware. At the same time, software-defined storage, combined with the increased storage capacity and performance of industry-standard servers, enabled the elimination of expensive external disk arrays and storage area networks. They were replaced with internal hard disk drives, solid state drives, and flash memory, with data mirroring providing the high availability and data sharing required for automated failover.

Branch offices of large enterprises, especially in the retail sector, have been early adopters of these technologies, because each of the applications they used ran on traditional, data center-compatible infrastructure. Consolidation at the branch office was similar to the consolidation that had already occurred at the core data center. Industrial systems, however, were slower to adopt, because industrial controls applications were often built on proprietary systems with industry-specific interfaces and application programming interfaces (APIs). Even within a single component of a process, the cost of high availability was often prohibitive, as the tools to enable non-stop monitoring and control had to be customized for each system, based on its own sometimes-proprietary protocol. That is changing now, as modern process control dashboards collect, filter, cleanse, and transform data from multiple disparate systems into a common language and data format that can run on industry-standard data center systems.
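
As a rough illustration of that translation step, the sketch below normalizes payloads from two hypothetical vendors, one emitting dict-style records and one emitting CSV lines, into a single common format. The payload shapes, field names, and machine IDs are invented for illustration; real deployments would standardize on an industrial format such as OPC UA or MQTT payloads.

```python
import csv
import io
from datetime import datetime, timezone

# The common record every downstream tool consumes; field names are
# illustrative assumptions, not a published schema.
def common_record(machine_id, metric, value, ts):
    return {"machine_id": machine_id, "metric": metric,
            "value": float(value), "timestamp": ts.isoformat()}

# Adapter for a hypothetical vendor that emits dict payloads
# with epoch timestamps.
def from_vendor_a(payload: dict):
    return common_record(payload["dev"], payload["chan"], payload["val"],
                         datetime.fromtimestamp(payload["epoch"], timezone.utc))

# Adapter for a hypothetical vendor that emits CSV lines:
#   machine,metric,value,iso_timestamp
def from_vendor_b(line: str):
    machine, metric, value, ts = next(csv.reader(io.StringIO(line)))
    return common_record(machine, metric, value, datetime.fromisoformat(ts))

records = [
    from_vendor_a({"dev": "cnc-07", "chan": "spindle_rpm",
                   "val": 11850, "epoch": 1506499200}),
    from_vendor_b("oven-02,temperature_c,241.3,2017-09-27T08:00:00+00:00"),
]
print(records)  # both sources now share one format for the dashboard
```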

The Need for On-Premises Monitoring

It is possible to monitor and control the processes running at remote locations from a central site using this newer approach. However, the reality for geographically distributed organizations, especially those operating in low-labor-cost, non-urban locations or in developing countries, where much of our manufacturing takes place, is that bandwidth reliability, performance, and affordability are inadequate to support real-time decision making from a centralized location. These systems need to run on-premises in the factories, and they should run on highly available infrastructure. When equipment is operating outside of quality or safety tolerances, operators need to take action immediately.

Even if the application is only a monitoring dashboard and not a control system, the infrastructure should be highly available. Without monitoring and alerting, the process may continue to run, but the supplier may fail to detect or prevent a product defect that later results in a costly recall, damaging its customer relations and reputation. And without continuous monitoring, the evidence needed to defend against a liability claim may be lost.

Integrating the Entire Supply Chain

It is neither possible nor desirable to transfer all data collected at remote locations to a central location. Exception data, reports, and summary data, however, are critical to leading companies, as they monitor their corporate-wide product quality and their supply chain. This data should be transferred to a central data analytics site, whether in the company’s own data center or into a cloud service. This offers the ability to perform corporate-wide analytics to support sales, strategic sourcing, finance and operations, as well as improve algorithms for on-premises predictive failure analysis.
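
As a rough sketch of that exception-and-summary transfer, the snippet below reduces a local batch of readings to a compact summary plus any out-of-tolerance values before upload. The tolerance band and readings are invented, and the upload step is left as a comment, since the transport to a corporate data center or cloud service is deployment-specific.

```python
import statistics

# Hypothetical hourly batch of local readings for one metric.
readings = [241.3, 242.0, 240.8, 251.9, 241.1, 240.9]  # 251.9 is out of band
LOW, HIGH = 235.0, 250.0  # illustrative tolerance band for this metric

# Keep full detail on-site; ship only exceptions plus a compact summary.
exceptions = [v for v in readings if not (LOW <= v <= HIGH)]
summary = {
    "count": len(readings),
    "mean": round(statistics.mean(readings), 2),
    "stdev": round(statistics.stdev(readings), 2),
    "min": min(readings),
    "max": max(readings),
    "exceptions": exceptions,
}

# upload(summary) would post this to the central analytics site or cloud
# service; the transport (HTTPS, MQTT, etc.) is deployment-specific.
print(summary)
```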

Some companies are going so far as to incorporate their own suppliers into their corporate-wide quality dashboards to evaluate supplier performance and quality. And if there is a product quality issue, they want to know well before supplies reach their dock.

Recommendations

It’s critical for industrial organizations to understand uptime and response-time requirements and the different ways in which data can be collected, analyzed, and used. Technology and budget constraints have a major impact on IT architecture decisions, but it is clear that no single architectural approach meets all industrial IoT requirements. IT and application architects should plan for and implement:

  • High-availability systems in each remote location to support distributed, real-time decision making and corrective actions to ensure quality and safety
  • Centralized systems to support corporate-wide quality initiatives, compliance reporting, supply-chain management, and improved analytics
  • Centralized systems to incorporate their suppliers into a corporate-wide supply chain dashboard to avoid product delays and product recalls

…now what’s next?

Read more on why SvSAN is a perfect fit for organizations with IoT applications
