A NEW FRONTIER FOR ANALYTICS IN INDUSTRIAL-IOT
Optimizing For Outcomes In Complex Process Manufacturing Environments | A Multi-Dimensional Algorithmic Challenge
To home in on opportunity areas for AI/ML-supported algorithmic optimization in a process plant environment, it is always helpful to go back to first principles and ask a few rule-of-thumb questions. Beginning a new optimization project with a process scope that is not only manageable, but also lends itself to large potential payoffs through better prediction and optimization outcomes, allows data scientists and engineers to demonstrate early wins and better showcase project success.
• Which product lines have high quality-rejection rates?
• What refining outputs face the most frequent pressure to maintain target production levels?
• What is the economic impact of discarding reject batches?
• How much of new economic value is created by an incremental quality improvement over a three year period?
• What data pools exist in the operations side of the facility? – sensor data, lab data, SCADA/PLC data, maintenance ticket data, operator data.
• Which OT/IT systems hold this event data?
• Is there an identified executive champion to drive the project?
• How can AI algorithmic outputs be operationalised quickly?
We examine analytics innovations from three companies, drawing on their field experiences solving domain-specific problems with IIoT analytics. Oil & gas midstream refining and chemical process/batch manufacturing often pose the hardest multivariate process-optimization problems. Two are early-stage technology companies with a track record of combining domain-specific models with AI to solve client process issues; the third is a business unit of a large global conglomerate. The three companies exemplify the use of deep domain expertise with operating experience in upstream and midstream oil & gas and in downstream aromatics and chemicals processes. One is based in Europe, another in Asia, and the third is headquartered in the United States.
MANAGING & OPTIMIZING PROCESS PARAMETERS DYNAMICALLY
Dynamic control as the process unfolds has unique challenges – re-calibrating temperature or tweaking flow rates requires frequent feedback signals in the form of process-conditions data or lab checks of output quality. While standard operating procedures are well established and codified within the chemical manufacturing industry, they are generally encoded assuming static operating conditions. Quite often they lack explicit procedures for handling dynamically changing conditions within actual manufacturing processes. Dynamic changes often introduce unknown variables that the physics models used for defining plant process conditions were not designed to include as part of the built-in process control action.
For example, the mixer's vessels may carry residuals from a prior production run; the ambient air used for process calculations may contain moisture that is not accounted for; fine airborne suspended solids in the production area may play a larger-than-anticipated role in influencing product quality, something not covered in the calculation assumptions. For a manufacturer these variables create productivity blind spots:
• What influence does each of these factors have on product quality outcomes? (How do we distinguish factors that are noise from factors that are signal?)
• How do we predict expected quality outcome based on current conditions – algorithmically and with reasonable accuracy?
• Can we model incorporating each influencer variable? (Some variables may have disproportionately more influence on quality outcomes than others.)
• What prescriptive actions can be recommended for operators at the production floor to minimize wasteful outcomes?
AI plays a role here in processing multiple signals to predict the quality of current production, and in identifying correlations between the various input parameters and quality outcomes (with lab-quality signals, sensor anomalies, process signals and ambient-condition data as inputs).
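To make this concrete, here is a minimal sketch in Python of the first question above: separating signal factors from noise factors by ranking input channels by their correlation with a quality outcome. The channel names, synthetic data and coefficients are all invented for illustration; a real deployment would use the plant's historian data and a far richer model.

```python
import math
import random

random.seed(7)

# Hypothetical signal channels; names are illustrative, not from any real plant.
CHANNELS = ["reactor_temp", "flow_rate", "ambient_humidity", "paddle_rpm"]

def make_batch():
    """Simulate one batch: quality here depends mostly on reactor_temp and
    paddle_rpm; flow_rate and ambient_humidity are (near-)pure noise."""
    x = {c: random.gauss(0, 1) for c in CHANNELS}
    q = (2.0 * x["reactor_temp"] - 1.5 * x["paddle_rpm"]
         + 0.05 * x["ambient_humidity"] + random.gauss(0, 0.3))
    return x, q

batches = [make_batch() for _ in range(500)]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = math.sqrt(sum((a - mx) ** 2 for a in xs))
    sy = math.sqrt(sum((b - my) ** 2 for b in ys))
    return cov / (sx * sy)

quality = [q for _, q in batches]
scores = {c: abs(pearson([x[c] for x, _ in batches], quality)) for c in CHANNELS}
ranking = sorted(scores, key=scores.get, reverse=True)  # signal first, noise last
```

The same ranking, computed on real sensor and lab data, is the starting point for deciding which variables deserve a place in a quality-prediction model.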
Abstracting Hundreds Of Sensor Signal-Streams Into An Algorithm That Outputs A Few Discrete Virtual “Process-Health Indicators”. This Is The First Milestone In An IIoT-Analytics Journey.
Another useful optimization concept to keep in mind here is based on control theory.
Traditional feedback-based control corrects for past errors and works mostly linearly on a single variable. Model-based control, on the other hand, corrects for future errors and works on multiple variables in a non-linear fashion; its primary design focus is optimization. Model-based control also allows for improvements in existing control and optimization processes through better control within each unit, better coordination of production units, and improved control over inventory and final product quality.
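A toy numerical sketch of the distinction: a proportional feedback controller reacts to the error it has already observed, while a controller that inverts a (perfectly known) process model cancels the next step's error directly. The process, gains and targets below are invented for illustration.

```python
# Toy first-order process: one heater input, one temperature state.
# All constants are illustrative.
K, LOSS, TARGET = 0.5, 2.0, 75.0

def step(temp, heater):
    """One time-step of the (perfectly known) process model."""
    return temp + K * heater - LOSS

def feedback_control(temp):
    """Classic feedback: react to the error already observed (proportional only)."""
    return 0.8 * (TARGET - temp)

def model_based_control(temp):
    """Model-based: invert the process model to cancel the next step's error."""
    return (TARGET - temp + LOSS) / K

temp_fb = temp_mb = 20.0
for _ in range(20):
    temp_fb = step(temp_fb, feedback_control(temp_fb))
    temp_mb = step(temp_mb, model_based_control(temp_mb))
# Pure proportional feedback settles below target (the classic steady-state
# offset); the model-based controller lands on target in a single step.
```

The steady-state offset of the feedback loop is exactly the kind of gap that model-based control, armed with a process model, is designed to close.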
Finally, the complexity of dynamic optimization varies with the scope of the process supported.
• Optimizing a single-unit process is easier because of the smaller scope (10-20 independent variables, 50,000 equations) and has been the traditional focus of most optimization applications. However, while smaller and manageable in scope, it may not achieve the full targeted benefits if upstream unit optimization results in downstream constraint violations.
• Multi-unit optimization is more difficult to solve because of its larger scope (50-100 independent variables, 200,000+ equations). However, it holds the promise of greater optimization benefits because unit interactions are considered, resulting in a global optimum. This kind of optimization, based on wider-scope development, is the current focus today. (More details below.)
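The single-unit vs. multi-unit trade-off can be sketched in a few lines. In this invented two-unit example, optimizing the upstream unit in isolation pushes the feed rate to its maximum and wastes product against the downstream capacity constraint, while the plant-wide objective finds the true optimum.

```python
# Toy two-unit plant. The upstream unit makes feed at a chosen rate; the
# downstream unit can process at most CAP units/hr, and any excess feed
# is flared as pure loss. All numbers are illustrative.
CAP, MARGIN, FEED_COST = 6.0, 10.0, 3.0

def upstream_only_profit(feed_rate):
    """Single-unit view: values every unit of feed, ignoring downstream."""
    return feed_rate * MARGIN - feed_rate * FEED_COST

def plant_profit(feed_rate):
    """Multi-unit view: only feed the downstream unit can absorb earns margin."""
    processed = min(feed_rate, CAP)
    return processed * MARGIN - feed_rate * FEED_COST

rates = [i / 10 for i in range(101)]               # candidate rates 0.0 .. 10.0
local_opt = max(rates, key=upstream_only_profit)   # pushes rate to the maximum
global_opt = max(rates, key=plant_profit)          # respects the downstream cap
```

The local optimum here is strictly worse for the plant as a whole, which is exactly why wider-scope, multi-unit formulations are the current focus despite their size.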
OPTIMIZATION USING MACHINE-LEARNING
Machine learning technology absorbs sensor and maintenance data over long periods of time and then identifies patterns in that data – patterns that would typically remain elusive to an operator. A major benefit of machine learning is that it can learn under many different types of conditions (for instance, seasonal variation, different operating conditions, varying duty cycles, different inputs) based on the real-world behavior of the equipment. It learns failure signatures from a machine, as opposed to modeling machine environments. Those learned signatures can then transfer from one machine to another, to help that machine avoid the conditions that caused failure in the original machine.
Optimization is a highly complex task in which a large number of controllable parameters affect production in one way or another. Somewhere in the order of 100 different control parameters must be adjusted to find the best combination of all the variables. A machine learning-based prediction model provides a “production-rate landscape” whose peaks and valleys represent high and low production. The multi-dimensional optimization algorithm then moves around in this landscape looking for the highest peak, representing the highest possible production rate.
By moving through this “production rate landscape”, the algorithm can give recommendations on how to best reach this peak, i.e. which control variables to adjust and how much to adjust them. Such a machine learning-based production optimization thus consists of three main components:
1. Prediction algorithm: Your first, important step is to ensure you have a machine-learning algorithm that is able to successfully predict the correct production rates given the settings of all operator-controllable variables.
2. Multi-dimensional optimization: You can use the prediction algorithm as the foundation of an optimization algorithm that explores which control variables to adjust in order to maximize production.
3. Actionable output: As output from the optimization algorithm, you get recommendations on which control variables to adjust and the potential improvement in production rate from these adjustments.
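A minimal sketch of the three components working together, with a hand-written function standing in for the trained prediction algorithm (step 1), a naive hill climb as the multi-dimensional optimizer (step 2), and per-variable adjustment deltas as the actionable output (step 3). The variable names and the landscape are invented for illustration.

```python
import itertools

def predicted_rate(settings):
    """Stand-in for a trained prediction model: a smooth 'production-rate
    landscape' with a single peak. The control names and the peak location
    (choke=4.2, gaslift=7.5) are purely illustrative."""
    c, g = settings["choke"], settings["gaslift"]
    return 100.0 - (c - 4.2) ** 2 - 0.5 * (g - 7.5) ** 2

def optimize(settings, step=0.1, iters=500):
    """Multi-dimensional hill climb over the landscape: repeatedly take the
    best small move in any single control variable."""
    current = dict(settings)
    for _ in range(iters):
        best, best_rate = current, predicted_rate(current)
        for var, delta in itertools.product(current, (+step, -step)):
            cand = dict(current)
            cand[var] += delta
            if predicted_rate(cand) > best_rate:
                best, best_rate = cand, predicted_rate(cand)
        if best is current:   # no move improves the prediction: at a peak
            break
        current = best
    return current

start = {"choke": 2.0, "gaslift": 5.0}
recommended = optimize(start)
# Actionable output: how much to adjust each control variable.
advice = {v: round(recommended[v] - start[v], 2) for v in start}
```

Real systems replace the hill climb with more capable optimizers, but the pipeline shape (predict, search, recommend) is the same.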
One of the companies featured in this report shares some interesting experiences building such prediction and optimization models using machine-learning algorithms in the upstream oil & gas space.
COMBINING PHYSICS-BASED MODELS WITH MACHINE-LEARNING & AI MODELS
With sufficient information about the current situation, a well-made physics-based model enables us to understand complex processes and predict future events. Such models have already been applied all across our modern society for vastly different processes, such as predicting the orbits of massive space rockets or the behavior of nano-sized objects which are at the heart of modern electronics.
A common key question is how you choose between a physics-based model and a data-driven ML model. The answer depends on what problem you are trying to solve. In this setting, there are two main classes of problems:
1) No Direct Theoretical Knowledge Available About The System. A Lot Of Experimental Data Is Available About System Behavior. In the absence of direct knowledge about the behavior of a system, one cannot formulate mathematical (physics, thermodynamics, mass-balance) models to describe it and make accurate predictions. However, with a lot of example outcomes, one can use a machine learning-based model. Given enough example outcomes (the training data), the model should be able to learn any underlying pattern between the information available about the system (the input variables) and the outcomes it is expected to predict (the output variables).
2) Good Understanding Of The Physical System To Describe Its Dynamics Mathematically. If a problem can be well described using a physics-based (thermodynamics, mass balance, fluid dynamics, structural dynamics) model, this approach is often the better solution.
However, this approach has some limitations, especially in dynamic real-time situations. One of the key aspects is the computational cost of the model. While a physics-based model describes the system in detail, solving it can be complicated and time-consuming. A physics-based approach, therefore, might break down if we aim for a model that can make real-time predictions on live data. In this case, a simpler ML-based model could be an option. Given enough examples of how a physical system behaves, the machine learning model can learn this physical behavior and make accurate predictions. The computational complexity of an ML model is mainly seen in the training phase. Once the model has finished training, making predictions on new live data is straightforward.
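The simulator-plus-surrogate idea can be illustrated with a toy example: a (stand-in) physics model is sampled offline to build a synthetic training set, and a cheap data-driven surrogate answers real-time queries, refusing to predict outside the range its training data covered. Everything here, including the quadratic "physics", is invented for illustration.

```python
import bisect

def physics_model(flow):
    """Stand-in for an expensive first-principles simulator: pressure drop
    as a quadratic function of flow rate (coefficients are illustrative).
    Imagine each call taking minutes of compute."""
    return 0.8 * flow ** 2 + 1.2 * flow

# Offline phase: run the simulator across the expected operating range
# to generate a synthetic training set.
flows = [i * 0.5 for i in range(21)]          # 0.0 .. 10.0
table = [(f, physics_model(f)) for f in flows]

def surrogate(flow):
    """Cheap data-driven surrogate: interpolate between simulator runs.
    Deliberately refuses to extrapolate -- the 'validity range' caveat."""
    if not flows[0] <= flow <= flows[-1]:
        raise ValueError("outside the range covered by the training data")
    i = bisect.bisect_left(flows, flow)
    if flows[i] == flow:
        return table[i][1]
    (f0, y0), (f1, y1) = table[i - 1], table[i]
    return y0 + (y1 - y0) * (flow - f0) / (f1 - f0)
```

The `ValueError` guard encodes, in miniature, the warning below about making sure the training data covers all relevant operating conditions.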
Summarizing his model-building experiences with a range of client industries, Vegard Flovik, Principal Data Scientist in the AI Center of Excellence at Kongsberg Digital, also shared an insight that underlines our broader observations within our research universe as well – “The combination of physics-based models and ML is something we are exploring also in other scenarios where we have access to simulator models. The main bottleneck in most ML projects is the lack of access to high-quality training data. Being able to complement limited, or even non-existent, datasets with synthetic data generated using simulator models is extremely valuable. Of course, as the number of variables increases, the training procedure becomes increasingly complex. One needs to make sure that the training data covers all relevant operating conditions one might encounter, to avoid making predictions outside the validity range of the model. This is also a phase where having domain knowledge about the process one tries to model is crucial.”
KONGSBERG DIGITAL’S KOGNIFAI PLATFORM & LEDAFLOW SIMULATION TOOL
Kongsberg Digital (based in Oslo, Norway) provides next generation software and digital solutions to customers within upstream oil & gas, as well as renewables & utilities industries. Its LedaFlow tool offers dynamic process and advanced transient multiphase flow simulations.
Kongsberg shared a great example of a case combining physics-based simulators and machine learning: its virtual flowmeter (VFM) for monitoring and predicting multiphase well flow rates, gas-oil ratios and water content.
Virtual flowmeters work by calculating flow based on existing instrumentation, knowledge of a facility and fluid properties, made possible by correlations that relate the flow rate to the pressure and temperature drop through the system. Typically, such flows are measured with expensive multiphase flow meters (MPFMs), which can account for a significant portion of a facility’s capital expense. Using a VFM turns this into an analytics service that helps clients gain real-time data to understand the constituent properties of any given stream of produced fluids. Another advantage of a VFM is that the prediction solution is relatively insensitive to the loss of one or two physical measurements across a multi-well field.
The Kongsberg-designed VFM is capable of modelling flows of individual phases of various intermingled fluids in a single stream. In order to train the algorithm, the team utilized data from three sources:
– Available field data on temperatures, pressures and choke-settings (typically elemental computing parameters for subsea multiphase meters, such as total mass flow rate, gas volume fraction, and water-liquid ratio).
– Data generated by its own LedaFlow simulation tool. Access to simulated data was a valuable ingredient for the training. It overcomes one of the biggest constraints in machine learning – the availability of high-quality training sets.
– Additional engineered features that described some basic physics of the system (for example, the density of the fluid mixture is a function of pressure differentials between the drill-head and the upstream inlet of the choke; heat capacity can be described by the relevant temperature differentials; etc.)
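As an illustration of that third ingredient, a feature-engineering step in this spirit might look like the sketch below. The tag names, units and relations are assumptions for illustration only, not Kongsberg's actual feature set.

```python
def engineer_features(sample):
    """Physics-inspired feature engineering in the spirit described above.
    Tag names, units and the specific relations are illustrative
    assumptions, not Kongsberg's actual features."""
    dp_choke = sample["p_upstream"] - sample["p_downstream"]  # choke pressure drop
    dt = sample["t_wellhead"] - sample["t_choke"]             # heat-capacity proxy
    return {
        "dp_choke": dp_choke,
        "dt": dt,
        # Orifice-type flow correlations scale roughly with sqrt(dp).
        "sqrt_dp": dp_choke ** 0.5,
        "choke_open": sample["choke_pct"] / 100.0,
    }

sample = {"p_upstream": 180.0, "p_downstream": 120.0,
          "t_wellhead": 85.0, "t_choke": 60.0, "choke_pct": 45.0}
feats = engineer_features(sample)
```

Feeding such physics-derived features to an ML model is one practical way of injecting domain knowledge into an otherwise purely data-driven approach.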
Kongsberg’s Kognifai platform is a digital ecosystem (a complex, interconnected network of organizations, applications, and assets), supporting collaboration and knowledge-sharing between member organizations, and interactions for new market reach and business value. The ecosystem enables the building of data-driven prototypes, apps, solutions, microservices, and APIs to drive industry transformation.
The Kognifai digital platform architecture builds on horizontal IaaS, PaaS, and SaaS services from major cloud vendors, including Microsoft Azure. On top of the broader cloud platform, Kognifai includes a set of platform capabilities and common building blocks that cover various industry scenarios, including patterns and best practices for valuable cross-pollination across industries. Targeted at developers to help build industry solutions quickly, the platform takes care of cyber security, identity, encryption, and data governance.
- K-Spice® UrbanEnergy (Utilities Industry App): Real-time decision support system for large gas, steam and hot-water distribution networks. Increases uptime, safety and reliability of the system.
- Esmart Connected Grid (Utilities Industry App): One system to operate all of your Smart Metering & Grid Infrastructure.
- VesselMan (Maritime & Offshore): Effective dry-docking and repair management on top of your existing solutions.
- K-Fleet Maintenance (Maritime & Offshore): Solution for managing planned maintenance, including digital report forms.
- K-Spice® Analyze: Field Performance Monitoring System (Oil & Gas Industry App): Validating measurements and process equipment against historical performance.
- K-Spice® Grid (Utilities Industry App): Dynamic simulation of an electrical grid, from producer to consumer. Scalable from small standalone network to large countrywide power grids.
- K-Spice® Assure (Oil & Gas Industry App): Production Assurances; real-time decision support system utilizing available sensors to gain a full picture of production along with predictive calculations.
- K-Spice® Match (Oil & Gas Industry App): Connect process models to real plant data to compare and replicate plant behavior. Initialize and validate the model with process data from the real plant.
- K-Spice® FieldTwin (Oil & Gas Industry App): Digital Twin for the entire field. Multiphase flow model integrated with dynamic process model using common thermodynamic and advanced control functions.
- K-Spice® Design (Oil & Gas Industry App): Dynamic process simulation software that enables the user to carry out process, control and safety analysis for the whole plant with realistic dynamic behavior.
- LedaFlow® PointModel (Oil & Gas Production App): Point calculation engine – the basis for multiphase flow simulation within wells and pipe networks, and a plug-in within many standard steady-state software packages.
- IDS DrillNet (Drilling & Wells App): Capturing, tracking, analysing and reporting on all drilling and completions activities.
- SiteCom Discovery (Drilling & Wells App): Enabling user-defined calculations and operations on real-time data in Discovery.
- NSG E2E (Cross-Industry App): To enable total supply chain visibility and control through a single solution enabling collaboration and information in client’s logistics operation.
FLUTURA’S CEREBRA AI PLATFORM APPS
Flutura (based in Bangalore) has built a reputation for combining its knowledge of machine learning and AI, with its specialized domain & operating knowledge of oil & gas and chemicals processes.
The company shared its experiences with ArcInsight Partners at its Bangalore HQ. Flutura began work with Henkel Chemicals to build mathematical models that could help transform the way quality is managed in making adhesives, using a strategy of optimizing production by minimizing wasted off-spec batches.
Henkel’s Dragon Plant, located at the Shanghai Chemical Industry Park (SCIP), is the largest adhesives factory in the world. With a targeted annual output of approximately 428,000t of adhesives, it will employ approximately 600 people and will serve more than 15 different industries comprising approximately 2,000 customers, mostly from the automotive industry and various consumer-goods sectors.
Seven production lines in the hotmelt workshop make leading brands of adhesives such as Loctite, Teroson and Technomelt, alongside a production line for pressure-sensitive adhesives (PSA). Henkel’s industrial product portfolio is organized into five technology cluster brands – Loctite, Bonderite, Technomelt, Teroson and Aquence. For consumer and professional markets, Henkel focuses on the four global brand platforms Pritt, Loctite, Ceresit and Pattex.
LOCTITE surfacing films are used for lightning-strike protection. Surfacing films are designed to improve the surface quality of honeycomb-stiffened composite parts. They also provide a barrier between dissimilar materials, decrease surface-preparation time and protect structural fibers. In addition, laminated films composed of a surfacing film and a conductive metal foil protect the composite structure from damage caused by lightning strikes.
Clients like Bombardier use a metal bonding glue for its aircraft wings.
Each adhesive in the portfolio of hundreds that Henkel makes has its own mix of raw materials, and each material can be affected by changes in parameters like temperature and pressure.
The making of pressure-sensitive adhesives (PSA) is one of the most typical processes as far as recipes are concerned.
A typical set of process steps is illustrated below.
a. Addition of water and emulsifier to the reactor.
b. Heating the reactor to a temperature around 75°C.
c. Preparation of a pre-emulsion of monomers with emulsifier in a separate overhead tank. (The monomers mainly consist of butyl acrylate and methacrylic acid; the emulsifiers are generally a combination of ionic and non-ionic.)
d. Preparation of a potassium persulfate (KPS) solution in water.
e. Seeding of the pre-emulsion into the reactor tank.
f. After seeding, continuous addition of the KPS solution and the monomer pre-emulsion simultaneously for 3 hours at 82–85°C.
g. Adding chasers after completion of the monomer pre-emulsion addition, for complete conversion of the monomers.
h. Checking the total solids content of the emulsion.
Each step has multiple known variables associated with it from a process-condition-monitoring standpoint. All in all, Flutura estimated an initial modeling space of over 20,000 variables from a machine-learning standpoint.
1. Permutations of operating conditions: Temperatures, pressures, flow-rates, viscosity, mixing paddle rotation speeds, mixing times, among others. A variation here has measurable impact on the quality of adhesive output, and could make the difference between a high-quality batch and a rejected batch. Rejects have material cost implications for Henkel.
2. Permutations of raw materials: Input raw materials often vary from one supplier to the other, between batches produced by the same supplier, time of year produced, and several others.
3. “Unknown unknowns”: Variables not accounted for in typical process design calculations, but are often observed after the fact, or sometimes merely through serendipity.
Human errors, rework and quality testing account for much of the cost of making adhesives. A larger just-in-case inventory also has to be maintained in case batches get rejected. The total cost of wasted glue at Henkel ran to $300m. Flutura leveraged the predictive analytics capabilities of its Cerebra platform to address Henkel’s quality issue. The team defined the problem space as increasing the quality and yield of factory lines by controlling two levers – viscosity and softening points.
Flutura’s Cerebra SignalStudio™, an AI platform for IIoT, integrates and analyzes machine data rapidly to provide advanced diagnostics and prognostics solutions. The IoT signal-intelligence platform ingested three years of historical sensor data on plant operations from temperature, RPM, torque and pressure sensors strapped onto industrial mixers. Cerebra’s ensemble models were then used to filter signal from noise and specifically identify feature contributors to quality. These features were then used as the input vector for predicting final adhesive output quality.
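One simple flavor of what an ensemble approach can look like (bagging a weak, data-derived rule and majority-voting its members) is sketched below on invented viscosity data; Cerebra's actual models are of course far richer.

```python
import random

random.seed(5)

# Toy labelled history: a batch passes QC when viscosity is in spec.
# The 40-60 spec band and the Gaussian(50, 10) process are illustrative.
def batch():
    visc = random.gauss(50, 10)
    return visc, 40.0 <= visc <= 60.0   # (viscosity, passed_qc)

history = [batch() for _ in range(300)]

def fit_rule(sample):
    """One weak model learned from a bootstrap resample: accept the
    viscosity band spanned by the resample's passing batches."""
    passed = [v for v, ok in sample if ok]
    lo, hi = min(passed), max(passed)
    return lambda v: lo <= v <= hi

# Bagged ensemble: many resamples, one weak rule each, majority vote.
rules = [fit_rule(random.choices(history, k=len(history))) for _ in range(25)]

def predict_pass(visc):
    votes = sum(rule(visc) for rule in rules)
    return votes > len(rules) / 2
```

Averaging many weak, noisy rules in this way is what makes ensembles robust to the spurious patterns any single model would pick up.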
The engagement focused on three key value-delivery areas – benchmarking processes and quality outcomes (“Quality Scoring”), understanding causality and correlation (“Diagnostics”), and predicting the quality of the end product in real time (“Prognostics”). Previously, it was impossible to attribute causal factors for sub-optimal batches due to the multitude and complex nature of the data points available. Root-cause analysis was manual, highly time-consuming and often impossible to conduct due to the sheer complexity of the process. Process effectiveness and continuous improvement in standard operating procedures were subjective and reactive in nature.
With better operational health awareness from the plant’s sensor data, Cerebra’s algorithms constantly learn the ever-changing complex interactions and inter-relationships between humans, machines, environment, processes, and outcomes throughout the manufacturing process, identifying issues that may impact final batch quality early enough to act. Quality teams leverage this advance insight to intervene and make corrections, minimizing rejected batches of adhesive products – an improvement that has already saved Henkel more than $140 million over three years by reducing defective product batches.
Discovering A New Control Variable: In one instance, the system discovered deviations in the composition of the end product while a particular industrial mixer was in use, a problem that had gone unnoticed for decades. Paddle rotation and mixing speeds are crucial parameters (features) in Henkel’s quality model. The team discovered paddle-rotation wobbles to be a cause of end-product inconsistency. Drilling deeper revealed that the type of blade used in the industrial mixer was leading to deviations in the viscosity of the product. The system provided sufficient input to adjust the machine-resource-allocation process, adding a new model constraint relating to the “product-machine mix”.
Scale-up & Enterprise Impact: Cerebra’s ensemble models are now used to filter signal from noise and specifically identify the contributors to quality. The prediction model has been operationalised and scaled up significantly from its initial pilot, extended to 33 plants, 1,400 manufacturing lines and 16 event types across Henkel’s global operations. Looking ahead, Henkel plans to focus on getting truly prescriptive – “passing information back in real time to the operator, recommending the choice of one decision-step instead of another,” as Sreekumar, Flutura’s Co-founder & Chief Customer Officer, puts it. Cerebra now helps monitor compliance with standard operating procedures and their effectiveness, and helps question the status quo. It has helped narrow quality thresholds established more than a decade ago by helping create specific product BOMs, and finally led to changes in new-product development formulation. Plans and blueprinting are underway to leverage Cerebra in multiple further areas, the next being supply-chain optimization.
The Cerebra platform now ingests and analyzes a cumulative stream of over 20 million sensor events.
Flutura’s Success At Monetizing Its AI App Store
Flutura now offers a host of domain specific apps on the platform distilled from its collective client model-deployment experiences. A significant milestone for Flutura is its successful strategy of apps monetization – an achievement only a handful among a large population of IIoT platforms have managed to pull off so far.
The company’s monetization strategy uses multi-pronged approaches.
- State Assessment as a Service: Processing billions of events for operationally critical fleet assets (such as fracking pumps, additive units, engines, etc.) whose downtime could result in massive maintenance costs and opportunity costs for an OEM. Flutura offers a “price-per-asset” condition monitoring model, assuring itself of a revenue scale up as the OEM’s own topline expands.
- Diagnostics as a Service: Diagnosing problems with a customer’s critical operating assets (tanks, compressors, etc.) where a missed asset-health deterioration, deviation from standard operating conditions or fluctuation in process variables could lead to catastrophic situations. An automated asset diagnostics service provides a new revenue stream for mitigating asset safety risks on behalf of a customer.
- Prognostics as a Service: This service proactively recommends which of the client’s assets need recalibration in order to improve performance, based on the age of the asset, operating history, current operating conditions, and the performance of other similar assets.
HONEYWELL’S CONNECTED PLANT
Tucked inside Honeywell’s diverse business portfolio is a hidden gem – UOP. The company, with deep experience in process technology, is an integral part of Honeywell’s connected-plant service capabilities. Acquired outright by Honeywell in 2016, the 104-year-old UOP contributes nearly 15% of Honeywell’s overall revenues and is poised to be a key part of the company’s process automation & IIoT strategy.
60% of the world’s supply of gasoline, diesel and jet fuels; roughly 70% of the world’s paraxylene (used to make plastic products); 90% of the world’s biodegradable detergents; and about 40% of natural gas (which provides much of the world’s heating and electrical energy) are produced in plants whose processes were designed by UOP. So do the outputs of a majority of the 700-odd refineries around the world. Impressive penetration.
UOP’s enormous body of industrial expertise, software and digital capabilities, and long legacy in the chemicals, fluorines and oil & gas industries hold promise to make Honeywell’s IIoT strategy underline its value message: making its customers’ operations more reliable, more profitable, and more secure. The message is built on a marriage between three distinct expertise areas and a market opportunity.
a. UOP’s Plant & Process Design services expertise – a deep knowledge of physics, thermodynamics & mass-balance models.
b. Process Solutions – Honeywell’s longstanding process-automation hardware and industrial-control technology business (HPS). Its understanding of the interactions between equipment, processes and controls helps create predictive models that alert customers to issues, potentially saving them millions of dollars in downtime and underperformance.
c. Strategic technology investments by Honeywell’s venture arm – machine-learning & predictive models building on data sourced from Honeywell’s process automation hardware and industrial control technology components (sensors, PLCs, DCS systems)
Uniformance® Cloud Historian, launched recently by HPS, is a software-as-a-service cloud-hosted solution that fuses the real-time process-data analysis of a traditional enterprise historian with a data lake, enabling the integration of production, enterprise resource planning (ERP), and other business data. The tool runs data analytics and connects information from third-party sources into one location.
New Digital Capabilities – Computing at the edge & massive enterprise data aggregation: Not core to Honeywell’s traditional areas of strength, these new areas, acquired via strategic investments in a few early-stage ventures, form the heart of HCP’s IIoT analytics capabilities. It remains to be seen how HCP leverages them to strengthen its core.
Honeywell’s edge-computing capabilities come from its venture investment in FogHorn, which gave it access to analytics and machine-learning-driven capabilities at the edge, used to create predictive maintenance, asset performance management and process optimization solutions.
Honeywell Ventures’ $100M fund (launched in 2017) also invested in Element Analytics for its ability to process, manage and integrate large volumes of data in complex customer environments such as the oil-refining and chemicals process industries, accelerating the ability to deliver data analytics and thereby create dynamic digital representations of physical equipment and assets. This allows UOP to offer “digital twins” as part of its CPS solutions portfolio.
Connected Plant & Software : Four key components comprise Honeywell’s connected plant Industrial IoT push.
1. Cloud Historian (Uniformance Suite), for enterprise data aggregation
The Cloud Historian is hosted on the Microsoft® Azure® cloud platform, with the Enterprise Historian based on a cloud-native time-series database and the data lake based on Hadoop®, allowing users’ data scientists to use their preferred tools. Customers can use the data store to correlate other types of data and analyze them against process data across the enterprise. Cloud Historian uses unique smart cloud-connector software that enables users to connect to various data sources, including any Honeywell Uniformance PHD or OPC-compatible historian, and configure the data for transfer to the cloud.
It collects, stores and enables replay of historical and continuous plant and production-site process data and makes it visible in the cloud in near real time. The historian combines a time-series data store, which empowers plant and enterprise staff to execute and make decisions, with a big-data lake, which enables data scientists to uncover previously unknown correlations between process data and other business data in the enterprise. Uniformance Cloud Historian is built on the Honeywell Sentience Internet of Things Platform.
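The core query such a historian-plus-data-lake combination serves (correlating sparse business or lab records against dense process data) can be sketched as a simple as-of join; the timestamps and tags below are invented.

```python
from datetime import datetime, timedelta

# Hypothetical data: a minute-by-minute historian stream and two sparse
# lab/business records to correlate against it.
t0 = datetime(2024, 1, 1)
process = [(t0 + timedelta(minutes=m), 70.0 + 0.1 * m) for m in range(120)]
lab = [(t0 + timedelta(minutes=45), "pass"),
       (t0 + timedelta(minutes=100), "fail")]

def asof_join(lab_rows, process_rows):
    """Attach to each lab result the latest process reading at or before
    its timestamp -- the basic question an enterprise historian answers."""
    joined = []
    for ts, result in lab_rows:
        prior = [p for p in process_rows if p[0] <= ts]
        joined.append((ts, result, prior[-1][1] if prior else None))
    return joined

rows = asof_join(lab, process)
```

At enterprise scale the same join runs inside the historian or data lake rather than in a Python loop, but the shape of the question is identical.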
2. Uniformance® Asset Sentinel, for condition-based monitoring
This is Honeywell’s take on condition-based monitoring. The platform continuously monitors equipment and process health, enabling industrial facilities to predict and prevent asset failures and poor operational performance.
Asset Sentinel covers monitoring of process performance and equipment health to minimize unplanned losses and maximize uptime, as well as continuous assessment of the health and performance of smart instruments. Together, these capabilities help users minimize unplanned downtime and maximize investments in smart instrumentation. Several features built into Asset Sentinel support condition-based monitoring capabilities.
The Plant Reference Model provides a multi-level equipment hierarchy to model users’ plant and equipment structure. A variety of data interfaces provide access to a complete view of asset health and performance. An asset library houses a repository of physical models covering all key assets within the process. A calculation engine offers user-friendly programming tools for data analytics, including complex statistical calculations.
The event detection & notification feature allows plant control personnel to define rules and checks that trigger alerts and warnings.
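A user-defined rule of the kind this feature supports can be sketched in a few lines (the tag names, limits and severities below are invented for illustration; they are not Asset Sentinel’s actual rule syntax):

```python
# Illustrative rule table: each rule watches one tag against a limit.
RULES = [
    {"name": "high_bearing_temp", "tag": "bearing_temp_C", "limit": 85.0, "severity": "warning"},
    {"name": "high_discharge_pressure", "tag": "discharge_kPa", "limit": 950.0, "severity": "alert"},
]

def evaluate_rules(reading: dict) -> list:
    """Return triggered (rule_name, severity) pairs for one snapshot of tag values."""
    triggered = []
    for rule in RULES:
        value = reading.get(rule["tag"])
        if value is not None and value > rule["limit"]:
            triggered.append((rule["name"], rule["severity"]))
    return triggered

snapshot = {"bearing_temp_C": 91.3, "discharge_kPa": 880.0}
print(evaluate_rules(snapshot))  # only the bearing-temperature rule fires
```

The value of such a feature lies less in the check itself than in letting plant personnel, rather than software developers, own the rule definitions.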
3. Sentience Cloud Platform, for collaboration, analytics
This is Honeywell’s “me-too” IIoT platform, launched in 2017 as a response to ABB’s Ability, Siemens’s MindSphere, GE’s Predix and Schneider Electric’s EcoStruxure. It provides a standard platform for Honeywell applications and a common platform on which Honeywell partners can develop applications and services.
4. INspire™ Open Collaboration Partner Ecosystem, for leveraging OEM capabilities into its solution
This may be the most interesting and valuable piece in the offering. The INspire™ ecosystem helps manufacturers leverage the IIoT to improve operations across a single plant or several plants within an enterprise. Honeywell and its IIoT ecosystem partners are developing infrastructure that offers customers a secure way to capture and aggregate data, apply complex analytics and reference the results against the vast knowledge gathered by an ecosystem of equipment vendors and process licensors.
Dover Energy Automation, one of the INspire OEM partners, supplies products, productivity tools and automation software for the energy sector, enabling oil and gas operators to optimize performance and improve productivity. Its Windrock Enterprise is an asset-performance management solution supporting equipment visibility and operational insights for rotating and reciprocating equipment. Using the INspire platform, Windrock combines its asset knowledge (vibration measurements, dynamic cylinder pressures, temperatures) with process data (product molecular weight, process-unit design, flows and velocity, pressure and valve settings) contributed by Honeywell.
Flowserve (high-performance fluid-motion OEM) supplies pumps, valves, seals, automation and services to the world’s most critical industrial applications, including power, oil and gas, and chemicals. Flowserve utilizes Honeywell’s key infrastructure to collect and securely move data, while embedding its deep domain knowledge into predictive analytics via INspire.
SKF (rotating-equipment knowledge OEM) collaborates on continuous rotating-machinery health via INspire, combining asset data (such as bearing temperatures, bearing vibration, motor speed, motor vibration, shaft vibration) and its own operating knowledge with process data (process-unit design, motor power, flows, pressure and valve settings, control-system alarms) and Honeywell’s own process-control and design competencies.
Other ecosystem partners that Honeywell’s UOP and HPS units leverage to combine their process domain knowledge with specialized asset and domain insights include Sparks Dynamics, Aeron, GEA, Hoerbiger and L.A. Turbine (all OEMs).
ArcInsight Partners Observations
Honeywell has done well to pull ahead of the industrial-IoT pack by downplaying its core Sentience™ IoT platform in favor of packaging itself as a value-delivery play through its Connected Plant & Software (CPS) go-to-market message. By contrast, its peer-group continues to flog an ineffective “platforms-based enablement” messaging strategy in a crowded low-differentiation marketplace.
Its cloud-based services, Process Reliability Advisor and Process Optimization Advisor, are powerful solutions that logically connect unit operations and process constraints to key process-performance indicators. They analyze sensor data and leverage UOP process-design expertise to compare actual process performance against model predictions, giving clients a diagnostic and root-cause-analysis pathway for pinpointing and correcting operating constraints and bringing outputs to targeted levels. All via easy-to-use interfaces.
Having said that, the Connected Plant offering still relies heavily on UOP’s physics models (mass flow, balance, thermodynamics) as the prime driver behind its process-optimization capabilities. Honeywell has made smart strategic investments in early-stage ventures that bring edge-computing, enterprise data-aggregation and predictive-modeling capabilities, but most of FogHorn’s and Element Analytics’ capabilities lie in analytics and modeling the behavior of individual industrial assets, or fleets of similar assets.
Honeywell still has some way to go in developing truly transformational capabilities that combine predictive analytics with automated plant process optimization at the scale of an entire midstream facility. This could create self-governing process facilities, ones that require minimal remote operator decisions and intervention. Experienced operators are an expensive and fast-dwindling human resource, as highly trained engineers with deep process experience age and retire.
As we described earlier, a high-end problem such as plant process optimization built on physics models does not lend itself efficiently to prescriptive approaches on real-time data. The constraint lies mainly in the complexity (iterations) and time required to solve many of these physics models before they can generate prescriptive decisions. Leveraging machine-learning and AI algorithmic approaches to complement physics models has often proven helpful and cost-effective here.
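The division of labor described above can be sketched as follows: a fast machine-learning surrogate is fitted offline to samples from the slow physics model, then answers what-if queries quickly enough for near real-time use. Everything below is an illustrative assumption, not Honeywell’s implementation; the “physics model” is a toy stand-in for an iterative mass/energy-balance solver.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def slow_physics_model(feed_rate, temperature):
    # Toy stand-in: yield peaks at 360 °C and scales with feed rate
    return 0.8 * feed_rate - 0.002 * (temperature - 360.0) ** 2

# Sample the physics model offline over the operating envelope
rng = np.random.default_rng(0)
X = rng.uniform([50.0, 330.0], [150.0, 390.0], size=(500, 2))
y = np.array([slow_physics_model(f, t) for f, t in X])

# Fit a fast surrogate to those samples
surrogate = GradientBoostingRegressor().fit(X, y)

# Fast what-if sweep over temperature at a fixed feed rate of 100
candidates = np.column_stack([np.full(61, 100.0), np.linspace(330.0, 390.0, 61)])
best = candidates[np.argmax(surrogate.predict(candidates))]
print(f"best temperature ≈ {best[1]:.0f} °C")  # should land near the true optimum of 360 °C
```

The sweep over 61 candidates costs milliseconds against the surrogate; against an iterative solver it could cost minutes or hours, which is the whole point of the hybrid approach.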
A future iteration of the Uniformance IIoT solution would transform Honeywell’s UOP division’s service relationships with clients into total lifecycle management offered on an outcome basis. In the past, oil and gas companies operated their plants with expertise acquired over decades of operating experience, supported by external technical-services teams to commission, start up, troubleshoot and optimize their operations. UOP is in the very early stages of building a services offering (branded Connected Plant Services) in which clients pay for an outcome from the service rather than for the service itself. Most of the capability will be delivered through IIoT.
Honeywell’s aspiration to offer “Process Digital Twins” as a service to its clients hinges on its ability to integrate its deep legacy of building physics-based, often unwieldy, iterative calculation models (mass flow, thermodynamics, asset operations) with simpler machine-learning models that allow faster decisioning on near real-time data, while simultaneously learning and alerting to shifts in model behavior. The combination has the potential to deliver prescriptive insight that could (at some point) approach the quality of optimization advice delivered by experienced UOP advisors, without the company having to maintain a large pool of process engineers close to each client location. Additionally, the autonomy of this “expert system” could allow advice to be delivered proactively to clients at far lower price points (versus the traditional practice of responding reactively after a process event occurs).
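The “alerting to shifts in model behavior” idea can be sketched with a simple one-sided CUSUM on the residual between model predictions and plant observations: persistent positive error accumulates until it crosses a threshold, at which point the twin is flagged as out of step with the plant. The thresholds (k, h) and residual series below are illustrative assumptions, not any vendor’s actual change-detection scheme.

```python
def cusum_drift(residuals, k=0.5, h=4.0):
    """Return the index where cumulative drift exceeds h, else None.

    k is the allowance (slack) per step; h is the decision threshold.
    """
    s = 0.0
    for i, r in enumerate(residuals):
        s = max(0.0, s + r - k)  # accumulate only persistent positive error
        if s > h:
            return i
    return None

stable = [0.1, -0.2, 0.3, 0.0, -0.1]            # model tracks the plant
drifting = stable + [1.2, 1.5, 1.4, 1.6, 1.3]   # plant shifts away from the model
print(cusum_drift(stable), cusum_drift(drifting))
```

The allowance k makes the detector ignore ordinary noise while still catching a sustained shift within a few samples, which is the behavior a twin needs before its prescriptive advice can be trusted unattended.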
According to UOP CEO Rebecca Liebert, “Service has been a substantial cost of doing business for Honeywell’s clients, and the company sees an opportunity to change to a model which shifts the risk — as well as the reward — to the service provider, who then becomes a partner to the customer, with a vested interest in successful operation of the plant. Customers would rather pay for maintaining or improving an operation, rather than just for fixing it when it goes wrong. This allows them to budget more reliably for service throughout entire lifecycle of the technology, and to pay for that service on an outcome-achievement basis.”
A new digital transformation initiative was recently reported by the Spanish refining major Repsol, applying AI-supported plant process optimization aimed at delivering new economic value (increased production, better margins, enhanced plant-operator capabilities) from its existing refining operations.
Emerging macro-trends are placing cost pressures, and higher customer expectations of domain expertise on suppliers, to optimize productivity of operating process plants in the upstream, as well as midstream & downstream sectors.
- The oil and gas industry has entered a new era of uncertainty, from both a supply and a pricing standpoint. Rising penetration of electric vehicles and more efficient gasoline-powered vehicles is also expected to reduce gasoline demand.
- There is increasing pressure to reduce the environmental footprint of refining, petrochemical manufacturing and gas processing. New environmental standards could seriously affect the allocation of investment capital in downstream markets.
- A new kind of customer is emerging that isn’t a seasoned refiner or chemical manufacturer, but an entrepreneur or merely an investor. To this customer, speed and scope matter, and they would rather pay to have a plant operating with the inputs of a trusted advisor.
- Customers want services to be provided on an outcome-achievement basis. Instead of paying for service, they expect to pay for an outcome from that service. This shifts the risk — as well as the reward — to the service provider.
It also opens up significant new opportunities for services players to manage their customers’ full product lifecycle rather than just fixing things when they go wrong. Combining domain-modeling expertise with improved operational awareness through industrial-IoT and software-enabled services becomes a critical competency. Alongside that lie opportunities to apply emerging capabilities in machine learning and artificial intelligence to transfer learning across complex process systems for optimization. More importantly, it opens opportunities to monetize services that weren’t possible earlier, in return for delivering far more compelling economic value for the client by resupplying and servicing every part of an operation.
REPSOL PARTNERS WITH GOOGLE TO BUILD AI-SUPPORTED REFINERY PROCESS OPTIMIZATION CAPABILITIES
Repsol, the Spanish energy major, is partnering with Google to deploy the technology group’s big data and artificial intelligence tools to optimize performance of its Tarragona integrated refining complex in eastern Spain, and ultimately across its five other refineries with a total capacity of 896,000 b/d. The Tarragona Industrial Complex processes 9.5 million tons of raw materials a year and has the capacity to distill 186,000 barrels of oil a day.
As explained in a prior section, a refinery is made up of multiple divisions, including the unit that distils crude into various components to be processed into fuels such as gasoline and diesel and the entity that converts heavy residual oils into lighter, more valuable products.
Google’s ML Cloud will be used to analyze hundreds of variables that measure pressure, temperature, flows and processing rates, among other functions, for each unit at Tarragona. Repsol’s developers will build and deploy machine-learning models for refinery production.
The company already uses tools such as Siclos, which Repsol’s refinery control-panel operators use to forecast, in real time, the economic impact of operating decisions, and Nepxus, which improves planning, analysis and decision-making agility in the control rooms at these sites.
According to María Victoria Zingoni, Repsol’s executive managing director of downstream, “the highest number of functions that could previously be integrated digitally in an industrial plant was around 30. The new project aims to increase this number more than tenfold.”
The company expects that using AI across all of its refineries could boost margins by 30 cents per barrel and deliver $100m in additional revenue per year for its downstream business.
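The two figures are roughly self-consistent, as a quick back-of-envelope check shows (assuming the 896,000 b/d figure represents combined capacity and full utilization; both are simplifying assumptions, since actual throughput runs below nameplate capacity):

```python
# Back-of-envelope check of the reported Repsol figures
margin_gain_per_bbl = 0.30   # USD, the cited 30 cents per barrel
capacity_bpd = 896_000       # barrels per day across the refineries, as cited

annual_gain = margin_gain_per_bbl * capacity_bpd * 365
print(f"~${annual_gain / 1e6:.0f}M per year")  # ≈ $98M, close to the $100m cited
```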
Energy companies have been forced to reduce spending since the 2014 oil price crash, often using new technologies to become more efficient and, increasingly, to improve revenues. Oil and gas giants also confront a global shift towards cleaner fuels, and pressures on traditional players to become more profitable to compete.
This initiative is the latest sign of oil companies turning to Silicon Valley to cut costs and boost margins. It also reflects a new realization that AI-supported production process optimization could help refineries maximize efficiency (in energy consumption as well as consumption of other resources), improve overall operational performance and competitiveness, and enhance broader industrial processes across the company. The ultimate goal is to enhance the capabilities of its operating staff.
While Repsol’s Tarragona project aligns with other digital initiatives already in use at its other industrial sites, the timeline for achieving all optimization objectives from this AI-supported process optimization initiative remains unclear.
About The Author:
Praas Chaudhuri is a Silicon Valley-based industrial-IoT analyst and co-founder of ArcInsight Research Partners, a strategic research and advisory group. He is a chemical engineer with experience at oil-refining plants, designing industrial cooling and waste-water systems, and at coal- and gas-fired power-generating units. He spent a decade with business-strategy consulting firms advising large corporations around the world. He also earned an MBA in strategic management, with added qualifications in decision analytics, Bayesian-learning approaches and operational risk-assessment.
Areas of his research interest include industrial-IoT, machine learning & artificial intelligence, enterprise digital transformations, SaaS models, industry & business dynamics, and new monetisation strategies.
Praas has written & presented on a wide range of strategic topics related to industry & vertical dynamics and on business related subject areas. He is also a frequent speaker & panelist at professional forums, and has published many research-based articles, white-papers and points-of-views. (Some of his thoughts are on LinkedIn Pulse)
About ArcInsight Partners:
ArcInsight Partners offers strategic expertise to help clients navigate the transformation to a smart digital enterprise. Its core expertise derives from combining broad vertical knowledge (manufacturing, high tech, healthcare, banking services, retailing) with an expanding research base of existing and beta use-cases that point to emerging business models and service-design opportunities. The firm operates globally from its bases in San Francisco, Singapore and Bangalore.
ArcInsight’s strategy-consulting methodology is a valuable capability for enterprises looking to position themselves for an IoT-driven world, offering a structured path to validate their strategic decisions:
• Assess new target markets
• Validate drivers for the new business model
• Design new service opportunities
• Build clear traction towards monetization
• Map in-house competencies
• Structure appropriate partner ecosystems for effective value delivery
• Define infrastructure elements (platforms / analytic tools / technology / domain skill-sets)
• Conduct due diligence on potential acquisition/partnership targets