Unlock margin by interconnecting refinery optimization silos with closed-loop AI
Refineries and chemical processing plants are complicated to model, control and optimize. Feedstocks and intermediate hydrocarbon products, for which complete compositional information is rarely available, undergo various types of fractionation and reaction with added chemicals and catalysts.
Theoretically, the entire process could be represented by a universal, rigorous first principle model that accounts for every valve, temperature, pressure, flow and level indicator in the plant. Such a model could, in principle, enable optimal manipulation of all valves in every unit throughout the plant, perfectly accounting for all disturbances, constraints and economic objectives.
However, theory and reality are two different things. Rigorous first principle models have existed for decades and have been useful in designing new catalysts and processes, as well as in offline process troubleshooting and analysis. Despite this, they have not been practical for the prediction, control and optimization of actual live plants. The reality of a process plant is far too complex to be accurately represented in real time by existing first principle models. Furthermore, reliable and complete measurements of ever-changing feedstock composition, required by first principle models, simply do not exist in actual plants. As a result, it has not been possible to use a universal first principle model to control and optimize a process plant.
The traditional stack of process control and optimization
Over the past 50 yr, the refining and chemical processing industry has had to simplify the problem to make plant control and optimization tractable. A hierarchical stack of layers was formed, where each layer is simple enough to be well understood and manually engineered (FIG. 1). The model in each layer is either linear or composed of a set of first principle equations that can be linearized around a steady state. The plant is controlled in a cascaded manner, with each layer controlling the layer below it. Each of these layers represents its own engineering discipline, methodologies and modeling techniques, and has its own separate team. The entire industry has become aligned around these layers.
FIG. 1. Traditional process control and optimization stacked layers.
A regulatory control layer implements thousands of the plant's proportional-integral-derivative (PID) loops, each manipulating a valve or other final control element to control a process indicator. The advanced process control (APC) layer implements soft sensors and dozens of unit-level multivariable controllers. Above these, several closed-loop real-time optimization (RTO) models govern and coordinate multiple APC applications, driving them to targets generated by the planning and scheduling layers through plant-wide linear programming (LP) models.
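For illustration, the following is a minimal sketch of one such discrete-time PID loop in Python. The class name, gains and setpoint are invented for the example and are not tied to any particular control system.

```python
# Minimal sketch of a single discrete-time PID loop, of which a plant's
# regulatory layer runs thousands. All names and numbers are illustrative.
class PIDLoop:
    def __init__(self, kp, ki, kd, setpoint, dt=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.dt = dt                 # execution period, seconds
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measurement):
        """Return a valve position (0%-100%) from one indicator reading."""
        error = self.setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        output = self.kp * error + self.ki * self.integral + self.kd * derivative
        return min(max(output, 0.0), 100.0)  # clamp to valve limits

# Example: a flow controller holding its indicator at a setpoint of 250
fc = PIDLoop(kp=0.8, ki=0.1, kd=0.05, setpoint=250.0)
valve_pct = fc.update(measurement=243.7)
```

Each such loop sees only its own indicator; coordination across loops is left entirely to the layers above it.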
A dozen key levers—tens of millions of dollars in incremental annual margin
While this hierarchical control and optimization stack stabilizes plant operation, it misses an opportunity worth several million dollars of annual margin per major refining process unit or chemical plant. Generally, the stack manipulates all plant process parameters suboptimally. For most of these thousands of parameters, the suboptimal operation has no detrimental economic effect. For a critical dozen or so of these process parameters, however, optimal manipulation on a 24/7 basis holds the key to this margin opportunity. Since most refiners and chemical operators are not aware of its existence, this lost margin opportunity is not even being tracked by leadership teams.
Under the current hierarchical stack, every process variable is manipulated on a period of seconds to minutes by local PID loops and APC controllers, which consider only the temperatures, pressures, flows and levels in the variable's vicinity. The models at the higher layers of the stack, which govern a larger portion of the plant, do not operate at a detailed enough resolution to accurately guide the key variables' optimal manipulation. These simplified high-level models are not designed to relate a change in a key variable to its minutes-to-hours effect on economically critical temperatures, pressures, flows and levels in other parts of the plant. Even if these simplified models were perfect, the feedstock information required to calculate the optimal variable value simply does not exist. The simplified high-level models can only provide rough, suboptimal, steady-state guidance for driving the key variables.
An unfortunate effect of the vast resolution difference between the lower and upper layers of the stack is siloed teams. Process control teams center on the PID and APC models, while planners and economists concentrate on the full-plant LP model or scheduling model. Each modeling layer requires wildly different skills, experience, terminology and technologies. As a result, these teams often have their own goals and initiatives and are not well positioned to collaborate in optimizing the plant.
The traditional hierarchical stack can solve simple multi-unit coordination and optimization problems, such as product blending. These are special cases where the relationship between key process parameters and the resulting economic properties is linear or easily linearized. It is only the large number of settings to be manipulated simultaneously that makes product blending an optimization opportunity. For such cases, a simplified subset of the plant LP model can run at a minute or hourly frequency to coordinate the units and capture most of the value, as the sketch below illustrates.
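To show why blending sits comfortably within the LP layer, the following is a hypothetical two-component blending problem posed as a small linear program. The component properties, prices, octane spec and availabilities are all invented for the sketch, and SciPy is assumed to be available.

```python
# Sketch of a blending LP: maximize margin of a two-component blend
# subject to an octane spec and component availability. All numbers
# are invented for illustration.
from scipy.optimize import linprog

price_product = 95.0            # $/bbl for the blended product
cost = [70.0, 88.0]             # $/bbl for components A and B
octane = [85.0, 95.0]           # linear blending octane numbers
spec_octane = 91.0              # minimum product octane
avail = [60.0, 100.0]           # max bbl of each component

# Decision variables: bbl of components A and B in the blend.
# Maximize sum((price - cost_i) * x_i)  ->  minimize the negative.
c = [-(price_product - cost[0]), -(price_product - cost[1])]

# Octane spec: sum(octane_i * x_i) >= spec * sum(x_i), rewritten as
# sum((spec - octane_i) * x_i) <= 0 for linprog's A_ub form.
A_ub = [[spec_octane - octane[0], spec_octane - octane[1]]]
b_ub = [0.0]
bounds = [(0, avail[0]), (0, avail[1])]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x)  # optimal bbl of each component
```

Because the octane spec blends linearly, the problem stays linear no matter how many components are added; conversion unit chemistry offers no such luxury.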
When it comes to more complicated (and more profitable) parts of the plant, the hierarchical approach cannot capture the full optimization opportunity. Conversion units are an example. First principle process models or simplified LP models can generate a rough recommendation for reactor temperature, accounting for feedstock and product economics. However, conversion unit reactors exhibit substantial nonlinear dynamics and are sensitive to the slightest changes in feed composition. Operating even 3°F–5°F away from the optimal reactor temperature over-cracks or under-cracks the feed, degrading product yields. In these cases, the first principle model does not operate with enough resolution to manipulate the reactor temperature on a minute-by-minute basis with respect to disturbances and ever-changing feed composition. Since feed composition is not reliably measured in real time, this problem cannot be solved by the traditional stack.
An interconnecting closed-loop model
Capturing the lost margin opportunity of closed-loop optimal manipulation of the plant’s key levers requires refiners and chemical operators to break the mold. It requires them to look beyond the traditional hierarchical stack and to build a closed-loop interconnection between planning and economics, process engineering, process control and operations.
This new interconnecting model is not meant to replace the traditional stack. In fact, for most control loops, replacing linear technologies with a new approach will not provide a meaningful return on investment. The new interconnecting model must instead be tightly built around the set of 10–15 most economically critical process parameters.
Rather than merely augmenting a single-layer first principle optimization, the interconnecting model must be able to truly manipulate the critical process parameters in closed loop. The model must also have the internal complexity required to capture subtle to severe nonlinear dynamics between its inputs and outputs.
Critically, the new model must compensate for missing feedstock composition data, even when that composition varies subtly on a daily and hourly basis. The interconnecting model must deduce this compositional information from other measurable real-time sources of information, such as product yields and unit conditions.
This new closed-loop interconnecting model must not be limited to single layers of the traditional stack. A single model should be able to operate simultaneously at various layers, in different units and areas of the plant, and to interface with different teams and engineering disciplines. It must understand the global nature of the optimization problem at hand without losing the detailed resolution for local process subtleties.
To capture the lost margin, refiners and chemical operators are looking to artificial intelligence (AI) and machine learning (ML) to implement this new approach. Unfortunately, simply adding AI and ML capabilities into the stack will not provide the desired step change and will not capture the lost margin.
Integrating AI into the traditional layered stack of process control and optimization
Introducing AI and ML capabilities into the current stack may improve engineering productivity within each layer. Process control teams might be able to tune PID controllers faster and maintain APC models more efficiently with less invasive step testing. Planning and economics engineers may get better tools to consolidate multiple large spreadsheets. These are the types of advancements one might expect from augmenting the current layers with AI and ML.
A nascent industry discussion concerns combining first principle models with AI and ML for closed-loop optimization. However, the use of statistics in this respect is not new. For the several decades in which first principle models have been in use, plant data has always been employed to reconcile and fit the models to the current state of the plant. In fact, the recent scientific breakthroughs driving AI and ML into public awareness are the result of going beyond first principle models: recent AI discoveries focus on enabling algorithms to build their own process representations and models.
For example, one may consider using a combination of ML and nonlinear dynamic first principle models for closed-loop optimization of complicated conversion units. In this case, thousands of parameters must be calibrated in real time by using some form of ML. There may not be enough process data to automatically fit the large number of model parameters in real time. If the first principle model is simplified enough to allow automated ML-based fitting, it will not capture the subtle nonlinearities that the critical valuable opportunities require. Furthermore, first principle models require feed composition data, which is not measured adequately and is, therefore, not available as a reliable or accurate input.
Decades-old statistical model-fitting methods are now being rebranded as ML or AI. While some of these methods might locally improve engineering productivity, they do not provide step-change improvements in plant profitability.
A closed-loop DNN for interconnecting process optimization
There exists only one type of AI model capable of capturing these tens of millions of dollars of lost annual margin per plant: a closed-loop deep neural network (DNN) (FIG. 2). The DNN receives controlled operational and economic variables, real-time product prices and operational constraints, and then directly manipulates the critical variables. This DNN is a closed-loop interconnection between planning and economics, process engineering, process control and operations.
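As a concrete (and deliberately simplified) illustration of this architecture, the following PyTorch sketch maps real-time measurements, prices and operator constraints to moves on a handful of critical handles. The variable counts, layer sizes and names are assumptions for the example, not a description of any production system.

```python
# Sketch of a closed-loop interconnecting DNN. Dimensions and names
# are illustrative assumptions only.
import torch
import torch.nn as nn

N_MEASUREMENTS = 120   # temperatures, pressures, flows, levels
N_PRICES = 8           # real-time product and feedstock prices
N_CONSTRAINTS = 15     # operator bounds and unit limits
N_HANDLES = 12         # the critical manipulated variables

class ClosedLoopPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        n_in = N_MEASUREMENTS + N_PRICES + N_CONSTRAINTS
        self.net = nn.Sequential(
            nn.Linear(n_in, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, N_HANDLES), nn.Tanh(),  # bounded moves
        )

    def forward(self, measurements, prices, constraints):
        x = torch.cat([measurements, prices, constraints], dim=-1)
        return self.net(x)  # normalized setpoint moves in [-1, 1]

policy = ClosedLoopPolicy()
moves = policy(torch.randn(1, N_MEASUREMENTS),
               torch.randn(1, N_PRICES),
               torch.randn(1, N_CONSTRAINTS))
```

In practice, such a network would be trained on plant history and reinforcement-style economic objectives, and its outputs would be clamped to operator-approved bounds before reaching the APC layer.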
FIG. 2. Closed-loop interconnecting DNN.
Over the past decade, DNNs have shattered state-of-the-art benchmarks in areas such as imaging, text and speech. Recently, Google's DeepMind used DNNs to crack protein structure prediction, a 50-yr fundamental mystery of computational biology and a breakthrough for medicine that could enable drugs for diseases including Alzheimer's, Parkinson's, diabetes and cancer. Applying DNNs in closed-loop control, a field called deep reinforcement learning (DRL), exceeded researchers' expectations in 2016 by defeating the world champion at the board game Go. Self-driving car control, one of the most challenging and complicated control problems, is also being tackled with DRL.
The closed-loop process optimization DNN manipulates a small number of key process parameters at the APC and PID layers. It controls critical process constraints and optimizes to economic objectives. This translates to millions of dollars per major refining process unit or chemical plant.
The DNN uses historical process data to learn subtle nonlinear dynamics between selected variables. In many cases, several process vessels reside between the DNN's input and output variables. These relationships between key variables are neither consistent nor self-contained: each is affected by dozens of other plant variables that the DNN must account for.
The DNN can compensate for the lack of real-time information about feed composition by using pressure, temperature, flow and level indications throughout process units. Slight disturbances in the (unmeasured) feed composition create subtle patterns across these process variables. These patterns are picked up and learned by the DNN. They are then used in real time to determine how key handles should be manipulated to optimize economic objectives while accounting for feed composition changes.
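One plausible way to realize this pattern-based compensation is sketched below: a recent window of indicator readings is encoded into a latent vector that stands in for the unmeasured feed composition, and the network is trained to predict the product yields that the economics depend on. The architecture, dimensions and names are illustrative assumptions, not the author's published design.

```python
# Sketch: inferring unmeasured feed composition from measurement
# patterns. A window of recent indicator readings is encoded into a
# latent vector that acts as a composition surrogate; a small head
# predicts product yields from it. All sizes are assumptions.
import torch
import torch.nn as nn

WINDOW = 60          # minutes of history per inference
N_TAGS = 120         # pressure/temperature/flow/level indicators
LATENT = 16          # latent composition surrogate
N_YIELDS = 6         # predicted product yields

class CompositionSurrogate(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.GRU(N_TAGS, 64, batch_first=True)
        self.to_latent = nn.Linear(64, LATENT)
        self.yield_head = nn.Linear(LATENT, N_YIELDS)

    def forward(self, window):               # (batch, WINDOW, N_TAGS)
        _, h = self.encoder(window)          # final hidden state
        z = self.to_latent(h[-1])            # composition surrogate
        return self.yield_head(z), z

model = CompositionSurrogate()
yields, z = model(torch.randn(1, WINDOW, N_TAGS))
# Training against historical yields forces z to carry whatever
# compositional information the measurement patterns contain.
```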
Aligning and interconnecting stakeholders through AI
The closed-loop DNN crosses process unit and plant area boundaries. It flows through the traditional layers and interconnects all the various teams. This allows everyone to speak the same language and to drive toward a common economic goal. The DNN does not necessarily need thousands of input and output variables. It manipulates carefully selected key variables by interconnecting real-time product prices with process constraints and economically critical properties.
Instead of each team managing its own model in a discrete stack layer, all the various disciplines interface with the closed-loop DNN. Planning and economics feed real-time product and feedstock prices directly into the neural network; optionally, LP model outputs are also fed in. Process engineers input true unit operational constraints, along with known process relationships. Process control engineers manage the interactions between the DNN and the existing APC applications and distributed control system (DCS). Operators interact with the DNN on a 24/7 basis, constraining it to their desired bounds and limits in real time to allow for safe and compliant closed-loop operation.
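A hypothetical sketch of that shared interface follows, with each discipline owning its own slice of the model's real-time inputs. Every field name is invented for illustration.

```python
# Sketch of a shared real-time interface between the teams and the
# closed-loop DNN. Field names are hypothetical illustrations of
# which discipline owns which inputs.
from dataclasses import dataclass

@dataclass
class PlanningInputs:            # planning and economics
    product_prices: dict[str, float]     # $/bbl by product
    feedstock_prices: dict[str, float]
    lp_targets: dict[str, float]         # optional LP model outputs

@dataclass
class EngineeringInputs:         # process engineering
    unit_constraints: dict[str, tuple[float, float]]  # hard limits

@dataclass
class OperatorInputs:            # console operators, updated 24/7
    clamps: dict[str, tuple[float, float]]  # tighter runtime bounds

def assemble_model_inputs(planning: PlanningInputs,
                          engineering: EngineeringInputs,
                          operators: OperatorInputs) -> dict:
    """Merge every discipline's inputs; operator clamps can only
    tighten the engineering limits, never loosen them."""
    bounds = dict(engineering.unit_constraints)
    for tag, (lo, hi) in operators.clamps.items():
        e_lo, e_hi = bounds.get(tag, (lo, hi))
        bounds[tag] = (max(lo, e_lo), min(hi, e_hi))
    return {"prices": {**planning.product_prices,
                       **planning.feedstock_prices},
            "targets": planning.lp_targets,
            "bounds": bounds}
```

The design choice worth noting is that operator clamps always intersect with the engineering limits, so the DNN never receives a bound looser than what either discipline allows.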
Takeaway
A stack of simplified layers has formed over decades to tackle the considerable complexity of refinery and chemical plant control and optimization. This traditional stack makes thousands of ongoing decisions to run the plant in a safe and stable manner. However, tens of millions of dollars in annual margin go unrealized because the 10–15 most economically critical handles are operated suboptimally. To capture these lost opportunities, a new closed-loop process optimization model must work across the entire traditional stack, interconnect different layers and different teams, capture nonlinear relationships, and compensate for missing compositional information. Only a closed-loop DNN that bridges silos and links disciplines (such as planning and economics, process engineering, process control and operations) can accomplish this mission. This closed-loop DNN has not only the input/output simplicity to focus on critical key variables, but also the internal complexity to capture severe nonlinear dynamics and to use subtle data patterns to compensate for missing composition measurements. This interconnecting closed-loop process optimization DNN is how AI is truly leveraged to capture the incremental margin opportunity of optimal continuous operation of key plant handles. HP
The Author
Cohen, G. - Imubit, Houston, Texas
Gil Cohen is the CEO and Co-founder of Imubit. He has led Imubit's product development and its market adoption by leading global refining and petrochemical operators. Prior to Imubit, Mr. Cohen founded and led Cigol, a leading provider of data center system-on-chip network security solutions. He has published several academic papers on ML and has more than 20 yr of experience in developing new algorithms and applying them to previously unsolved problems in various industries. Mr. Cohen earned BSc and MSc degrees in electrical engineering, as well as a BSc degree in mathematics, all cum laude, from Ben-Gurion University of the Negev in Israel.