IoT aims to reduce unplanned downtime and maximize production
The Lower Tertiary in the Gulf of Mexico is estimated to hold as much as 40 MMboe and is considered one of the world’s most promising deepwater frontiers. A key challenge of the Tertiary, however, is that the reservoirs lie at great depths, beneath miles of water and layers of salt and dense rock, where they are subject to extremely high-pressure/high-temperature conditions.
These extreme conditions push the limits of equipment and technology for offshore exploration, drilling, formation evaluation, well testing, and completions. On top of these technical limitations there are financial ones: drilling and completing deepwater wells carries enormous cost and risk. In recent years, operator returns have been further challenged as rig rates and the cost of subsea equipment, pipelines, steel, and other materials have escalated while oil and gas commodity prices have remained low.
Innovative technical and operating solutions are required to improve well completions, boost recovery rates, and reduce costs for offshore oil and gas investments. Operators can leverage innovations in digital technology to optimize asset functionality and maximize equipment uptime. Specifically, technology solutions are available that provide data-driven insights to help predict potential mechanical failures before they occur and to automatically diagnose and remediate equipment issues when they do arise. This reduces exposure to unplanned downtime from underperforming assets and supports efforts to inspect, maintain, and repair critical equipment in compliance with regulatory requirements.
How it works
A key aspect of the technology solution is to leverage the rich streams of data already being collected from equipment sensors and connected devices in field production and processing facilities. Typically, a fully digitalized production site will have hundreds (if not thousands) of sensors capable of generating data at discrete intervals. This data is already critical to facility operating and control systems in order to provide operators with the information they need to maximize production in real time.
Importantly, this same stream of data can also be pulled into a cloud-based architecture using IoT (Internet of Things) technology that can seamlessly integrate with existing facility data collection and control systems. The IoT architecture allows for massive volumes of machine-generated data to be collected and aggregated from any connected device, including from dispersed production sites, where it can then be analyzed to identify issues, trends, and patterns. This process is commonly referred to as “predictive analytics.”
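The aggregation step described above can be sketched in a few lines. This is a minimal illustration, not any vendor's actual pipeline: the record layout (site, device, metric, value) and the names are invented for the example, and a real deployment would stream readings continuously rather than batch them in a list.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical telemetry records: (site, device, metric, value).
# In practice these would arrive as a continuous stream from
# connected devices at dispersed production sites.
readings = [
    ("site-A", "pump-1", "temp_C", 71.2),
    ("site-A", "pump-1", "temp_C", 74.8),
    ("site-B", "pump-9", "temp_C", 68.5),
    ("site-B", "pump-9", "temp_C", 69.1),
]

def aggregate(readings):
    """Roll raw readings up to a per-device average, across all sites,
    so trends can be compared portfolio-wide."""
    grouped = defaultdict(list)
    for site, device, metric, value in readings:
        grouped[(site, device, metric)].append(value)
    return {key: mean(values) for key, values in grouped.items()}

summary = aggregate(readings)
```

The same grouped structure is what downstream analytics would consume to spot issues, trends, and patterns across the portfolio.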
|The ability to aggregate and analyze information from dispersed production sites can help to refine these predictive insights across an entire portfolio of connected assets. (All images courtesy Bsquare)|
This powerful data-driven analytics technology considers both historical data (i.e., what happened before) and transactional data (i.e., what is happening now) to make predictions about unknown events (i.e., what might happen in the future). In this way, the technology is able to provide customized insights that include: (1) the ability to predict critical equipment failure before it happens - such as an expected time to failure for a specific piece of equipment or operating component; and (2) the ability to streamline the diagnostics and remediation process for equipment maintenance and repair.
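One simple way to picture an "expected time to failure" estimate is trend extrapolation: fit a line to a rising degradation signal (the transactional data) and project when it will cross a threshold learned from past failures (the historical data). This is a deliberately simplified sketch; production systems use far richer models, and the signal name and threshold here are illustrative assumptions.

```python
def hours_to_failure(history, failure_threshold):
    """Estimate hours until a rising degradation signal (e.g. bearing
    vibration) reaches the level at which similar units failed before.

    history: list of (hour, value) samples for one asset.
    failure_threshold: level taken from historical failure records.
    Fits an ordinary least-squares line and extrapolates forward."""
    n = len(history)
    sx = sum(h for h, _ in history)
    sy = sum(v for _, v in history)
    sxx = sum(h * h for h, _ in history)
    sxy = sum(h * v for h, v in history)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    if slope <= 0:
        return None  # no upward trend: no failure predicted
    crossing_hour = (failure_threshold - intercept) / slope
    return crossing_hour - history[-1][0]
```

A steadily rising signal yields a concrete countdown that maintenance teams can plan around; a flat or falling signal returns no prediction.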
How IoT makes data smart
The IoT technology solution works by applying sophisticated rules and intelligence to the massive streams of raw data so that it can be refined into “smart data” that is meaningful and actionable. These rules are essential. They harness the intellectual capital of production, maintenance, and engineering teams while also leveraging the advanced machine learning capabilities of the IoT software technology.
The human intelligence comes from gathering prescriptive operating criteria from in-house subject matter experts on the facility’s unique equipment, operating, and control parameters. This information typically includes regular equipment maintenance schedules, repair history, manufacturers’ specifications, and expected equipment life. It also defines the range of normal safe operating parameters so that alerts can be triggered automatically when equipment operates outside that range.
Further, any unique equipment issues that have been experienced – coupled with the technical solutions that resolved them – can be documented and incorporated into the rules. For instance, technical experts can identify equipment issues that have occurred in the past and can be expected to recur. Their advice provides guidance on root cause determination, the fix/repair that was applied, and the outcome achieved (i.e., if this happens, then that happens, and this is how to fix it). In effect, this human intelligence “primes the pump” on the front end to guide the self-learning capabilities of the software.
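The "if this happens, then that happens, and this is how to fix it" knowledge can be encoded as machine-checkable rules. The sketch below is one way to do that; every condition, threshold, root cause, and fix shown is invented for illustration, not drawn from a real facility's rulebook.

```python
# Expert knowledge captured as rules: symptom -> root cause -> fix.
# All parameter names and thresholds are illustrative assumptions.
RULES = [
    {
        "name": "seal_wear",
        "condition": lambda r: r["vibration_mm_s"] > 7.0 and r["temp_C"] > 85,
        "root_cause": "degraded mechanical seal",
        "fix": "replace seal; flush and inspect bearing housing",
    },
    {
        "name": "cavitation",
        "condition": lambda r: r["suction_kPa"] < 30 and r["flow_m3_h"] < 10,
        "root_cause": "insufficient suction head (cavitation)",
        "fix": "raise suction pressure; check inlet strainer",
    },
]

def diagnose(reading):
    """Return every rule whose condition matches the current reading,
    giving technicians a documented root cause and repair to start from."""
    return [rule for rule in RULES if rule["condition"](reading)]
```

Keeping the rules as data rather than hard-coded logic means experts can add to the knowledge base without touching the software itself.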
Predictive failure defined
The combined application of human intelligence and machine learning is used to shape the massive streams of historical and transactional data into a precise digital model (a sort of digital twin) for each connected device or equipment asset. The digital model can provide real-time status information and historical operating characteristics, and can benchmark equipment performance against other similar and/or best-performing equipment at the facility. It can also support business process improvements such as prioritizing repair work, establishing a prescriptive repair methodology, and supporting inventory management and work scheduling.
Using the digital model established for each equipment asset, and the specific rules that prescribe its normal operating parameters, any marked deviation in operating conditions (such as temperature, pressure, or power) can trigger an automatic alert to technical and operating teams. This early warning enables teams to foresee an increased likelihood of equipment failure or an emergent operating issue and to identify prospective corrective and/or preventive measures. However, this is not the same as predictive failure.
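A stripped-down version of this per-asset model and its deviation alerting might look as follows. The class name, parameters, and ranges are assumptions made for the example; a real digital model would carry far more state than a table of normal ranges.

```python
class AssetModel:
    """Minimal stand-in for a per-asset digital model: it holds the
    learned normal operating band for each parameter and flags any
    marked deviation. Parameter names and ranges are illustrative."""

    def __init__(self, asset_id, normal_ranges):
        self.asset_id = asset_id
        self.normal_ranges = normal_ranges  # {param: (low, high)}

    def check(self, telemetry):
        """Return an alert string for each parameter outside its band."""
        alerts = []
        for param, value in telemetry.items():
            low, high = self.normal_ranges.get(
                param, (float("-inf"), float("inf"))
            )
            if not low <= value <= high:
                alerts.append(
                    f"{self.asset_id}: {param}={value} outside [{low}, {high}]"
                )
        return alerts
```

As the article notes next, a triggered alert like this signals elevated risk; it is the deeper pattern analysis, not the threshold breach itself, that constitutes predictive failure.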
While it is true that many operating parameters are defined to minimize the amount of time equipment operates under conditions that might lead to an increased risk of failure, the simple fact that a piece of equipment might drift outside of the range for a short period of time does not mean it will fail. The analytics behind predictive failure go much deeper, to consider the unique operating conditions for every single piece of connected equipment over its entire lifecycle. Machine learning constantly analyzes all of the historical and current information to find patterns and trends buried deep in the data.
For example, a pump might fail because of a unique combination of high operating temperature, pressure, fluid viscosity, flow rate, head and shaft power … all for a duration of seven hours. There are so many factors - and combinations of factors - that could lead to a mechanical failure that connecting all of the dots is simply beyond human capability.
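The flavor of that multi-factor pattern search can be conveyed with a simple co-occurrence count: which combinations of out-of-range conditions showed up together in the window before past failures? Real machine learning goes far beyond frequency counting, and the condition flags below are invented, but the sketch shows why combinations matter more than any single threshold.

```python
from collections import Counter
from itertools import combinations

def failure_patterns(events, min_support=2):
    """Count combinations of condition flags observed before historical
    failures. Each event is the set of out-of-range flags seen in the
    window preceding one failure; combinations recurring at least
    min_support times are surfaced as candidate failure patterns."""
    counts = Counter()
    for flags in events:
        for size in range(2, len(flags) + 1):
            for combo in combinations(sorted(flags), size):
                counts[combo] += 1
    return {combo: n for combo, n in counts.items() if n >= min_support}
```

A pairing that precedes multiple failures stands out immediately, even when neither condition alone would have raised suspicion.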
Key benefits of an IoT solution
Simply put, machine learning technology can find the patterns and trends in massive volumes of data that humans cannot. The ability to predict equipment failure with sufficient advance warning to prevent forced downtime and allow for a well-planned remediation effort can drive down operating costs and maximize production. Further, the ability to aggregate and analyze information from dispersed production sites can help to refine these predictive insights across an entire portfolio of connected assets.
Real-time monitoring, via a customized dashboard, can be used to monitor activity and “visualize” the predictive capabilities of the data solution (such as expected time to failure for each piece of critical equipment or component). Dashboards allow teams to view relevant asset telemetry and compare current operating conditions with established utilization metrics and benchmarks. The dashboard can be integrated with existing on-site data management and control systems and, if connected to the cloud, can be viewed from any location around the globe with internet access. The dashboard also provides the ability to compare and optimize performance, efficiency, and safety metrics across multiple operations.
The ability to incorporate human knowledge into the IoT business logic minimizes the uncertainty and learning curve that can occur in the absence of a central repository of expert technical knowledge and operating history. Combined with the powerful data-driven predictive insights related to abnormal operating conditions and prospective equipment failure, facility operators will have better intelligence from which to make informed decisions to help minimize unplanned downtime and maintain production.
When a piece of equipment stops working, every hour of downtime spent on repairs increases service costs and lowers revenue through non-production. Rather than simply generating a time-stamped error or fault code, the IoT solution can provide an important source of data-driven insight to guide technicians to a probable root cause and remediation solution faster. The machine learning capability of the software can also record the diagnostic steps taken, the problem that was identified, the fix that was applied, and the outcome.
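Once those (fault, diagnosis, fix, outcome) records accumulate, even a simple frequency ranking turns a bare fault code into a prioritized starting point for the technician. The fault codes and causes below are invented examples of such a log.

```python
from collections import Counter

# Hypothetical work-order history: past fault codes and the root
# causes ultimately confirmed for them. Values are illustrative.
history = [
    {"fault_code": "E42", "root_cause": "worn impeller"},
    {"fault_code": "E42", "root_cause": "worn impeller"},
    {"fault_code": "E42", "root_cause": "sensor drift"},
    {"fault_code": "E7", "root_cause": "clogged filter"},
]

def probable_causes(history, fault_code):
    """Rank the root causes previously confirmed for this fault code,
    most frequent first, so diagnosis starts from the likeliest fix."""
    counts = Counter(
        rec["root_cause"] for rec in history
        if rec["fault_code"] == fault_code
    )
    return [cause for cause, _ in counts.most_common()]
```

Instead of a bare "E42", the technician sees that two of three past E42 events traced to a worn impeller, which is where the repair effort should begin.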
|Starting with a sound IoT strategy, companies can achieve maximum value upon completion of all five steps.|
A key challenge with any data-driven technology solution is the massive quantity of data that must be captured and acted upon in real time prior to being forwarded to a database for processing, analysis, and storage. In a typical deployment, large data sets are forwarded to cloud databases where the machine learning software works to develop more accurate digital device models. In this circumstance, the analytics work is done in the cloud and pushed down to the site for local processing.
However, in many circumstances a balance is required between the volume of data needed to drive the accuracy of a “deep data” analytics solution and the bandwidth required to support it. This matters most where connectivity is limited or bandwidth is constrained. Advanced IoT solutions use a process called “edge analytics,” whereby the complex set of rules embodying the reasoning and business logic is applied to detect specific operating conditions and orchestrate required actions right at the data source (the edge). •
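The bandwidth trade-off behind edge analytics can be shown in miniature: evaluate the rule locally and forward only the exceptions plus a compact summary, rather than the full raw stream. The thresholds and summary fields here are illustrative assumptions.

```python
def edge_filter(samples, low, high):
    """Apply a normal-band rule at the data source. Only out-of-band
    samples and a one-line summary are forwarded to the cloud,
    instead of every raw reading. Thresholds are illustrative."""
    anomalies = [s for s in samples if not low <= s <= high]
    digest = {"count": len(samples), "min": min(samples), "max": max(samples)}
    return anomalies, digest
```

For a site uplinking over a constrained satellite or cellular connection, shipping one anomaly and a three-field digest instead of thousands of raw samples is the difference between a workable deployment and an unusable one.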
Prasantha Jayakody is a senior product manager at Bsquare, where he focuses on lowering barriers to IoT adoption via turnkey solutions and cloud-based delivery.