Digitization offers alternate, simplified approach to pipeline engineering

May 1, 2018
There is wide scope for a digital approach to resolve or simplify many of the issues that complicate offshore pipeline design, construction and operation.
Industry-wide data model could enhance field planning

Vincent Cocault-Duverger

Saipem SA

‘Digitization’ is the new buzzword: everyone talks about it, yet no one has actually ‘seen’ it in practice. Will it make the offshore industry more prosperous as it adjusts to a lower-for-longer oil price era, and what can digitization realistically achieve? How can small pieces of a giant oil or gas development puzzle be incrementally transformed into an optimized process leveraging the latest data science techniques?

Subsea pipelines are an essential piece of this puzzle. Almost any new reservoir development requires pipelines to bring production to market. The design, construction and operation of a subsea pipeline should therefore be an area where some degree of digitization is possible. Traditionally, the subsea industry has approached each new pipeline as a made-to-measure prototype, even though from a wider perspective it is just one in a long series of pipelines. But there is wide scope for a digital approach to resolve or simplify many of the issues that complicate pipeline engineering. Computers are particularly good at repeating numerous operations in split seconds without error. And if a process, problem, or challenge is described and solved in generic terms, computers can solve all the related problems at once.

The same applies to field development engineering and offshore construction activities, where each new project is typically treated as an exception, to be ‘hand-crafted’. There have been only limited successful attempts at introducing systematic solutions for field development, perhaps because it takes more effort and a higher degree of abstraction to be systematic.

Data normalization. (Images courtesy Saipem/SPE)

Data management

When observing the flow of information through all stages and aspects of an offshore field development project, what is most surprising is the huge volume of data that is keyed in manually and transferred back-and-forth among the project’s stakeholders.

Typically, an operator starts out with a development plan and a document outlining several architecture scenarios. These are then screened by numerous internal and external parties, with the process further refined by a front-end engineering design (FEED) contractor, and, finally, designed and built by an engineering, procurement, construction, and installation (EPCI) contractor. At the end, the infrastructure is handed back to the operator for long years of (hopefully) prosperous production, and along the way further large volumes of operational, integrity management and monitoring data accumulate.

Throughout this process, the data is mostly saved as electronic documents. Each player reading these documents adds further data, manually transcribed into new documents for the next stakeholder. In terms of efficiency, this exchange of data is not much different from the systems employed by patient copyist monks in the Middle Ages, except that the documents are electronic and disseminated by email.

And many individual engineering reports are still formatted to fit paper and printer, even though this is becoming irrelevant and contradicts zero-paper policies. Long, costly hours are spent compiling, handling, producing and writing data in document-centric data models: numerous surveys have shown that these tasks take up around 50% of most engineers’ work time.

These factors make the process of bringing the design to maturity tortuous, with changes and re-working difficult to accommodate. The multi-disciplinary, multi-stakeholder, document-centric data repository is also inconsistent with the goal of arriving at innovative design solutions through trial and experimentation, which is precisely how cost savings are attained.

A document-centric data repository makes it very difficult for a field architect at an operating company to test many variations of field architectures across the project chain, from design, through procurement, cost, schedule, construction and installation. Some attempt to do this by applying contractual options, but that introduces further complexity and extends time-to-delivery. From the contractor’s perspective, it is a lengthy, over-elaborate, and expensive process because each field development prospect is assessed with all those options from the very early stages, with only a handful converted into actual revenues at the end.

Normalizing a data model

Normalizing and serializing data is the foundation of any digital edifice. If data cannot be accessed consistently by the machine, digitization opportunities are somewhat limited. Data normalization and serialization are achieved by designing a data model that can describe the various physical objects and their attributes in a consistent manner.

Take, as an example, the design pressure of a pipeline. Field architects can find this data in a Word document or a PDF, named something along the lines of ‘design basis’, and saved somewhere on a server. But a human eye is needed to read the actual value, i.e. the design pressure in pascals. There is no way this data can be accessed safely by a programmed computer action, because no two design bases are laid out identically. Normalization is the action of designing a data model, in conceptual form first, then logical, and finally physical. And if this data model can be shared by the various parties, they can exchange tremendous amounts of data in split seconds, and really start collaborating.
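As an illustration only, the sketch below shows what such a normalized, serializable record might look like. The class and attribute names are assumptions made for this example and are not drawn from any published schema.

```python
# A minimal sketch of a normalized pipeline record, assuming illustrative
# attribute names (not taken from PODS, UPDM or SPIDEX). Once the design
# pressure lives in a typed field with explicit units, a script can read it
# directly instead of a human searching a PDF design basis.
from dataclasses import dataclass, asdict
import json

@dataclass
class PipelineSegment:
    segment_id: str
    design_pressure_pa: float   # design pressure, pascals
    outer_diameter_m: float     # outer diameter, meters
    wall_thickness_m: float     # wall thickness, meters
    material_grade: str         # e.g. a linepipe grade designation

segment = PipelineSegment("PL-001", 25.0e6, 0.3239, 0.0254, "X65")

# Serialization: the same record can be exchanged between stakeholders
# as JSON in split seconds, with no manual transcription.
print(json.dumps(asdict(segment), indent=2))
```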

Spidex conceptual data model

The Pipeline Open Data Standard, PODS [1] and the Utility and Pipeline Data Model, UPDM [2] each provide a database architecture for pipeline operators, used to store critical information and analysis data about pipeline systems. However, these data models do not seem to have made their way into offshore field development, perhaps because they do not provide the level of detail necessary for subsea field design activities, or perhaps because many stakeholders have not yet perceived their value.

Some operators are arranging their data models within a Geographical Information System (GIS) database, but no common basis has emerged yet. This unfortunately means the contractor chain has to accommodate several data models, leading to additional work rather than the savings desired. In all cases, however, a GIS database can easily encompass any normalized data model, as long as the model is geo-referenced.
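The sketch below illustrates what geo-referencing such a record can mean in practice: the normalized attributes are attached to a GeoJSON feature that any GIS database can ingest. The coordinates and property names are hypothetical.

```python
# A minimal sketch of geo-referencing a normalized pipeline record as a
# GeoJSON feature. Route vertices and property names are illustrative
# assumptions, not a published schema.
import json

pipeline_feature = {
    "type": "Feature",
    "geometry": {
        "type": "LineString",
        # Hypothetical route vertices as (longitude, latitude) pairs
        "coordinates": [[3.20, 56.40], [3.25, 56.55], [3.31, 56.70]],
    },
    "properties": {
        "segment_id": "PL-001",
        "design_pressure_pa": 25.0e6,
        "outer_diameter_m": 0.3239,
    },
}

print(json.dumps(pipeline_feature, indent=2))
```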

Saipem is developing GIS systems to improve its engineering and operational data management. The company is also developing SPIDEX [3] for internal use, a data model designed specifically for subsea field development projects; however, a wider industry initiative is needed. There is plenty of experience to draw on from successful collaborations in other sectors, notably aerospace, automotive and banking [4] [5].

Exploiting data

In a recent white paper for the Data Driven Drilling & Production Conference [6], Jim Claunch, VP of Operational Excellence at Statoil, highlighted a striking fact: only around 3-5% of the data captured by the oil and gas industry is actually used, which means that some 95% or more remains unused. Clearly there is scope for significant improvement, but exploiting and learning from such an ocean of data – and the even greater volumes of data to come – cannot be done easily using the current, document-based systems.

But if the data is normalized and retrievable, field-proven IT and data science techniques and tools can be applied immediately to mine it safely and extract value. Exploitable data, whether recorded by devices and sensors or generated by numerical modeling, can then be properly sorted and stored for further analysis and useful insights.

Pipeline design could be one of the beneficiaries. Over recent years, field development projects have led contractors to generate extensive finite element analysis (FEA) results for spool designs and pipeline in-place analysis. This work involved many hours of calculation and multiple parametric studies, yet it is questionable whether such data can be retrieved and exploited today. The problem is that in each case the contractors treated the project as a first-of-a-kind, never to be replicated. With properly designed data models, every model run can instead be systematically stored in an ever-growing library. Machine-learning algorithms can then be trained on such a dataset, replacing a long FEA simulation when a fast response is needed.
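As a hedged illustration of the idea, the sketch below trains a generic regression surrogate on a synthetic stand-in for such a library of parametric runs. The input and output variables are assumptions; a real library would hold FEA-computed responses rather than the toy formula used here.

```python
# A minimal sketch of a machine-learning surrogate trained on a library of
# stored parametric runs. Inputs (wall thickness, span length, pressure) and
# the response are hypothetical; this illustrates the technique, not any
# contractor's actual workflow.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for an ever-growing library of run inputs:
# columns = [wall_thickness_m, span_length_m, pressure_pa]
X = rng.uniform([0.015, 10.0, 5e6], [0.035, 60.0, 30e6], size=(500, 3))
# Stand-in response; in practice this column would come from FEA results.
y = 1e-3 * X[:, 2] * X[:, 1] / X[:, 0] + rng.normal(0.0, 1e4, 500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
surrogate = GradientBoostingRegressor().fit(X_train, y_train)

# The trained surrogate answers in milliseconds where a full FEA run
# would take minutes or hours.
print("R2 on held-out runs:", surrogate.score(X_test, y_test))
```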

Coupling similar techniques to a wider data model opens up the application of available data analytics to field development projects. That would considerably shorten time to market and may give rise to ideas not yet imagined. The Internet of Things, advanced machine learning, and multi-objective optimization techniques can all then be contemplated.

Overcoming silos

The offshore industry works in a very sequential way: the downsides are long lead times and an aversion to change. In a sequential approach every task typically sits on the critical path, so a mistake sparks a domino effect throughout the whole chain. In addition, each player wants to keep an edge over its peers, sometimes simply with a view to protecting a job in a shrinking market. An analogy is an orchestra in which the musicians play in sequence rather than together, to make sure each is heard.

Model-centric data repositories allow multiple disciplines or stakeholders to work together in sync and in parallel, gathering and sharing information. Interfaces are minimized: each team can maintain its valued expertise and each stakeholder can focus on improving input-to-output delivery time, quality and accuracy, with the potential for automating internal processes.

In a paper presented at OTC this year [7], Saipem describes how it implemented such a model on a small, mono-discipline scale. Tasks were automated and scripted to work on a unique model-centric data repository, in a star-shaped process adapted from object-oriented programming. Results to date suggest that productivity improves by an order of magnitude, with the model resilient to change and unaffected by work volume.

A model-centric data repository is a description of physical objects: each specialist can find or create the attributes necessary to perform his or her own work or modeling. Pipeline design, flow assurance and cost estimate models are simply different representations of the same reality: the physical infrastructure. Hence all discipline-specific models can be linked to such a common data model, which de facto breaks silos and provides transverse data management and collaboration along the entire value chain.
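The sketch below illustrates the principle with a deliberately simplified shared record and two discipline ‘views’ of it. The attribute names and formulas are illustrative assumptions and are not the data model described in [7].

```python
# A minimal sketch of a model-centric repository: one shared description of
# the physical object, with discipline-specific functions acting as different
# views of the same data.
import math

shared_model = {
    "segment_id": "PL-001",
    "length_m": 12_000.0,
    "outer_diameter_m": 0.3239,
    "wall_thickness_m": 0.0254,
    "design_pressure_pa": 25.0e6,
}

def hoop_stress_pa(model: dict) -> float:
    # Pipeline design view: thin-wall (Barlow) hoop stress estimate.
    return model["design_pressure_pa"] * model["outer_diameter_m"] / (
        2.0 * model["wall_thickness_m"]
    )

def steel_tonnage(model: dict, density_kg_m3: float = 7850.0) -> float:
    # Cost-estimating view: approximate steel mass of the same segment.
    d_o = model["outer_diameter_m"]
    d_i = d_o - 2.0 * model["wall_thickness_m"]
    area = math.pi / 4.0 * (d_o**2 - d_i**2)
    return area * model["length_m"] * density_kg_m3 / 1000.0

print(f"Hoop stress: {hoop_stress_pa(shared_model) / 1e6:.1f} MPa")
print(f"Steel mass:  {steel_tonnage(shared_model):.0f} t")
```

Because both views read the same record, a change to the wall thickness propagates to the design check and the cost estimate at once, rather than through a chain of re-issued documents.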

Data normalization also opens up greater automation of certain tasks, particularly those that are man-hour intensive but deliver low added value. Automation of data wrangling, pre- and post-processing, and report generation and updates can all be implemented with marginal effort if data is smartly organized and serialized, as Saipem has found. Automation means that fewer people are needed to execute those low-value tasks, and the workforce’s focus and energy can be redirected to higher-value work. This in turn should make the industry more attractive to young talent.
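As a final illustrative sketch, report generation from a normalized record can be reduced to filling a template. The record contents and wording below are assumptions made for the example.

```python
# A minimal sketch of automated report generation from a normalized record.
# A template filled from typed data replaces hours of manual transcription
# into documents; field names here are hypothetical.
from string import Template

record = {
    "segment_id": "PL-001",
    "length_km": 12.0,
    "outer_diameter_m": 0.3239,
    "design_pressure_mpa": 25.0,
}

report_template = Template(
    "Segment $segment_id: OD $outer_diameter_m m, "
    "design pressure $design_pressure_mpa MPa, length $length_km km."
)

# Regenerating the report after a design change is a re-run, not a rewrite.
print(report_template.substitute(record))
```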

Digital transformation requires skillsets beyond those typically developed by offshore industry engineers. Being able to think about one’s own job in a systematic and ultimately algorithmic manner is a rare skill that has to be mastered, and even that is not enough. One also has to have the will and courage to learn from scratch the basics of information technology, programming and data science. Only a handful of individuals will climb that mountain, yet the industry must elevate the skills of its workforce if it is to achieve a successful digital transformation.

“It’s not the technology, it’s the people” says Jim Claunch [6]. People are the cornerstone of digital transformation and the industry must be able to tap into previously unsought classes of workers, “taking a more proactive approach to specifically seek out technologically advanced candidates from other sectors,” as highlighted by Ferguson [8].

Conclusion

Developing a common, multi-discipline, scripting-friendly data model will facilitate efficient use of modern data analysis and exchange protocols. It significantly reduces the amount of expensive and lengthy manual data handling that is prone to human error. It also makes the design process highly resilient to design changes.

A normalized data model is the building block of digitization. For field development and subsea architecture, pipelines in particular, an industry shared data-model would rapidly open numerous improvement opportunities throughout the entire chain at a very low investment cost.

References

[1] PODS, https://www.pods.org/

[2] UPDM, https://solutions.arcgis.com/utilities/gas/help/utility-pipeline-data-model/

[3] “Normalization of Pipeline Design Methodology to Automated Framework,” Laye, Fournier, Victoire & Cocault-Duverger, OPT 2018, Amsterdam.

[4] “Introduction To Model-Based System Engineering (MBSE) and SysML,” Laura E. Hart, Lockheed Martin, IS&GS, Delaware Valley INCOSE, 2015.

[5] “Integrating Multi-Disciplinary Optimization into the Product Development Process,” Dr. Mikael Törmänen, Volvo Car Corporation, Product Innovation, Munich, March 2016.

[6] “Data Integration In Oil & Gas,” James Gavin, Upstream Intelligence, created for the Data Driven Drilling & Production Conference, 2018.

[7] “Model-centric Digital Subsea Pipeline Design Process and Framework,” A. Laye, V. Cocault-Duverger, K. Victoire, OTC 29013, Houston, May 2018.

[8] “Oil & gas companies need to adapt to attract digital natives,” Offshore, February 2018.