Growing use of AI brings increased contractual, insurance and regulatory risks

Generative AI may have a role to play in EPC contracts, but it will require a ‘single source of truth.’
Sept. 12, 2025

Key Highlights

  • Contractual considerations for AI include defining data sources (SSOT), overseeing AI performance, and allocating risks related to AI failures or inaccuracies.
  • Insurance providers are developing specialized coverage to address AI-specific risks such as hallucinations, model drift, and mechanical failures, reflecting evolving risk management strategies.
  • Regulatory frameworks are emerging through legislation and executive orders, aiming to promote AI development while safeguarding national interests and ensuring ethical use.

 

By Glenn Legge, Endeavor Management  

 

The December 2023 Offshore magazine article, “Artificial Intelligence Applications Promise Improved Drilling Efficiency,” examined the upstream oil and gas sector’s consideration of artificial intelligence (AI) for applications ranging from drilling optimization to completion designs.

The upstream sector’s use of AI expanded so rapidly that, 18 months later, Offshore conducted a survey of its readers to determine the most significant AI challenges and technologies facing the industry. That survey resulted in an article published in June 2025, “Survey Reveals Biggest Challenge in Implementing AI Technologies in Offshore Oil and Gas Sector.”

The survey, which included operators, service companies and EPC companies, indicated how the use of AI was being allocated in this market:

  • 43% for predictive maintenance
  • 31% for seismic data interpretation
  • 17% for drilling optimization, and
  • 10% for reservoir management.

The survey also revealed that the industry’s primary concerns about the use of AI were:

  • Integrating AI with existing systems
  • Lack of skilled workforce
  • Data security concerns, and
  • High costs.

This recent survey also asked industry participants how AI will enhance safety protocols in offshore operations. The responses included:

  • Computer vision for equipment monitoring
  • Real-time data analytics
  • Automated inspection systems
  • Predictive risk assessment.

Following these analyses of the industry’s baseline data concerning the increasing use of generative AI, the upstream sector must now determine how to:

  • Define AI’s applications and functional scope of work (SOW) for offshore construction under EPC/EPCM agreements, seismic interpretation/reservoir management, facility operations, predictive maintenance and decommissioning.
  • Address the acknowledged benefits, risks/exposures related to the SOW for generative AI generally, and with respect to specific applications, such as offshore construction, facility operations, predictive maintenance and decommissioning.
  • Contractually allocate the use of AI and the related exposures, which include internal and external challenges (training, data collection/consistency and AI hallucinations).
  • Procure property, liability and business interruption insurance coverage that will address exposures arising from AI applications.
  • Comply with the rapidly evolving regulations and legal developments regarding AI utilization.

Some members of the offshore construction industry have identified AI as a suitable tool for various applications, ranging from supply chain, materials procurement and project design/management to quality control and identifying faulty installations. Some of the “real world” challenges related to the use of generative AI for construction projects involve:

  • Training personnel with experience in these types of complex construction projects to use AI in an appropriate and cost-effective manner.
  • “Educating” generative AI about complex construction projects so that it addresses all relevant stages of the program, including engineering, procurement, construction, installation and commissioning.
  • Creating an effective protocol to manage the training, and use, of AI, and to recognize/record situations when generative AI may be impacted by faulty data or hallucinations.
  • Managing the unavoidable financial and scheduling impacts that “first use” of AI may have on the cost-effectiveness and management of a project.
  • Appropriately resolving any “conflicts” between generative AI management recommendations and existing HSE and regulatory requirements.

These types of “first time” AI application challenges must be anticipated and addressed in construction contracts in the appropriate manner.

EPC contracts and AI

In January 2025, Westwood Global Energy reported that EPC construction activity in 2024 for offshore oil and gas and CO2 sequestration had totaled $52 billion in project awards. Most offshore construction projects utilize existing types of contracts, including Engineering, Procurement and Construction Management (EPCM); Engineering, Procurement, Installation and Commissioning (EPIC); and Engineering, Procurement, Construction, Installation and Commissioning (EPCIC) formats. Westwood’s analysis indicates that the volume of EPC awards for offshore oil and gas projects in 2025 may be in the $54-billion range.

Traditionally, under EPC contracts the contractor may provide services for design, engineering, construction and procurement, often on a turnkey basis, and may engage approved subcontractors. In addition, the EPC contractor frequently assumes responsibility for scheduling, delays and cost overruns.

EPCM contracts may have similar terms; however, in many instances the EPCM owner will retain significant construction risks, including scheduling and cost overruns. The EPCM contractor frequently serves in an advisory role to the owner, rather than in a project manager role.

Mintmesh, a US-based company founded in 2015, has been developing a cloud-based program “for engineers and unsolved engineering problems that include engineering, procurement and construction (EPC) projects.” Mintmesh references Bechtel as an example of a company utilizing AI to assess project data, address scheduling issues and “improve prediction capabilities.”

The possible SOW for generative AI in construction projects is rapidly evolving. A 2025 report from Autodesk, a global software company that creates technology for architecture, design and construction industries, found that 76% of leading construction companies are increasing investments in AI. Construction projects utilizing AI applications may involve some of the most variability and challenges with respect to contractual risk allocation. 

Contractual risk allocation

The full range of generative AI’s application and utility on offshore EPC and EPCM construction projects is well beyond the scope of this article. However, the foundational contractual risk allocation formats, which have been utilized for years in these types of construction projects, may have to be altered to address risks arising from the use of generative AI.

“Single source of truth”

When utilizing generative AI in construction projects, the parties will have to clearly state how AI will be provided with data from a “single source of truth” or “SSOT.” The SSOT will be the source of substantive data, information and regulatory guidelines for the project SOW and specific AI applications. In addition, the parties should identify the means by which the AI is “trained.”

The designation and use of an SSOT is essential for generative AI to understand and incorporate a wide range of essential information concerning allocation of tasks, agreed industry and regulatory standards/requirements, component capabilities and manufacturing data. The SSOT designation requirements will be substantive elements of the construction contract.
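To make the SSOT requirement concrete, the designation can be enforced at the data-ingestion layer, so that only the contractually designated source ever reaches the AI tool and every retrieval is logged for later dispute analysis. The sketch below is a minimal illustration in Python; the registry class, source names and record fields are all hypothetical, not drawn from any actual EPC contract or AI product.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class SSOTRegistry:
    """Hypothetical gatekeeper: only data from the contractually designated
    single source of truth is passed to the AI tool, and every retrieval is
    logged so that provenance can be reconstructed in a later dispute."""
    designated_source: str                 # the SSOT named in the contract
    audit_log: list = field(default_factory=list)

    def ingest(self, source: str, record: dict) -> dict:
        # Reject any data that did not come from the designated SSOT.
        if source != self.designated_source:
            raise ValueError(f"rejected: {source!r} is not the designated SSOT")
        # Record what was retrieved, from where, and when.
        self.audit_log.append({
            "source": source,
            "record": record,
            "retrieved_at": datetime.now(timezone.utc).isoformat(),
        })
        return record

registry = SSOTRegistry(designated_source="project-data-hub")
registry.ingest("project-data-hub", {"component": "riser-01", "rating": "10ksi"})
# An ingest from any other source raises ValueError and never reaches the model.
```

The design choice illustrated here is that SSOT compliance is checked mechanically at ingestion rather than relying on procedure alone, which produces the audit trail the contract would need if causation is later disputed.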

In the event of an AI-related construction defect, failure or malfunction, the designation of, and compliance with, the SSOT will likely be a key element in determining what caused, or contributed to, the failure. In addition, claims may allege that the AI tool was not correctly educated or trained by the manufacturer or user. It is anticipated that AI manufacturers may claim that the tool was used incorrectly.

It is anticipated that, in some AI-related construction disputes, the EPC or EPCM contractor or owner may pursue contractual claims against the AI manufacturer/creator. In some instances, the agreement between the EPC/EPCM contractor or owner and the AI manufacturer/creator may limit the manufacturer/creator’s liability for such claims.

AI oversight

However, because the generative AI tool had to rely upon data from an SSOT, the party that provided that data may have exposure under the terms of the contract, or based upon the negligent provision of data.

Parties utilizing generative AI for EPC/EPCM contracts should contractually designate individuals to oversee AI’s work, as well as protocols for quality assurance and notification of any problems to designated parties.
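One way to implement such oversight is a gating step that holds AI recommendations for the designated reviewer and notifies the agreed parties when an automated check fails. The following Python sketch is purely illustrative: the function, the confidence threshold and the requirement that a recommendation cite its SSOT basis are assumed contract terms, not established practice.

```python
from dataclasses import dataclass

@dataclass
class Review:
    output_id: str
    reviewer: str        # the contractually designated individual
    approved: bool
    notes: str = ""

def oversee(ai_output: dict, reviewer: str, notify) -> Review:
    """Route an AI recommendation through human QA before it is acted on.
    `notify` is a callable that alerts the contractually designated parties."""
    problems = []
    # Assumed contract terms: the output must cite its SSOT basis, and
    # low-confidence recommendations are held for the designated reviewer.
    if not ai_output.get("ssot_basis"):
        problems.append("no SSOT basis cited")
    if ai_output.get("confidence", 0.0) < 0.8:
        problems.append("low model confidence")

    approved = not problems
    if not approved:
        notify(f"AI output {ai_output['id']} held for review: {', '.join(problems)}")
    return Review(output_id=ai_output["id"], reviewer=reviewer,
                  approved=approved, notes="; ".join(problems))

alerts = []
r = oversee({"id": "rec-42", "confidence": 0.55},
            reviewer="QA lead", notify=alerts.append)
# r.approved is False and one alert was recorded for the designated parties.
```

Keeping the notification path explicit, rather than burying it in logs, mirrors the contractual point above: the designated parties learn of a held recommendation at the moment the check fails, not during a post-incident review.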

Risk allocation terms

In many offshore construction agreements, the parties rely upon “mutual” or “knock-for-knock” risk allocation terms. These terms essentially require each party to remain responsible for injuries to its own personnel, and damage to its own property, as long as the loss resulted from a negligent, rather than a grossly negligent or intentional, act of another party. The foundation for this “knock-for-knock” contractual format is that each party will likely be insured for injury to its personnel or damage to its property.

Insurance coverage

As with business-related claims arising from cyber-attacks, the insurance industry is determining if, and how, it will provide coverage for damages arising from the use of generative AI.

In April 2025, Armilla and Chaucer Insurance at Lloyd’s of London announced insurance coverage for the “evolving risks of mechanical underperformance of AI systems…and their associated liabilities.”

Although the announcement did not provide a detailed listing of coverages, it did state that the policy coverages include “hallucinations (false or misleading outputs), model drift (performance degradation over time), mechanical failures, and other deviations from expected AI behavior. It also provides legal defense and liability protection for claims arising from such underperformance.” The Armilla/Chaucer coverage was created in cooperation with Lloyd’s Lab innovation accelerator. 

Testudo is another Lloyd’s of London “start-up” that provides data and technology for underwriting coverage for AI risks in the London market, supported by investment from Goldman Sachs. Testudo is also part of Lloyd’s Lab and has plans to introduce an AI coverage policy by late 2025.

New regulatory frameworks

There were at least two key provisions regarding the use of generative AI in the recently passed “One Big Beautiful Bill Act” (OBBBA). The OBBBA provides for:

  • Federal funding and tax incentives for businesses investing in US-based AI development and infrastructure, including AI research, data centers and semiconductor production.
  • Heightened restrictions on foreign influence and investment by “prohibited foreign entities” in AI activities supported by federal funds.

Executive orders

There have also been a number of AI and data-related executive orders (EO) from the Trump administration:

  • EO 14318 Accelerating Federal Permitting of Data Center Infrastructure – In an effort to streamline the development of AI-related infrastructure in the US, this EO targets the development of data centers and supporting infrastructure. The US Department of the Interior and Department of Energy will collaborate to authorize appropriate federal lands for energy development.
  • EO 14320 Promoting the Export of the American AI Technology Stack – This EO focuses on exporting US-based AI technology to decrease international dependence on AI technologies developed by AI adversaries.
  • EO 14319 Preventing Woke AI in the Federal Government – This EO directs the Office of Management and Budget to issue guidance to federal agencies on implementing unbiased AI principles.

The private sector and federal government in the US are investing immense amounts of capital and resources in developing AI for commercial applications on a broad scale. Many of these applications involve offshore energy development and large capital projects. 

An overview of the rapidly evolving commercial and regulatory AI developments indicates that industry is moving faster than the regulatory framework that is intended to govern AI development. In a similar manner, the insurance and risk allocation sectors are also trying to analyze and “ring fence” the possible adverse impacts of AI that is utilized in an untested or unreasonable manner. At Endeavor Management, we have seen this same phenomenon play out in deepwater oil and gas exploration, where in the past federal regulators were analyzing new technologies and rapidly trying to create regulations to govern their use. Hopefully, this evolutionary process will continue to proceed in a productive and reasonably safe manner.

About the Author

Glenn Legge

Glenn Legge is a Senior Advisor with Endeavor Management focusing on the energy transition. He has forty years of experience as lead counsel in commercial transactions, litigation and arbitration matters involving upstream/downstream energy, construction, trade secrets and insurance disputes. Legge also advises companies on regulatory issues, risk allocation and insurance coverage for projects in the upstream, midstream and downstream sectors.

 

 
