Exploration productivity without compromising interpretation

Data compression enhances imaging, workflow
March 1, 2000
Compression ratio and fidelity factor for four data sets. This shows that the achievable compression ratio is data-dependent and generally higher for more coherent data sets. The higher the fidelity value, the lower the compression ratio.

To compete in a world of increasingly complex geologic regimes and fiscal realities, exploration and development (E&D) companies are using the latest technology and innovative seismic exploration approaches available. For example, the historically high drilling success ratios in frontier regions, like deepwater West Africa, are often attributed to 3D seismic technology. Although 3D data has been a boon for interpreters, the technology challenges them not only to extract and leverage the information content of 3D seismic data, but also to manage a "seismic data explosion."

Only 10 years ago, 3D surveys were acquired primarily over proven fields to optimize development programs. In contrast, interpreters today routinely evaluate vast areas for regional studies, basin analysis, and lease sales from 3D mega-surveys. Service companies have dramatically increased acquisition capacity as acquisition technologies and economies of scale make it cost effective to collect large-scale 3D surveys covering thousands of square kilometers. Acquisition vessels that once towed a small number of 240-channel streamers are now built and equipped to tow up to 20 streamers, between 6,000 and 8,000 meters long. Large, high-resolution surveys have become the norm.

Furthermore, as 4D seismic is used more widely to monitor reservoir performance, and as the industry embraces multi-component data to better image the subsurface or characterize the reservoir, explorationists will undoubtedly require technologies to manage and interpret even greater volumes of data. Faster computing platforms, improved communication networks, and better data storage technology have helped accommodate the seismic data explosion, yet explorationists continue to face interpretation workflow challenges and information technology (IT) issues. How can today's explorationist maintain productivity without compromising interpretation results?

Productivity, data management

Seismic data compression is a key technology for managing seismic data in a world of ever-increasing data volumes. By storing data in a format that requires less space than the original data volume, seismic data compression technology provides greater flexibility in managing local or remote server disk space as well as reducing network traffic. Seismic data compression not only enables explorationists to maximize the value of the IT infrastructure, but it also encourages innovative interpretation workflows that leverage the vast information content of 3D seismic data while maintaining or exceeding current productivity levels.

The word compression often raises concerns about lost information and reduced accuracy or resolution, despite the fact that compression plays a vital communication role in our everyday lives. The use of compressed images on the World Wide Web is one example where compression technology has enabled improved productivity. Although a compressed image viewed on the Web has lower resolution than a gallery photograph, it most likely has sufficiently high resolution to be recognizable and to transfer the needed level of information. The lower resolution image has value to the viewer because the required information is transferred quickly. A viewer may be willing to wait for a superior print in the mail, but the ability to view an image immediately can make the lower resolution version the more valuable one. Navigating the Web would take significantly longer without image compression, and the slower speed would quite likely prohibit access to much of the information available on the Web. In this case, the compressed Web image is "fit-for-business."

Decimation vs. compression

The effect of compression technology versus a traditional decimation strategy. The decimation strategy (above) has smeared the fault zone image.

Historically, seismic interpreters have adopted data decimation strategies to save time or to accommodate computing environment limitations that are out of their control. In an attempt to manage and view increasingly large volumes of seismic data, geoscientists have routinely reduced the data volume size using various strategies. These strategies range from processing techniques like summing adjacent traces, to a commonplace process of storing data in 8-bit formats. In fact, it is sometimes necessary to simply throw out traces or entire lines so that the remaining data fit within interpretation environment limitations.

In addition to throwing out or decimating seismic data prior to analysis, data are often divided into small sub-volume data sets for interactive viewing and manipulation. This strategy has been especially prevalent where mega-marine surveys have continually increased the upper bounds for seismic survey size. Although decimation strategies can lead to as much as 4:1 disk storage savings, this benefit comes at the expense of throwing out valuable information, of losing dynamic range, or of losing regional context while viewing small subsets of data, all of which may have a negative impact on interpretation decisions.
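The decimation strategies described above can be sketched in a few lines. This is a minimal illustration, not any vendor's actual workflow: it assumes a synthetic 32-bit float volume (the usual processing precision) and shows how trace summing and 8-bit storage each yield the roughly 4:1 disk saving mentioned in the article, at the cost of resolution or dynamic range.

```python
import numpy as np

# Hypothetical 3D volume: 100 inlines x 100 crosslines x 500 time samples,
# stored as 32-bit floats (a common processing format).
rng = np.random.default_rng(0)
volume = rng.standard_normal((100, 100, 500)).astype(np.float32)

# Decimation strategy 1: sum (average) each 2x2 group of adjacent traces,
# quartering the trace count and smearing lateral detail.
stacked = 0.25 * (volume[0::2, 0::2] + volume[1::2, 0::2]
                  + volume[0::2, 1::2] + volume[1::2, 1::2])

# Decimation strategy 2: rescale samples to 8-bit integers,
# sacrificing dynamic range instead of traces.
peak = np.abs(volume).max()
as_8bit = np.round(volume / peak * 127).astype(np.int8)

# Either route gives about the 4:1 disk saving the article cites.
print(volume.nbytes / as_8bit.nbytes)   # 4.0
print(volume.nbytes / stacked.nbytes)   # 4.0 (one quarter of the traces)
```

Note that both savings are hard-capped by the scheme chosen; unlike transform-based compression, there is no knob to trade a little more fidelity for a much smaller file.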

Data decimation strategies have minimal positive impact on IT resources and a negative impact on interpretability when compared to strategies that leverage data compression technology. Seismic data compression has significant implications for IT resource management and interpretation productivity as the industry moves to larger dataset sizes and better utilizes multiple attribute volumes to unravel complex geology. Although interpreters will always face IT resource and time constraints, compression technology provides additional flexibility to develop strategies for productive workflows within those constraints.

Data compression is capable of much greater disk savings than that provided by traditional decimation strategies. For many data sets, 20-to-1 compression ratios may become common practice, while even greater compression ratios will be suitable for some interpretation workflows. Also, consider the immediate benefit of compression technology on data archiving. Compressing 100 gigabytes of data to merely 10 gigabytes (10:1 compression ratio) provides direct savings in disk storage costs. Storing and managing seismic or attribute volumes on local systems also becomes practical for much larger data sets, opening the door to more productive interpretation strategies. As one would expect, seismic data compression presents a trade-off between preserved data fidelity and stored data volume size.
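The article does not name a specific algorithm; seismic codecs of this kind are typically transform coders (often wavelet-based, with quantization and entropy coding). The toy sketch below, an assumption rather than the article's method, uses an FFT as the stand-in transform: small coefficients are discarded, and the compression ratio follows from how few coefficients a coherent trace actually needs.

```python
import numpy as np

# A synthetic, fairly coherent "trace": a decaying wavelet plus a little
# noise. As the article's figure caption notes, coherent data compresses
# better, because its energy concentrates in few transform coefficients.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 1000)
trace = (np.sin(2 * np.pi * 30 * t) * np.exp(-5 * t)
         + 0.01 * rng.standard_normal(t.size)).astype(np.float32)

# Transform coding in miniature: move to a transform domain, discard the
# small coefficients, keep (and in a real codec, entropy-code) the rest.
coeffs = np.fft.rfft(trace)
threshold = 0.05 * np.abs(coeffs).max()
kept = np.abs(coeffs) >= threshold

# Nominal compression ratio: input samples vs. retained coefficients.
ratio = trace.size / max(int(kept.sum()), 1)

# Reconstruct to inspect the fidelity cost of the discarded terms.
reconstructed = np.fft.irfft(np.where(kept, coeffs, 0), n=trace.size)
print(f"kept {int(kept.sum())} of {coeffs.size} coefficients, "
      f"ratio about {ratio:.0f}:1")
```

Raising the threshold increases the ratio and lowers fidelity, which is exactly the fit-for-business dial the article describes.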

Seismic data compression

Compression technology (above) preserves the fault image and provides greater disk savings.

Although data compression offers many benefits to interpretation productivity and IT resource management, interpreters must balance these benefits with the inherent loss in data fidelity during the compression process. The notion of "fit-for-business" guides the interpreter's decision in determining how much fidelity loss is tolerable relative to the benefits gained from compressing the data set. Since compression is "lossy" (compression does not preserve 100% of the input data), interpreters must first consider the degree of fidelity suitable or required to achieve interpretation goals.
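The article's figure refers to a "fidelity factor" without defining it. One common way to put a number on lossy-compression damage is a signal-to-residual ratio in decibels; the helper below (a hypothetical stand-in, not the article's measure) scores an 8-bit quantization round trip as an example.

```python
import numpy as np

def fidelity_db(original, reconstructed):
    """Signal-to-residual ratio in decibels. Assumed stand-in for the
    article's undefined 'fidelity factor': higher means less damage."""
    orig = original.astype(np.float64)
    resid = orig - reconstructed.astype(np.float64)
    return 10.0 * np.log10(np.sum(orig ** 2) / np.sum(resid ** 2))

# Score the fidelity cost of an 8-bit quantization round trip on a
# simple full-scale ramp signal.
x = np.linspace(-1.0, 1.0, 10000, dtype=np.float32)
x8 = (np.round(x * 127).astype(np.int8) / 127.0).astype(np.float32)
print(f"8-bit round trip fidelity: {fidelity_db(x, x8):.1f} dB")
```

An interpreter could apply the same yardstick to candidate compression ratios and pick the lowest fidelity that still supports the interpretation goal at hand.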

For example, regional seismic evaluation projects will likely require less data fidelity than a reservoir characterization project. For the benefit of rapid access to very large data sets, an interpreter would most likely be willing to tolerate less detail (fidelity) in the data. In this case, the interpreter can work with compressed data that is "fit-for-business" and leverage the benefits of smaller, more portable data files.

Analysis vs. development

The concept of "fit-for-business" data compression is illustrated when considering data fidelity requirements for a basin analysis interpretation, in contrast to the data fidelity required for an interpretation supporting field development decisions. In a regional basin analysis, by leveraging compression technology and accepting lower fidelity data, an interpreter may be able to merge multiple surveys into a single project. The ability to store or quickly access larger data sets can enhance interpretation productivity in basin-wide and regional studies or in lease sale evaluations where interpreters must evaluate a great deal of data in a short period of time.

Lower fidelity data may be acceptable for interpreting a regional structural framework and fault model so long as the faulting remains clear on the seismic data.

In a field development example, an interpreter may require much higher fidelity data to perform a detailed attribute analysis around a reservoir interval or to identify bypassed reserves and compartmentalization. Reducing the data volume with an appropriate degree of compression can maintain fault clarity, whereas conventional decimation strategies will smear the seismic data across fault zones.

Multiple volume interpretation

Explorationists once made inferences about the subsurface from a single seismic attribute - seismic amplitude. Today, a typical workflow may involve seismic interpretation on multiple volumes of data, each representing a different seismic data attribute. The following example illustrates the importance of access to multiple attributes and the value of data compression technology.

Consider a highly faulted Tertiary prospect where amplitude variation with offset (AVO) anomalies indicate productive zones. Volumes from several different processing sequences may be required to unravel complex geology and increase confidence that hydrocarbons are present. A compressed amplitude volume can be used to describe the initial structural framework, the trapping mechanism, and as an indicator of amplitude anomalies. Additional attribute volumes provide information sources that can improve interpretation confidence.

For example, a similarity volume can show detailed faulting and compartmentalization, and a waveform analysis volume may illuminate the stratigraphic or depositional framework. Near-angle and far-angle stack volumes can provide confidence in the presence of an AVO anomaly. Various other attribute volumes might also assist in reservoir characterization.

Each of these attribute volumes can be compressed, providing local or optimized network access to these multiple information sources. The benefits of data compression may encourage interpreters to utilize additional attribute volumes in innovative interpretation workflows.

"Fit-for-business" seismic data compression can streamline the interpretation work flow, improve productivity, and improve the accuracy of the earth model. In the future, compression technology will have significant impact on data storage and work flow productivity as 4D and multi-component surveys bring more data to the interpreter's desktop. Compression technology will also play an essential role in interpretation sessions as interpreters access prestack seismic data, a source of detailed velocity, and lithologic information.
