Geophysics: Linux, storage area networks may lower implementation cost for exploration
'Proprietary' has become an awful word in the seismic industry. In a significant break with the past, the trend toward commodity-based building blocks for processing and data storage has accelerated with the widespread acceptance of Linux clusters and heterogeneous storage area networks (SANs).
Not so long ago, the compute and storage requirements of the E&P seismic industry demanded specially engineered products to handle the volumes of data and imaging algorithms. Complex, data-rich images generated by sophisticated seismic imaging technology have long since replaced the paper documents of the past. The ability to freely manipulate these images is crucial to obtaining the best value from them.
All of these factors combine to drive the information technology (IT) requirements of the seismic industry ever higher. Of course, even before the dawn of the age of the analog computer, the seismic industry was pushing computing technology to its limits. Seismic IT pioneer Geophysical Service Inc. was so successful at creating technology that it became Texas Instruments in 1951. (Geophysical Service Inc. remained a TI subsidiary until sold to Halliburton in 1988.)
Recent history
The 1970s saw the beginning of the dominance of high-powered floating-point array processors. Companies such as Floating Point Systems and, later, Star provided the specialized computing power, fronted by mainstream host systems from vendors such as Perkin-Elmer, Burroughs, and Digital. As seismic datasets grew in size and number, so did the number of magnetic tapes, driving the need for higher-density media.
IBM's "square tape" arrived on the scene in 1984. It provided much-needed relief in terms of both storage capacity and space required for the drives themselves. This reduction was especially critical on the seismic vessels that were now recording 3D datasets.
Back on land, the use of high-priced vector computers like Cray and Convex became common with both seismic contractors and oil companies. The widespread use of these computers placed the seismic industry second only to the federal government's national labs in total computing power.
As we entered the 1990s, the dream of using mainstream components was realized with massively parallel processor (MPP) computers. Though these systems were not themselves mainstream computers, they were built from mainstream components.
MPP finally gained acceptance as a mainstream product with the IBM RS/6000 SP2 in the middle of the decade. Later, larger symmetric multiprocessing (SMP) computers were introduced and came to play a dominant role in seismic processing.
As the industry's need for data throughput increased, its need for increasingly sophisticated storage exploded. Seismic data volumes soared to new levels with multi-streamer acquisition. In 1995, IBM introduced its 3590 tape drive, which soon became the de facto standard for seismic data, maintaining IBM's position as the industry's tape technology of choice.
Reducing cost
While the ability to handle large volumes of data is vital for any modern information system, another demand placed on technology in the exploration and production field is low cost of implementation. Even small, independent operations need to be able to handle and analyze seismic data, but they do not have the economic resources of the larger players. This need is one of the driving forces behind our company's focus on developing economical IT solutions.
Today, we are seeing mainstream technology utilized in many ways. From clusters of Intel-based servers running Linux to SANs, modern information technology is being used to provide efficient access to seismic data. High-performance, common-architecture, wire-speed Ethernet switches from Extreme Networks allow clusters of servers to be connected with performance matching that of the expensive proprietary switches that would have been required in the past.
Aside from the benefits of using an open-source operating system, Linux clusters are also an efficient way to achieve the throughput capacity required when dealing with images and documents that may be a terabyte or more in size. Today's Intel-based servers exemplify both economical acquisition and cost-effective operation.
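To make the division of labor concrete, here is a minimal sketch of how a trace-parallel job might be spread across such a cluster using MPI. The trace counts, the gain loop, and the omitted file I/O are hypothetical stand-ins for real imaging work, not any particular vendor's processing system.

    /* A minimal sketch of trace-parallel processing on a Linux cluster,
     * assuming an MPI implementation such as MPICH is installed.
     * Build and run, e.g.:  mpicc -o gain gain.c && mpirun -np 16 ./gain */
    #include <mpi.h>
    #include <stdlib.h>

    #define NSAMP 1000                        /* samples per trace (assumed) */

    int main(int argc, char **argv)
    {
        int rank, nprocs;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);     /* this node's id */
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);   /* nodes in the job */

        int ntraces = 1000000;                /* total traces (assumed) */
        int chunk = ntraces / nprocs;         /* each node owns one slice */
        float *trace = calloc(NSAMP, sizeof(float));

        for (int t = rank * chunk; t < (rank + 1) * chunk; t++) {
            /* read trace t from shared storage (omitted), then apply a
             * simple time-power gain as a stand-in for the real algorithm */
            for (int i = 0; i < NSAMP; i++)
                trace[i] *= (float)i / NSAMP;
        }

        free(trace);
        MPI_Finalize();
        return 0;
    }

Because each server works its own slice of the survey independently, with only occasional message passing, commodity boxes linked by fast Ethernet scale well for this class of problem.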
SAN benefits
The beauty of a SAN lies in its flexibility to allocate resources as requirements change, as well as in its high-speed, block-level access to seismic data for processing. Combine that with the ability to access the same data across a LAN using NFS, and you have efficient data sharing. Add LAN-free and serverless backups over the SAN, and you have a more secure environment.
Best of all, you can create this environment using legacy storage devices (SCSI, Fibre Channel, and SSA arrays) alongside new, inexpensive, high-performance commodity storage products and manage it all from a single console. In this way, you build a storage system that is flexible, easy to manage, and able to grow with amazing speed.
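As a minimal sketch of the data-sharing piece, assume a SAN-attached file server (called sanhost here, with a hypothetical /seismic volume) exporting the data to machines on the LAN:

    # /etc/exports on the SAN-attached server (hypothetical names)
    /seismic    node*.cluster.example.com(ro,sync)

    # on each processing node or interpretation workstation
    mount -t nfs sanhost:/seismic /mnt/seismic

Hosts attached to the SAN still read the volume at block level, while machines on the LAN reach the same data through the NFS mount.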
Historically, raw data-processing technology has always kept pace with the requirements of the seismic industry. Until recently, however, this capability came at a price that restricted it to the major players. Today, Mark III Systems and other IT suppliers are recognizing the significance of providing state-of-the-art compute power to a broader base of users.
By using mainstream hardware and software building blocks and more flexible and innovative acquisition programs, technology providers can offer a new generation of cost-effective systems to a larger segment of today's seismic industry. Ultimately, this approach benefits us all, as the savings come not only from reduced technology investment, but also from lower exploration costs.
Authors
William Rickert is Vice President of Business Development for Mark III Systems. Before joining Mark III, he held senior management positions with PGS Data and Solid Systems.
Bill Cawthon is Director of Communications for Mark III Systems. He began in NMOS applications at Motorola Semiconductor in 1976 and branched out to technical communications two years later.