Y2K offshore: do you have a problem or not?

Multi-function production platforms such as this one in the North Sea have hundreds of microchips aboard to serve automated control and monitoring processes and communications. The transition from the year 1999 to 2000 could substantially impact platform processes.


Pre-1997 microchips could be vulnerable

Harry Woodstrom
Systems Integration Group


A global recession is the worst-case scenario for a dilemma gaining ever more attention: the Year 2000 (Y2K) computer date change problem. Experts estimate that fixing Y2K may cost $600 billion worldwide, and the risk of general business disruption is potentially disastrous. Yet studies show that numerous companies of all sizes have yet to address the issue.

Many believe that if they have new PCs or mainframes, no computer date change problem exists. In fact, microchip technology is the real issue, involving everything from electronic switches to security systems.

For the worldwide offshore oil and gas industry, microchip technology is so pervasive, and on such a massive scale, that comprehensive Year 2000 compliance action should be initiated as soon as possible. For those not yet familiar with Y2K: after December 31, 1999, microprocessors may read the following day's year as "00." That double digit may be electronically translated as either 1900 or simply an invalid date. Either way, the results for business operations can be catastrophic.
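The two-digit ambiguity can be illustrated with a minimal sketch (hypothetical code, not any actual controller firmware): a device that stores only the last two digits of the year and assumes the 1900s computes a negative elapsed time when the year rolls over to "00."

```python
def expand_year(yy: str) -> int:
    """A pre-1997 device typically stored only two year digits and
    assumed the century was 1900 (illustrative assumption)."""
    return 1900 + int(yy)

def elapsed_years(start_yy: str, end_yy: str) -> int:
    """Naive year difference, as a 2-digit-year register would compute it."""
    return expand_year(end_yy) - expand_year(start_yy)

# A maintenance interval that began in "99" and is checked in "00":
print(elapsed_years("99", "00"))  # prints -99: time appears to run backward
```

Any downstream logic that divides by, schedules from, or range-checks that elapsed time inherits the error.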

It is particularly important to consider the snowball effect that noncompliance can have throughout the operational chain. How many of an offshore company's suppliers or customers will have their operations and/or cash flow stop if a timely Y2K solution is not implemented offshore?

Y2K process overview

While debate continues over Y2K's scope and most appropriate response, most information technology (IT) experts agree the problem is real. Solutions vary, but the key is to preserve a company's assets by making sure that electronically "all systems are go" following December 31, 1999. A successful process may proceed as follows:

  • Perform a detailed assessment of the business environment (uses of microchip technology)
  • Analyze the data unearthed in the technical assessment for planning purposes
  • Determine the scope of a company's Y2K project and the mission-critical programs to be fixed
  • Devise an immediate Y2K renovation and testing plan
  • Fix and test applications in an environment that does not affect production.

In short form, the Y2K process is risk discovery, evaluation, planning and problem solving, and solution implementation. Therefore, resolving the Y2K issue is more than a repair project. It is an opportunity for management to discover new value in its IT investments since viable data existing in many legacy applications only needs to be extracted and re-implemented. In turn, that establishes a positive and cost-effective basis for future technology-based decisions.

In "fixing" the Y2K problem, it is critical to avoid tunnel vision. An effective approach addresses enterprise-wide business, application, and technology concerns.

Phase 1

The Y2K process begins with a candid evaluation of risk. To do that, clear steps need to occur. One is risk discovery (inventory subdivided into three areas):

(1) IT applications (both vendor and client-developed, hardware and internal/external data connections)
(2) Supply chain, which involves information and applications not under your control but your suppliers', and which may jeopardize your operations if necessary supplies cannot be produced for ongoing operations (particularly JIT inventories)
(3) An evaluation of chips that contain logic (embedded software).

Embedded software poses the most complex challenge. Since logic chips are rarely obvious - residing throughout plants, PBXs, and a variety of other systems - they must be physically located and a due-diligence search performed to uncover the risk factor.

Phase 2

Evaluation and planning requires the gathering of as much information as possible. That is, if vendors are involved, contact them to see what testing they have done and consolidate the findings with other applicable industry testing. Then determine what other discovery action(s) may be required.

The key is to minimize evaluation costs; otherwise, these costs may rival those of complete remediation. Focus expenditures on the areas with the most potential Y2K business and safety impact. Evaluation, in this context, determines:

  • Whether or not date processing is involved
  • Whether that date processing will have a Y2K problem (determined via direct examination of the source code, interrogation of the vendor, evaluation of research by others, or [worst case] application or chip testing).

A "black box regression test" can be as expensive as doing a final systems integration or acceptance test. And testing is projected to require 50-60% of the total effort of any remediation project. The message is clear: if at all possible, avoid a disproportionate expenditure upfront solely to see if a Y2K problem exists.

Alternative courses are recommended first. Direct inspection of the source code can generally be managed as tool-based. The result feeds into an estimating model that provides a clue to the project's scope and cost. With vendor-supplied equipment and software, interrogate the vendor and other firms' research first.

A caveat: in some cases of embedded chips, testing is not possible. No test lab may exist or be available, or testing may interfere with the item's function. Test labs can be built, but often that is not cost-effective. All this comes full circle to the overriding issue: what is the potential for failure? Additional considerations may include environmental, safety, stockholder, personal/employee liability, and financial exposure - or testing may simply be an inconvenience.

For all the preceding reasons, focus on the failure factor: the probability of failure, its impact, and its cost. The company must then determine the level of risk it can - or cannot - assume, and establish a proactive, cost-effective Y2K solution.

In weighing decisions about individual risk items, options are not in short supply, but some have limited appeal. Risk may be ignored, remediated, triaged, or worked around. At the extreme, the offending hardware or software may be replaced or the company may be forced to cease business operations.

Phase 3

Once an alternative is selected, a managed project addressing that solution should be established. In Phase 3, risk remediation projects may range from monitoring for failures to fixing identified problems. As projects are completed, additional activity must proceed, including developing standards and procedures that roll forward to ensure the problem is not reintroduced (for example, that older versions of hardware or software are not brought back into service).

Significantly, hardware and software which may not be Y2K compliant may continue to be used on an "exception" basis. For example, some labs have hybrid systems, part digital and part analog. In this environment, testing instruments may have the wrong date on a report while the instrument's baseline is not affected in terms of its intended function. Likewise, some items only have a problem when rolling from 1999 to 2000 and calculating dates and times within a limited span.
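For the limited-span case just described, one widely used repair - not named in the article, so treat this as an assumed example - was "windowing": a pivot year decides which century a two-digit year belongs to, which works as long as all dates of interest fall inside a single 100-year window.

```python
PIVOT = 50  # assumed pivot: 00-49 -> 2000s, 50-99 -> 1900s

def window_year(yy: int) -> int:
    """Expand a 2-digit year using a fixed pivot (sliding pivots were
    also used); valid only while real dates stay within the window."""
    return 2000 + yy if yy < PIVOT else 1900 + yy

print(window_year(99), window_year(0))  # 1999 2000
```

The choice of pivot is itself a business decision: it must clear both the oldest legacy date in the data and the furthest future date the application will ever compute.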

So, determine the risk and acceptable risk exposure. Next, establish a plan based on sound business decisions and move forward in executing those plans. The specific solution will drive your company to a decision track with specific actions to take.

Representative project

At the outset of any project, a statement of work and detailed work plan are the first two deliverables. These specify scope, approach to the problem, team structure, tools to be used, and timeline.

First, finalize the upgrade unit assessment plan. If any other assessments are necessary, they should be executed on the project's front end.

Second, a finalized compliance plan, detailing the necessary conditions for compliance, is critical.

Third, execute upgrade unit component expansion. That is, do any items within the solution for this particular upgrade unit need additional detail? For example, situations may exist whereby a plan can be developed and repeated multiple times to address the Y2K issue. Conversely, those solutions may have to be expanded to include all Y2K items at issue.

Fourth, upgrade the application data files and databases as necessary, as well as making any unit changes. The latter may be changes in the source code or replacement of applications/chips. In the case of source code, perform unit remediation and testing, integration testing, and/or compliance testing for Y2K (requiring date-sensitive testing).

The bottom line? In concluding the process, validate that any changes made will meet the compliance standard established earlier and that all the changes jointly meet the company's Y2K compliance plan. That requires testing across applications, across interfaces, and across hardware and software cooperative systems. Within the complexities of compliance, two significant points should be noted:

  • A uniform global compliance standard does not exist. Additionally, within industries, standards vary.
  • Some items are "fixable" and some are not. If not fixable, options are narrow: replace the item(s), regardless of cost, or cease doing business.


What signs indicate a potential Y2K problem on an offshore installation? In fact, any computerized equipment may have the problem if the equipment or its components were manufactured (not purchased) prior to 1997. However, determining the location and extent of Y2K problems is complicated at the component level. For example, although an equipment supplier may state that an item is Y2K compliant, are all the item's components compliant? Are the microchips and service companies compliant?

Offshore, virtually every device contains embedded microchips. Consider whether any have Y2K compliance issues and what occurs if a particular chip fails. Do not assume all chips will fail or that they will all fail simultaneously. Do assume, however, that with such a huge number at risk, some will fail. And do assume, even though you may do everything possible, some will fail anyway.

Overall, it is most important to state specifically what compliance means. To what level, and how, have tests been conducted by manufacturers stating that equipment is Y2K compliant? The findings may not be satisfactory for your particular company. For example, data stored in 4-digit format is no guarantee of compliance. It simply means data has been stored in that format; it may still be truncated to two digits to fit other internal software and hardware instructions.
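That truncation trap can be sketched as follows (function names are hypothetical): storage genuinely keeps four digits, but a legacy interface sized for two-digit fields silently keeps only the last two.

```python
def store_year(year: int) -> str:
    """Storage format genuinely holds four digits."""
    return f"{year:04d}"

def legacy_interface(stored: str) -> str:
    """An internal routine sized for 2-digit fields truncates on the way in."""
    return stored[-2:]

# The year 2000 survives storage but not the legacy interface:
print(store_year(2000), legacy_interface(store_year(2000)))  # 2000 00
```

This is why a vendor's claim of "4-digit dates" must be tested end to end, across every interface the date crosses, not just at the point of storage.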

Other considerations complicate a simplistic conclusion that four digits are the Y2K solution. How is the date represented if there is no date? Zeros, spaces, nines? All have been used in the development of computer programs. Are digits or data with a special meaning being generated? Solutions are based on the individual problem and risk encountered; there is no single solution to fix all Y2K problems.
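The sentinel-date hazard can also be sketched briefly (the specific sentinel values here are illustrative assumptions): when "99" or "00" is reserved to mean "no date," the real years 1999 and 2000 collide with the convention.

```python
NO_DATE_SENTINELS = {"00", "99", "  "}  # zeros, nines, spaces - all used in practice

def is_missing_date(yy: str) -> bool:
    """A record with a sentinel year is treated as having no date at all."""
    return yy in NO_DATE_SENTINELS

# The real year 1999 is indistinguishable from "no date":
print(is_missing_date("99"))  # True
```

A remediation that expands years to four digits but leaves the sentinel convention in place fixes nothing for these records.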

For key decision-makers throughout the offshore oil and gas industry, what is the upshot? See what Y2K plans, if any, your company currently has in place and what action has already been taken.

There may well be a Y2K team at work within the corporate structure that you are not aware of. In any event, if you think the company may have Y2K risk exposure, personally address the actions the company has or has not taken. The solution may be focused on an application, a group of applications centered around a specific data store, or a particular type of hardware or firmware. "Draw circles" around upgrade units with similar risk and make immediate decisions that will take the company seamlessly through December 31, 1999, into the new millennium.


Harry Woodstrom is Vice President of Consulting Services for Systems Integration Group (SIG, Inc.), a Houston-based information technology (IT) outsourcing firm serving Fortune 1000 companies since 1988. Woodstrom has managed numerous Y2K projects for a variety of industries.

Copyright 1998 Oil & Gas Journal. All Rights Reserved.
