The technician's role in offshore safety instrumented systems
Edward Marszal
Kenexis
The offshore oil and gas industry spends vast sums of money on Safety and Environmental Management Systems (SEMS). However, with respect to safety instrumentation, the industry may not be receiving the primary benefits of this investment because a key stakeholder is not involved in the process. Regulators are presented with a mountain of easy-to-access, standardized data, and management can rest assured that once the data has been entered, its compliance obligations are met. The missing stakeholder is the technician who is required to enter that data into the forms; the technician needs to be in the loop.
When an offshore production facility is engineered, the design is based on many assumptions. With safety instrumented systems (SIS), these assumptions include failure rates, failure mode distributions, diagnostic capabilities, and demand rates. One benefit of investing in an elaborate, database-driven SEMS is that it should provide the ability to compare the actual performance of safety and environmental protection equipment against the assumptions made in the design phase. Feeding the actual performance of equipment back into the design process allows equipment selection, configuration, maintenance, and testing to be adjusted to account for actual conditions. This type of feedback loop is not only good engineering practice, it is also required for compliance with the ISA 84 standard for SIS.
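As a concrete illustration, the following is a minimal sketch in Python of how field data might be checked against a design assumption. It is not drawn from any particular SEMS; the device names, failure rates, and the two-times review threshold are assumptions made only for the example.

# Minimal sketch: compare the dangerous failure rate observed in the field
# against the rate assumed during design. All names and numbers are illustrative.

DESIGN_LAMBDA_DU = {                      # assumed dangerous undetected failures per hour
    "pressure transmitter": 2.0e-7,
    "flow switch": 5.0e-7,
}

def observed_rate(failures, device_count, service_hours):
    """Crude point estimate: failures per device-hour of service."""
    return failures / (device_count * service_hours)

def review_needed(device, failures, device_count, service_hours, margin=2.0):
    """Flag a device type whose field rate exceeds the design assumption by a margin."""
    return observed_rate(failures, device_count, service_hours) > margin * DESIGN_LAMBDA_DU[device]

# Example: 40 flow switches, 5 years in service, 9 dangerous failures found on test
if review_needed("flow switch", failures=9, device_count=40, service_hours=5 * 8760):
    print("Field data contradicts the design assumption; revisit selection or test interval.")

A real comparison would need confidence bounds and careful failure mode definitions, but even a rough check like this makes the design assumptions visible to the people who generate the data.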
While this sounds good on paper, it rarely, if ever, happens. This author often hears grumblings at the technician level about being required to use inferior equipment because some "bean counters want to save a nickel." In these same organizations, the central engineering staff has no idea that the equipment item in question has an unacceptably high failure rate. The disconnect is that the failure rate data technicians see in the field never make their way to the corporate engineering staff who make the design decisions. Increased safety, increased reliability, and decreased opex (i.e., maintenance costs) are all possible if the required SEMS are properly exploited rather than treated as a no-value compliance reporting activity.
How does the industry break the cycle of apathy and start to use SEMS as a proactive design tool instead of a compliance reporting burden? It starts with communicating with, and training, the people who interact with SEMS at the most fundamental level: the technicians who log maintenance and testing activities into the system.
To improve the situation, technicians need more training. This training must ensure that technicians are included in the overall process rather than treated as mere data gatherers. It needs to explain not only what data is necessary, but also why it is necessary, what it will be used for, and how it will improve the work environment.
This author recommends a formal training program built on existing resources, such as those of professional societies. Extending the ISA Certified Control Systems Technician (CCST) program with a certificate for SIS testing, with a strong focus on documentation, would greatly benefit the industry. Technician training should include an overview of the ISA 84 safety lifecycle so that technicians understand all of the tasks being performed, which activities they are responsible for, and how those activities affect others in the organization.
After the general overview, a detailed analysis of each general type of instrument the average technician is expected to encounter should be presented. Whereas traditional training explains how equipment operates, this training would be dedicated to the opposite: how it fails. Each of the failure modes possible for each equipment item would be discussed in detail, along with the activities the technician can undertake to detect the presence of each of those failure modes.
Ideally, the classroom lectures would be supplemented by laboratory sessions in which specific, known failure modes are injected into instruments and the students are required to properly diagnose, and document, each failure mode.
This sophisticated level of training about failures is not common in general industry, but it would yield significant benefits. First, technicians would be more inclined to spend time documenting their activities, knowing that the data will be used to improve the facility rather than be filed away somewhere. Second, knowledge of all of the possible failure modes, in a standardized format, will result in more comprehensive testing, because the technician will have a better idea of what to look for instead of blindly following a procedure. Finally, standardized training will result in standardized and more accurate reporting, which is essential if the large volumes of data being gathered are to be combined into summary statistics that can be fed back into the design process.
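What such standardized reporting might look like is sketched below in Python; the record fields, tags, and categories are assumptions for illustration, not any specific SEMS schema.

# Minimal sketch of a standardized failure record and a roll-up into
# summary statistics. Field names, tags, and categories are illustrative assumptions.
from collections import Counter
from dataclasses import dataclass

@dataclass
class FailureRecord:
    tag: str            # instrument tag, e.g., "FSL-1023" (hypothetical)
    device_type: str    # e.g., "flow switch"
    failure_mode: str   # e.g., "stuck primary element"
    detected_by: str    # "proof test", "diagnostics", "operator round", or "demand"
    repair_action: str  # what the technician actually did

records = [
    FailureRecord("FSL-1023", "flow switch", "stuck primary element", "operator round", "freed element"),
    FailureRecord("FSL-1031", "flow switch", "stuck primary element", "proof test", "freed element"),
    FailureRecord("PT-2204", "pressure transmitter", "drifted output", "proof test", "recalibrated"),
]

# Summary statistics per device type and failure mode, ready to hand back to engineering
for (device_type, mode), count in Counter((r.device_type, r.failure_mode) for r in records).items():
    print(f"{device_type}: {mode} x{count}")

The value is not in the particular fields but in their consistency: only records captured the same way at every facility can be rolled up into statistics engineering can act on.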
This author has had the experience of looking foolish in front of a group of technicians because of recommendations made on the basis of insufficient data. I once worked with a chemical plant that used a variety of instruments in SIS service. All of the documentation related to installed equipment, test events, and repair events had been collected and organized. Based on this information, failure rates, safe failure percentages, and diagnostic coverages were calculated for all of the components. The failure characteristics analysis indicated that a certain type of flow switch performed far better than the other flow switches in use at the plant. As a result, this author recommended that all of the switches be replaced with the apparently superior model. The recommendation, delivered in a face-to-face meeting, was met with a good deal of (mostly) polite laughter. Obviously confused, I asked what was so funny. They replied that this particular flow switch was horribly unreliable. All of the plant's technicians and operators knew the switch was a problem, and part of their normal plant walkthrough procedure was to check whether the output of the transmitter showed any movement or had "flatlined." If the output signal showed no variation, the meter's primary element was stuck in position. The problem was then remedied by firmly smacking the transmitter to free the element, after which it would start working again.
This event exemplifies the problem. Operators and technicians know there is a problem, but the information never makes its way to the engineers, who continue to recommend obviously inferior equipment. In this specific case, every time the transmitter got stuck and was smacked back into operation, a failure and a repair occurred, but neither was documented.
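To put rough numbers on the distortion (the figures below are invented for illustration and do not come from the plant in question), consider this short sketch:

# Rough arithmetic sketch with invented numbers: how undocumented
# "smack it back into service" repairs distort the failure rate that is
# later calculated from the maintenance database.

device_count = 20
service_hours = 5 * 8760              # five years of service per flow switch

documented_failures = 3               # what actually made it into the database
undocumented_failures = 60            # stuck elements freed on operator rounds, never logged

lambda_from_records = documented_failures / (device_count * service_hours)
lambda_in_the_field = (documented_failures + undocumented_failures) / (device_count * service_hours)

print(f"Rate engineering calculates: {lambda_from_records:.1e} failures per hour")
print(f"Rate the plant experiences:  {lambda_in_the_field:.1e} failures per hour")

A gap of that magnitude is exactly what turns an obviously bad flow switch into the apparently superior model on paper.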