July 2013, Vol. 240 No. 7

Features

Determining Recalibration Intervals For Ultrasonic Meters At CEESI

Tom Kegel, Colorado Engineering Experiment Station, Inc. (CEESI)

Calibration is based on the comparison of a measuring instrument and a standard. Calibration may include adjustment of the instrument to match the standard. Every measuring instrument requires calibration prior to use because no manufacturer can produce an instrument that measures a parameter, with reasonable uncertainty, without direct calibration.

Pressure Transmitter Example
A pressure transmitter, for example, exposes a diaphragm to an unknown pressure. A sensor detects the resulting diaphragm deflection and produces an electronic signal. The signal is processed into a numerical value stored in a computer. Predicting the diaphragm deflection and sensor response involves a large number of parameters, some of which may be unknown. The only way to account for these parameters is by direct comparison with a pressure standard. Manufacturers and users both recognize the need for initial calibration; calibration always represents the final step in the manufacturing process.

Once a pressure transmitter has been placed in service, the mechanical properties of the diaphragm can change. Repeated pressure and temperature loading of any mechanical structure can change its stiffness, and a change in pressure transmitter diaphragm stiffness will result in a measurement error. The industry recognizes the need for periodic recalibration; API MPMS Chapter 21 recommends pressure transmitter recalibration on a quarterly basis. It is noted that API 21 separately defines calibration and verification processes based on whether a transmitter is adjusted. For the purpose of the present discussion, verification and calibration have the same meaning.

The periodic recalibration of a measuring instrument can be compared to insurance. While we all purchase automobile insurance year after year, some drivers make more claims than others. When a driver makes no claims for a period of time there is a temptation to question the value of the insurance. What am I paying for? We all realize that the insurance will be available if we need it and prudence dictates maintenance of insurance. Often a pressure transmitter is calibrated over and over again but is never observed to change. It is tempting to question the value of the periodic calibration; the answer is that a shift in measurement will be identified if it occurs. Without periodic calibration the shift will likely go undetected.

Uncertainty Consideration
Instrument recalibration decisions are often directly related to measurement uncertainty. The uncertainty of a measurement is an estimate of the range of values that is likely to contain the true value. The statement, “P = 100 +/- 1 psi at a 95% level of confidence” means that the true value of the pressure is estimated to be between 99 and 101 psi. The level of confidence is required due to the statistical nature of uncertainty analyses.
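The structure of such an uncertainty statement can be illustrated with a short calculation. The sketch below uses hypothetical repeated readings and assumes a coverage factor of k = 2, a common approximation for a 95% level of confidence with roughly normal data:

```python
import statistics

# Hypothetical repeated pressure readings (psi) from one transmitter.
readings = [99.4, 100.2, 99.8, 100.5, 99.9, 100.1, 99.7, 100.3]

n = len(readings)
mean = statistics.mean(readings)
stdev = statistics.stdev(readings)   # sample standard deviation

# Approximate 95% expanded uncertainty of the mean using coverage factor k = 2.
k = 2.0
u95 = k * stdev / n ** 0.5

print(f"P = {mean:.1f} +/- {u95:.1f} psi at ~95% confidence")
```

The reported interval (mean plus or minus u95) is then the range estimated to contain the true pressure with roughly 95% confidence.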

A typical measurement process uncertainty analysis includes multiple components. Some components contribute random variations in the process output; a manufacturer may identify these components as repeatability or hysteresis. Other components result in a systematic difference or bias in the process output; the linearity of a transmitter would represent a systematic effect. Some components only contribute uncertainty over a longer time frame; they might be identified as reproducibility, stability, or drift. These long-term effects result in the growth of uncertainty over time. When a process is observed over a short period of time, the long-term effects are always systematic. One view of calibration is the substitution of long-term systematic effects for a smaller systematic effect plus a random effect. The smaller systematic effect is associated with the calibration standard; the random effect is the repeatability of the measurement process.

What Does AGA 9 Tell Us?
The use of ultrasonic meters for the custody transfer of natural gas is standardized by Report Number 9 of the AGA Transmission Measurement Committee. AGA 9 states, “It is a requirement that all custody transfer metering packages be flow calibrated in a flow calibration facility or by a calibration system that is traceable to a recognized national/international standard.” AGA 9 further states, “Calibration adjustment factors shall be applied to eliminate any indicated meter bias error.”

AGA 9 requires a variety of factory tests be completed prior to calibration; several of these tests are relevant in a discussion of recalibration. The dimensions of the meter have a direct impact on the flow measurement uncertainty. As a result the manufacturer will perform various dimensional measurements including multiple internal diameter measurements, transducer spacing and acoustic path lengths and angles.

Another mandatory factory test is called zero flow verification. The meter is filled with pressurized gas, usually nitrogen, and allowed to come to thermal equilibrium. Speed of sound measurements are compared to calculated values, agreement must be within specified limits. This test is typically used by the manufacturer to make final adjustments to software based parameters that correct for small performance variations in transducers, connecting cables and analog electronic circuitry.

Figure 1: Calibration Results for 23 Meters.

AGA 9 requires meter calibration results to meet performance specifications for non-linearity, repeatability, and maximum error. These specifications do not represent the uncertainty associated with a calibrated meter but rather the performance limits reasonably achievable by the zero flow verification testing and dimensional measurements. Calibration results for 23 meters smaller than 12 inches are shown in Figure 1. Each line represents a curve fit of the data from a single meter. Most of the data are within +/- 0.5%; one meter shows a negative bias of nearly 1%. The Figure 1 data are typical. CEESI experience shows that virtually all meters meet or exceed the AGA 9 performance specifications. The result of the ultrasonic meter calibration process is to enter a software correction into the meter that essentially eliminates the variation shown in the graph. The calibration process reduces the measurement uncertainty by as much as 0.5% for most of the meters and by nearly 1% for the single negatively biased meter. The process complies with the AGA 9 requirement to eliminate indicated meter bias error by applying calibration adjustment factors.
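The idea of a calibration adjustment factor can be sketched as follows. The numbers are hypothetical, and real meters often carry more elaborate corrections (for example, piecewise by flow rate); this shows only the basic step of removing an indicated bias:

```python
# Suppose calibration found the meter reads 0.45% high on average
# (hypothetical value, not from the Figure 1 data).
mean_error_pct = 0.45

# Meter factor that removes the indicated bias:
meter_factor = 1.0 / (1.0 + mean_error_pct / 100.0)

indicated_flow = 1000.0  # indicated flow rate, e.g. acf/h (hypothetical)
corrected_flow = indicated_flow * meter_factor
print(f"meter factor = {meter_factor:.5f}, corrected flow = {corrected_flow:.1f}")
```

After the factor is stored in the meter's software, the corrected output no longer carries the bias observed during calibration.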

As we read more of AGA 9 we find the statement, “The decision to perform periodic transfer proving or flow calibration is left to the parties using the ultrasonic meter.” AGA 9 does not recommend or require a specific time interval for periodic recalibration.

Ultrasonic meters feature extensive diagnostic parameters that can indicate a change in meter condition. Examples of diagnostic parameters include the speed of sound, transducer signal-to-noise ratios, and the standard deviation of multiple transit time measurements. Additional diagnostics are developed based on comparing signals from different transducer pairs, or comparing upstream- and downstream-facing transducers. Some meters even include transducer pairs used only for diagnostic purposes. Over the years, technical conferences have included case-study presentations where variations in diagnostic parameters are correlated with adverse meter conditions; one example of many is contained in Reference 2. The industry is in broad agreement that monitoring diagnostic parameters is critical to the maintenance of ultrasonic meter uncertainty.

CEESI Test Programs
CEESI has been actively monitoring and analyzing ultrasonic meter calibration results for a variety of purposes, including assessment of recalibration intervals. The testing and analyses discussed in the current article follow two distinctly different paths. The first test program involves investigating the long-term performance of a single meter; the second involves investigating a range of meters that have been returned to CEESI for recalibration.

There are advantages and disadvantages to each approach. In the first test program, data taken on an ongoing basis result in a smooth continuum of meter error as it changes gradually over time. The conclusions based on the analysis of a single meter, however, may not be applicable to other meters, other sizes, or products from other manufacturers. In the second test program the results are based on multiple meters, but comparing only two data points to identify a measurement shift does not provide a significant body of statistical data on any given meter. Finally, the meters returned to the CEESI lab may not represent typical meters. The results might represent meters exhibiting problems, which could be considered an advantage if these "problem meters" represent the worst case.

First Test Program
The CEESI Iowa calibration facility includes the use of check meters. An ultrasonic check meter is installed in series with a meter under test to monitor consistency of the calibration process. The same data will also indicate any variation in the check meter. While multiple check meters are used at the facility, the test program described in Reference 1 is based on data from a single 24 inch meter. The database consisted of 7,246 data points obtained between August 2000 and May 2006 over the velocity range of 1.5 to 75 ft/s.

Experience calibrating ultrasonic meters shows that the meter error varies with both velocity and time; the analytical process required the separation of these two effects. The approach involved dividing the velocity range into multiple intervals, each containing approximately 100 data points. The data within an interval were obtained at essentially constant velocity, so any observed variation within an interval represents variation over time. The large number of data points within each interval provided for robust statistical analyses.
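The binning-and-fitting approach can be sketched as follows. The data here are synthetic and purely illustrative (a fixed drift rate plus random scatter), not the CEESI check meter database; the sketch shows only the mechanics of isolating the time trend at roughly constant velocity:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic check-meter history (hypothetical): time in years, velocity
# in ft/s, and meter error in percent that drifts slowly with time.
n_points = 7000
t = rng.uniform(0.0, 6.0, n_points)                    # years since first use
v = rng.uniform(1.5, 75.0, n_points)                   # gas velocity, ft/s
err = 0.04 * t + 0.1 * rng.standard_normal(n_points)   # % error, 0.04 %/yr drift

# Sort by velocity and split into bins of ~100 points each, so the data
# within a bin are at essentially constant velocity; any trend inside a
# bin then reflects variation over time, not over velocity.
order = np.argsort(v)
slopes = []
for chunk in np.array_split(order, len(order) // 100):
    slope, intercept = np.polyfit(t[chunk], err[chunk], 1)  # linear fit vs time
    slopes.append(slope)

# Worst-case slope across all velocity bins, extrapolated to five years.
worst = max(abs(s) for s in slopes)
print(f"worst-case slope: {worst:.3f} %/yr -> {5 * worst:.2f}% after five years")
```

With the actual check meter data, the same per-bin fits yield the family of lines shown in Figure 2.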

Figure 2: Variation of Check Meter Error Over Time.

Some of the results are contained in Figure 2. Each line represents a fit to the data corresponding to a constant velocity. Most of these lines represent data obtained over the 20 – 45 ft/sec range; the trends with velocity are very consistent. A similar plot for data outside this velocity range does not indicate the same level of consistency. A "worst-case slope" identifies the maximum observed change in meter error. The selected value is 0.04% per year, which results in a meter error of 0.2% after five years.

Second Test Program
The second test program is described in Reference 3. The database supporting this effort is the result of twelve years of ultrasonic meter calibration history. The database includes 95 recalibration events, recalibration time intervals from less than one year to nine years, meter sizes from 4 inch to 20 inch, and gas velocities between 10 and 100 ft/s. The most common meter sizes were 8 inch (33%) and 12 inch (19%), with fewer 10 inch (14%) and 16 inch (15%) meters. A "recalibration event" represents a meter that was calibrated by CEESI and returned for recalibration. Some meters have been recalibrated more than once; each recalibration is a separate event.

A first look at all the data indicates the mean values are within +/- 0.1%. On average the meters do not exhibit a tendency to read either high or low. Further analysis indicates that 95% of all the data are within +/- 0.8%; a slightly wider variation is observed at higher velocity. Any one meter returned for recalibration is therefore unlikely to have shifted by more than 0.8%.
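The coverage statistic quoted above can be illustrated with a short calculation. The shift values below are randomly generated for illustration only and are not the CEESI database; the sketch shows how a mean and an empirical 95% coverage interval are obtained from a set of recalibration shifts:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical recalibration shifts (%) for 95 recalibration events,
# generated at random purely to illustrate the calculation.
shifts = rng.normal(loc=0.0, scale=0.4, size=95)

mean_shift = shifts.mean()
# Empirical 95% coverage interval: the 2.5th and 97.5th percentiles.
lo, hi = np.percentile(shifts, [2.5, 97.5])

print(f"mean shift = {mean_shift:+.2f}%")
print(f"95% of shifts between {lo:+.2f}% and {hi:+.2f}%")
```

A near-zero mean with a coverage interval of roughly plus or minus 0.8% corresponds to the behavior described above.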

The analysis continues by separating the effects of meter diameter, gas velocity, and recalibration time interval. In general, the recalibration shift decreases with increasing meter size; smaller meters tend to shift more than larger meters. This makes sense because a smaller meter will have a higher uncertainty. The recalibration shift is observed to increase with recalibration time interval. This also makes sense: the longer the wait, the more the uncertainty grows. Finally, the recalibration shift increases at velocities above 75 ft/sec.

Figure 3: Average Recalibration Shift.

The recalibration shifts are presented in graphical form in Figure 3. Each line represents the average value of all the data for a particular line size. For the first five years the data are all consistently within +/- 0.2%; after five years the data begin to indicate larger shifts.

What Is The Recommended Recalibration Time Interval?
The results of the two test programs can be used to determine a recalibration interval based on uncertainty growth over time. The appropriate value for a particular installation depends on many variables. In general, as the recalibration interval increases, the uncertainty will increase. The acceptable magnitude of increased uncertainty will vary from one user to another and therefore the recommended recalibration interval will vary.

The data of Figures 2 and 3 indicate an increase in uncertainty of 0.2% for a recalibration interval of five years. The two fundamentally different test programs yield similar results. In the absence of site specific requirements this combination of uncertainty and time is judged to be reasonable and a five year recalibration interval is recommended.

Author:
Thomas (Tom) Kegel is the senior staff engineer for Colorado Engineering Experiment Station, Incorporated (CEESI). Ph: 970-897-2711.

References
1. Kegel, T.M. and Britton, R.W., "Characterizing Ultrasonic Meter Performance Using A Very Large Database," 26th International North Sea Flow Measurement Workshop, October 21-24, 2008.

2. Lansing, J., "Ultrasonic Meter Condition Based Monitoring – A New And Simplified Solution," AGA Operations Conference, May 2007.

3. Kegel, T.M. and English, S., "A Proposed Ultrasonic Meter Recalibration Interval Tool," 29th International North Sea Flow Measurement Workshop, October 25-27, 2011.
