“Your sample failed.” Three words that can shut you down and cost you dearly. But your laboratory tested the same material and it sailed through, so who is right? Consider the following.
Samples drawn from the same lot are sent to two independent laboratories for verification testing. The PAV DSR results from the first laboratory pass easily while the results from the second fail convincingly. Which result is correct? Which is “acceptable”? According to AASHTO T-315, they both are, as long as the two results differ by less than 40.2% of their mean. For the user/producer/supplier depending on these test results to move product, the data from the second laboratory is surely less “acceptable” than the data from the first. A test whose precision estimate allows differences of 40.2% in a purchase specification significantly reduces the value of the data.
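The d2s% convention is simple enough to spell out. The sketch below (in Python; the function name and sample values are ours, invented for illustration) shows the check: two results are judged consistent when their difference, expressed as a percentage of their mean, stays under the published limit.

```python
# Minimal sketch of the d2s% acceptance check used in AASHTO/ASTM
# precision statements: two results are considered consistent when
# their difference, as a percentage of their mean, does not exceed
# the published d2s% limit. The 40.2% limit is the multilaboratory
# value discussed in the text; the sample results are hypothetical.

def within_d2s_percent(result_a: float, result_b: float, d2s_pct: float) -> bool:
    """Return True if two test results fall within the d2s% limit."""
    mean = (result_a + result_b) / 2.0
    pct_difference = abs(result_a - result_b) / mean * 100.0
    return pct_difference <= d2s_pct

# Hypothetical PAV DSR G*sin(delta) results from two labs, in kPa:
# one passes the 5000 kPa maximum, the other fails it.
lab_1, lab_2 = 4500.0, 6200.0
print(within_d2s_percent(lab_1, lab_2, 40.2))  # True: both "acceptable"
```

Here the two labs differ by about 32% of their mean, so both results are “acceptable” under the precision statement even though one passes the specification and the other fails it.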
The table below illustrates a fictitious scenario in which the acceptable ranges of test results are stretched nearly to the limit. Lab A has performed verification testing on a PG 82-22 and the data looks good, but there is a requirement for independent laboratory testing, so identical samples are sent to two laboratories for round-robin testing. Compared with the data from Lab A, all of the results from Lab B and Lab C are within the d2s% tolerance. This is good, right? While still acceptable according to the various AASHTO precision estimates, the results are far from desirable. Now compare the data between the independent labs: the difference in PAV DSR results is 67%, the flash point either fails convincingly or passes comfortably, and for several reasons (indicated in bold), neither data set meets the requirements for a PG 82-22.
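The arithmetic behind that 67% gap is worth making explicit. Because each d2s% check compares a pair of results against that pair’s own mean, two labs can each agree with a third lab within 40.2% while disagreeing sharply with each other. The sketch below, with invented numbers, reproduces the pattern.

```python
# Hypothetical illustration of the round-robin scenario: Lab B and
# Lab C each agree with Lab A within the 40.2% d2s% limit, yet they
# disagree with each other by roughly 67%. All numbers are invented.

def pct_diff(a: float, b: float) -> float:
    """Difference between two results as a percentage of their mean."""
    return abs(a - b) / ((a + b) / 2.0) * 100.0

lab_a, lab_b, lab_c = 5000.0, 3700.0, 7400.0  # hypothetical PAV DSR, kPa
print(f"A vs B: {pct_diff(lab_a, lab_b):.1f}%")  # ~29.9% (within 40.2)
print(f"A vs C: {pct_diff(lab_a, lab_c):.1f}%")  # ~38.7% (within 40.2)
print(f"B vs C: {pct_diff(lab_b, lab_c):.1f}%")  # ~66.7% (far outside)
```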
Our example is, of course, rather preposterous, but a less extreme version of it occurs routinely. The calculation of a precision and bias statement is only as accurate as the data received from the large group of participating laboratories. There is currently no national certification to standardize best practice. Without one, the 40.2% figure is likely to stay essentially unchanged, and by any standard that is unacceptable.
In the years since the implementation of the PG system, technicians have become familiar with the required tests and have developed in-house methods that are meticulously followed and yield excellent repeatability (within a single laboratory). Reproducibility (between laboratories) has also improved, but not as rapidly as repeatability. This proliferation of individual testing techniques has contributed to the poor multilaboratory precision we sometimes struggle with today. For instance, AASHTO T-313 requires that the BBR specimen be trimmed flush with a hot knife, but it provides no guidance for when the hot knife won’t do the job on a very stiff binder. The test must be completed, so the technician finds a way.
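For readers who want to see the repeatability/reproducibility distinction in numbers, the sketch below estimates both from hypothetical round-robin data, using a standard one-way variance decomposition of the kind applied in interlaboratory precision studies (e.g., ASTM E691); all lab names and values are invented.

```python
# Minimal sketch of estimating repeatability (within-lab) and
# reproducibility (between-lab) standard deviations from round-robin
# replicate data, using a one-way variance decomposition as in
# interlaboratory studies such as ASTM E691. Data are hypothetical.
from statistics import mean, variance

# results[lab] = replicate test results from that lab (hypothetical)
results = {
    "Lab A": [5100.0, 5180.0, 5060.0],
    "Lab B": [4300.0, 4390.0, 4350.0],
    "Lab C": [6150.0, 6240.0, 6090.0],
}

n = len(next(iter(results.values())))        # replicates per lab
lab_means = [mean(r) for r in results.values()]

# Repeatability variance: pooled within-lab variance of replicates.
s_r_sq = mean(variance(r) for r in results.values())

# Between-lab variance component, from the variance of the lab means
# (clamped at zero if the within-lab scatter dominates).
s_L_sq = max(variance(lab_means) - s_r_sq / n, 0.0)

# Reproducibility variance combines both components.
s_R_sq = s_L_sq + s_r_sq

print(f"repeatability s_r   = {s_r_sq ** 0.5:.1f}")
print(f"reproducibility s_R = {s_R_sq ** 0.5:.1f}")
```

With these invented numbers, the reproducibility standard deviation comes out roughly an order of magnitude larger than the repeatability standard deviation, which is exactly the pattern the article describes: tight agreement within each lab, wide scatter between labs.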
It is not practical for a standard test method to cover every scenario. Recently, the Asphalt Institute commissioned Dr. Dave Anderson to create a manual that draws on the practical laboratory expertise of industry leaders. The intent of the Institute’s new manual, Asphalt Binder Testing, is to serve as a link between the published method and the technician, to standardize interpretation and best practice, and to narrow the testing gap. Information on obtaining a copy of the manual is available from the Asphalt Institute.