Don't understand the function of a Control Sample in Empower 3

I have read many posts on control samples but I still don't get it. As I understand it, if you enter a value for a control sample in the Value field of your component editor and then quantify that sample against a standard (say your control sample is labelled C0101 and your standards are S0101 and S0201, with Calibrate set to S01*, S02* and Quantitate set to C01*), you get a recovered value, or Amount. You then compare this Amount to the Calc. Value in the calibration curve, and check the %Deviation in the same view of the calibration curve. Is that right?

If so, what's the difference between that and the "Standard" samples that are entered into the component editor and then compared with the Calc. Value to see the %Deviation? If anyone can explain this, ideally with an example, that would be great. Thanks!

Best Answer

  • MJS
@Empower2018 - See the attached. I've got a screenshot of the sample set method (the template), including the processing instructions and a short description. I use a custom field because the mathematical formula specified in the applicable test method for the percent agreement is slightly different from the built-in %Deviation in Empower. As I mentioned, I am trying to get new methods to re-word criteria and update the mathematical formulae to match what Empower does, to minimize custom fields where I can, but it is a process and will likely only happen on new methods.

Answers

  • Standard-to-standard deviation is a function of area variance between items that are assumed to have exactly the same response factor (the same standard solution, or preps from the same stock, injected multiple times or as part of a curve).
    We tend to use control samples to check for response consistency among similar standards (same prep procedure, different weights). Percent deviation between a control and a standard is a better measure of that. It ensures that your standard has been prepared correctly, on the assumption that you would not make the same weighing error twice in a row.
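As an illustration of the comparison described above (this is a sketch of the usual (found − expected)/expected calculation, not Empower's internal implementation, and the function name and values are mine):

```python
def percent_deviation(recovered_amount, expected_value):
    """Percent deviation of a recovered (quantified) amount from the
    expected concentration entered in the component editor's Value field.
    Illustrative only: the standard (found - expected) / expected * 100 form."""
    return (recovered_amount - expected_value) / expected_value * 100.0

# Hypothetical example: control C0101 prepared at 0.500 mg/mL is
# quantified against the S01*/S02* calibration curve at 0.495 mg/mL.
print(round(percent_deviation(0.495, 0.500), 2))  # prints -1.0
```

A control recovered close to its entered Value (small %Deviation) suggests both preparations are consistent; a large deviation flags a prep or weighing problem.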
  • Hi Dan, so if you wanted to measure that, would you set up the second (check) standard as a control with its own label, e.g. C0101, with its theoretical concentration entered in the Value field of the component editor, and then quantitate that control sample? But which standards would you use for this? That's the part I don't get. Or is it a case of checking the actual concentration against what's entered in "Value"?
  • For my case, most of my methods utilize two working standards from separate stock standards. One is used for the initial suitability, the other as a verification standard. Most of the methods then use bracket calibrations with one of the two once agreement is established. To verify the agreement between the two standard preparations, I'll use the control approach for the suitability and verification standards, since I don't need them to generate a calibration curve, but I do want to be able to enter the stock weight and purity along with the dilution. I then have a custom field which verifies agreement between the two preparations.

    I've had the rare method that uses a linearity from the suitability and then QC brackets among the samples. I can't have the QC brackets affecting the calibration, so those are injected as controls. Empower can then calculate a recovery value for them, based on the QC info (weight, purity, dilution), using that calibration.

    The only real difference between standard and control is that a standard will contribute to a linearity curve when processed and a control will not.  Using a "control" allows you to still enter all the same information to determine the agreement/drift/etc.
  • MJS said it pretty well... That second standard is basically there to check for consistency between response factors, which should reflect consistency in preparation.
  • Thanks guys. So to clarify, MJS: do you inject both standards as controls, and which custom field do you use? Is it the difference between the percent deviations of the two standards, and do you put a value in the Value field of the component editor for both standards? I'm trying to visualise what you do in the sample set and how you are selecting this difference.
  • @Empower2018 Most of my methods use a percent agreement/ratio calculation, a bit of a holdover from years ago when we were using Excel to calculate everything. Mathematically:
    Ratio = (Area VS / Amount VS) × (Amount SS / Area SS)

    SS = final system suitability injection only
    VS = verification standard (second, independent standard prep), single injection

    The criteria applied are usually 0.980-1.020. You could alternatively multiply by 100 to get a percentage instead. The amount value comes from the component editor Value, purity, and dilution, just like with any standard.

    I've been trying to get new methods to use the built-in fields more, but it is a slow process when we have a lot of established methods and are often leveraging older methods to avoid re-development. People don't like changing something that is clearly working, and since they already have a custom field and report set up, they don't see enough benefit to switch from the custom field to a built-in one.
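  • The ratio above is just a comparison of the two preparations' response factors (RF = area / amount). A minimal sketch under that reading (function name and all numbers are hypothetical, not Empower field names):

    ```python
    def agreement_ratio(area_vs, amount_vs, area_ss, amount_ss):
        """Percent-agreement ratio between a verification standard (VS) and the
        final system suitability (SS) injection, per the formula above.
        Equivalent to RF_VS / RF_SS, where RF = area / amount."""
        return (area_vs / amount_vs) * (amount_ss / area_ss)

    # Hypothetical areas and amounts for the two independent standard preps:
    ratio = agreement_ratio(area_vs=102500, amount_vs=0.501,
                            area_ss=101800, amount_ss=0.499)
    print(0.980 <= ratio <= 1.020)  # prints True: within the usual window
    ```

    If the two preps have identical response factors, the ratio is exactly 1.0; the 0.980-1.020 window then bounds the allowed disagreement at ±2%.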
  • Thanks for the extra info, MJS. I get what you are trying to do with the check standard and the system suitability standard, but what I can't understand is how you set up a sample set with these parameters: what labels you use, what exactly you put into the component editor Value for the system suit and control injections, and whether you quantitate with a Label Reference. Do you have a screenshot of a recent method which used the control? And when it was processed, did you look at the %Deviation in the standard curve, or did you use a custom field to compare the amounts between your system suit and controls?
    Something like that would be great, cheers.
  • Thanks very much for that MJS, that filled in the pieces for me nicely as I wanted to see exactly how it was set up to understand the sample set side of it. Thanks again.