Using custom functions with spiked and unspiked sample pairs
Good Morning,
My apologies if this question posts twice, there were a few issues with activating my user profile.
We're running a series of samples where we make two injections each from an "As Is" sample (u), a "Spiked" sample (g), a Reference Standard (r), a Blank (b), and a spiking solution (s). Using a spreadsheet, the averages of the constituent peak areas are used in a simplified standard addition equation:
amt = (600 x 1.5 x (s - b) x (u - b)) / (10.5 x (r - b) x (g - u))

(A small Python sketch of this calculation is included after the sequence table below.) A typical sample sequence would be as follows, with all the samples using 2 injections, the same method set, run times, and injection volumes:
Row | Vial | Sample | Label | Function |
---|---|---|---|---|
1 | 1 | Ref | r | Inject Samples |
2 | 2 | Blank | b | Inject Samples |
3 | 3 | Spike | s | Inject Samples |
4 | 4 | AsIs1 | u001 | Inject Samples |
5 | 5 | Spiked1 | g001 | Inject Samples |
6 | 6 | AsIs2 | u002 | Inject Samples |
7 | 7 | Spiked2 | g002 | Inject Samples |
8 | 8 | AsIs3 | u003 | Inject Samples |
9 | 9 | Spiked3 | g003 | Inject Samples |
... | ... | ... | ... | Inject Samples |
N | ... | ... | ... | Summarize Custom Fields |
N+1 | ... | ... | ... | Report |
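For clarity, here is that calculation written out as a small Python sketch - plain Python rather than Empower syntax, with variable names that just mirror the labels above:

```python
def amount(r, b, s, u, g):
    """Simplified standard addition amount.

    r, b, s, u, g are the averaged constituent peak areas for the
    Reference standard, Blank, Spiking solution, As-Is sample, and
    Spiked sample respectively. 600, 1.5, and 10.5 are the method
    constants from the equation above.
    """
    return (600 * 1.5 * (s - b) * (u - b)) / (10.5 * (r - b) * (g - u))
```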
I can consistently obtain the values for "r", "b", and "s" without any issues. The values for each set of unspiked and spiked samples are driving me crazy. I've tried inserting [Summarize Custom Fields Incrementally] followed by [Report] after row 5, after row 7, after row 9, etc.; however, the amounts just don't calculate correctly - instead, the amount uses the progressive average of the areas from the previous samples. Of course, in the example with the summarize function at the end it's a real mess - "r", "b", and "s" are correct and nothing else is, but I expected that, as this was my starting point based on some other functions I've created.
In the project folder I have the following equations set up:
Name | Type | Formula | Comment |
---|---|---|---|
myNullToZero | Peak | REPLACE(Area,0) | Sometimes there is no response |
myAvgB | Result | b.%.%.AVE(Constituent[myNullToZero]) | - |
myAvgG | Result | g%.%.%.AVE(Constituent[myNullToZero]) | - |
myAvgR | Result | r.%.%.AVE(Constiuent[Area]) | - |
myAvgS | Result | s.%.%.AVE(Constiuent[Area]) | - |
myAvgU | Result | u%.%.%.AVE(Constituent[myNullToZero]) | - |
myAmt | Peak | (600*1.5*(myAvgS-myAvgB)*(myAvgU-myAvgB))/(10.5*(myAvgR-myAvgB)*(myAvgG-myAvgU)) | - |
As I said, myAvgB, myAvgR, and myAvgS calculate correctly; however, when the pairs of spiked and unspiked samples run, things get goofy. In the above example, if I put [Summarize Custom Fields Incrementally] followed by [Report] after row 5, then myAvgU and myAvgG calculate correctly and the myAmt value is correct; however, once u002, g002, and subsequent samples get in the mix, the calculated results for myAvgU and myAvgG are wrong, turning into the progressive average of all the previous injections.
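To make the symptom concrete, here is a plain-Python mock-up (hypothetical areas, not Empower syntax) of what I believe the wildcard label is doing - u%.%.% matches every u-labelled injection seen so far, so the incremental summarize averages across all of them rather than just the current pair:

```python
# Hypothetical (label, constituent area) pairs in injection order
injections = [
    ("u001.1", 1010.0), ("u001.2", 1030.0),
    ("u002.1", 980.0),  ("u002.2", 1000.0),
]

def ave_wildcard(prefix, seen):
    """Mimic u%.%.%.AVE(...) under an incremental summarize: average
    every injection seen so far whose label matches the wildcard."""
    areas = [a for label, a in injections[:seen] if label.startswith(prefix)]
    return sum(areas) / len(areas)

print(ave_wildcard("u", 2))  # 1020.0 -> correct average for the u001 pair
print(ave_wildcard("u", 4))  # 1005.0 -> progressive average of all four injections
```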
I just don't see it being feasible to create a myAvgU1, myAvgG1, myAvgU2, myAvgG2, myAvgU3, myAvgG3, etc. for each of the (un)spiked pairs across some 20 to 100 samples - it's easier in the spreadsheet or LIMS.
Any pointers in the right direction are greatly appreciated!
TYIA
Answers
I'm confused as to how you are calculating the amount for the constituent peak. For that formula you mention (600*1.5...etc.), how are you accounting for the fact that you have multiple "u" samples injected? You are using Result custom fields, and that may be screwing up the order of calculation. A few points which might help:
Your naming convention for the custom fields is affecting the order in which they calculate:
Change myNullToZero to L_NulltoZero, because a custom field beginning with L will kick in before a custom field beginning with a higher letter. Custom fields, and particularly summary custom fields which depend on each other, are calculated in ASCII order, which means capitals get precedence over lowercase letters, so ZZZ is calculated before aaa. For L_NulltoZero, make sure you have the Missing Peak option ticked; then if your constituent peak isn't present or picked up by the processing method, it will be replaced by 0 - but you must have Missing Peak activated.
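As a quick illustration of that ASCII ordering (plain Python, nothing Empower-specific):

```python
# Uppercase letters (ASCII 65-90) sort before lowercase (97-122),
# so a field named L_NulltoZero is evaluated before myNullToZero.
names = ["myNullToZero", "aaa", "ZZZ", "L_NulltoZero"]
print(sorted(names))  # ['L_NulltoZero', 'ZZZ', 'aaa', 'myNullToZero']
```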
I can see why you aren't getting correct amounts for more than one set of spiked and unspiked samples: your formula for amount is referencing all the peaks injected previously, so while one set of samples will work, the next set with labels of U and G etc. is going to include all previous values, not just the values you want. Honestly, I think you need to set up some kind of standard curve to get over this issue and work with labelling and label references. Inject your Ref r as Inject Standards, then all your spiked etc. as usual. Insert a line called Calibrate with Label S0101 for the ref standard; then on the next line Quantitate with the labels U0101, U0102, etc., using U01*, G01*. Then Clear Calibration, inject your next series of U and G but label them U0201 and G0201 etc., with a Quantitate label of U02*, G02*.
Of course, for that to work you need to put your custom field for myAmt as the X Value in the processing method against your constituent peak. Try that and see if it works. I would also specify that myAmt formula a bit more by adding extra brackets to make absolutely sure it's nested correctly:
((600*1.5*((myAvgS-myAvgB))*((myAvgU-myAvgB)))/((10.5*((myAvgR-myAvgB))*((myAvgG-myAvgU)))))
Good Morning Empower2018,

> I'm confused as to how you are calculating the amount for the constituent peak. For that formula you mention (600*1.5...etc.), how are you accounting for the fact that you have multiple "u" samples injected?
Each injection has the same label associated with it; thus, you have something like:
- Sample One, injection One: U001.1.Channel.Constituent[Area]
- Sample One, injection Two: U001.2.Channel.Constituent[Area]

myAvgU (Result), u%.%.%.AVE(Constituent[myNullToZero]), resolves to:

AVE(myNullToZero(U001.1.Channel.Constituent[Area]); myNullToZero(U001.2.Channel.Constituent[Area]))
This works like a charm for the myAvgR, myAvgB, and myAvgS calculations. I use these same calcs in several of my other analytical methods - I didn't know about the [SAME...(field)] syntax when I built them for the other projects, and now they're "validated," so it's not worth the effort to do the revalidation for a simple change.
> Your naming convention for the custom fields is affecting the order in which they calculate:

Let me go back and review this a bit; however, I've not had this be an issue for other custom calculations.

> For L_NulltoZero, make sure you have the Missing Peak option ticked; then if your constituent peak isn't present or picked up by the processing method, it will be replaced by 0

Yes, the "missing peak" option is set to true. I discovered that very early on with another analytical method, as my blanks usually do not have a response. I set up the null-to-zero because even with missing peak set to true there would be a null value generated, which played a bit of havoc with the calculations.

> I can see why you aren't getting correct amounts for more than one set of spiked and unspiked samples...

Which is why I was playing with the [Summarize Custom Fields Incrementally] function - I think I misunderstood the online description of its behavior.

> Honestly, I think you need to set up some kind of standard curve to get over this issue and work with labelling and label references. Inject your Ref r as Inject Standards, then all your spiked etc. as usual....

You may be right about using the calibration curve. I'll have to look at your suggestions here in a little bit to work through the actual application. I'm also not seeing how the spiking solution is implemented in this suggestion; we're using a relative response between the reference solution and the spike solution.

> brackets to make absolutely sure it's nested correctly:....

The Oracle parser will drop the extra parentheses.
I was going to suggest, instead of making a Result CF, making a Peak CF using SAME.%..AVE(CCalRef1[Area]) and throwing your constituent peak into CCalRef1, but if it's a big hassle you don't want to do that. The Summarize Custom Fields Incrementally function will calculate results between these functions but uses all of the injections above the function, with the final Summarize Custom Fields function using all the injections in the sample set for the calculation. I'm not sure how you can get around the fact of multiple samples without using the SAME syntax, though, as otherwise you would have to make a tonne of custom fields to allow for each set of samples.
I wonder, can you change the myAvgU to a Peak custom field, drop the label syntax, and set the Search Order to Result Set Only? That way only each individual sample will be considered on its own, and it won't be including all the previous "U value" or "G value" samples. So inject your r, b, and s as usual and calculate the myAvgR, myAvgB, and myAvgS, so that when Empower is considering the AsIs samples it will calculate myAmt using the previously calculated myAvgS, myAvgB, and myAvgR, and for myAvgU the CURRENT line or sample will be used, calculated, and stored before moving on to the next sample. For this to work, you need to insert one Summarize Custom Fields function at the very end of the sample set so that Empower will scroll back up to the top of the sample set and calculate the results.
The only issue I can see is that in your formula for amount, myAvgG won't be calculated for the AsIs sample yet, but I think what will happen is that when the Summarize Custom Fields function is activated, all the relevant myAvgU and myAvgG values for each set of AsIs and Spiked samples will be calculated, stored, and used in the correct sequence. Again, I haven't verified this.
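In case it helps to see the intended behaviour spelled out, here is a plain-Python sketch of the per-pair calculation we are trying to coax out of Empower - hypothetical areas, not Empower syntax, with each AsIs/Spiked pair averaged on its own while the shared r, b, and s averages are reused:

```python
# Hypothetical averaged areas for the shared r, b, and s injections
avg_r, avg_b, avg_s = 2000.0, 5.0, 1500.0

# Hypothetical per-injection constituent areas, grouped per sample pair
pairs = {
    "001": {"u": [1010.0, 1030.0], "g": [1900.0, 1920.0]},
    "002": {"u": [980.0, 1000.0],  "g": [1850.0, 1870.0]},
}

def ave(areas):
    return sum(areas) / len(areas)

for sample_id, areas in sorted(pairs.items()):
    avg_u = ave(areas["u"])  # this pair's AsIs injections only
    avg_g = ave(areas["g"])  # this pair's Spiked injections only
    amt = (600 * 1.5 * (avg_s - avg_b) * (avg_u - avg_b)) / (
        10.5 * (avg_r - avg_b) * (avg_g - avg_u)
    )
    print(f"sample {sample_id}: amt = {amt:.2f}")
```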