Result Set First when using a summary function

I created a peak, real custom field (CF) defined as ZZZ.%..AVE(Area), and I had one sample with 2 injections which was labelled ZZZ. The search order was set to Result Set First. I processed the sample set, and the result set gave me the correct average of the 2 injections.


I processed the sample set again and got the exact same result for the CF, i.e. total area/4 injections rather than total area/2 injections as in the first result set (the same value, since the reprocessed injections have identical areas). Then I added ZZZ to another sample in the sample set via Alter Sample, so when I processed this run I expected to get the total area of all the results labelled ZZZ divided by 5 (i.e. 5 total injections with the ZZZ label), but I didn't get this; I got a value that was off by a few thousand. Even with the full 14 decimal places of precision, I still don't get the same value. Am I misunderstanding how Result Set First works?
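
For reference, here is the arithmetic I had in mind, written out as a small Python sketch (the areas are made up, and this is only my assumption about how the injections get counted, not how Empower actually works):

```python
# My assumed arithmetic, not Empower's actual algorithm.
# One ZZZ-labelled sample with two injections (made-up areas).
areas = [100000.0, 100200.0]

# Result set 1: AVE(Area) over the 2 injections -> the correct average.
rs1 = sum(areas) / len(areas)

# Result set 2: if Empower now also counts the 2 results from result set 1,
# it averages 4 identical injections, which coincidentally gives the same value.
rs2 = sum(areas * 2) / (len(areas) * 2)

# After labelling one more sample ZZZ (say its area is 150000), I expected the
# total area of all 5 ZZZ-labelled injections divided by 5.
extra_area = 150000.0
expected_rs3 = (sum(areas * 2) + extra_area) / 5

print(rs1, rs2, expected_rs3)   # rs1 == rs2; expected_rs3 is the value I thought I'd get
```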


I presumed that with this Search Order, Empower would calculate the average area of all results labelled ZZZ first, then apply this value to all the other ZZZ results in the project (in other words, across the 2 result sets), but that didn't give me the answer I expected. Any ideas? Thanks!

Answers

  • I've noticed similar behaviour whenever I've come across similarly coded custom fields, though, to be honest, I've never taken the time to sit down and work out what the calculation is actually doing.

    This is what I have always suspected: 

    Is it possible a weighted average is being used? Using your examples above:

    1. Processing 1: correct average 

    2. Processing 2: coincidentally correct average: (Processing 1 AVE + Processing 2 AVE) / 2.

    3. Processing 3: (Processing 1 AVE + Processing 2 AVE + Processing 3 AVE) / 3.

    Most of the time, when this strikes the group I work for, we are never able to locate the first instance to determine where Empower is pulling the data from. Based upon what I have been able to piece together, the above is what I have assumed.

    I also assume that you adding a random ZZZ label to an additional sample for processing 3 means the added sample has a very different peak area (or whatever), which is why your average is being weighted thousands off; a rough sketch of that guess follows.
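
    This is only a guess at the logic, not anything from Waters documentation; the areas below are made up, and the Python is just to make the arithmetic explicit:

    ```python
    # Guess at an "average of per-processing averages" behaviour; not Empower's
    # documented algorithm. Each processing stores its own AVE(Area) for the ZZZ
    # label, and the next processing averages every stored value it can find.
    stored_averages = []

    def process(injection_areas):
        """Return the CF value a processing would report under this guess."""
        this_ave = sum(injection_areas) / len(injection_areas)
        stored_averages.append(this_ave)
        # Weighted toward whatever the earlier result sets already contain.
        return sum(stored_averages) / len(stored_averages)

    two_injections = [100000.0, 100200.0]         # made-up areas
    print(process(two_injections))                # processing 1: correct average
    print(process(two_injections))                # processing 2: (AVE1 + AVE2) / 2, same number here
    print(process(two_injections + [150000.0]))   # processing 3: pulled off by the extra ZZZ sample
    ```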


  • Hi Shaunwat, I spent a bit more time on this but I still get odd answers after the 3rd or 4th result set. I tried a new project and kept ZZZ on a sample with 2 injections. Processed once, it gave the right average; processed twice, it still gave the correct value (it seems to do a weighted average as you suggest, i.e. (value from Result Set #1 + value from Result Set #2) / 2); and even when I processed it a 3rd time, it returned the correct value ((value from RS#1 + value from RS#2 + value from RS#3) / 3).

    When I processed it a 4th time, things went strange: I was getting a value 3000 more than I would have expected. I tried with 14 decimal places but still got nowhere near it. I'm not sure how Empower behaves when you use this search order. I don't use it much, to be fair; I think most labs use the Result Set Only option, but it's odd that I can't find what logic Empower is using, in case this search order is ever required.
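
    For what it's worth, if it really were just a running average of the stored result set values, reprocessing the same two injections should return the same number no matter how many times it runs, so the jump on the 4th processing doesn't fit that idea either. A quick check with made-up areas:

    ```python
    # Quick sanity check: a plain running average of identical per-processing
    # averages never drifts, so the jump on the 4th processing isn't explained
    # by that model alone.
    areas = [100000.0, 100200.0]   # made-up injection areas, unchanged between processings
    stored = []
    for processing in range(1, 5):
        stored.append(sum(areas) / len(areas))   # this processing's own AVE(Area)
        cf_value = sum(stored) / len(stored)     # average of everything stored so far
        print(processing, cf_value)              # identical every time under this model
    ```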

  • Yes, as I mentioned, I've never been able to put a finger on exactly what is going on. It would also seem possible that Empower searches the project for additional data until that search consumes too much memory, and then spits out an average with whatever it has found.

    Getting different results every time I try to figure out what is going on has also been a common experience.

    As you mention, I generally limit custom fields to Result Set Only as well. If I need calculations that extend beyond one sample set (Content Uniformity, USP Dissolution, etc.), then my instruction to the end-user lab is to create a process-only sample set that includes all of the sample sets they need. Again, this lets me wash my hands of whatever a Result Set First search order is actually doing and also lets me keep setting custom fields to Result Set Only.