I only want a custom field to calculate for one particular label.

I have an intersample summary custom field that calculates the sum of all areas in a sample with the label R1, but I don't want the custom field populated for any other injection. For this, can I just add something like: R1.%..SUM(Area)+NEQI(Label,"R1")*-1*50000? Or maybe multiply the custom field formula by a boolean expression, like R1.%..SUM(Area)*EQ(Label,"R1")? I've noticed a lot of recent articles on custom fields seem to do this, e.g. for peak calculations something like R1.%..AVE(Area)*EQ(Name,"Caffeine").
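To make the gating idea concrete, here is a minimal Python stand-in for the expressions above (the eq/neqi names mirror the Empower EQ/NEQI functions; the areas and labels are made up for illustration, and this is not Empower syntax):

```python
# Python stand-in for the Empower boolean gating in the formulas above
# (illustrative only; labels and areas are invented).

def eq(a, b):
    """Mirrors EQ: 1 when the values match, 0 otherwise."""
    return 1 if a == b else 0

def neqi(a, b):
    """Mirrors NEQI: 1 when the values differ, 0 when they match."""
    return 1 if a != b else 0

areas = [120.0, 80.0]  # areas that R1.%..SUM(Area) would sum (made up)

# Multiplying by EQ(Label, "R1") zeroes the result for any other label:
result_r1 = sum(areas) * eq("R1", "R1")     # label matches -> 200.0
result_other = sum(areas) * eq("R2", "R1")  # label differs -> 0.0

# The NEQI(...) * -1 * 50000 variant instead pushes non-matching results
# far negative so they are obviously invalid:
offset_other = sum(areas) + neqi("R2", "R1") * -1 * 50000  # -> -49800.0
```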

Best Answers

  • MJS
    Accepted Answer
    Based on your initial question: you can certainly return a "blank" with the boolean test * -50000 approach. That is likely the simplest way to move forward without reporting extra/incorrect values for any injection not properly labeled R1.

    Your second message makes it seem like you are partly concerned with the raw data space used by the custom field. One thing to clarify: results from custom fields don't impact raw data space; they impact project table space. Empower will attempt to calculate all custom fields for every injection (Stds Only vs. Stds and Controls, etc.) and peak type (Found, Found and Groups, etc.) that matches the filter criteria, so constraining those as much as possible is the only way to reduce the table space consumed by the custom field. You may be able to reduce the processing effort a little depending on the constraints and calculation, but that is likely minimal in any practical setting.

    If processing time is important, I've played out a few scenarios. Based on your example calculation, and assuming you've already minimized the field criteria as much as possible (Unknowns only, really, since you want all peaks), your options are fairly limited. I'll also assume it is a single injection; if so, I see two things you could do that might provide a slight improvement in speed, but both would likely be negligible, and both would still consume a small amount of table space, which is unavoidable.

    1) You could eliminate the intersample portion and approach it as a basic summary calculation with the NEQI(Label,"R1")*-50000 approach. If you leave it as an intersample field, Empower makes the calculation by looking back up through the sample set to identify and calculate the result for each R1 label, and only after generating the sum does it execute the boolean test and generate a final result. Removing the intersample nature could speed things up a little, since Empower doesn't have to search previous injections for the values making up the sum. The improvement is likely larger if R1 is early and there are many injections after it.

    2) In conjunction with #1, you could create a BOOL field first: test the label, and if it fails to match, return blank; if the label matches, return the calculation. This may ultimately be slightly faster, as it won't attempt to sum any areas unless there is a label match.
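The difference between the two options can be sketched in Python (a rough model of evaluation order only; function names and data are illustrative, and Empower's actual engine is not exposed this way):

```python
def sum_then_gate(label, areas, target="R1"):
    # Models the original intersample approach: the sum is computed
    # for every injection, and the boolean test is applied afterward.
    total = sum(areas)  # work done even when the label differs
    return total if label == target else None  # None stands in for blank

def gate_then_sum(label, areas, target="R1"):
    # Models option 2: a BOOL-style test runs first, so no summing
    # happens unless the label matches.
    if label != target:
        return None
    return sum(areas)
```

Both return the same results; the second simply skips the summing work for non-matching injections, which is where the (small) speed gain would come from.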

    Again, I don't imagine any further changes to the field would save any appreciable amount of processing time and I don't see any way to avoid consuming a little table space.
  • MJS
    Accepted Answer
    Empower creates a .dat file for each injection; those files are stored in a folder on the system/server. Processing the injections does not alter the .dat file (the create/modify date and size never change), and the integrity verification during project backup relies on this.

    Spectral data can create quite massive files quickly. Based on other messages, you don't seem able to influence methods often before they get to you, but anything you can do to reduce the sampling rate or increase the resolution value will decrease file size. Depending on the instrument method/instruments, you can get a preview of the data usage per minute: a simple method from 200-300 nm, sampling rate 1, resolution 1.2 generates a data rate of 1.2 MB/hr; moving the resolution to 2.4 cuts that in half. A single-wavelength method runs about 0.01 MB/hr. One of the groups on my system has a UPLC method from 190 to 500 nm at 20 pts/sec, resolution 1.2; a 6-minute injection generates a 14 MB file (140 MB/hr!), so a "simple" run of about 50 injections takes a whopping 700 MB. It adds up fast!
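The arithmetic behind those figures is easy to check; a quick worked confirmation of the numbers quoted above (all values taken from the post, nothing new assumed):

```python
# Sanity check of the data-rate figures quoted above.
mb_per_injection = 14.0    # 6-minute PDA injection, 190-500 nm, 20 pts/sec
injection_minutes = 6.0
injections_per_run = 50

mb_per_hour = mb_per_injection * (60.0 / injection_minutes)  # -> 140.0 MB/hr
run_total_mb = injections_per_run * mb_per_injection         # -> 700.0 MB per run
```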

Answers

  • or you could just filter by label in your report method...
  • Hi Dan, yes I could, but I'm thinking more about trying to reduce raw data space in the project. Intersample summary custom fields can add up to a lot of space if not managed correctly, and I want to limit the calculation to just one sample.

    In review of the result set there will only be one line where the custom field is populated. 
  • Thanks very much for the reply, MJS. I thought that raw data space would increase due to the background calculations of many custom fields, including intersample summary custom fields. I have a project with about 15 complex custom fields operating on PDA runs on derived channels, and the raw data in that project is very high; perhaps that is due to the sampling rate and all the spectrum/PDA data connected to those runs, and not the calculations?

    Thanks again for answer. 