
Nth Record


djusmin

Recommended Posts

Can anyone explain why GetNthRecord stops calculating an unstored value at record number 250?

 

I was trying to use this to replace a summary field that updates 300,000 records all the time in a layout. The calculation field is as follows:

Case (
Get ( RecordNumber ) = 1 ; GetNthRecord ( num ; Get ( RecordNumber ) ) ;
num + GetNthRecord ( SumStore ; Get ( RecordNumber ) - 1 )
)

 

Thanks!

 

Oh Really!


Hello djusmin,

I'm not aware of any published documentation which specifically talks about the limit you have encountered.

 

However, consider the implications for FileMaker's calculation engine of what you are trying to do. An unstored calculation must be evaluated fresh each time it is referenced. So (for instance) to calculate the value of your SumStore calculation on record number 250, FileMaker has to evaluate iteratively through the dependencies spanning back across the preceding 249 records to retrieve the first value for num, then work its way back down the stack calculating and adding successive values. But for each of these successive values (since each must be independently evaluated) it must return up the stack to the top and work its way back down.

 

If we pause to think about what that means, even factoring in only the downward calculation path (not the initial stack build as FileMaker works back through the dependencies), we get the following:

 

to evaluate the calc on record 1 there is 1 step

to evaluate the calc on record 2 there are 2 steps

to evaluate on record 3 there are 4 steps (1+2+the current calc)

to evaluate on record 4 there are 8 steps (1+2+4+the current calc)

to evaluate on record 5 there are 16 steps (1+2+4+8+the current calc)

 

I think you can see where this is going. By the time FileMaker is calculating the 250th record, it has to first recalculate each of the preceding 249 records; but to recalculate each one of them it must also (separately) recalculate each of *its* preceding records, because none of the intermediary values are stored. By the 250th record, therefore, the total number of dependencies that must be resolved to return a result has become exceedingly large!
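As a rough sketch of that arithmetic (written in Python rather than FileMaker syntax, purely to illustrate the argument above; the recursive function is a model of the reasoning, not of FileMaker's internals):

def steps(record_number):
    # Evaluation steps needed for the calc on a given record, assuming
    # every preceding record's unstored value must be re-evaluated.
    if record_number == 1:
        return 1
    # Re-evaluate all preceding records, plus the current calc itself.
    return sum(steps(r) for r in range(1, record_number)) + 1

# steps(1) == 1, steps(2) == 2, steps(3) == 4, steps(4) == 8, steps(5) == 16
# In general steps(n) == 2 ** (n - 1), so by record 250 the count is on
# the order of 2 ** 249: astronomically large.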

 

It's reasonable to suppose that the engineers at FMI have placed an arbitrary upper limit on the number of function calls that can be evaluated to return any one value, and that in the above scenario that limit is exceeded after 250 records (small wonder). If they had not done this, the probability is that FileMaker would become unresponsive or run out of memory (perhaps both), with possibly unfortunate consequences for the file or any other files open at the time.

 

What that points to is a need to reconsider what you are trying to do and find another - more appropriate - way to get there. For instance, on the face of it, what you are trying to do appears to produce exactly the same outcome as you would get by creating a Summary field defined as a Total of num, with the Running Balance option enabled.
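For illustration, here is a minimal sketch (in Python, with made-up values rather than FileMaker objects) of what a Total of num with a running balance amounts to: each record carries the cumulative total of num up to and including that record.

nums = [5, 3, 7, 2]          # the num value on records 1..4

running_balance = []
total = 0
for value in nums:
    total += value
    running_balance.append(total)

print(running_balance)       # [5, 8, 15, 17]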

 

Summary fields are also subject to limits and performance penalties when the number of records becomes large, but a running balance will compute over many times more than 250 records before the CPU load becomes such that the process slows down.

 

I suggest that you try the summary field approach and see if that will suit your current purposes. If not, you might like to clarify what it is you are trying to do and why, and then perhaps other suggestions will be forthcoming.


Thanks Ray. I'm just trying to see if I can use GetNthRecord to create a summary field that has to summarize >500,000 records within a found set. It takes a long time to summarize 500,000 records. Recursion has a limit of 10,000 calls. Is there any way to bypass the long summarization? Scripting is too slow, and summing a value list for a given found set is slow too. Perhaps it simply does take that long to summarize 500,000 records, even on a G5 machine.


...a summary field that has to summarize >500,000 records within a found set. It takes a long time to summarize 500,000 records...

Hi djusmin,

I think you would do well to take a step back and consider alternatives.

 

Procedures which dynamically calculate results are well suited to compact solutions with moderate numbers of records. But asking any CPU to perform operations across half a million records every time anything changes is bound to be grossly inefficient.

 

Systems which are required to maintain summative data across source value sets of that size generally do so by using a transactional model. That is, the summary data is computed once and stored; then any change, addition or deletion to the values that form the source for the summary is captured, and an adjustment is made to the stored value to bring it into line with the change.

 

In this process model, when any change is made, the CPU has to perform 2 operations (the change itself and an adjustment to the stored summary value), which needless to say is much faster than the 500,001 operations that would be required for every change when using a live calculation model.
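Here is a minimal sketch of that transactional model (in Python, with illustrative names only; it shows the general idea, not a FileMaker implementation):

class StoredTotal:
    def __init__(self, values):
        self.values = list(values)
        self.total = sum(self.values)    # computed once, then maintained

    def add(self, value):
        self.values.append(value)
        self.total += value              # one adjustment, not a full re-sum

    def update(self, index, new_value):
        self.total += new_value - self.values[index]
        self.values[index] = new_value

    def delete(self, index):
        self.total -= self.values.pop(index)

records = StoredTotal([5, 3, 7, 2])
records.add(10)
records.update(0, 6)
print(records.total)                     # 28, i.e. 6 + 3 + 7 + 2 + 10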

 

It's a whole different way of thinking and of working - and I am not about to try to summarize all the ins and outs here (way too much for a forum post!). However, suffice to say that trying to apply dynamic calc and summary techniques across a database of half a million records or more is rather like building a skyscraper on the foundations of a small cottage...


Hi Ray,

I tried that approach as well, but if my first found set is 500,000 records and the summary value is stored somewhere, what about my second found set of 300,000 records? The stored number is no longer applicable to the summary for the 300,000 records. I am not talking about a static summary field; rather, is there a much quicker way of getting a summary field populated across multiple subsequent found sets of records?


...is there a much quicker way of getting a summary field populated across multiple subsequent found sets of records?

If you need to do this and use a transaction-based approach, then that would mean storing and maintaining a series of summary values which correspond to the different sets of search criteria that will be used to isolate the various groups of records.

 

Even if there are dozens of criteria that you'll need to maintain, it will still be faster than re-computing a half-million values on the fly.
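As a rough sketch of the idea (in Python; the criteria keys and the helper function are hypothetical, purely to illustrate keeping one stored total per predefined criteria set):

from collections import defaultdict

totals = defaultdict(float)    # one stored total per criteria key

def criteria_keys(record):
    # Return the predefined criteria groups this record belongs to.
    keys = ["all"]
    if record["region"] == "EU":
        keys.append("region:EU")
    if record["year"] == 2006:
        keys.append("year:2006")
    return keys

def add_record(record):
    # Adjust every affected stored total instead of re-summing the group.
    for key in criteria_keys(record):
        totals[key] += record["num"]

add_record({"region": "EU", "year": 2006, "num": 7})
add_record({"region": "US", "year": 2006, "num": 3})
print(totals["all"], totals["year:2006"], totals["region:EU"])   # 10.0 10.0 7.0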

 

However, this approach does depend (among other things) on your being able to predict the various criteria that will be used to filter different groups of records into the found set. If it is the case that you need to provide summary data on totally open-ended criteria, then you may have little choice but to compute the results across massive record sets. In that case, you may wish to experiment with optimization to get the best performance possible, but there is no way even a fast/powerful system is going to perform live calculations of that magnitude without perceptible delay.

