
List Unique Values in a Found Set


millennium

I've seen and attempted to apply the countless approaches to finding the unique values in a field or counting the unique values in a found set, but I can't seem to get as far as listing all of the unique values of a found set :grumpy:.

 

In my Clients table, I have a home address with a state. If I search the clients table for a subset of clients, I'd like to see what states are represented.

 

i.e. If I have 10,000 clients total, but I search for one-armed clients and find 7512 (hey, I have a propensity to serve limb-challenged individuals), I'd like to know that they live in only 47 of the 50 States so that I know where to target my advertising for one-armed clients.

 

My ideal solution would be a calculated field that results in the list of states in a text field: no scripts, no extra tables; okay, if I must, a value list.

 

Any ideas? Am I just stupidly missing something?


Oh, how I wish FileMaker would change List() to work on a field in the local table to get a list of the values in the found set. But for now, it doesn't work that way.

 

Instead we have to rely on either a scripted method to get the found set into a list, or a recursive custom function to traverse the record set with GetNthRecord().

 

And while we all want the simplicity of the custom function, it's probably not the fastest. In my testing a few years ago, the recursive CF was considerably slower than the right script.

 

Which scripted method to use, you may ask. You could use a loop to traverse the found set and build the list (if the found set is sorted by the field in question, the traversal can be optimized using subsummary counts, and the duplicates can be omitted right there and then). This isn't a bad method for a few thousand records, and it doesn't require a lot of overhead: maybe one or two extra fields.
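For the record, a minimal sketch of that loop as script steps (the layout, field, and global names here are placeholders, not from anyone's actual file):

Go to Layout [ "Clients" ]
Go to Record/Request/Page [ First ]
Set Variable [ $list ; Value: "" ]
Loop
  If [ IsEmpty ( FilterValues ( Clients::State ; $list ) ) ]
    Set Variable [ $list ; Value: List ( $list ; Clients::State ) ]
  End If
  Go to Record/Request/Page [ Next ; Exit after last ]
End Loop
Set Field [ Clients::gStateList ; $list ]

FilterValues() does the duplicate check here, so the result list comes out unique without a separate cleanup pass; sorting first just makes a subsummary-optimized variant possible.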

 

The method I like uses the old Copy All Records command on a dedicated layout with only the target field, then Paste into a globally stored field. Sounds kinda ugly, but it's FAST. This doesn't filter out duplicates, though, so you still need to do that in another step. My suggested method would be to do the Copy All Records on the record ID and paste that into a global text field. Then define a self-join relationship matching that global field to the record ID in the table, define a value list based on a related field (the SelfJoin::targetField), and use ValueListItems() to get the unique list of values from the related target field.

 

Sounds convoluted, but it works very fast. If you're worried about losing the clipboard, that can be preserved by setting it to another global while the script is run.
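In outline, the ID-based variant might run like this (layout, field, and value list names are placeholders; the one-time setup is a self-join matching the global to the record ID, plus a value list "FoundStates" showing SelfJoin::State, related values only):

# Optional: preserve the user's clipboard in a spare global first
Paste [ Select ; Clients::gClipboardSaver ]
Go to Layout [ "IDs Only" ]
# That layout contains just Clients::RecordID
Copy All Records/Requests
Go to Layout [ original layout ]
# The global must be on the layout for Paste to work
Paste [ Select ; Clients::gIDList ]
Set Field [ Clients::gStateList ; ValueListItems ( Get ( FileName ) ; "FoundStates" ) ]

Because the value list draws from the related set through the self-join, the duplicates drop out on their own.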


Yeah, I have a script running that will do what I want, but I can't trigger the script consistently (this is a human problem, not a technological one).

 

I thought perhaps I had found the answer in the GetNthRecSetUnq function, but I can't even add it, as there is an error in the declaration of startParam. FileMaker says, "The specified parameter cannot be found." regarding recordNumStart. A comment mentions this, but no fix. Anyone have any ideas on how to get this custom function working? Details are below:

 

// GetNthRecSetUnq ( fieldName ; recordNumStart ; recordNumEnd )
// Note: all three names above must be declared as parameters of the
// custom function itself, or FileMaker reports "The specified parameter
// cannot be found." for recordNumStart.
// Assumes the set is sorted by fieldName, since only adjacent values
// are compared when dropping duplicates.

Let (
[
// ------ BEGIN VARIABLE DECLARATIONS -----
startParam = recordNumStart ;
start = Case ( startParam ; startParam ; 1 ) ; // default to record 1
end = recordNumEnd ;
thisVal = Case ( // this limits "?" result on empty related records
    IsValid ( GetNthRecord ( fieldName ; start ) ) ;
    // this is the result
    GetNthRecord ( fieldName ; start )
) ; // END CASE
nextStart = start + 1
] ;

// ----------- FUNCTION RESULT -----------

If ( // this limits duplicate entries
PatternCount ( thisVal ; GetNthRecord ( fieldName ; nextStart ) ) = 0 ;

// RESULT 1: the next value is different, so keep this one
thisVal & Case (
    // TEST 1: the next value exists to use
    IsValid ( GetNthRecord ( fieldName ; nextStart ) ) and
    // TEST 2: either no end value given, or the next value is within the end value
    ( end = "" or end = 0 or nextStart ≤ end ) ;
    // RESULT: return character & recurse again
    "¶" & GetNthRecSetUnq ( fieldName ; nextStart ; end )
) ; // END CASE

// RESULT 2: the next value matches this one, so skip this one
// and let the recursion emit it once
GetNthRecSetUnq ( fieldName ; nextStart ; end )
) // END IF

) // END LET


For anyone interested, there are at least four solutions (and more I haven't explored) for a calculated field that lists unique entries. Each relies on a set of custom functions:

 

CLIENTS::Unique_States, Calculation, Unstored, from CLIENTS, = ListOfUniq ( ListValues ( State ) )

or

CLIENTS::Unique_States, Calculation, Unstored, from CLIENTS, = ListOfUniq ( GetNthRecordSet ( State ; "" ; "" ) )

or

CLIENTS::Unique_States, Calculation, Unstored, from CLIENTS, = UniqueList ( ListValues ( State ) )

or

CLIENTS::Unique_States, Calculation, Unstored, from CLIENTS, = UniqueList ( GetNthRecordSet ( State ; "" ; "" ) )

 

Each of these will create a list of unique states for the found set. My question to those of you who might know is which of the custom functions is better and why?

 

ListValues http://www.briandunning.com/cf/1323

or

GetNthRecordSet http://www.briandunning.com/cf/439

or

UniqueList http://www.briandunning.com/cf/789 (note: requires CustomList http://www.briandunning.com/cf/868)

 

I could probably also use GetFoundSet http://www.briandunning.com/cf/1093, or XORvalues http://www.briandunning.com/cf/39 but haven't tested that yet.


I haven't used any of those CFs, but I'd encourage you to test them. Brian Dunning has some sample data on his site too; you can use that to give you sets to pull from. Then build simple scripts that post start and end times as each method fills a global field with the list of record data. Try it with small sets of a few hundred records, all the way up to a hundred thousand.
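A bare-bones timing harness for that might be (the script name is hypothetical):

Set Variable [ $start ; Value: Get ( CurrentTimestamp ) ]
Perform Script [ "Build Unique List" ]
Set Variable [ $elapsed ; Value: Get ( CurrentTimestamp ) - $start ]
Show Custom Dialog [ "Result" ; $elapsed & " seconds" ]

Timestamps subtract to whole seconds, so for sub-second differences run each method in a loop of, say, 100 iterations and divide.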

 

One of the problems you may run into with recursive custom functions is the recursion limit: about 10,000 calls for a standard recursive function, or 50,000 if the function is written tail-recursively.

 

Compare times, and note any differences in results. Also note whether the records must be sorted first (sorting within a CF takes longer than FileMaker's native sorting, and many duplicate-removal methods involve sorting).

 

I suspect UniqueList will be your winner from those choices above, as Ugo and Agnes are pretty sharp (although it appears to have a limit.) I don't know the other authors.

 

Let us know what you find!


  • 1 year later...
...The method I like uses the old Copy All Records command on a dedicated layout with only the target field, then Paste to a globally stored field. Sounds kinda ugly, but it's FAST...

 

I love you. This is simple and elegant, and thank you. I had the same need as millennium, and one layout with two fields and three script steps later, I have a list of all the PKs from the found records on that table. Amazing!


Doesn't a dynamic value list show only unique field contents, without duplicates?

So define a value list for that field; the ValueListItems() function would then return a list of those items, yes? Then PatternCount ( that list ; "¶" ) + 1 (there's no return on the last line, so you need to add one) would return the number of unique items.
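Spelled out as calculations (the value list name here is a placeholder), that would be:

// the unique items
ValueListItems ( Get ( FileName ) ; "AllStates" )

// the count, per the PatternCount idea
PatternCount ( ValueListItems ( Get ( FileName ) ; "AllStates" ) ; "¶" ) + 1

ValueCount() would give the same count without the add-one dance.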

 

Haven't tried this, but would it work?


Value lists don't care about the found set. So, while you could work toward a value list via a related set, often the found set is derived in a different way.

 

Of course, depending on the situation, you may be able to use summary fields and GetSummary() calcs to get counts of unique values from the found set.
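One classic shape for that (the field names are mine, and the found set must be sorted by State):

sCount: Summary field, Count of State
cFraction: Calculation, unstored, = 1 / GetSummary ( sCount ; State )
sUniqueCount: Summary field, Total of cFraction

Sorted by State, each state's records contribute fractions summing to exactly 1, so sUniqueCount totals to the number of unique states in the found set.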

 

What's nice about grabbing all the unique IDs, is you can then USE those for something later. For example, you can store the found set for later retrieval.

