
Blogs

 

Dependency Injection in JavaScript, a Node.js example

As developers, we are tasked with reducing complexity. Complexity is all around us, and good code organization reduces complexity while at the same time supporting increased flexibility, ease of change, quicker onboarding, faster debugging, and my favorite, better testing.

Reduced Complexity, Better Testing

In this article, I discuss one of the widespread strategies for code […]
View the full article

Beezwax Buzz


 

Salesforce Lightning Resizable Text Area Component

In my series of component-based posts, I have another small but useful component: an auto-resizing text area.

Salesforce Lightning Dynamic Text Area

For some of my other component-based blogs, look here:
- How to build a Date Picker
- How to build a Multi Select Component
- How to build a Drag and Drop Duelling Picklist

This time, I'm going to show you how to create an auto-sizing text area.

Background

To build this component, I used a lot of basic HTML and JavaScript, as well as just a little Lightning.
A key part of the component is binding its data properly. This means it can be used as easily as any other built-in Lightning component (no special events needed; just bind to the value attribute). Important features of this component:

- Debounced data binding: the component won't send out a flood of notifications on changes; it sends a rate-limited update stream.
- No form divs are wrapped around this component, because that sometimes proves difficult to manage when not in a form. It's easy to place in a regular SLDS form, though.
- No third-party static resources are used.

Method

Start by defining a generic ComponentEvent. This has two attributes, type and data, and I use it for most events in my systems; here it handles the onfocus and onclick events.

<aura:event type="COMPONENT" description="General change event" >
    <aura:attribute name="type" default="DATA_CHANGE" type="string" description="can be anything - use this to verify that the data you are receiving is what you expect"/>
    <aura:attribute name="data" default="{}" type="Object" description="Object holding changed data"/>
</aura:event>

The actual markup of the component is quite small: just a text area with all the requisite bindings and methods. You can see the two events that the component emits, and also note the use of the Lightning `GlobalId`.
`GlobalId` is a special feature of Lightning: it is replaced with an id that is global to this component but unique across all components, which means we won't get id collisions if we have multiple text areas.

<aura:component>
    <!--some elements omitted-->
    <aura:method name="recalculateHeight" action="{!c.recalculateHeight}"/>
    <aura:registerEvent name="onfocus" type="c:ComponentEvent" />
    <aura:registerEvent name="onclick" type="c:ComponentEvent" />
    <textarea aura:id="simpletextarea"
        value="{!v.value}"
        id="{!GlobalId + '-textarea'}"
        readonly="{!v.readonly}"
        tabindex="0"
        onfocus="{!c.handleOnFocus}"
        onclick="{!c.handleOnClick}"
        oninput="{!c.handleOnChange}"
        class="{! v.value.length gt 1 ? (v.class + ' dynamic-textarea slds-textarea') : (v.class + ' dynamic-textarea-empty slds-textarea') }" />
</aura:component>

The controller code contains most of the logic. I'll explain it method by method.

`onRender` is the init routine of this component. I used `onRender` as an init event because I needed the window height, which is not available on init.

onRender: function(component, event, helper) {
    if (component.get("v.rendered")) {
        return;
    }
    component.set("v.rendered", true);
    helper.setHeight(component);
},

`focus` allows focus to be set via an `aura:method` from outside the component. This in turn triggers the following method, `handleOnFocus`.

focus: function(component, event, helper) {
    var el = document.getElementById(component.getGlobalId() + '-textarea');
    el.focus();
},

`handleOnFocus` handles input and sets the focus ring. This also allows the text area to resize via CSS that targets the `:focus` pseudo-class.

handleOnFocus: function(component, event, helper) {
    component.getEvent("onfocus").fire();
},

`getValue` returns the value of the text area; note the use of the `GlobalId`.

getValue: function(component, event, helper) {
    var el = document.getElementById(component.getGlobalId() + '-textarea');
    return el.value;
},

`handleOnChange` fires on every change to the text area, but is debounced to prevent too many screen updates and events being emitted.

handleOnChange: function(component, event, helper) {
    var el = document.getElementById(component.getGlobalId() + '-textarea');
    component.set("v.value", el.value);
    var timer = component.get("v.storedTimer");
    var timeout = 20;
    if (timer) {
        window.clearTimeout(timer);
    }
    timer = window.setTimeout(
        $A.getCallback(function() {
            helper.setHeight(component);
        }),
        timeout
    );
    component.set("v.storedTimer", timer);
},

Finally, as you can see in the `handleOnChange` method, the auto-resize part of the component is triggered in the helper class, via the `setHeight` method.
This method measures the scroll height of the text area, adds a small buffer, and resizes the text area to match.

setHeight: function(component) {
    var el = document.getElementById(component.getGlobalId() + '-textarea');
    // compute the height difference caused by border and outline
    var outerHeight = parseInt(window.getComputedStyle(el).height, 10);
    // 3px is just a tiny static extra buffer; adjust if necessary
    var diff = (outerHeight - el.clientHeight) + 3;
    // set the height to 0 in case it needs to be set smaller
    el.style.height = 0;
    // set the correct height
    // el.scrollHeight is the full height of the content, not just the visible part
    el.style.height = Math.max(60, el.scrollHeight + diff) + 'px';
}

As you can see, it's really quite a simple component. To use it, just place it in your form:

<c:DynamicTextArea aura:id="noteTextArea"
    value="{!v.noteBody}"
    readonly="{!v.editing ? false : true}"
    onfocus="{!c.handleEditClick}"
    onclick="{!c.handleDivClick}"
    class="slds-m-top_xx-small slds-m-left_x-small" />
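If you want to see the core idea outside of Lightning, here is a minimal vanilla JavaScript sketch of the same debounce-and-resize pattern. The element id "notes", the 20 ms delay, and the 3px buffer are illustrative assumptions, not part of the original component.

// Sketch only: the debounce + auto-resize pattern in plain JavaScript.
var resizeTimer = null;

function autoResize(el) {
    // measure border/outline overhead, then grow the element to fit its content
    var outerHeight = parseInt(window.getComputedStyle(el).height, 10);
    var diff = (outerHeight - el.clientHeight) + 3;
    el.style.height = 0;
    el.style.height = Math.max(60, el.scrollHeight + diff) + 'px';
}

document.getElementById('notes').addEventListener('input', function (event) {
    var el = event.target;
    // debounce: collapse a burst of keystrokes into a single resize
    if (resizeTimer) {
        window.clearTimeout(resizeTimer);
    }
    resizeTimer = window.setTimeout(function () {
        autoResize(el);
    }, 20);
});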
Download the Files

All the files are located on GitHub. Enjoy!

Moving Forward in Lightning

Have questions? Let us know in a comment below or contact our team directly. You can also check out our other posts on customizing your Salesforce solution. The post Salesforce Lightning Resizable Text Area Component appeared first on Soliant Consulting.
View the full article

Soliant Consulting


 

Live Stream! San Diego User Group

Next Monday (October 22), Geist Interactive will be hosting the San Diego FileMaker User Group at our office in San Diego. If you don't live in the area and still want to attend, you can join us via Facebook Live on our Facebook page. The post Live Stream! San Diego User Group appeared first on Geist Interactive.
View the full article

todd geist


 

FileMaker Walmart Integration

FileMaker Walmart Integration

Walmart is the largest retailer in the world, with $485.9 billion in total revenue in 2017, and reaches over 100 million users a month on Walmart.com. This makes Walmart a useful tool to expand your customer base. And with FileMaker, you can streamline the process of working with Walmart.com by integrating your FileMaker app with Walmart Marketplace.

Becoming a Partner with Walmart

Before you can connect your FileMaker app with Walmart, you must first have an account. The onboarding process for Walmart Marketplace is free; however, it requires you to be approved based on a series of questions about your business. After you submit your application, Walmart will review it, let you know whether you qualify to be a partner, and give you a seller's account. From there, you will be able to start integrating with the Marketplace API.

Getting Your Keys to the API

Once you are a Walmart partner, the first step of the integration is to generate and obtain your API keys. Go to the Developer Center and select your account. From there you can navigate to the API Keys page, where you will see your Client ID and Client Secret (if either of these is empty, or you would like them changed, click the Reset Client Id and Client Secret button). Copy these to use later, and keep the secret in a safe place: it is essentially your password and should not be shared with anyone else. As of September 27th, 2018, Walmart has upgraded its authentication from the signature-based method to token-based authentication. This greatly simplifies authentication while still maintaining a high level of security, so make sure you grab the right information.

Authentication

Once you have your API information, you will need to authenticate your solution with the Walmart Marketplace API. At a high level, you get the token by using an Insert from URL with the token address and the correct cURL information.

Set Variable [ $url ; Value: "https://marketplace.walmartapis.com/v3/token" ]
Set Variable [ $data ; Value: "grant_type=client_credentials" ]
Set Variable [ $auth ; Value: Base64Encode ( $clientID & ":" & $clientSecret ) ]
Set Variable [ $cURL ; Value: "-X POST" & ¶ &
    "-H \"Accept: application/json\"" & ¶ &
    "-H \"Authorization: Basic " & $auth & "\"" & ¶ &
    "-H \"Content-Type: application/x-www-form-urlencoded\"" & ¶ &
    "-H \"WM_QOS.CORRELATION_ID: " & $randomString & "\"" & ¶ &
    "-H \"WM_SVC.NAME: Walmart Marketplace\"" & ¶ &
    "-H \"WM_SVC.VERSION: 1.0.0\"" & ¶ &
    "--data @$data" ]
Insert from URL [ $result ; $url ; cURL options: $cURL ; Do not automatically encode URL ]

The result is JSON-formatted text containing an access token. You will use this token to make other calls. The token only lasts for 15 minutes, so your code will need to repeat this process to get a new token when it expires.
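For readers who want to test the token endpoint outside FileMaker, here is a minimal sketch of the same request in Node.js (18+, using the global fetch). The environment variable names are placeholders, and the "access_token" response field name is an assumption based on standard OAuth-style responses, not something confirmed by this post.

// Sketch only: the Walmart Marketplace token request expressed in Node.js.
import { randomUUID } from 'node:crypto';

const CLIENT_ID = process.env.WALMART_CLIENT_ID;       // placeholder
const CLIENT_SECRET = process.env.WALMART_CLIENT_SECRET; // placeholder

async function getWalmartToken() {
    // Basic auth from the Client ID and Client Secret, as in the FileMaker script above
    const auth = Buffer.from(`${CLIENT_ID}:${CLIENT_SECRET}`).toString('base64');
    const response = await fetch('https://marketplace.walmartapis.com/v3/token', {
        method: 'POST',
        headers: {
            'Accept': 'application/json',
            'Authorization': `Basic ${auth}`,
            'Content-Type': 'application/x-www-form-urlencoded',
            'WM_QOS.CORRELATION_ID': randomUUID(), // any unique correlation string
            'WM_SVC.NAME': 'Walmart Marketplace',
            'WM_SVC.VERSION': '1.0.0'
        },
        body: 'grant_type=client_credentials'
    });
    const json = await response.json();
    return json.access_token; // assumed field name; the token expires after 15 minutes
}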
Making Requests and Parsing the Response

Before you make requests, it is important that you understand JSON and cURL. Once you are comfortable with those, look through the Walmart Marketplace docs. There you will see all of the functions you can use, such as pulling, tracking, and acknowledging orders, pulling and updating inventory, and more. The Get Inventory request pulls an item by its SKU and returns its inventory information.

Request:

Set Variable [ $url ; Value: "https://marketplace.walmartapis.com/v2/inventory?sku=1234" ]
Set Variable [ $cURL ; Value: "-X GET" & ¶ &
    "-H \"Accept: application/json\"" & ¶ &
    "-H \"Authorization: Basic " & $auth & "\"" & ¶ &
    "-H \"WM_QOS.CORRELATION_ID: " & $randomString & "\"" & ¶ &
    "-H \"WM_SVC.NAME: Walmart Marketplace\"" & ¶ &
    "-H \"WM_SEC.ACCESS_TOKEN: " & $accessToken & "\"" & ¶ &
    "-H \"Content-Type: application/x-www-form-urlencoded\"" ]
Insert from URL [ $result ; $url ; cURL options: $cURL ; Do not automatically encode URL ]

Response:

{
    "sku": "1234",
    "quantity": {
        "unit": "EACH",
        "amount": "0"
    },
    "fulfillmentLagTime": 1
}

I strongly suggest taking a look at our accompanying example file, as certain portions of the API can be a bit tricky. For instance, the API does not handle the standard carriage return FileMaker uses for new lines, so you must use Char ( 10 ) to denote a new line in order for the Walmart API to handle the request correctly.

Conclusion

Integrating your FileMaker app with Walmart Marketplace will streamline your workflow, allowing you to take on more orders with the confidence that you will still make your deadlines. Feel free to contact us if you need further assistance or want to discuss getting your Walmart account integrated with FileMaker.

Download the FileMaker Walmart Marketplace Integration Database

Please complete the form below to download your free FileMaker database file. The database is unlocked, and you may use it for your business or organization as you see fit.
View the full article

dbservices


 

Faux Subsummaries via JSON + Virtual List

Today we’re going to take another look at a challenge we discussed last time (in Conditional Summary Report Header)… namely how to cajole FileMaker into displaying a subsummary, or a reasonable facsimile thereof, at the top of a report page when items in the group begin on an earlier page. Demo file: Faux Subsummaries via […]
View the full article

Kevin Frank


 

Tableau to FileMaker | Scheduled Extracts

Once you have crafted a Tableau dashboard pulling data from your FileMaker solution, your next goal might be to make sure the dashboard data gets refreshed at regular intervals, and in an efficient manner. That is what we'll do in this blog post.

Avoiding the Data Traffic Jam

If you have never been […]
View the full article

Beezwax Buzz


 

Portal Column Techniques: Sort, Filter, and Batch Update

FileMaker Portal Columns

I was recently asked to sort portal columns for a client, and I figured there had to be a newer and cooler technique out there to accomplish a portal sort than when I last did it. I reached out to the other FileMaker developers at Soliant and got a great sample file from Ross Johnson. He shared a really cool technique with me, crediting mr_vodka (sounds like a fun guy!). Read mr_vodka’s original post here. For my client, I was also asked to filter the portal and batch update columns. The end product came out pretty cool, so I decided to create a sample file with all these techniques put together in one place to share with the FileMaker community. The data in my sample file is from Mislav Kos' post: Test Data Generator.

Figure 1 - Portal with filters and batch update columns

Get the Demo File

Download the demo file to follow along with the instructions outlined below.

Sort

Here are the steps to complete the portal sort. You’ll need to use the sample file to copy and paste some components.
Copy the field “zz_portal_sort_c” into your solution. You’ll need to copy it into the table on which your portal is based. Open the field definition for zz_portal_sort_c. The only update you’ll need to make to this calculation is to set the let variable “id” to the primary key of your table. For example, if your primary key is something like “_kp__ContactID”, you’ll need “id = ¶ & _kp__ContactID & ¶ ;” (see Figure 2).

Figure 2 – Set the let variable ID

NOTE: Be sure this calculation returns a number (see Figure 3); that’s very important!
Figure 3 – Select “Number” for the calculation result

Next, copy the script “sortRecords ( field { ; object } )” into your app. You’ll need to update line 51 and change “ID” in the ExecuteSQL statement to use the primary key field of the table on which your portal is based (see Figure 4).

Figure 4 - Update line 51 in the script

You should also update line 6 (the header information) to reflect that you added this script to the file, and the date you did so. Back in your layout, update your portal to sort by the new field you just added to the table.

Figure 5 - Update your portal sort

Name the portal object “portal”. If you prefer a different object name, it can be anything you’d like, but you’ll need to update almost all the scripts in this demo to reflect the new portal object name.

Figure 6 – Name the portal object

You can now assign the script to your labels with a parameter of: “List ( GetFieldName ( <table>::<field> ) ; "portal" )”. I also added some conditional formatting to turn the label bold when the column is sorted. Additionally, as a visual cue that the sort is bidirectional, I added an up and down arrow for each column and assigned a conditional hide to it. You can copy and paste the buttons into your solution, and then update the hide calculation to use your fields. And that’s it! Once it’s all set up, sorting for each column should work. One thing I want to note: this method assumes that the relationship is one-to-many. I tried it using a Cartesian join, and it broke the sort. I haven’t tried anything more complicated than a one-to-many.

Filter Columns

Filtering each column allows the user to do an “AND” search in the portal, which means that your user can filter on multiple criteria. If you used a single search bar to filter, it would be considered an “OR” search. To be honest, I haven’t researched whether there’s a better technique out there. This method made logical sense to me when I wrote it, and luckily for me it worked. If you know of a better approach, I’d love to hear it; please leave a comment below. Here are the steps to complete this filter technique:

Create global fields for every column you’d like to filter.

Figure 7 - Create global fields

Place those fields on the layout.

Figure 8 - Place the fields on the layout

Add the script “Trigg_CommitRefresh” to your script workspace and then assign it as a trigger to the filter fields with a trigger type of OnObjectExit. This script trigger commits the record and refreshes the portal every time a user exits a filter field. Gender is a little different; it uses OnObjectModify. You’ll learn why gender is different a little further down in this post.

Now we update the filter calculation for the portal. You can copy the code from the filter calculation into your portal calculation and then update it in your file to match your fields. The filter calculation is a single Let statement that has four parts (a JavaScript sketch of the same logic follows this list):

- Define “AllClear”, a flag that checks whether all the globals are empty.
- Define which filters have any text in them; in other words, which filters are being enacted by the user.
- Define each filter result at the record level. If the user entered filter text, does the current record pass that filter (return 1), or does it get filtered out (return null)?
- Finally, compare the results. If AllClear is true, always show the record (return 1). Otherwise, count up how many filters the user is applying and how many columns pass their filter for the given record. If these two sums match, the record passes the filter check; if not, the record is filtered out.
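To make that four-part logic concrete, here is a minimal sketch of the same approach in JavaScript. The record fields and filter values are illustrative assumptions, not code from the demo file.

// Sketch of the AND-filter logic described above, in JavaScript.
function recordPassesFilters(record, filters) {
    var names = Object.keys(filters);

    // AllClear: no filter has any text, so every record is shown
    var allClear = names.every(function (name) { return !filters[name]; });
    if (allClear) { return true; }

    // Which filters is the user actually applying?
    var active = names.filter(function (name) { return filters[name]; });

    // How many of those filters does this record pass? (partial match, like PatternCount)
    var passed = active.filter(function (name) {
        return String(record[name]).toLowerCase().indexOf(filters[name].toLowerCase()) !== -1;
    });

    // The record is shown only when it passes every active filter
    return passed.length === active.length;
}

// Usage with hypothetical data:
// recordPassesFilters({ first: "Ann", last: "Lee", gender: "Female" },
//                     { first: "an", last: "", gender: "" });   // true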
You’ll need to update the following for this calculation to work in your file:

- The globals you’d like to filter, within the “AllClear” definition.
- The filter check section: Filter<FieldName> = not IsEmpty ( <Table>::<FilterGlobal> )
- The filter result section: Filter<FieldName>_R = If ( Filter<FieldName> ; PatternCount ( <Table>::<FieldName> ; <Table>::<FilterGlobal> ) > 0 )
  NOTE: You’ll notice the gender result is a little different; see the Gender Difference note below, which explains why.
- The results comparison: If ( AllClear ; 1 ; Filter<FieldName1> + Filter<FieldName2> + … = Filter<FieldName1>_R + Filter<FieldName2>_R + … )

Gender Difference: For most of these filters, I’m using the PatternCount() function because I want to include partial matches. However, with gender, if I searched for “male,” I would always get male and female results, since the string “male” is inside the string “female.” Because there are only two options in this case, I turned the filter into a radio button so that I don’t have to worry about partial word entries, and I can make a complete-word comparison in the calculation. That’s why gender does not use PatternCount() and instead uses “=” to check whether the filter and the value are identical.

Batch Update

The batch update feature goes hand in hand with filtering: the user filters the data and then performs a batch update. There are two ways to accomplish it: go to related records in a new window and perform a Replace Field Contents, or loop through the portal itself. I decided to loop through the portal because you don’t have to leave the current window. Both methods accomplish the same goal, and if you have a lot of related records to update, Replace Field Contents might be a little faster; for a typical use case, looping through the portal works well. To complete the batch column update, copy the script “BatchUpdate (field)” into your file. If you haven’t already, you’ll need to name your portal object “portal” for this script to work. You should also update the history in the script header to communicate that you added this script to your file and when. I recommend duplicating line 3 and adding your name/email, the current date, and a note about how you copied this script into the file. The rest of the script is ready for use. If you’d like, you can customize the dialog in step 9. Now add the batch buttons to the layout. Each button calls the BatchUpdate script you just copied over and passes, in quotes, the field you’d like to update as a parameter. That’s the summary of how these three features are set up in the sample file. I hope you find it useful.
Figure 9 – Button setup

Questions? Leave a comment below, or if you need help with your FileMaker solution, please contact our team. The post Portal Column Techniques: Sort, Filter, and Batch Update appeared first on Soliant Consulting.
View the full article
 

FileMaker Portal Columns: Sort, Filter, and Batch Update Demo

Take Your Portals to the Next Level FileMaker portals are invaluable for showing related data; there are several techniques for enhancing their functionality. Our demo contains three methods for adding the ability to sort, filter, and batch update portals. Follow along with our step-by-step guide and get started with expanding the functionality of the portals in your FileMaker solution.
Complete the form to receive the demo file. The post FileMaker Portal Columns: Sort, Filter, and Batch Update Demo appeared first on Soliant Consulting.
View the full article
 

Celebrating 10 Years!

A decade of discovery, inspiration, hard work, and coffee. I remember the first time I ever heard of FileMaker®. It was 1996, and I was sitting in the office at Cape Cod Life magazine with a list of instructions on how to print the labels for a mailing list in FileMaker 2.1. I [...] The post Celebrating 10 Years! appeared first on The Scarpetta Group, Inc..
View the full article

Joe Scarpetta


 

Geist Interactive Travels Down the West Coast

Join us (Todd Geist and Jeremy Brown) as we visit the West Coast FileMaker user groups. We're geared up to talk about innovative FileMaker power tools, frameworks, and techniques. Join us for one (or many)! The post Geist Interactive Travels Down the West Coast appeared first on Geist Interactive.
View the full article

todd geist


 

AD Bound macOS Systems not Authenticating? Restart opendirectoryd

We have quite a few macOS systems that are bound to our AD (Active Directory) domain. Occasionally, one, two, or perhaps several would lose the ability to authenticate users with AD credentials. Often this was with one of our FileMaker database servers, where the external authentication was being used for either database access or FMS […]
View the full article

Beezwax Buzz


 

Apple Maps in FileMaker With MapKit JS

Recently we were asked: “Can you use Apple Maps inside FileMaker, rather than using Google Maps or MapQuest?” We decided to give it a shot, thanks to MapKit JS, the JavaScript implementation of the MapKit library widely used on macOS and iOS. Here is a technique our FileMaker and JavaScript development teams designed to enable […]
View the full article

Beezwax Buzz


 

Using FileMaker as a Low-Code/No-Code Platform for RAD Development

If you’re running a small or mid-sized business, you’ve probably discovered that the price of enterprise software makes you cringe. It often just doesn’t fit into your budget. You need to automate and streamline your business processes, but you don’t want to take out a loan to do it. On the other hand, if [...] The post Using FileMaker as a Low-Code/No-Code Platform for RAD Development appeared first on The Scarpetta Group, Inc..
View the full article

Joe Scarpetta


 

Introduction to KarbonFM

We at Geist Interactive delivered our first webinar on Karbon, a framework for ambitious FileMaker applications. We introduced a lot in this session. Take a look at the webinar recording to learn more about Karbon. The post Introduction to KarbonFM appeared first on Geist Interactive.
View the full article

todd geist


 

Conditional Summary Report Header

Recently I received an email wondering… When printing a multipage report sorted by a sub-summary, is there a way to display the sub-summary heading on the following page when it breaks across two or more pages? There are various approaches one might take, and I actually buried one of those approaches inside an article + […]
View the full article

Kevin Frank


 

Salesforce Dreamforce 2018: Day 1 Recap

As Dreamforce 2018 spins up, we’re reminded of the speed at which technology and the Salesforce platform are evolving. In just a few short years, the Salesforce ecosystem has fully embraced Lightning as both an experience and a development platform. Dreamforce is a key event for our team; it exposes our developers, architects, and administrators to the latest patterns and solutions that the platform and its partners can offer. Our first day has already been quite packed, learning about new tools, like MuleSoft, and attending in-depth sessions around Einstein and Lightning-based solutions. We’re excited for what the week holds in store for our team and the insights we’ll be able to take back to our clients and partners.

Architecting End-to-End Einstein Solutions

We learned from Reinier van Leuken and Ed Sandoval, from the Netherlands, how to architect solutions that leverage the Einstein platform. They discussed how to assemble the Einstein suite of tools and technologies to solve specific business problems that rely on insights which would otherwise be hard to get: think time-consuming, resource-intensive data insights. The Einstein suite of tools and capabilities can help take a large data set and discover patterns and insights that are statistically significant. It can help us decide which insights are of particular interest and worthy of further analysis. It enables us to leverage the Salesforce platform to make such insights accessible to the right people in near real time, and it can drive business processes and workflows based on these insights. The session highlighted a dataset of 92 million records that was used to predict some really interesting insights for a grocery retailer, such as which products aren't selling as much, which store is doing much better than the rest and thus should be emulated across the enterprise, which store format does consistently better, and which products are likely to stock out and leave revenue on the table. We took away from the session that there are less onerous and intimidating ways to leverage AI and supercharge parts of our clients' businesses. Using Einstein doesn’t have to be a major development project. Knowing how to assemble the tools could help visualize the one use case to start off a client's AI journey.

Introduction to MuleSoft Anypoint Platform

Salesforce is not the only game in town when it comes to the number of systems and platforms in a typical enterprise application network. Often Salesforce comes to the party late, as the newbie. Apart from the usual sales, service, and marketing use cases that Salesforce is so good at, MuleSoft offers the potential of delivering a consistent, integrated, and delightful customer experience across any touchpoint with the customer, regardless of where the data resides across the application network. Even better, it offers the potential to make Salesforce the UX layer on top of legacy systems and to help deliver incredible experiences to employees and customers alike. MuleSoft can simplify the composition, deployment, and discovery of enterprise-wide APIs across the application network. It enables our clients to create integrated experiences more easily and not be intimidated by the process. Integrations with legacy systems no longer need to be big, long projects. Delivering integrated experiences throughout the enterprise and your customer community is closer within reach than you think when using MuleSoft.
Advanced Lightning Components Session

It’s the first day of Dreamforce, and this is my fifth year in a row attending. It’s amazing how much it has grown in that time. Among the developer sessions, there’s one speaker, Christophe Coenraets, who always puts together informative talks. I was eager to attend his session entitled "Advanced Lightning Components". It was a mix of currently available functionality and forward-looking features scheduled for upcoming releases. One of the challenges when building Lightning components is avoiding duplicate code across components that share business logic. By utilizing static resources and the <ltng:require> tag, developers can write their business logic in one place and reuse it in multiple components. The session kicked off with a few examples of how developers can store their shared business logic inside a static resource. An example was shown where a mortgage calculator was displayed through two separate components. Both components produced the same results because they were using code shared within a static resource. This allows developers to follow the “once and only once” rule of writing modular code.

Christophe showed how caching in Lightning components can reduce the loading time of a component. He showed an example where pagination was used and timed how long each page took to display to the user. The pages that had already been visited were able to render much faster than the pages the user hadn’t visited yet. This brought up an interesting dilemma, as there’s no guarantee that the results from the cache will be accurate the second time they are viewed. It becomes a question of what’s more valuable: completely accurate results, or returning results quickly and recalculating the accurate results later.

Next, the discussion moved toward a topic I was much more familiar with, as I am presenting a session on it at Dreamforce called "Custom Record Forms In a Flash, with the Lightning Data Service". The presenter mentioned how the <lightning:recordForm> component can be utilized to let the user edit fields on a parent record. For example, from a contact record, I can place a <lightning:recordForm> on the page that displays fields from its account. I can then edit the account fields from the contact record page. This functionality didn’t exist in Salesforce Classic, and it allows users to remain on a contact record while reading or editing the related account.

Finally, an upcoming feature called Live Records was presented. If a record is changed by one user while a second user is viewing it, the record updates immediately on the second user's screen without a page refresh. This ensures that users are always viewing the most up-to-date data in Salesforce at any moment.

The Advanced Lightning Components session is a popular one, and though it appears at every Dreamforce, it’s always different enough to warrant placing it on your list of sessions to attend, as it gets modified to show off the latest updates to Lightning. I’m eager to start using some of these new features.
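As an illustration of the shared-logic pattern described earlier in this recap, here is a minimal sketch. The static resource name "MortgageUtils", the handler name, and the mortgage formula are assumptions for the example, not code from the session.

<!-- Sketch only: a component that loads shared logic from a static resource -->
<aura:component>
    <ltng:require scripts="{!$Resource.MortgageUtils}"
                  afterScriptsLoaded="{!c.onScriptsLoaded}" />
    <aura:attribute name="payment" type="Decimal" />
    {!v.payment}
</aura:component>

Controller:

({
    onScriptsLoaded: function (component, event, helper) {
        // Any component that requires the resource can reuse the same business logic
        var payment = MortgageUtils.monthlyPayment(300000, 0.04, 360);
        component.set("v.payment", payment);
    }
})

MortgageUtils static resource (plain JavaScript):

window.MortgageUtils = {
    // standard annuity formula: principal, annual rate, term in months
    monthlyPayment: function (principal, annualRate, months) {
        var r = annualRate / 12;
        return principal * r / (1 - Math.pow(1 + r, -months));
    }
};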
Emerging Trends

Salesforce sees clearly that we are currently in the 4th Technology Revolution, and they anticipate that the pace of technology change will never again be slower than it is today. With this revolution, there are various fronts that require collaboration and focus. Artificial Intelligence (AI) will start playing a critical role in business processes, which will require Salesforce consultants to have a thorough understanding of the capabilities and limitations of AI. For many years, the focus was generally placed on sourcing and exposing data in an integrated manner, providing more data to users. We now see a shift toward providing users with actionable and relevant data, rather than presenting all available data to a user. Our clients will soon have in their hands an extremely powerful set of AI tools right within the Salesforce Platform. Our team is working diligently to gain experience with and mastery of the AI tools we can use to enhance our clients' business applications. We envision AI playing a role in the majority of our clients' projects, relevant to clients big and small. Our team is excited to be a part of the 4th Industrial Revolution, and we’re looking forward to bringing our clients into the next generation of the Salesforce Platform. The post Salesforce Dreamforce 2018: Day 1 Recap appeared first on Soliant Consulting.
View the full article

Soliant Consulting


 

fmQBO To LedgerLink Conversion Offer

fmQBO is now called LedgerLink and has lots of new features, upgrades, and security enhancements. Because of changes coming in the next year, we will need everyone to migrate to this version sometime in the next 6 to 12 months. To make this transition as easy as possible, we are offering a package deal that includes a free year of LedgerLink and having us do the migration for you. The post fmQBO To LedgerLink Conversion Offer appeared first on Geist Interactive.
View the full article

todd geist


 

FileMaker Audio Player Integration

Using JavaScript in FileMaker brings great functionality. In this post and video, we will review how to integrate a FileMaker audio player. The post FileMaker Audio Player Integration appeared first on Geist Interactive.
View the full article

todd geist


 

Document Field IDs: Compare Field IDs Between Deployment Environments

Gemini to Apollo

Just like trailblazers such as Gus Grissom, who worked on the Mercury and later Gemini projects for NASA, we sometimes require separate or parallel environments in project deployments. Gemini could be viewed as the "Dev" phase that led up to the "Prod" project that followed: Apollo. Development environments are a critical part of projects; experiments and testing can be done there without risk to critical resources, allowing bugs to be found and worked out. Similarly, in FileMaker development it is not uncommon to have different environments dedicated to development and production. Active development occurs on a dedicated server, or offline, where changes are tested before being put into production.
Astronauts John Young and Virgil I. (Gus) Grissom are pictured during water egress training in a large indoor pool at Ellington Air Force Base, Texas. SOURCE: NASA Images at the Internet Archive

There are strategies to consider when working in such an environment to ensure updates are implemented successfully and without issue. In even larger systems there could be more environments for different purposes, such as development (DEV), quality assurance (QA), and production (PROD). These can occur in sequence or, more commonly, in parallel.

Parallel Development Environments

During Apollo 13, as with all NASA missions, there was a primary crew and a backup crew. When the accident with the oxygen tanks occurred, Ken Mattingly and a team of engineers were required to come up with a solution on the ground. Essentially, this was a Dev environment, taken to extremes, with the Prod environment being the Apollo crew in actual flight.

Deke Slayton shows the adapter devised to make use of square Command Module lithium hydroxide canisters to remove excess carbon dioxide from the Apollo 13 LM cabin. SOURCE: NASA Apollo Imagery

Similarly, you can have different environments for your FileMaker project, albeit without the extreme conditions or consequences. Dev environments are sometimes required to work out and test various solutions before they are eventually put into production with confidence.

The Problem

When you create a field in FileMaker, things happen under the hood to make it easy to reference throughout the solution. An internal "Field ID" is assigned to each field that you create. You never see this internal ID, but it is how FileMaker knows which field to reference in layouts and scripts. You might think that fields with the same name would map correctly across different systems, but it is the internal field ID that is used. Therefore, it is critical when deploying any updates that internal field IDs match. For example, there can be multiple developers working on a solution, and if different people are adding fields in parallel environments without considering the order in which they are added, the internal field IDs are bound to get out of sync. The next time you deploy an update, things can break if fields are not lined up.

Environment 1 | Developer A | Table 1
Internal Field ID    Field Name
1001                 ID
1002                 FirstName
1004                 LastName

Developer A created a field, "MiddleName", which was assigned internal ID 1003. Later in development, the field was deleted, leaving a gap in the internal ID numbering sequence.
Environment 2 | Developer B | Table 2
Internal Field ID    Field Name
1001                 ID
1002                 FirstName
1003                 LastName

Developer B never created the "MiddleName" field, so internal ID 1003 exists in this environment but is associated with the "LastName" field.
Internal Field IDs

There are internal tables that FileMaker uses to track schema. They are not visible anywhere in a file except by way of the ExecuteSQL function. By querying these "under the hood" internal tables, we can get information about the defined FileMaker fields, including their internal IDs.

The Solution: Pre-Flight Check

Astronaut Roger B. Chaffee is shown at a console in the Mission Control Center, Houston, Texas, during the Gemini-Titan 3 flight. SOURCE: NASA Gemini Mission Gallery

I built a small FileMaker file to automate the process of getting a list of all tables, their fields, and their internal field IDs, along with some scripting to show where there are conflicts. You can get the file free from here: https://github.com/SoliantMike/FM-DocumentFieldIDs. All you need to do to use it with your files is follow the instructions, update the external data sources, and reference the one script you copy from this file and paste into the files you want to compare. What happens then is that the necessary ExecuteSQL query is run and returns results to the parent script as a parameter, where we parse through them and build the results as temporary records in our file. Special thanks to our own Brian Engert for helping optimize the SQL to only return fields for each base table, instead of for each table occurrence in a file.
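The comparison itself is simple once you have each environment's field list. Here is a minimal sketch of the conflict check in JavaScript, using the hypothetical tables from "The Problem" section above; the demo file does this with FileMaker scripting, but the logic is the same.

// Sketch only: flag field IDs whose names differ between two environments.
function findFieldIdConflicts(envA, envB) {
    // envA / envB: maps of internal field ID -> field name for one table
    var conflicts = [];
    var ids = new Set([...Object.keys(envA), ...Object.keys(envB)]);
    ids.forEach(function (id) {
        var a = envA[id];
        var b = envB[id];
        if (a !== b) {
            conflicts.push({ fieldId: id, envA: a || '(missing)', envB: b || '(missing)' });
        }
    });
    return conflicts;
}

// The example tables from above:
var environment1 = { 1001: 'ID', 1002: 'FirstName', 1004: 'LastName' };
var environment2 = { 1001: 'ID', 1002: 'FirstName', 1003: 'LastName' };
console.log(findFieldIdConflicts(environment1, environment2));
// -> [ { fieldId: '1004', envA: 'LastName', envB: '(missing)' },
//      { fieldId: '1003', envA: '(missing)', envB: 'LastName' } ]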
Now What?

Once you find an issue where fields do not match, what do you do? That is going to depend on what has been done and where those fields appear in your development. This tool merely identifies where the issues are; what you do about them is something different. Ideally, you identify problems early and can correct fields in both environments before they become an issue. You might also consider implementing the FileMaker Data Migration Tool for options regarding matching on field name during migration, to get environments back to a baseline file state. Read more about the Data Migration Tool. Also, thank you to Norm Bartlett for reviewing and contributing to this post.

References

GitHub Repository: https://github.com/SoliantMike/FM-DocumentFieldIDs
Gemini 3: https://en.wikipedia.org/wiki/Gemini_3
Apollo 13: https://en.wikipedia.org/wiki/Apollo_13

If you have any questions or need help with your FileMaker solution, please contact our team. The post Document Field IDs: Compare Field IDs Between Deployment Environments appeared first on Soliant Consulting.
View the full article

Soliant Consulting


 

Ledger Link: Getting Started

It is always good to know how to get started with any product. And in this video, Barbara Cooney takes us through how to do just that with LedgerLink. Follow along with Barbara as she helps you get started using LedgerLink.

Key Points

Obviously, you need to have a LedgerLink license. We have a demo […] The post Ledger Link: Getting Started appeared first on Geist Interactive.
View the full article

todd geist


 

Karbon Update: Transaction Logs

We need to do better logging in Karbon: both developer logging at design time and process logging at runtime. This video covers some of the progress we have made toward solidifying our approach to logging. The post Karbon Update: Transaction Logs appeared first on Geist Interactive.
View the full article

todd geist

