Gemini to Apollo
Just like trailblazers such as Gus Grissom, who worked on NASA's Mercury and, later, Gemini projects, we sometimes require separate or parallel environments in project deployments. Gemini could be viewed as the “Dev” phase that led up to the “Prod” project that followed: Apollo. Development environments are a critical part of any project: a place where experiments and testing can be done without risk to critical resources, allowing bugs to be found and worked out.
Similarly, in FileMaker development, it is not uncommon to have different environments dedicated to development and production. Active development occurs on a dedicated server, or offline, where changes are tested before being put into production.
Astronauts John Young and Virgil I. (Gus) Grissom are pictured during water egress training in a large indoor pool at Ellington Air Force Base, Texas. SOURCE: NASA Images at the Internet Archive
There are strategies to consider when working in such an environment to ensure that updates are implemented without issue.
In even larger systems there may be more environments for different purposes, such as development (DEV), quality assurance (QA), and production (PROD). These can be used in sequence or, more commonly, in parallel.
Parallel Development Environments
During Apollo 13, as with all NASA missions, there was a primary crew and a backup crew. When the accident with the oxygen tanks occurred, Ken Mattingly and a team of engineers were required to come up with a solution on the ground. Essentially, this was a Dev environment, taken to extremes, with the Prod environment being the Apollo crew in actual flight.
Deke Slayton shows the adapter devised to make use of square Command Module lithium hydroxide canisters to remove excess carbon dioxide from the Apollo 13 LM cabin. SOURCE: NASA Apollo Imagery
Similarly, you can have different environments for your FileMaker project, albeit without the extreme conditions or consequences. Dev environments are sometimes required to work out and test various solutions before eventually being put into production with confidence.
When you create a field in FileMaker, there are things that happen under the hood to make it easy to reference throughout the solution. An internal "Field ID" is assigned to each field that you create. You never see this internal ID, but this is how FileMaker knows what field to reference in layouts and scripts.
You might think that fields with the same name would map correctly across different systems, but it is the internal field ID that is used. Therefore, it is critical that internal field IDs match when deploying any updates.
For example, there may even be multiple developers working on a solution, and if different people add fields in parallel environments without considering the order in which they are added, the internal field IDs are bound to drift out of sync. The next time you deploy an update, things can break if the fields do not line up.
Environment 1 | Developer A | Table 1: Developer A created a field, “MiddleName”, which was assigned internal ID 1003. Later in development, the field was deleted, leaving a gap in the internal ID numbering sequence.
Environment 2 | Developer B | Table 2: Developer B never created the “MiddleName” field, so internal ID 1003 exists in this environment but is associated with the “LastName” field.
Internal Field IDs
There are some internal tables that FileMaker uses to track schema; they are not visible anywhere in a file except by way of the ExecuteSQL function. By querying these "under the hood" internal tables, we can get information about the defined FileMaker fields, including their internal IDs.
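As a concrete sketch: in FileMaker 16 and later the field catalog can be read with an ExecuteSQL query along the lines of `SELECT TableName, FieldName, FieldId FROM FileMaker_Fields`. The Python below is a hypothetical illustration of the comparison step only; the table, field names, and IDs are invented to mirror the scenario above, not taken from any real file.

```python
# Hypothetical sketch of the comparison step. Each environment's listing
# is modeled as (table, field name, internal field ID) tuples, i.e. the
# shape a "SELECT TableName, FieldName, FieldId FROM FileMaker_Fields"
# query would return.

def field_id_conflicts(env_a, env_b):
    """Report fields that exist in both environments under the same
    name but were assigned different internal field IDs."""
    a = {(t, name): fid for t, name, fid in env_a}
    b = {(t, name): fid for t, name, fid in env_b}
    return [(t, name, a[(t, name)], b[(t, name)])
            for (t, name) in sorted(a.keys() & b.keys())
            if a[(t, name)] != b[(t, name)]]

# Environment 1: a "MiddleName" field once held ID 1003 and was deleted.
env_a = [("Contacts", "FirstName", 1001), ("Contacts", "LastName", 1002)]
# Environment 2: fields were created in a different order, so
# "LastName" ended up with internal ID 1003.
env_b = [("Contacts", "FirstName", 1001), ("Contacts", "LastName", 1003)]

print(field_id_conflicts(env_a, env_b))
# [('Contacts', 'LastName', 1002, 1003)]
```

A report like this makes the drift visible before a deployment, rather than after layouts and scripts start pointing at the wrong fields.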
The Solution: Pre-Flight Check
Astronaut Roger B. Chaffee is shown at the console in the Mission Control Center, Houston, Texas, during the Gemini-Titan 3 flight. SOURCE: NASA Gemini Shuttle Mission Gallery
I built a small FileMaker file to automate the process of getting a list of all tables, their fields, and their internal field IDs, along with some scripting to show where there are conflicts. You can download the file for free here: https://github.com/SoliantMike/FM-DocumentFieldIDs
To use it with your files, follow the instructions: update the external data sources, then copy the one script from this file and paste it into each file you want to be able to compare. That script runs the necessary ExecuteSQL query and returns the results to the parent script as a parameter, where we parse through them and build the results as temporary records in our file. Special thanks to our own Brian Engert for helping optimize the SQL to only return fields for each base table, instead of for each table occurrence in a file.
Once you find an issue where fields do not match, what do you do? That depends on what has been done and where those fields appear in your development. This tool merely identifies where the issues are; what you do about them is another matter. Ideally, you spot discrepancies and correct the fields in both environments before they become a problem.
You might also consider implementing the FileMaker Data Migration Tool for options regarding matching on field name during migration, to get environments back to a baseline file state. Read more about the Data Migration Tool.
Also, thank you to Norm Bartlett for reviewing and contributing to this post.
GitHub Repository: https://github.com/SoliantMike/FM-DocumentFieldIDs
Gemini 3: https://en.wikipedia.org/wiki/Gemini_3
Apollo 13: https://en.wikipedia.org/wiki/Apollo_13
If you have any questions or need help with your FileMaker solution, please contact our team.
The post Document Field IDs: Compare Field IDs Between Deployment Environments appeared first on Soliant Consulting.
This is the final post in my series about the demo I presented during my “IoT and the FileMaker Data API” session.
The 'master' branch is the more interesting one as it uses the viseme information that Amazon Polly provides. For this to work I needed some linguistics help to interpret what those visemes mean.
In my DevCon session I jokingly stated that the session happened because of the help of both my daughters: one is in university studying Biology (fish, get it?) and the other is in university studying Linguistics. So I was all set. Thanks, girls!
Back to those visemes, Figure 28 shows what Amazon sends you:
Figure 28 - Viseme information
It tells you the mouth position at any time as it changes during the audio. And we can determine the duration from the time elapsed between two entries.
But which of those weird-looking viseme characters indicate whether the mouth is open or closed? Obviously, the plastic fish doesn’t have the full range of mouth movement that we have; it can’t purse its lips, for instance. So I just had to find the ones that mean the mouth is open, half open, or closed. Figure 29 shows the visemes in the master branch code.
Figure 29 – Viseme characters in the master branch code
Query the FileMaker Data API
As we query the FileMaker Data API and retrieve the viseme data we received from Amazon Polly, we process it and build a smaller list of just those events that have to do with the vowels that open the mouth:
Figure 30 – Query the FileMaker Data API
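That filtering step can be sketched as follows. This is not the branch's actual code; it assumes Polly's documented speech-mark format (one JSON object per line) and uses an illustrative set of "open mouth" visemes that you would tune to your own fish.

```python
import json

# Visemes treated as "mouth open" -- an illustrative set, not Polly's
# or the master branch's definitive list.
OPEN_MOUTH = {"a", "E", "i", "O", "u", "@"}

def mouth_events(speech_marks: str):
    """Reduce a Polly viseme stream to (time_ms, mouth_open) events."""
    events = []
    for line in speech_marks.splitlines():
        if not line.strip():
            continue
        mark = json.loads(line)
        if mark["type"] == "viseme":
            events.append((mark["time"], mark["value"] in OPEN_MOUTH))
    return events

sample = (
    '{"time":6,"type":"viseme","value":"p"}\n'
    '{"time":125,"type":"viseme","value":"a"}\n'
    '{"time":250,"type":"viseme","value":"sil"}'
)
print(mouth_events(sample))
# [(6, False), (125, True), (250, False)]
```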
Later in the code, when it is time to play the audio and move the head and mouth, we spawn a separate thread (lines 267 and 273) to process the data we retained (see Figure 31).
Line 267 invokes the function that loops through that viseme data array and moves the mouth at the appropriate time.
Figure 31 – A separate thread is spawned to process the retained data
In Figure 32, Line 78 opens the mouth for a pre-determined length of time (set in billy.ini) and line 83 puts the subroutine in waiting mode until it is time for the next line in the viseme data, based on the time stamps in the Polly data.
Figure 32 – Code
You will note that line 272 in Figure 31 introduces a delay between starting the voice (the audio playback) and starting the thread for the mouth. That value is set in billy.ini and tuned to what I found necessary to make sure the mouth action did not start earlier than the audio playback.
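The timing arithmetic behind that wait-and-move loop can be sketched on its own. Assuming a fixed mouth-open duration (standing in for the billy.ini setting), the wait before the next movement is the gap between consecutive viseme timestamps minus the time already spent holding the mouth open:

```python
# Illustrative sketch of the timing math, not the actual subroutine.
def wait_times(timestamps_ms, mouth_open_ms=120):
    """For each pair of consecutive viseme timestamps, compute how long
    to wait after the fixed mouth-open period, clamped at zero so a
    short gap never produces a negative sleep."""
    return [max(nxt - cur - mouth_open_ms, 0)
            for cur, nxt in zip(timestamps_ms, timestamps_ms[1:])]

print(wait_times([0, 125, 250, 700]))
# [5, 5, 330]
```

In the real code each of these waits would become a `time.sleep()` inside the spawned mouth thread.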
And with that I was pretty much done. Everything was working as I wanted. The new pinion gears arrived, and I replaced the old gears on all three of my Billy Basses (the original one and my two Billy Bass Bones), and now the heads turned the proper 90 degrees, making Billy look straight at you as it begins talking.
My last worry was whether or not I would be able to access my virtual instance of Windows 10 where my Visual Studio 2017 lives (it’s on a VMware server here in the office, behind a VPN) given how flaky hotel networks can be.
It turns out that the excellent (and free) Visual Studio Code editor on macOS also supports Python as shown in Figure 33.
Figure 33 – Visual Studio Code editor
All I had to do was clone my GitHub repository to my Mac, and away I went. From my Mac, I can SCP (Secure Copy Protocol) to copy the modified Python scripts over to the Raspberry Pi and use SSH in Terminal to run the main Python script to put the Raspberry Pi in its wait-and-read loop. Figure 34 shows the Python script running on the Raspberry Pi, logging every step of its actions.
Figure 34 - Python script running on the Raspberry Pi
And here is a final picture of my hotel room with the whole thing set up for last-minute testing:
Figure 35 - Final testing
Time for a little fun then. Early on I mentioned that the text we are sending to Amazon Polly uses SSML markup. Using SSML allows us to specify things like accents. Or tell Polly to speed up the text narration, like so:
<speak>Hi, my name is Andrew Lecates, when I get going, <prosody rate="x-fast">you really have to fasten your seatbelts, because there is a lot to cover and you are standing between me and my coffee, or red bull, or whatever the heck it is that makes me talk like this.</prosody><amazon:breath duration="x-long" volume="x-loud"/><break time="500ms"/></speak>
Figure 36 – Demo file on iPhone or iPad
If you open the demo file on your iPhone or iPad, you’ll go straight to a device-specific layout where you can flip between the records and use the big buttons to flag the record you want Billy to speak. You can also use the “Head” button to make Billy only turn his head without saying anything.
I had a lot of fun putting this particular DevCon presentation together, and there is something immensely satisfying about seeing devices collect sensor data and being able to make things move from a FileMaker solution.
If nothing else I hope I have demonstrated that the FileMaker platform fits in very well in this IoT environment.
Writing the code to run on the Raspberry Pi was fairly straightforward, in both C# and Python. None of the demos that I used in my session are more than 300 lines of code, with plenty of whitespace, comments and logging. Don’t let unfamiliarity with those languages deter you. It's easier than you think.
Questions about anything in this series? Leave a comment here or find me on community.filemaker.com.
The Story of Billy Bass - Part One (setting it up)
The Story of Billy Bass - Part Two (making Billy Bass move)
The Story of Billy Bass - Part Three (using Raspberry Pi)
The Story of Billy Bass - Part Four (switching to Python)
The Story of Billy Bass - Part Five (using Visemes)
The post DevCon 2018 Follow-up: The Story of Billy Bass – Part Five appeared first on Soliant Consulting.
This is the fourth in a series of posts about the demo I presented during my “IoT and the FileMaker Data API” session.
Making it Work in Python
The first half of my DevCon presentation would be about collecting sensor data from a Raspberry Pi and sending that data to FMS through the Data API, and I am using Windows 10 IoT and .NET code for that. If I could do the second half of the presentation using Python, then it would show off the versatility of both the Raspberry Pi device and the FileMaker Data API.
Raspberry Pis don’t have hard drives; they run from Micro SD cards, so switching operating systems is as easy as switching SD cards. I got a card with the latest Raspbian installed and then followed the Adafruit instructions to get their Python library installed and run the tests. The tests worked just fine, so I could move on to figuring out how to write the code in Python.
Finding the Right IDE
I was trying to figure out what IDE (Integrated Development Environment) I would need to write Python code, but it turns out that Visual Studio 2017 handles Python just fine, so I didn't have to learn a new tool.
The fully working source code for what I ended up doing in my DevCon session is here on GitHub: https://github.com/wimdecorte/BillyPython
Figure 22 is from a Windows virtual instance, running Visual Studio 2017 and shows what files there are in my source code:
billy.ini is the config file (my FileMaker Server address, name of the file, layout and login credentials, and so on)
BillyPython.py is the working version of the whole thing: query the FileMaker Data API to see if we need to play audio and the code to make the mouth and head turn
Before I got to writing the whole thing, I needed to figure out what the correct settings were to move the fish’s mouth and head. For that I started with two test files: BillyBodyTest.py and BillyMouthTest.py
Figure 22 – Python code
The purpose of those test files was to figure out what the ideal settings were for the motor to deliver sufficient power for just long enough to make the head and mouth move, but not so much power that I would break the mechanism.
If you want to build your own Billy Bass setup, then I strongly suggest using these two files to find those ideal settings for your fish. I was working with two Billy Bass Bones since I wanted redundancy in case I broke one and found that they each required subtle differences in those values for maximum effect.
Since time was running out, I ended up storing these motor values in the billy.ini config file (see Figure 23) instead of storing them in the FileMaker file and reading them from there. It would not be a big change to add that since I had it working in .NET (see the MOTOR and MOTORHAT layouts in the FileMaker demo file).
Figure 23 – Motor values stored in the billy.ini config file.
Re-coding in Python
I already had most of the code developed in C# so the logic was clear to me. Figure 24 shows the workflow that I needed to re-code in Python. The Raspberry Pi would query FileMaker Server twice every second to see if any records were flagged there. If there was such a record, it would download the mp3 and play it back while moving the mouth and the head.
The first order of business was to find a good Python wrapper around the FileMaker Data API. I settled on David Hamann’s fmrest, on GitHub. As I had never written Python before, David was very gracious in answering my questions to keep me going.
One of the challenges was finding out how to do multiprocessing in Python because moving the head and mouth had to happen at the same time as playing the audio. That took a bit of Googling to get right. Fortunately, there is a ton of useful info around.
Moving the head was easy enough: once I have the audio, I can figure out how long it is and then send a signal to the head motor to move in the right direction for that length of time.
That happens in this block of code as shown in Figure 25.
Figure 24 – Workflow for re-coding in Python
Figure 25 - Code block
But there was the obvious big challenge: how do I make the mouth move in an approximation of what the audio was saying?
When you check the source code on GitHub, you’ll notice that there are three branches (see Figure 26).
The “pacat” branch can be ignored; it was an attempt to have the Raspberry Pi ‘listen back’ to the audio as it was playing, to try and determine when to open and close the mouth based on volume or something else. I abandoned that branch as I was running out of time.
The “master” branch is where I use the visemes supplied by Amazon’s Polly to move the mouth, and “AudioSample” uses the mp3 amplitude to determine when to move the mouth. That approach does not use the visemes at all but just inspects the audio file itself.
Figure 26 – Three branches in the source code
"AudioSample" in essence uses the block of code shown below to chunk up the audio into segments of 200 millisecond and for each segment it determines the amplitude of that chunk (see Figure 27). If the amplitude is higher than the threshold it runs the motor to open the mouth for the preset duration and then closes it again.
Figure 27 - Code block for "AudioSample"
The variables involved here to make this look natural are:
the length of the audio segment
'fish_mouth_duration': how long to keep the mouth open
'time_allow_for_spent': accounts for the time it takes to mechanically open and close the mouth
With some careful testing, you can get them just right depending on your fish. The code depends on the pydub library for the audio-inspection.
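The thresholding at the heart of "AudioSample" can be shown without pydub. Assume the amplitude of each 200 ms chunk has already been measured (in the real code pydub supplies those values from the mp3); the scheduling decision is then a simple comparison:

```python
# Pure-Python sketch of the chunk-threshold decision; the amplitude
# values here are made up for illustration.
def mouth_schedule(chunk_amplitudes, threshold, chunk_ms=200):
    """Return (start_ms, mouth_open) for each fixed-length audio chunk."""
    return [(i * chunk_ms, amp > threshold)
            for i, amp in enumerate(chunk_amplitudes)]

print(mouth_schedule([3, 120, 95, 4], threshold=50))
# [(0, False), (200, True), (400, True), (600, False)]
```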
The Story of Billy Bass - Part One (setting it up)
The Story of Billy Bass - Part Two (making Billy Bass move)
The Story of Billy Bass - Part Three (using Raspberry Pi)
In my final post, I'll show how I used Visemes.
The post DevCon 2018 Follow-up: The Story of Billy Bass – Part Four appeared first on Soliant Consulting.
This is the third in a series of posts about the demo I presented during my “IoT and the FileMaker Data API” session.
Construct #2 - Billy Bass Bones
As mentioned in my previous post, I decided to do everything from the Raspberry Pi to control the Billy Bass toy.
While I was scouring eBay looking for pristine Billy Basses I came across this one as shown in Figure 15.
Figure 15 - Ordering a Billy Bass
I thought that would be funny and I could make the story about the fact that almost 20 years have passed since Billy Bass’ first DevCon and that he… well… became bones.
Unfortunately, when I got this fish, it too did not turn its head the full 90 degrees that I wanted. Nevertheless, I ‘gutted’ the fish in the same way as the original one by removing its controller unit and only leaving the wires that come down from the motors and the speakers (see Figure 16).
This type of Billy Bass has two motors: one for the mouth and one that drives the head or the tail depending on whether you run the motor forward or backward.
Figure 16 – Inside Billy Bass with the controller unit removed
Issue with the Pinion Gear
Since I was seriously disappointed about both Billys not turning their heads properly, I did some more searching and found that it is a bit of a known issue: the pinion gear that sits on the shaft of the motor that moves the body can crack. In fact, it almost certainly will break given enough usage and time. And since all of these toys are second-hand, it is pretty much a given that they have been used enough times for those gears to have cracked and for the head to no longer turn all the way.
There are some informative videos on YouTube that show how to replace the gear, so after a lot of trepidation I decided to go ahead and disassemble the fish further to get to the body motor.
Figure 17 - Pinion gear on the shaft of the motor can crack
You cannot tell from the pictures in Figure 17, but there was indeed a crack between two of the teeth and because of it, the gear is not tight enough on the shaft. When the motor turns, it does not deliver all of its power through the gear, and the gear slips. And that is why the head does not turn all the way.
Not all Billy Basses, however, use the same type of gear. While they all have 2mm shafts, some gears have nine teeth and some 10 or 11, so you need to get the gear out first before you know which ones to order. Mine were all 9-tooth gears, so I got a bunch from Amazon. They are very cheap, but the delivery time was making me a bit nervous. And at this point, I wasn't entirely certain that I could re-assemble the whole fish.
While I was waiting for the replacement gears to arrive, I set about finding out how to drive them from the Raspberry Pi. I settled on the MotorHat from Adafruit. Adafruit makes a bunch of quality hardware for Raspberry Pis and Arduinos, the documentation is excellent, and they have a good support forum. They also have a .NET library for their hardware including this motor hat. The MotorHat has four motor connections, more than enough to drive the two Billy Bass motors.
“Hats” or “Shields” are an integral part of the Raspberry ecosphere and that of similar boards like the Arduino and Particle. Hats add hardware functionality very easily. I have a few slides in my DevCon presentation that show the concept: https://community.filemaker.com/docs/DOC-9255.
To my dismay, however, the motor hat had to be soldered together. I looked for pre-assembled ones before I became resigned to the fact that, yes: I would have to solder. So it was back to Amazon, and I got a soldering iron set. But it stopped working after just one use, so I bit the bullet and got a decent one.
It turns out that soldering is a lot of fun. Who knew. It helps that the Adafruit instructions are complete and easy to follow.
By this time my dining table started to look quite full (Figure 18).
Figure 18 – My “workshop”
With just a few weeks left before DevCon...
Of course, none of this would have happened without the support of my loving wife Nicky — a fellow FileMaker developer and great artist — whom I robbed of that dining room table for a few months, and whose patience I know I’ve tested on more than one occasion.
To keep the soldering to a minimum and to allow for quick assembly and disassembly I was still using a breadboard and jumper cables to connect the Raspberry Pi and Billy Bass. Figure 19 shows the Raspberry Pi with the MotorHat on and the cables running to the breadboard.
That takes care of the hardware to control the motors. Now I needed to write code to send instructions from the Raspberry Pi, through the Motor Hat to the Billy Bass.
I started with the MotorHat demo for C# that they have on GitHub and the underlying Adafruit C# library: https://github.com/adafruit/AdafruitClassLibrary.
Figure 19 – Raspberry Pi and MotorHat
Looking at their demo, I could see that there are a few settings that need to be specified to drive a motor correctly: the Pulse Width Modulation (PWM) frequency, the direction of the motor (forward or backward), the speed, and the duration. I broke those out in my FileMaker demo file so that I could just modify the settings right there in my FileMaker record and have the Raspberry Pi read them from FileMaker Server and send the instructions to the motors.
Figure 20 - Modifying the motor settings in my FileMaker demo file
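As a sketch, those four settings could be modeled like this; the field names are mine rather than the demo file's, and the 0-255 speed range follows the Adafruit MotorHat convention:

```python
from dataclasses import dataclass

@dataclass
class MotorCommand:
    """Hypothetical container for one motor instruction; field names
    are illustrative, not taken from the demo file."""
    pwm_frequency: int   # Pulse Width Modulation frequency, in Hz
    direction: str       # "forward" or "backward"
    speed: int           # 0-255, per the Adafruit MotorHat convention
    duration_ms: int     # how long to run the motor

    def validate(self):
        # Guard against values that could stall or damage the mechanism.
        assert self.direction in ("forward", "backward")
        assert 0 <= self.speed <= 255
        return self

cmd = MotorCommand(pwm_frequency=1600, direction="forward",
                   speed=200, duration_ms=350).validate()
print(cmd.speed)
# 200
```

Validating the record before sending it to the hat is a cheap way to catch a typo in FileMaker before it reaches fragile gears.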
Having learned that those pinion gears on the fish were a little fragile, and since I did not have the replacements yet, I did not want to experiment with the actual Billy Bass: it was very likely that I'd mess things up at some point and drive the motors too hard and too long. So I just hooked up a DC motor to the hat so that I could tell, based on my code, whether it was running correctly. The source code is here on GitHub: https://github.com/wimdecorte/BillyBassPolly2
Figure 21 shows the hardware setup. Since I am using jumper cables and a breadboard, I can easily switch the motor from port 1 to port 2 and change its polarity so that ‘forward’ in my code corresponds to the motor running in the right direction.
Note that you will need a good power adapter to run the MotorHat. I tried to skimp on that but found that the hat did not respond very well so I ended up getting the power adapter directly from Adafruit.
Everything was working well except that I was getting an error every second time I ran the code. Trying to troubleshoot it I came across an open issue in the Adafruit .NET library and the suggested fixes did not work for me. It looked like I was stuck, and we were into June now. I waited a couple of days to see if there was any feedback on the issue, but none came.
Figure 21 – Hardware setup
Since the default OS for a Raspberry Pi is Raspbian Linux, most of the example code you'll find is Python, and Adafruit has a Python library for their MotorHat, I decided to switch gears and see if I could make it all work in Python. I had never touched Python before, so I was feeling more than a little anxious. But I was also excited by the idea.
DevCon 2018 Follow-up: The Story of Billy Bass - Part One
DevCon 2018 Follow-up: The Story of Billy Bass - Part Two
In my next post, I'll discuss how I made it work in Python.
The post DevCon 2018 Follow-up: The Story of Billy Bass – Part Three appeared first on Soliant Consulting.
This is the second in a series of posts about the demo I presented during my “IoT and the FileMaker Data API” session.
Construct #1 - the original Billy Bass
I wanted to find a Billy Bass that was as close to the original one used in the 2001 DevCon session, so I started digging around on eBay, and this is the one I ended up buying:
Figure 9 - Buying an original Billy Bass
When it arrived, I unscrewed the back to see how I could get at the motors (see Figure 10). There's a small controller unit in there with wires going to the motors, the speaker, the motion sensor, and the battery compartment.
Figure 10 - Controller unit wired to components
I cut all the wires as close to the controller unit as I could and then removed everything except the wires that go to the three motors and the wires that come down from the speaker.
This particular Billy has three motors: one for the head, one for the tail, and one for the mouth. Those are the wires that go off to the big white area in the center of the fish. When I was done with the demolition, I had just three pairs of wires, each leading to one of the motors.
Figure 11 – Cut controller wires
These motors are fairly basic 6V brushless DC items (as I would find out much later in the process – at this particular time in the process I saw no reason to take them out just to look at them, since I was not confident that I could put it all back together).
Figure 12 – Motor
Linking the Raspberry Pi and the Motors
Now I had to find something to connect those motors to and make them run: a motor controller board that would bridge the gap between the Raspberry Pi and the motors.
As I was searching for existing Billy Bass projects (there are a few that use Alexa), I came across a small device named the Audio Servo Controller that seemed promising.
In talking to Jack, who created the board, he mentioned that he had another board better suited to the task: the Squawker Talker. You can see it in action in a demo video on page 2 of this thread.
The Squawker Talker drives the motors, and since it has a built-in 3-watt amplifier that uses line-in for its audio input, I could use a simple 3.5mm male-to-male audio cable from the Raspberry Pi to this unit to play the audio, and connect the amplifier on the board to the Billy Bass speaker. It also comes with a power supply to drive it all. That looked perfect, so I ordered one of the Squawker Talker units ($65).
At this point, I was not comfortable with soldering, so I settled on using heat shrink seals to connect the Billy wires to small jumper cables to plug into a breadboard. With the heat shrink seals, I can use a heat gun to ‘melt’ the wires together and avoid the soldering. You can see the result with the heat shrink seals in that picture above that shows the motor, about halfway on the wire.
When I had all the wires connected, it looked like Figure 13. The green unit is the Squawker Talker, the white unit is the breadboard where I plug in the cables.
Figure 13 - Billy Bass wires connected to the Squawker Talker.
How Does it Work?
The Squawker Talker controls the motors and the speaker. So where does the Raspberry Pi come into play then? The Raspberry Pi (RPI) is the 'connected device' in this IoT setup. In other words, it has to run on its own, and have a connection to the internet so that it can check whether we have flagged a record in FileMaker to be played by Billy Bass.
I was already using Raspberry Pis as IoT devices to collect sensor information and send it to FileMaker Server, and I was using Windows 10 IoT as the operating system for the RPIs, so I decided to use Windows 10 IoT here too.
My preferred language for this is C#, and I have an open source project on GitHub that is a .NET wrapper around the FMS Data API (https://github.com/wimdecorte/FMdotNet__DataAPI).
With that, it was a natural thing for me to turn to C# to write the code to run the Raspberry Pi.
You can download the code here: https://github.com/wimdecorte/BillyBassPolly.
In a nutshell, the code does this:
There is a small config file to store the name of the FileMaker server, the name of the FileMaker file, the layout to use and the FileMaker account and password to access the file;
The Raspberry Pi loops every half second and queries the FileMaker Server through the Data API (the interval is a config setting too), it looks for any records that have the “flag_ready” field set to 1 (the checkbox labeled “Speak It” on the BILLY layout);
If it finds a record where that checkbox is checked, it downloads the mp3 from the container field;
And then it uses the Operating System media player to play the mp3;
At this point the Squawker Talker takes over and forwards the audio to the Billy Bass speaker and moves the body and mouth.
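The decision step of that loop can be sketched in Python for illustration (the actual demo is C#), with the Data API call replaced by stub record data:

```python
# Stub records standing in for a Data API find; "flag_ready" mimics the
# checkbox field on the BILLY layout.
def records_to_speak(records):
    """Pick out the records whose 'flag_ready' field is set to 1."""
    return [r for r in records if str(r.get("flag_ready", "")) == "1"]

fetched = [
    {"id": 1, "flag_ready": "",  "text": "Hello DevCon"},
    {"id": 2, "flag_ready": "1", "text": "I am Billy Bass"},
]
print([r["id"] for r in records_to_speak(fetched)])
# [2]
```

In the real code, each matching record would trigger the mp3 download from the container field before playback.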
Figure 14 – Raspberry Pi connected to the Squawker Talker
And here is the end result:
Based on the audio input, the mouth will move in sync, and you can adjust the threshold level on the Squawker Talker by adjusting screws on the board. The body and tail move randomly.
As you can see in the video, the head does not turn through the full 90-degree angle, and I was a little unhappy about that. I wanted the head to turn all the way so that Billy Bass would look at you, as it is meant to.
I also felt that the whole construct was a little too... easy: the Raspberry Pi is not driving the motors directly; the Squawker Talker board takes care of that.
I decided to have another go at it, and this time without the Squawker Talker board, I would do everything from the Raspberry Pi. And I wanted a fish whose head was working properly.
So, I looked around on eBay a little more to see if I could find one that was guaranteed to have a head that turns better, and then find a way to interact with the motors myself.
DevCon 2018: The Billy Bass Story - Part One
In my next post, I'll show how I used the Raspberry Pi to control the Billy Bass head.
The post DevCon 2018 Follow-up: The Story of Billy Bass – Part Two appeared first on Soliant Consulting.
At this year’s DevCon in Dallas, TX, I presented a session on FileMaker and IoT (Internet of Things); in preparing for it I settled on this definition of IoT:
Based on that definition and the highlighted core elements, I split up the DevCon session into two parts:
Collecting sensor data and sending it to FileMaker Server through the Data API; and
Manipulating the environment by driving motors based on data retrieved from FileMaker Server.
The first part was easy, using a couple of Raspberry Pi 3s, a SenseHat on one, and a GrovePi hat on the other and using various sensors to collect data. You can download the FileMaker demo file and find links to the source code here: https://community.filemaker.com/docs/DOC-9255. I had been using those same Raspberry Pis to stress-test the beta versions of FileMaker 16 and 17's Data API, and it has made me a big fan of the Data API: it is fast and robust.
But I had to find a good demo that showed the second aspect: driving those motors. A robot or a car were apparent choices, but probably too obvious. I toyed a bit with the idea of a "useless box" but couldn't quite work in a good FileMaker angle.
Then I thought of Billy Bass. Back in 2001/2002, Rich Coulombre and his team at The Support Group used the Troi serial plugin to connect Billy Bass to their computer and make it talk: https://www.troi.com/software/billybass.html
A Gemmy Billy Bass toy has two or three motors to move the body, tail, and mouth, so it would be perfect, and a nice link to the past. I went for it.
Here is how things needed to look when the work was complete:
a FileMaker file hosted on FileMaker Server, and to keep things in the 'Cloud' realm, specifically a FileMaker Server 17 running on an Amazon EC2 instance;
the user should be able to type anything into a text field in that FileMaker file; and
Billy Bass should then speak what is typed. The Billy Bass is connected to the internet through a Raspberry Pi and listening for instructions through the FileMaker Data API.
Figure 1 - Diagram of the Billy Bass workflow
Off to work then.
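The "listening for instructions" part of that plan boils down to a polling loop on the Pi. Here is a minimal sketch; the record shape and field names are assumptions for illustration, and a real version would fetch records through the Data API and drive the motors inside `play()`:

```python
# Sketch of the Raspberry Pi side: poll the hosted file for new text to speak.
# Record shape and the audio_url field are illustrative assumptions.
import time

def next_utterance(fetch_record, last_id):
    """Return (record_id, audio_url) if a record newer than last_id exists,
    else None. `fetch_record` is any callable returning the latest record."""
    rec = fetch_record()
    if rec and rec["recordId"] != last_id:
        return rec["recordId"], rec["fieldData"]["audio_url"]
    return None

def poll_loop(fetch_record, play, interval=2.0):
    """Check for new speech every `interval` seconds and play it."""
    last_id = None
    while True:
        hit = next_utterance(fetch_record, last_id)
        if hit:
            last_id, url = hit
            play(url)  # e.g. download the MP3, play it, move the mouth
        time.sleep(interval)
```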
FileMaker Server and the FileMaker file: text-to-speech
Putting the file on a FileMaker Server was easy: I chose my dev server, which is part of our soliant.cloud infrastructure (https://www.soliant.cloud). It's a Windows instance on Amazon Web Services (AWS) in the us-east-1 region, which comes down to Northern Virginia. The server is fully patched with FileMaker Server 17v2.
Now for the user typing text into that hosted file. The end result we need is audio for whatever was typed in, so we need a text-to-speech service that we can send that text to and that will generate audio for speech playback. There are several such services around, but I am most familiar with the Amazon Polly service:
If you want to play around with the service, you can test it out here after signing in with your AWS account:
To interact with Polly from inside FileMaker we will be using the Polly REST API. I could have used the Raspberry Pi itself to interact with Polly through one of the available SDKs, but I wanted this demo to be about FileMaker's capabilities, and FileMaker is extremely good at interacting with REST APIs.
So, FileMaker will send the text to Polly and receive the audio file in return.
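For contrast, here is roughly what the SDK route I decided against would look like with boto3, AWS's Python SDK. The helper just builds the request parameters; the voice name and SSML wrapping are illustrative choices, not the demo's exact settings:

```python
# Build the keyword arguments for boto3's polly.synthesize_speech() call.
# Voice name and SSML wrapping are illustrative assumptions.
def polly_request(text, voice="Joanna", use_ssml=True):
    return {
        "Text": f"<speak>{text}</speak>" if use_ssml else text,
        "TextType": "ssml" if use_ssml else "text",
        "VoiceId": voice,
        "OutputFormat": "mp3",
    }

# Usage (requires boto3 and AWS credentials; not run here):
#   import boto3
#   audio = boto3.client("polly").synthesize_speech(**polly_request("Hello"))
#   open("billy.mp3", "wb").write(audio["AudioStream"].read())
```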
In the FileMaker file the API settings are stored on the API layout, and that’s where you’d enter your own AWS API key (see Figure 2):
The other thing we need to provide is somewhere to enter the text you want Billy Bass to speak. That’s the BILLY layout (see Figure 3).
Figure 2 – Provide your own AWS API key on the API layout
Figure 3 - Enter text for Billy Bass to speak
As you enter text, it's automatically wrapped in <speak> and </speak> XML-like tags. That's because we will take advantage of Polly's ability to process text that is marked up with SSML (Speech Synthesis Markup Language); you will see why later. Polly itself can work with plain text as well; you have to indicate your choice in the JSON instructions that you send over to Polly.
Besides the actual text that we want to synthesize, we also have to send some other information over to Polly. According to the API documentation, the call should look like this:
POST /v1/speech HTTP/1.1
"LexiconNames": [ "string" ],
"SpeechMarkTypes": [ "string" ],
That's easy enough to do: FileMaker is excellent at creating JSON, and the "Insert from URL" cURL options let us specify the POST method and set the Content-Type header.
To make sure that Polly will accept the request we are sending to it, we have to include authentication information in the header of the call. That is probably the biggest challenge in making this REST call.
The section quoted below comes from the Authentication and Access Control for Amazon Polly page and describes our options for authenticating.
The SDK and CLI tools use the access keys to cryptographically sign your request. If you don’t use AWS tools, you must sign the request yourself. Amazon Polly supports Signature Version 4, a protocol for authenticating inbound API requests. For more information about authenticating requests, see Signature Version 4 Signing Process in the AWS General Reference.
Since we are not using any of the AWS SDKs or the AWS CLIs, this means that we need to use that "Signature Version 4" protocol, which comes down to sending a special Authorization header as part of the cURL options.
As it so happens, Salvatore Colangelo from Goya (famous for their BaseElements analysis tool and BaseElements plugin) demoed this at the 2017 DevCon and described the process in this blog post: http://www.goya.com.au/blog/generate-aws-signature-version-4-in-filemaker/
So, I gratefully borrowed his script and supplemented it with the rest of the data that Polly needs: what language to use, what voice for that language, whether we are sending SSML text or pure text, and whether we want an MP3 file back and at what bitrate.
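For the curious, the signing process itself is fairly compact when sketched outside FileMaker. This is a minimal, illustrative Python version of the Signature Version 4 steps (canonical request, string to sign, chained HMACs), not the FileMaker script borrowed from the Goya post; keys and payload are examples:

```python
# Minimal AWS Signature Version 4 sketch for a Polly SynthesizeSpeech call.
# Illustrative only: access/secret keys and payload are example values.
import datetime
import hashlib
import hmac

def sigv4_headers(access_key, secret_key, payload,
                  region="us-east-1", service="polly", path="/v1/speech"):
    host = f"{service}.{region}.amazonaws.com"
    now = datetime.datetime.utcnow()
    amz_date = now.strftime("%Y%m%dT%H%M%SZ")
    datestamp = now.strftime("%Y%m%d")

    # 1. Canonical request: method, path, headers, and payload hash.
    payload_hash = hashlib.sha256(payload.encode()).hexdigest()
    canonical_headers = f"host:{host}\nx-amz-date:{amz_date}\n"
    signed_headers = "host;x-amz-date"
    canonical_request = "\n".join(
        ["POST", path, "", canonical_headers, signed_headers, payload_hash])

    # 2. String to sign, scoped to date/region/service.
    scope = f"{datestamp}/{region}/{service}/aws4_request"
    string_to_sign = "\n".join(
        ["AWS4-HMAC-SHA256", amz_date, scope,
         hashlib.sha256(canonical_request.encode()).hexdigest()])

    # 3. Derive the signing key via chained HMACs, then sign.
    def hsign(key, msg):
        return hmac.new(key, msg.encode(), hashlib.sha256).digest()
    k = hsign(hsign(hsign(hsign(b"AWS4" + secret_key.encode(),
              datestamp), region), service), "aws4_request")
    signature = hmac.new(k, string_to_sign.encode(),
                         hashlib.sha256).hexdigest()

    return {
        "X-Amz-Date": amz_date,
        "Authorization": (f"AWS4-HMAC-SHA256 Credential={access_key}/{scope}, "
                          f"SignedHeaders={signed_headers}, "
                          f"Signature={signature}"),
    }
```

The FileMaker script does the same dance with Let() and the native cryptographic functions; the output in both cases is the Authorization header that goes into the cURL options.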
The language and voice are something we want to play with, so I made those choices available on the FileMaker layout (see Figure 4).
Figure 4 – Specify the language and voice
The other settings we leave pretty much at their defaults, because they produce a nice, small MP3 file that the Raspberry Pi can download efficiently.
To synthesize the text shown above, the request that our script produces is this:
The JSON body of the request is stored in the variable $request_params and contains just the bare minimum that we must send (see Figure 5).
Figure 5 – $request_params variable
We use that variable as part of the cURL options in the variable named $options, so that the full set of cURL options we feed to the "Insert from URL" script step looks like Figure 6.
Figure 6 – $options variable
Figure 7 shows the script at the step where we call that "insert from URL" script step and point it to the container field that will store Polly's mp3 audio file.
Figure 7 - Use the "Insert from URL" script step to point to the container field that will hold the audio file.
When we run the script, Polly sends us the audio file of our text:
Figure 8 - Audio file of the text
So far so good. That's #1 and #2 done. Time to start on #3: how to make Billy Bass move.
In my next post, I'll show how I made Billy Bass move.
The post DevCon 2018 Follow-up: The Story of Billy Bass – Part One appeared first on Soliant Consulting.
Last week I wrote about my experience assisting with Bob Bowers’ Advanced session. I also interviewed two other trainers about how their sessions went. That was so much fun I decided to interview everyone who gave trainings this year.
I managed to track down three more: Mike Beargie of MainSpring, Jeremy Brown of Geist Interactive, and Cris Ippolite of iSolutions. It was a pleasure to speak with them all. Here’s some of what they had to say.
Intermediate - FileMaker Shared Hosting Master Class
Many of you know Mike Beargie from his consistently helpful presence in the FileMaker Community. He was kind enough to meet with me at lunchtime, even though he had a DevCon session to give immediately afterwards. We spent a little time catching up on the past year, and then he shared his reflections on his half-day training:
“This is the first time I’ve done a training day. I spend a lot of time answering people’s questions in the community, so it seemed like the natural next step. I’ve also been speaking at DevCon for a few years now, and I wanted to try something longer than a session.
My class was all about FileMaker hosting, how to install FileMaker Server or FileMaker Cloud so that you can share out your files. It was a start-to-finish comprehensive course on how to install the server, how to secure the server – including generating and installing an SSL certificate – how to configure all the settings in the admin console, how to actually connect to the server once it's up and running, and finally touching on troubleshooting and getting help.
My goal was to show people that they can set up their own server. With FileMaker 14 support ending in September, and multi-tenant shared hosting going away, there are going to be a lot of people scrambling to re-host their solutions. The landscape is changing with hosting companies: now they’re offering their IT knowledge as a service to help set up dedicated servers for people, rather than providing a setup where a group of clients save money by all sharing the same server.
When we hit the first break, I spent the whole time answering people’s questions. People were really engaged and just stayed all the way through. They wanted to learn as much as they could.
At the end of the training, a lady from the UK came up to talk to me. She was the embodiment of a citizen developer, a business owner trying to provide more efficient software for her staff. Up to now she hadn't considered doing this herself. But her FileMaker 14 hosting company basically told her, 'We can set up a dedicated FileMaker 17 server for you and help you manage it, but your rates are going to go up significantly'. So she was worried that she couldn't afford it, that the license was costing her money, and that maybe she wouldn't even be able to use it.
Now she has a lot more confidence. She told me, ‘You know what, we’re setting up our server as soon as we get home. We just got our 17 licensing and we were really scared about doing the AWS part of this. You made it look really easy’ – and I jumped in and said, ‘It IS really easy’ – but the important thing is that now she’s ready to give it a try.”
I’ve known Jeremy Brown for several years now and admire his passion for teaching and willingness to help others. We met in the hallway between DevCon sessions for a quick chat about his half-day training.
“I was happy that there were 150 people in the session. Shows there’s lots of interest. Even some people from FileMaker, Inc. were there and wrote about it in their blog post. Now I have a Slack community (firstname.lastname@example.org) for all the people who signed up during my session so they can continue that conversation in the weeks going forward.
One of the participants was a first-time DevCon attendee who has been working on his solution for a long time and is interested in expanding its platform. He sat in the front row and was there the whole time, working hard. At the end of the session he shook my hand and told me that I helped inspire him to continue his study. He was excited to work with the charting library that I had provided and get it fully integrated into his system.”
Intermediate – Relationships / Calculations
I got to know Cris Ippolite during my time working as the Technical Marketing Evangelist for FileMaker, Inc. I will always be grateful for all the encouragement and support he gave me during that time. And of course he’s a joy to interview — the tricky part for me is to edit down our conversation while preserving his distinctive voice:
“I gave a full-day training in two parts. The morning was about intermediate-level relationships and the afternoon was about calculations.
Relationships is a topic that I’ve been investigating recently, figuring out why people struggle with it so much. The main thing people can’t seem to wrap their heads around is the relationship graph. The concept of relationships in the abstract makes sense to people, but the different ways you actually use the tool can be challenging. So I’ve been separating the idea of creating true relationships between tables from relationships we use for queries – which is where I see people getting lost.
The graph is great for true relationships, but I don’t see the upside of visualizing a query as a bunch of boxes with lines between them. Instead of burdening people with parsing out ‘Is this the same thing as that’, I say separate them so it’s easy to see the difference.
People responded to that honest critique, and to learning a way to sort things out. I could tell it was landing with people – you know, when you get the nods and the ah-has as you’re going along. Then, that night, a group of folks I ran into in the bar – I’m assuming they travel together because they were in the class together too – they pulled me aside and all started talking at once, saying, ‘Hey, that was great! Thanks for letting us know that we weren’t the only ones confused by this.’
In the afternoon I talked about calculations. I wanted to impress upon people that it’s not all about calculated fields. Maybe you’re already comfortable creating field formulas like you do in Excel, but there’s so much more – you can use formulas all over the platform, in custom dialogs, replaces, hidden objects, conditional formatting, portal filtering, tooltips, all that stuff. So if you invest in increasing your calculation vocabulary, you can leverage that information in a lot of different ways.
At the end of the day, I try not to introduce more boring stuff. You know, people are like, holding their heads in their hands and saying, ‘Make it stop!’ so I always wrap up with something fun. This time, I created a dog-walking app that uses four or five different GetSensor parameters to do things like counting your steps and how far you’ve gone. People really dug it, they were rushing to download it to their phone, and they had a great time playing around with it. What could be more fun than getting people on their feet and putting calculations literally in action?”
I had a great time talking to all these folks, hearing how they work and what motivates them as trainers. I hope you got something out of it too!
FileMaker DevCon 2018: Training Day - Part One
FileMaker DevCon 2018: Day 1
FileMaker DevCon 2018: Day 2
FileMaker DevCon 2018: Day 3
The post FileMaker DevCon 2018: Training Day – Part Two appeared first on Soliant Consulting.
Today was the final day of DevCon, and it had an entirely different flavor: it was Customer Day! Rather than technical topics, the majority of today’s sessions were show-and-tell client stories of problems solved with FileMaker.
Success Stories of Soliant's Philanthropy Committee
The first session I attended began with a presentation by our own Makah Encarnacao. Makah shared her own story of a chance conversation with Chris Manto during DevCon a few years ago. Over a drink at the bar he shared his experiences in West Africa, working as a film journalist in his early twenties, when he witnessed horrific starvation and death, while all he and his team could do was use their vocation as journalists to tell the story. This serendipitous meeting inspired Makah to do something for others. With the support and encouragement of Bob Bowers, our CEO, Soliant's Philanthropy Committee was born.
"FileMaker in Action: Non-Profit Case Studies" session presented by Makah Encarnacao
Our team of developers, business analysts, and project managers participates on a voluntary basis to provide services to non-profit organizations that otherwise could not afford custom software. The work is done in addition to their normal workload, often after hours, and spans all our practice areas: FileMaker, Web, and Salesforce. Together with Josie Graham, Makah vets the submissions and pairs them with the perfect volunteer.
Makah summarized the story of each organization: what they do, what they needed, how we helped, and the difference it made to their operation.
The Luke Commission
Prince Albert Food Bank
St. Francis Center
Altadena Mountain Rescue Team
Women's Alliance for Knowledge Exchange
These institutions do wonderful work, and now they are more effective, more efficient, and more nimble with the help of a little expertise. I've never been prouder to be part of the Soliant team.
I spent my refreshment break answering questions at the Visionary Bar. The idea is for DevCon attendees to just walk up and get answers and advice from FileMaker Business Alliance (FBA) members. I've done this for many years and always find it quite rewarding to help others.
Helping at the Visionary Bar
FileMaker in Action: Media and Arts Case Studies
For the second session of the morning, I saw fantastic examples of FileMaker used by two art and media businesses.
The first was Bryn Behrenshausen of Kalisher, a design house that creates and curates comprehensive art collections, has remote teams in six US cities, and employs 100+ people. They have tons of information to keep track of: customers, designers, purchasers, owners, specs, and pricing. At the time Bryn joined Kalisher in 2014, Soliant Consulting had already built them a solution for managing their projects, quotes, vendors, and suppliers. Bryn shared his story of learning FileMaker development; he has since built several modules of his own as well as refreshing the original solution. He then shared some great tips for newer developers, such as "Be consistent with schema naming and script structure." It was great to hear Bryn's story.
Matt Greger presented the second portion of the session. He shared how FileMaker has been used for 30 years to manage the TV spot traffic for "As Seen on TV", the company that brings you those great infomercials for Flex Seal, Snuggie, Copper Fit, and so many other products. It was a fantastic example of something I've seen many times: FileMaker can be a fantastic hub for data. In Matt's solution, FileMaker pulls in data from a variety of sources, including Google AdWords, Amazon, Facebook, YouTube, Microsoft Bing Ads, and TV vendors. The data is processed and aggregated, then accessed by analytics tools such as Tableau, Power BI, and TIBCO Jaspersoft.
During lunch, I met with Mike Zarin. Mike attended my Wednesday presentation "Tackling Sync." He approached me following the session and asked if we could chat about the sync requirements for his project. It was great to hear about Mike's project and make suggestions for easier ways to solve his data posting needs.
As the meal progressed others joined our table. We spent the remainder of the time sharing our story about how we got started using FileMaker. Several of us had been developing in FileMaker for more than 20 years while others were relatively new.
Networking lunch: (front row) Mike Zarin, Dawn Heady, Lee Lukehart, Matthew Dahlquist, (back row) Jenna Lee, Jowy Romano, and Stephen Kerkvliet. Jim Medema (not pictured) graciously took the photo.
From One to Many: Growing Your Consulting Firm
Following lunch, I switched tracks and attended one of the FBA Day sessions. David Knight, the president of Angel City Data, presented a great session on growing your business. I especially liked David's message to "Learn to let go!" It's not steering the boat; it's rowing the boat when you don't let go.
Closing Session & Awards Presentation
I was thrilled that the Women of FileMaker were awarded a FileMaker Excellence Award for their work in the community! This year, the Women of FileMaker provided scholarships so five first-timers could attend DevCon. They've created a Mentoring program and organized a Buddy program to pair DevCon first-timers with a seasoned pro to help attendees find their way around the conference. I can recall attending my first Women of FileMaker luncheon many years ago. There were maybe twenty of us eating at the hotel's restaurant. This year there were several hundred DevCon attendees at the luncheon. The growth of this group has been tremendous!
This year Soliant Consulting was an honorable mention for the Excellence Award for education, an award we won in 2017.
During the closing session the location for next year's DevCon was also announced:
FileMaker Developer Conference 2019
August 5–9, 2019
Gaylord Palms Resort
This year was my 19th consecutive DevCon, and each one is exciting, tiring, and inspiring. I returned with new ideas and determined to learn even more. I love seeing my friends from around the world and my Soliant family from around the country. Hope to see you there!
FileMaker DevCon 2018: Training Day - Part One
FileMaker DevCon 2018: Day 1
FileMaker DevCon 2018: Day 2
The post FileMaker DevCon 2018: Day 3 appeared first on Soliant Consulting.
After a busy first day of FileMaker DevCon 2018, day 2 continued the theme of FileMaker being a Workplace Innovation Platform.
FileMaker + Tableau, a Match Made in Data Heaven!
The morning sessions included an eye-opening Tableau integration session from Vincenzo Menanno. Vincenzo demonstrated how one can use Tableau charting and graphing tools inside a FileMaker web viewer and then use Tableau's URL Actions to call specific FileMaker scripts within your solution, providing a seamless integration between FileMaker ("the data curation tool") and Tableau ("the data slicing tool").
Under the Hood: FileMaker WebDirect Scalability
“Under the Hood: FileMaker WebDirect Scalability” session presented by Emmanuel Thangaraj
The late morning sessions continued with Emmanuel Thangaraj’s session. This session was great for learning the inner workings of FileMaker Server 17’s Multiple Web Publishing Engine (MWPE) and FileMaker Load Balancer (FLB), which increases the number of users that WebDirect can support and enhances server stability at the same time. I find I always come away from FileMaker’s “Under the Hood” sessions with something new and tangible that I can apply to my development projects.
Data Cleansing for Data Managers and Consultants
“Data Cleansing for Data Managers and Consultants” session presented by Molly Connolly
Following a delectable lunch, Molly Connolly gave an insightful session on using FileMaker to scrub bad data from disparate data sources. Using FileMaker’s calculation and scripting capabilities, Molly walked users through how to cleanse text formatting in specific fields and in spreadsheet data. This session was excellent for beginner and intermediate developers, and Molly organized her presentation in a linear way, with each technique building on the last, drawn from her many years of experience.
Under the Hood: Data Migration
My second "Under the Hood" session of the day (did I mention I love "Under the Hood" sessions?!?) was with Clay Maeckel on FileMaker's new Data Migration Tool. Earlier this year I wrote a blog post on this tool. Clay went into detail about its internal implementation, provided clarity regarding the rules of schema matching between your source and clone files, and explained how the tool can be so fast at migrating your data to a new production file. Clay is one of the original authors of FileMaker's Draco engine (he started working for FileMaker the year I was born!) and his experience shone through in this session.
Later in the afternoon, our very own Dawn Heady presented her session, "Tackling Sync." Dawn started by focusing on five specific strategies for designing your sync solution, such as minimizing historical data, pre-populating the mobile app data, and pushing actions to the server side when possible. She then discussed three scripting methodologies for completing a sync: import script steps, transactional scripts, or web services. Dawn then uncovered how to use an external data source on the server using a global variable. What a creative solution to this challenge! Next, she demonstrated a working transactional sync solution that will be included with the session materials. From there, Dawn covered well-known FileMaker sync solutions and discussed their setup process, along with the benefits and drawbacks of each.
"Tackling Sync" session presented by Dawn Heady
Attendee Dinner Party
After our Wednesday sessions, we went to the Attendee Dinner Party and had a wild Texas time! A live band with line dancing lessons, billiards, darts, and ping pong were some of the highlights from this event. Overall, this has been one of my favorite DevCons yet. The variety and polish of sessions have been so impressive and inspiring. The food has been consistently delicious with the bacon being truly remarkable, and I'm a man that knows good bacon.
Click to view slideshow.
I'm looking forward to what the final day of DevCon can bring to inspire us to create innovative workplace solutions.
FileMaker DevCon 2018: Training Day - Part One
FileMaker DevCon 2018: Day 1
FileMaker DevCon 2018: Day 3
The post FileMaker DevCon 2018: Day 2 appeared first on Soliant Consulting.
The FileMaker Developer Conference has become, for me, a bit of a reunion. As a remote employee, FileMaker DevCon is a chance to hang out with my co-workers… and it feels like the FileMaker community at large is just an extension of that group.
The sessions are good, too.
Workplace Innovation Platform
The morning kicked off with a Special Session, where Andy LeCates introduced FileMaker as a Workplace Innovation Platform. This framing hits the nail squarely on the head. They’ve summed up what we’ve always known—what we love to do—turning a complex conversation into a succinct story (and a sweet little video): https://www.filemaker.com/workplace-innovation
I particularly enjoyed guest speaker Richard Cox Braden, who spoke about the difference between Creativity and Innovation. As a creatively challenged person, it was helpful to see that amorphous blob broken into distinct, progressive elements: Imagination -> Creativity -> Innovation -> Entrepreneurship. (Maybe I CAN do some of that…)
Modular Programming with Card Windows
My next stop was John Renfrew’s session on card windows. He had great advice on using them for modular, transactional user interaction. He has taken card windows beyond the default centered-highlight use, manipulating their sizing and placement to great effect.
Women of FileMaker Luncheon
One of the highlights of the day, as it is every year, was the Women of FileMaker luncheon. Our developer population is growing every year, and I love this chance to connect with and support each other. I left the luncheon with a new friend AND a pinkie promise to help each other apply to be speakers next year. Win-win!
I attended three fantastic sessions in the afternoon:
Professional Development for All Ranges of Experience
Molly Connolly helped me think through professional development goals (see the pinkie promise above). I particularly appreciate the encouragement to incorporate personal goals into my development plan. I WILL learn the second half of Für Elise some day…
Flexible Reporting with Virtual Lists and ExecuteSQL
Martha Zink rocked the virtual lists lesson. Now I’m FINALLY ready to use them all the time myself.
Delight Driven Design - Transforming Designs from Good to Great
Jordan Watson reminded me of good design principles. As ever, I do best with tangible rules to follow, and his clear Do This/Don’t Do That series was helpful.
End of Day Fun
The day wrapped up with a solid six hours of socializing. Some of us went to a baseball game, some went kayaking, but I chose to stay in the air conditioning and talk talk talk. (Texas is HOT, yo.) This rate of chatting may not be sustainable, but it's definitely my favorite part of DevCon.
Click to view slideshow.
There were so many good lessons from today, but the one I think I’ll apply daily: Every session – and workday – should end with Chuck Brown & The Soul Searchers' “I Feel Like Bustin’ Loose.”
FileMaker DevCon 2018: Training Day - Part One
FileMaker DevCon 2018: Day 2
FileMaker DevCon 2018: Day 3
The post FileMaker DevCon 2018: Day 1 appeared first on Soliant Consulting.
Although I’ve been going to DevCon for Lo These Many Years, and even had a role in shaping how Training Day has evolved, I’ve never had the chance to assist with a Training Day session, or for that matter even attend one. This year I had the opportunity to join my Soliant team members assisting Bob Bowers with his advanced session, and I’d like to share a few observations from that.
First I want to say that I wish I could have attended all the Training Day sessions, especially the one on User Experience, but sadly I’ve never learned to be in more than one place at a given time. Instead, I’ve accosted a couple of the presenters in the hallway to ask them how things went. I hope to speak with the rest of the presenters before the week is over and write another post sharing what I hear from them.
Advanced 1 - Techniques & Advanced - Integration
I’ll start with my report on the advanced session:
Bob does a great job of taking complex topics and reducing them to their essentials, providing a foundation for exploring them in greater depth. Among other things, he guided people through:
the ExecuteSQL function
setting up ODBC connectivity, both with FileMaker pulling data from another data source and acting as a data source itself
the structure of JSON data objects and how to use FileMaker Pro’s JSON functions
the basics of cURL commands and how to incorporate them into FileMaker Pro’s Insert From URL script step
connecting to APIs using the above and parsing the resulting JSON
It was a lot of material, but as promised, he stripped it down to concepts that were easy to understand and put into practice. That said, when we came back from lunch, he announced that his strategy for staying on track would be to start talking faster. And so he did.
I love helping people understand new concepts, so it was a treat to work as an assistant. My only disappointment was that generally, Bob made things so clear that people didn’t need me much. I learned a few things along the way myself, including an approach to looping through grouped data that involves looking at the first record in a sorted (grouped) record set, working with that record, then calculating the number of records that belong to the group and jumping past them to the next unique record. It’s simple, but I’ve always accomplished the same thing in a different way and was happy to be offered an alternative.
Next, here’s what Jim Medema told me about his beginner session.
"My goal was to create a training such that brand-new people – we had people who had been introduced to FileMaker one or two weeks ago – could, by the end of the day, have built an app they could walk away and put into production. And it happened! We were inches away from the point where they could actually sell it.
We had a woman come up really happy with the team of assistants – we had great people: Lee Lukehart, Matthew Dahlquist, and Bill Nienaber – and she said, 'Any time anybody raised their hand, they were attended to within moments. Whatever problems they ran into, they got solved.'
One of the assistants told me afterward, ‘You were pushing the class pretty hard. I was working on a technical problem with somebody for a while. When I was done, I wondered how the group was doing. When I looked up, there were people building charts, charging ahead; they were all with you.’
We also had some experienced engineers. One guy said, 'I don’t know if I belong here, I might be kind of bored so don’t be offended if I walk out.' But he stayed all day and told me at the end, ‘You have laid the complete groundwork for everything that I need to know to get started.’ He’d inherited a legacy system built in FileMaker 6 that finally needed to be rebuilt after running for 18 years. 18 years! Can you imagine? And he feels ready to go off and do that now.”
I’d like to congratulate Jim for his skilled work as a trainer, and his commitment to helping new users get immediate success on the platform.
User Experience 1 - Research, Mapping, and Validation
Today I also talked to Matt O’Dell about his User Experience session. Matt was my team lead at FileMaker when I started in the Marketing department, and we’ve become good friends. A couple of years back, we ran off to Denver together to attend a design thinking training put on by Adaptive Path. He’s continued to charge ahead learning more about design and is committed to making it the focus of his work.
I had a wonderful time helping out Bob and working with my Soliant colleagues, but my second choice would have been to spend the day with Matt. He has so much passion for user experience design. Here’s what he said to me today:
“The full-day training involved following the design process to solve an actual problem. We pretended that we’d been hired by FileMaker to improve the process of purchasing a DevCon registration. We started with research and followed all the way through to creating a prototype at the end. We had people build paper prototypes, which was a new experience for most people in the group. After they built them, we said, 'OK, now take that prototype and go out and find someone, just randomly find a DevCon attendee and test your prototype with them.' We taught them the basics of usability testing first and then just sent them out.
People were asking, ‘Is this going to work? Are people going to trust us? Will they interact with us?’ -- but you know DevCon people, they’re a helpful bunch. When the trainees came back they said, ‘It was surprising! You just put the paper down in front of them, you tell them what you want them to try, and they just start tapping with their finger. Then you throw the next piece of paper down, and they pretend to type, it was crazy how well that works.’
You got all that feedback after building a prototype in only 30 minutes. The idea was not to prototype in FileMaker or any other software – not to get too invested in a given design – but to make it easy to throw away and try something else.
Some people even managed to test more than once. They identified problems with their prototype, drew up new screens, and went out and found someone else to test with again. That’s how you do it! That was the a-ha moment for people. This isn’t just a fun little art project -- it actually works.”
Hearing about all this from Matt, I especially liked how he got people on their feet and moving. They never touched their computers, so there was no opportunity to zone out and check their email. They stayed engaged every minute.
He had some great assistants too – Alexis Allen, a brilliant design-focused developer, Steve Gleason, who has an advertising background, Karen Craig, who has an industrial design background, and Laramie Erickson, a project manager at iSolutions. I’m sure they made an amazing team. I hope I get the chance to help out in the future myself.
Unfortunately, it sounds like the workshop wasn’t as well attended as Matt had hoped. That disappoints me since I strongly believe that a design-centered process really works. Right now I'm working with a pro-bono client through Soliant’s wonderful Philanthropy Program, and I’m incorporating design activities into the foundational phase of our work together. In the first few meetings, I was getting disjointed requirements that I couldn’t assemble into a clear narrative. But when we switched to a design-centered approach, everything immediately started coming into focus. We also started having a lot more fun.
So that’s my report so far. Please stay tuned for another post once I’ve had the chance to talk with the other three trainers: Jeremy Brown, Matt Petrowsky, and Cris Ippolite. Happy DevCon!
FileMaker DevCon 2018: Day 1
FileMaker DevCon 2018: Day 2
FileMaker DevCon 2018: Day 3
The post FileMaker DevCon 2018: Training Day – Part One appeared first on Soliant Consulting.
View the full article
Over the past few years, FileMaker has started incorporating collapsible sidebar panes into the design of FileMaker Pro. They started with modernizing the Script Workspace, then added a right-hand pane in the Specify Calculation dialog box, and now the canvas in Layout Mode also follows the same design pattern (see Figure 1):
A new left-hand pane contains two tabs labeled "Field" and "Object". The Field tab contains the Field Picker, previously a floating palette, and the Object tab replaces the floating Layout Object window that was introduced in FileMaker 16.
A new right-hand pane contains the Inspector. If you've ever lost track of where the inspector is, or whether it's open, this should be a welcome change.
Figure 1. New sidebar panes in Layout Mode (click image to enlarge)
Familiar keyboard shortcuts apply to both panes:
If you press command-K (control-K on Windows), the left-hand pane opens and closes. This was previously associated with opening and closing the Field Picker palette.
If you press command-I (control-I on Windows), the right-hand pane opens and closes. This was previously associated with showing and hiding the Inspector palette.
Since the new panes are part of each window you have open in Layout Mode, they are controlled independently for each one.
I like how these changes bring more consistency to the FileMaker Pro user experience and anchor key information in predictable locations.
What else has changed?
Figure 2. Updated "picker" and Field tab (click image to enlarge)
Generally, the contents of each of these panes are the same as in their FileMaker 16 equivalents, with a few notable differences (see Figure 2):
The new "picker" includes icons that make it easier to recognize each field type on sight. (As in FileMaker Pro 16, you can change the field type from the picker rather than going to Manage Databases).
You can now set field control styles directly from the Field tab, where previously you could only do this using the Inspector.
But there’s one other significant change: in the upper left of the screen there is no longer a "book" to page through layouts, or a slider to move through them quickly. I assume that this change was made in the spirit of simplifying the interface and to help prevent confusion between Layout Mode and Browse Mode, which until now used the same interface elements in similar ways.
As an advocate for new users, I’m very much in support of making the new user experience more intuitive, but I have to say that I’m going to miss these elements. For me, it's second nature to navigate from one layout to another using the book, or to jump to the first or last layout in the file using the slider. Fortunately, you can still use the same keyboard shortcuts for moving between layouts one at a time: ^↑ (control-up arrow) to move backwards and ^↓ (control-down arrow) to move forwards. If you have trouble with these on Mac, check your Mission Control settings in System Preferences.
How does it feel?
I'm still getting used to it. For example, here’s a window behavior that caught me off-guard: if your window is positioned in the middle of your screen with ample room to the right and left of the window, then switching to layout mode will expand the window on both sides to accommodate the two docked panes. That’s all well and good. But if for instance your window is positioned all the way to the left of your screen, switching to layout mode will move the content area of your layout to the right. I find this a little disorienting, but it may be something I'll adjust to over time.
Additionally, when working on some legacy systems with wide layouts, I feel a little cramped if I have both panes open at once. That said, a well-designed layout shouldn’t get excessively wide, where “excessively” is a subjective term but has to do with how much information the user can scan easily. Most layouts should fit just fine on a modern monitor – even in layout mode showing both panes – while leaving room to work in the “scratch” or non-display area as well.
However, if you ever find yourself limited in horizontal screen space, or if you just want to position the inspector close to the objects you are working with, do not despair. You can still work with multiple Inspectors, each of which opens as a familiar floating palette. Simply open a second Inspector by choosing the menu item View > Inspectors > New Inspector, and then close the right-hand pane. Note that there isn't a similar option for opening the Fields tab or Objects tab as a floating window.
I’m curious what working habits I’ll develop over time: when the right-panel Inspector will feel solid and reliable, and how often I’ll finally need a floating one. I can tell that opening and closing the left-hand panel as needed will soon become second nature. What do you imagine your preferences will be?
If you have any questions about this or any other new features included in FileMaker 17, please contact our team. We’re happy to help your team determine the best way to leverage them within your FileMaker solution.
The post Now You See Them, Now You Don’t: Sidebar Panes in Layout Mode appeared first on Soliant Consulting.
There are many ways to boost your FileMaker solution's capabilities beyond the scope of typical platform functionality. For example, you can adopt one of the many plugins available on the market, or you can partner with an experienced developer to customize functionality from the ground up and integrate with the tools and APIs provided by other software. Email is a good example: the FileMaker platform has native capabilities, and you can leverage plugins to get additional features or integrate with any of the Outlook APIs.
However, you have an often overlooked third option – microservices.
Leveraging Microservices in FileMaker
Microservices are aptly named – they’re pieces of functionality that perform small tasks. The term refers to a software architecture style of connecting small features to create one larger, cohesive system. Leveraging this type of development makes sense as your business evolves and your solution requires new functionality for more use cases, or needs to share functionality among different systems built on different platforms.
Microservices also prevent one addition or bug from crashing an entire system by limiting its access to one small part of it. This simplifies the deployment and security of new features.
Microservices v. Plugins in FileMaker
Microservices also present distinct advantages over plugins in FileMaker:
Your choice of code: You can create microservices with a wide variety of coding platforms. You can only create plugins using the C programming language.
Available on all FileMaker platform clients: You cannot use plugins in FileMaker Go unless you make special provisions. Similarly, plugins require a special version to work on FileMaker Cloud. However, you can use microservices with any type of FileMaker client.
No platform dependencies: You must compile plugins for Mac, Windows, and Linux to cover the whole platform. Microservices work out-of-the-box and are agnostic to the client’s platform.
Limitations of FileMaker
Just like plugins, you can use microservices to add functionality to your solution that the FileMaker platform does not offer itself. For example, FileMaker does not provide support for Regular Expressions (RegEx), which work well for finding patterns in text.
Say that you have a bunch of text from an email and you need to check whether it contains a US or Canadian postal code and extract it. A RegEx pattern along the lines of \d{5}(-\d{4})?|[A-Za-z]\d[A-Za-z] ?\d[A-Za-z]\d would find instances such as “60607” and “60607-1710” for Chicago, IL, or “L9T 8J5” for Milton, ON.
While FileMaker does not support Regular Expressions natively, many other platforms can provide this capability as a web service in just a few lines of code.
In FileMaker, you would use the “Insert From URL” script step to call the microservice and then pass it the text and the expression you’d like to use on it. The microservice would send back the list of matches in JSON format to easily parse with the native JSON functions in FileMaker.
New Ease of Adopting Microservices
Leveraging microservices within FileMaker has been possible for years, but it has recently become much easier, for two big reasons:
Since FileMaker 16, calling a web service and working with its response is extremely easy. The revamped “insert from URL” script step and its support for cURL takes care of that.
Every FileMaker Server deployment already includes a web server (IIS on Windows, Apache on macOS). In addition, every FileMaker Server comes with a Node.js server already deployed and ready for you to use. You already have the platform to deploy the microservice on.
Examples of Microservices in FileMaker
My team and I have built a dozen microservices for clients’ FileMaker solutions over the years. For example, we’ve developed forecasting capabilities and specialized data reporting to fit within a legacy FileMaker solution for a biotechnology research organization. Our team has also worked with a national media company to build a connection between its FileMaker solution and Okta identity management capabilities for secure user login.
Other examples include API-to-API mapping when receiving data from SAP systems into FileMaker and pushing data from FileMaker into financial systems. The possibilities are endless and truly depend on your needs and goals within your FileMaker solution.
If your FileMaker solution needs functionality related to difficult or specialized computations, XML and JSON parsing, or API-to-API mapping, I recommend considering building microservices for your system.
Building Your Microservices
If you have questions or would like to add microservices to your FileMaker solution with an experienced partner, contact our team. Our experience in microservices extends well beyond FileMaker, and we’re happy to provide additional insights for your organization and evolving solution.
The post How to Enhance Your FileMaker Solution with Microservices appeared first on Soliant Consulting.
Is your company suffering from Shadow IT?
Many IT leaders claim their companies are devoid of Shadow IT or that it has a very minimal presence. The CIOs and CTOs I often speak with believe they have open collaboration with other teams and an eye on all systems and tools used within their organizations. Unfortunately, research shows otherwise.
Shadow IT is much more common than most people realize and can pose a serious threat to the stable workings of the IT department as well as a company’s security.
What is Shadow IT?
Shadow IT refers to the tools and solutions workgroups build and use without IT’s approval. The issue often arises out of a need for a specific solution unrecognized or unfulfilled by the IT department, driving the workgroup to covertly implement their own solution out of desperation.
Unfortunately, these secret solutions, while borne out of good intentions to make workgroups more productive and efficient in the short run, often don’t fill the bill since they are implemented without oversight and guidance, and can hurt the organization in the long run.
Shadow IT Threats
To truly understand the negative impact of this issue, you first must understand how many risks it presents.
Data Security
Data security is the greatest and most consequential risk factor with Shadow IT. As teams take control of their data and manipulate it through systems unsupported by IT, they open up this data to a world of risk. Workgroups often fail to consider how proprietary and sensitive company data moves through the application. Teams aren’t trained to look for weak spots in the application, where individuals outside the organization could gain access to the data. These under-the-radar solutions often exist outside of the company network as well, putting sensitive information at serious risk of hacking, corruption without backup, or critical loss.
Decrease in Productivity
Most workgroups jump into implementing a Shadow IT solution to increase productivity and save time. Unfortunately, they often don’t accomplish this goal. Instead, they end up spending a significant amount of time setting up technology they’re unfamiliar with and manipulating their processes and data to work within it. When they need to make a tweak to these inputs, they may end up spending hours of effort, whereas a member of the IT team could resolve the change in mere minutes. Furthermore, having implemented the solution on their own without guidance from IT, they have no assurance that the solution will even deliver the capabilities or productivity they hoped for.
Lack of Quality Assurance
A successful IT team mercilessly tests new programs and applications before releasing them to the rest of their company. During this process, they know what to look for and how to identify breakdowns in capabilities and functionalities. Workgroups unfamiliar with this process often move forward without even thinking about the possibility of bugs, let alone testing for issues themselves.
What happens when Nancy in accounting tells Tim in HR about the new application she launched to help her with her weekly reporting? If she piques his interest, he could look into finding his own solution to his data challenges or adapt and re-use hers, propagating a problematic situation. Now not only is client invoice data at risk, but sensitive employee data could follow suit.
Decrease in Collaboration with IT
While every workgroup endeavors to innovate, those closest to breakthroughs in technology, i.e. the IT department, have the best grasp on what is truly possible for their organizations. Once workgroups feel disconnected from IT, however, the sharing of ideas and needs often comes to a screeching halt. Without open lines of communication, IT has even less insight into what each workgroup needs, and the effect can snowball.
Duplication of Effort
In some cases, IT is aware of a specific workgroup’s challenges and is actively trying to find an application to address them. Unfortunately, due to a breakdown in communication, the workgroup isn’t aware of these efforts and implements their own solution. This duplicates effort and wastes a great deal of time.
Think You’ve Avoided Shadow IT? Think Again.
In spite of these risk factors, many CTOs and CIOs vastly underestimate the impact of Shadow IT within their organizations. They are not aware of the ecosystem of unknown IT solutions right under their noses and/or fail to worry about their risk and potential negative impact.
In fact, according to a study from CEB (now Gartner), IT leaders believe they control 80% of their budget but really only control 60%. That other 40% of the budget is going to Shadow IT. ServerCentral estimates that by 2027, 90% of IT budget will be spent by other departments.
How IT Leaders Can Avoid Shadow IT
1. Talk with Those Outside of IT More Often
Formally and regularly check in with workgroups to ensure their needs are being met. Are they craving a specific functionality to make their jobs easier? Help them find a solution you can get on board with. Encourage them to reach out to you next time they have a challenge they think technology could fix.
2. Make IT More Accessible
IT can be intimidating. Encourage your teams to reach out and engage with other workgroups often. Provide information about what you do and why; explain it in terms and contexts they can understand. Assure them that one of your most important goals is to help them achieve their goals faster and more efficiently. Embolden them to approach your IT team with any and all technology questions.
3. Speed Up Response Times
What currently happens when a workgroup makes a request for a solution or expresses difficulty with an existing IT-approved technology? Typically, IT makes a ticket and decides to address it “later,” not prioritizing the workgroup’s need. After a long wait, the workgroup resolves to find a solution on their own. To avoid this, keep lines of communication open and provide regular updates to all teams involved.
4. Educate Colleagues
Help your company understand the implications of Shadow IT and the benefits of increased collaboration with the IT department. Consider showing them case studies of Shadow IT gone wrong – share with them the implications of rogue, unapproved technologies.
5. Stay on Top of Innovative Technologies
You can also stay ahead of your company’s workgroups by knowing about new solutions before they do. This requires quite a bit of effort for an IT team. They must monitor all fields relevant to their workgroups and understand how emerging technologies could empower those teams. This puts IT in an ideal situation to pitch solutions to each workgroup, instead of the other way around. You can research each option and choose to present only the strongest and best-aligned solutions.
6. Identify and Fill the Gaps
You may find a workgroup needs a solution that doesn’t exist. Rather than encourage and support them in an endeavor to patch together a shoddy collection of home-grown or slickly-marketed tools built by amateur developers, consider building a custom solution for their needs. If you don’t have the development resources for this effort or your existing IT team doesn’t have the expertise, consider partnering with an experienced team.
Pivoting to Avoid Shadow IT
As Shadow IT becomes a bigger issue, CIOs face growing pressure to keep their company data safe and monitor all internal technologies. Without the cooperation and collaboration of all company workgroups, this task is impossible.
To minimize risk, I encourage you to run an audit of all systems that fall outside your purview. Which pose an actual threat, and which can you adopt under your umbrella of systems?
Following this internal review, I encourage you to put together a plan of how to support other workgroups moving forward.
If you need help integrating systems or building custom solutions to meet the needs of specific workgroups, our team may make a great partner for yours. Contact us to learn more about how we work with businesses to reduce Shadow IT and encourage collaboration between teams.
The post How to Avoid Shadow IT: An IT Leader’s Guide appeared first on Soliant Consulting.
Women of FileMaker is excited to launch DevCon Buddy, a new program empowering experienced DevCon goers to assist new attendees in maximizing their DevCon experience. The organization welcomes all experienced attendees to sign up on the Women of FileMaker site. All genders welcome!
Announcing DevCon Buddy
What: New Attendee Meetup at DevCon 2018
When: Monday, August 6, 2018, 5:00 – 6:00 pm
Where: Texas A
To help DevCon first-time attendees get the most out of the annual FileMaker conference, the Women of FileMaker have launched a new program, intended to assist in the pairing of new attendees with experienced conference attendees. It all starts with the New Attendee Meetup on the first night of DevCon.
What to Expect at DevCon Buddy
The organization encourages experienced DevCon attendees to join this meetup as “buddy ambassadors” who can guide new attendees throughout the conference. Women of FileMaker will provide special DevCon Buddy stickers for badges, one type for ambassadors and one for rookies, so that each group can easily identify and connect with the other. They encourage new attendees who would like a buddy to approach anyone with a DevCon Buddy Ambassador sticker on their badge, and vice versa. Once attendees make a connection, they can exchange information and keep in contact during the conference.
Women of FileMaker encourages you to attend the New Attendee Meetup on Monday, August 6, at 5 pm in Texas A. Look for people with the Buddy Ambassador sticker, and approach someone you think looks interesting! Then exchange information to keep in touch over the conference. Perhaps you can have lunch together or meet during a break to catch up and discuss upcoming sessions.
If you would like to help out a new attendee, you can sign up as a buddy here.
You will receive email updates and reminders to attend the New Attendee Meetup and instructions on how to get your badge sticker. Women of FileMaker appreciate your efforts to volunteer your time and make someone else’s DevCon experience more enjoyable!
How It All Started
The idea for a DevCon buddy system started on the Dream Board at the Women of FileMaker booth at DevCon 2017. A booth attendee took the time to make this suggestion, and the organization was happy to make it a reality.
Dreams do come true
The post Women of FileMaker Launches DevCon Buddy appeared first on Soliant Consulting.
The Breakdown Between Workgroups
As a business evolves, its workflows, internal processes, and teams follow suit. Companies quickly outgrow their trusted tools, and teams find themselves distanced from one another by their specific responsibilities and goals. As a result, they eventually operate in silos and face different challenges, creating a major breakdown in efficiency and productivity for the company overall. To remedy the situation, they need to find a bridge to one another and to the common goals of the company.
At this point, companies face what we call the workgroup conundrum -- they must adopt a technology solution to address their business challenges while simultaneously connecting their teams.
As a Business Solutions Architect, I see a very common problem related to software development for workgroups. My team and I call it the Workgroup Conundrum. It defines the struggle for all teams in an organization to get what they need from existing internal systems.
For example, a team may accomplish 75% of their work with a program. However, they struggle with the other 25% manually, spending the bulk of their time on just a few tasks. Once this technology need gets to a critical point, they often notify IT and request a solution.
IT then must start considering a new application to help this department accomplish their goals and increase efficiency. The first question they ask is, “Do we purchase something off the shelf or build something ourselves?” The former is often less expensive but may not fit the exact needs of the workgroup, exacerbating the issue. The latter can be expensive and time-consuming, draining resources from all teams involved.
We find ourselves helping organizations navigate this challenge so often through our consulting work that we wrote a white paper to share our top insights. You can download that resource here. We recommend it to anyone on the journey of getting software that meets their needs.
The Alternative: A Hybrid Approach
However, with the advent of powerful, heavily customizable platforms like Salesforce, we talk to many companies with a slightly different problem. They already have the systems they need in place. Unfortunately, these systems don’t talk to one another, creating silos in the organizations and cutting down on collaboration between teams. In the long run, this disconnect breeds inefficiency and halts the flow of data through the organization.
These companies have entirely valid reasons that they can’t just put everything into one package – they’re going to have to move forward with multiple systems in place, sometimes used by different workgroups, sometimes by the same workgroup. These combinations take many forms and often involve hybrid ecosystems of enterprise software, off-the-shelf packages, and custom software.
So how do they remediate this issue and cohesively connect their systems? This brings us to the topic of custom integrations.
Integrating Critical Systems
Many enterprise suites and off-the-shelf software packages offer some tools for integration with other systems and data sources. Some really well-built custom software will also have its own APIs.
In all cases, these pre-built tools and APIs were still designed with a relatively generic purpose in mind and are often “tool-centric” in their vision: they assume the tool they originate from is the crux of your use case.
Often the tools and APIs available require data to be structured in a certain format or conform to a simplified understanding of the rules, data, and technical wherewithal of the users who will rely upon them. These options work for some of our customers, and we help many customers implement them. However, as often as not, our clients suffer from their limitations.
We are now in an era in which Software-as-a-Service, Platform-as-a-Service, and similar models have become so dominant that our clients often turn to custom integrations as an approach to bridging these internal systems.
What is a Custom Integration?
An integration connects two systems, allowing them to pass critical information to one another to enhance the capabilities and functionality of each. The systems are typically bridged with an Application Programming Interface (API), streamlining complex data exchange to both improve the user experience and drive a more agile system.
Many existing SaaS products and cloud-based systems have built-in public APIs and partnerships with other systems, empowering users to easily gain additional functionality. As a simple example, consider how Instagram users can connect other social networks, such as Facebook and Twitter, to their accounts. This functionality allows them to share their content across the networks. It eliminates the process of having to copy the text and URL from an Instagram post and switch to other applications to share it. Users can simply do all of this in one place. Other examples include connecting Salesforce to Mailchimp or Google Maps to WordPress.
Custom integrations, however, are a bit more complex. Developers build these to deliver on a unique need for organizations. They are often private in nature and only accessible by a specific set of users through careful permissions and protocols. Custom APIs enable an organization to connect its internal systems, enhancing the productivity of its team members through the collaboration of information between applications.
For example, an organization can integrate its invoice system with its customer management system to automatically track customer billing and send out invoices once a month. This removes the manual burden of checking every account and building invoices for each. It saves teams potentially dozens of hours every single month. Or, an association could connect its membership database with its public website. This would allow members to log in and manage their account profiles themselves, empowering the membership coordinator to focus on other priorities.
Depending on the need, developers can build these using Representational State Transfer (REST), Simple Object Access Protocol (SOAP), or Remote Procedure Call (RPC).
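As a small illustration of the invoice example above, a REST-style exchange might be sketched like this in Node.js (the endpoint, field names, and bearer-token auth are hypothetical assumptions, not any specific billing API):

```javascript
// Hypothetical sketch of a REST-based integration: pushing an invoice
// from a customer management system into a billing system. The endpoint,
// payload fields, and bearer-token auth are illustrative assumptions.

// Build the JSON payload the billing system would receive.
function buildInvoicePayload(customer, lineItems) {
  const total = lineItems.reduce(
    (sum, item) => sum + item.qty * item.unitPrice,
    0
  );
  return {
    customerId: customer.id,
    lines: lineItems,
    total: Number(total.toFixed(2)),
  };
}

// POST the payload over REST (requires Node 18+ for global fetch).
async function sendInvoice(apiBase, apiKey, payload) {
  const res = await fetch(`${apiBase}/invoices`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify(payload),
  });
  if (!res.ok) throw new Error(`Billing API returned ${res.status}`);
  return res.json();
}
```

The same pattern applies in reverse for pulling data: a GET request, a JSON response, and a mapping step into the receiving system’s fields.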
Data Structure Considerations
However, you can’t just write some code and go; development teams need to put thought into how they structure the flow of data from one system to another. You must focus on consistency and build clear definitions for data the systems are sending and receiving.
Focus on an intuitive data structure that works for all potential use cases. You don’t want to have to update your entire API to support minor changes or small user needs. Planning beforehand is key to successful custom integrations! Take some time to map out all potential needs from the API before you get started.
Getting IT’s Blessing
When workgroups face issues with existing systems they can't navigate around, they often resort to “Shadow IT.” They adopt off-the-shelf applications or sometimes even build their own systems to accomplish their goals and automate their processes. These systems often fall under the radar of IT and obviously aren’t in compliance with the organization’s IT standards. My team and I often come across this issue during our consulting process.
The better way for workgroups to get the capabilities they need is to work with IT to outline and build custom integrations between their systems. Then they add the functionality they need. This does, however, require collaboration with (and approval from) the often overloaded IT team.
The best way to get IT’s blessing is to build a compelling case for why the custom integration is needed. Include an outline of top use cases, and then help secure a budget for the work. You’ll have to provide a solid ROI calculation and show how it would improve workflows for the teams who will use it.
Of course, sometimes IT can see the reasoning and needs behind a pitched custom integration but still cannot schedule time to build it. Why? They’re just too busy.
In this case, finding an external partner to build the integration can serve as the best solution. They’ll have the bandwidth to complete the project on time and within scope while simultaneously offering an outside view. This ensures your organization doesn’t miss any crucial details related to use cases or data needs.
My team and I offer such custom integration development services and often support existing development and IT teams. If you’re looking for a partner, we are happy to discuss your envisioned integration and see if we’re a good fit. Contact us today.
The post Bridging Systems to Connect Teams: Your Guide to Custom Integrations appeared first on Soliant Consulting.