An Intro
Hi all,
I want to give some background on a situation I’ve seen frequently across a few software engineering teams.
Alright, what’s the story?
Junior engineer Kartono needs to quickly code up a feature that involves retrieving GraphQL entities to operate on enterprise assets. He’s building out a brand-new business workflow, and the pseudo-code for his changelists resembles the following structure:
def execute_business_workflow(...):
    ...
    unorganized_preprocessing_steps()
    graphQLClient = makeGraphQLClient(params)
    graphQLResponse = graphQLClient.fetchData(datasetParams)
    unorganized_postprocessing_steps()
    ...
This is a good start, but senior engineer Quorra notices a couple of changes that would make the code better. Quorra immediately shifts into mentoring mode, starting with a few observations.
Quorra’s Observations
- Logging Posture – There’s a major lack of logging. If I have to onboard a new database type and run a business workflow to verify that it’s working, how do I tell how much progress the code is making? Which step am I on? Do I get through all the pre-processing steps? Or am I stuck on some odd step in the unorganized post-processing steps?
- Profiling – What if I need to profile how long functions and methods take to execute? If latency issues come up in calls to execute_business_workflow(), how do I filter and triage where the performance degradations or failures are? Which part of the workflow is slow: the pre-processing steps, the API calls, or the post-processing steps?
- Single Responsibility Principle – that graphQLClient really doesn’t belong inside the body of the business workflow. Can we pull it out into its own function?
- Evolvability – what if the API suddenly changes (e.g. today we’re executing GraphQL calls, but in a couple of weeks we might be executing HTTP calls)?
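Quorra’s first observation can be made concrete with entry/exit log lines at each step. Here’s a minimal sketch using Python’s standard logging module; the function bodies are hypothetical placeholders, not Kartono’s real code:

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("workflow")

def preprocessing_steps():
    log.info("preprocessing_steps: start")
    # ... real pre-processing would happen here ...
    log.info("preprocessing_steps: end")

def fetch_entities(params):
    log.info("fetch_entities: start")
    response = {"data": params}  # placeholder for the real GraphQL call
    log.info("fetch_entities: end")
    return response

def postprocessing_steps():
    log.info("postprocessing_steps: start")
    # ... real post-processing would happen here ...
    log.info("postprocessing_steps: end")

def execute_business_workflow(params):
    preprocessing_steps()
    response = fetch_entities(params)
    postprocessing_steps()
    return response

execute_business_workflow({"dataset": "enterprise_assets"})
```

With start/end lines at every function boundary, a stalled run tells you exactly which step it never exited.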
What does better code look like?
def executeGraphQLCallWrapper(params):
    graphQLClient = makeGraphQLClient(params)
    graphQLResponse = graphQLClient.executePost(params)
    return graphQLResponse

def execute_business_workflow(...):
    preprocessing_steps()
    graphQLResponse = executeGraphQLCallWrapper(params)
    postprocessing_steps()
In this version, we’ve (1) introduced functional decomposition and (2) isolated the entire GraphQL flow into its own wrapper call. We get a multitude of benefits, such as:
- Faster debugging – I can add a couple of log lines at the start and end of each method call and see whether I entered or exited the function. I can also add log lines at finer granularity inside those functions, to get a deeper view into which step fails.
- Profiling latency – suppose invocations of execute_business_workflow() take 2 seconds, and I need to meet a tight enterprise SLA of 1.5 seconds. Which step is slow? Because we introduced functional decomposition, I can quickly put start/stop monotonic timers around the three functions. Suppose this breakdown:
  - preprocessing_steps() [0.3 seconds]
  - postprocessing_steps() [0.1 seconds]
  - executeGraphQLCallWrapper() [1.6 seconds]

Alright, it looks like the GraphQL calls are taking up the most cycles; maybe we should introduce a cache or other pre-computation structures here?
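The start/stop timers described above can be as simple as wrapping each call with time.perf_counter(), a monotonic clock. A minimal sketch, with sleep() calls standing in for real work (the durations and bodies are illustrative, not measurements):

```python
import time

def timed(fn, *args, **kwargs):
    # Wrap any call with a monotonic start/stop timer and report its duration.
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    elapsed = time.perf_counter() - start
    print(f"{fn.__name__}: {elapsed:.3f}s")
    return result

def preprocessing_steps():
    time.sleep(0.01)  # stand-in for real pre-processing

def executeGraphQLCallWrapper(params):
    time.sleep(0.02)  # stand-in for the network call
    return {"data": params}

def postprocessing_steps():
    time.sleep(0.01)  # stand-in for real post-processing

def execute_business_workflow(params):
    timed(preprocessing_steps)
    response = timed(executeGraphQLCallWrapper, params)
    timed(postprocessing_steps)
    return response

execute_business_workflow({"dataset": "enterprise_assets"})
```

Because each stage is its own function, one generic wrapper times all of them; no per-stage instrumentation code is needed.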
What’s my Takeaway?
Your takeaway is quick: functional decomposition underpins good coding practice, and partitioning out the functions that make network calls or touch external dependencies will help you expedite your software development 🙂
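As a closing illustration of the evolvability point: once the transport lives behind a single wrapper, swapping GraphQL for HTTP touches exactly one seam. A hypothetical sketch (these function names and return values are made up for illustration):

```python
# The workflow depends only on a fetch function it is handed, so changing the
# transport (GraphQL today, HTTP in a couple of weeks) is a one-line change
# at the call site.

def fetchViaGraphQL(params):
    # All GraphQL-specific detail stays inside this wrapper.
    return {"transport": "graphql", "params": params}

def fetchViaHTTP(params):
    # A drop-in replacement using a different transport.
    return {"transport": "http", "params": params}

def execute_business_workflow(params, fetch=fetchViaGraphQL):
    # Pre- and post-processing elided; only the fetch seam matters here.
    return fetch(params)

print(execute_business_workflow({"dataset": "assets"})["transport"])                # graphql
print(execute_business_workflow({"dataset": "assets"}, fetchViaHTTP)["transport"])  # http
```

The business workflow never learns which protocol is in play, which is exactly what makes the API swap cheap.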