harisrid Tech News

Spanning many domains – data, systems, algorithms, and the personal

  • TOTW/18 – Unlock Real-World Skills and Bias Toward Real-World-Esque Interview Problems. You can assess candidates better.

    Because I'm just as sick of seeing questions that remind me of inapplicable problems, like shooting penguins out of cannons in an AP Physics textbook, and I'd rather solve something that entails more pragmatic engagement.

    Why real-world applicability?

    Today, I want to share one of my favorite Leetcode problems to solve, and why I think it's a worthwhile problem for stress-testing prospective candidates. Since I'm someone who's solved a boatload of Leetcode problems, I'm always thinking about how interviewers can select better problems and index on candidates who demonstrate real-world, on-the-job thinking skills.

    The problem is Leetcode 428, Serialize and Deserialize N-ary Tree. I've attached the description underneath, but in this problem, TC ( the candidate ) needs to be creative and conjure up their own approach to serializing an N-ary tree ( converting from an in-memory data structure to a string ) and then deserializing said tree ( converting back from a string to the original structure ). It's technically a HARD category problem, but in my honest opinion, it borders on a hard MEDIUM difficulty question. I would definitely ask it of a Google L4+ candidate ( using current leveling systems ). I think this is a good problem because we stress-test the following criteria :

    1. Leetcode problem 428 : A pragmatic hard that one can expect to frequently encounter.
    The Stress-Test Criteria
    • Word problem translation – the description closely mirrors the real-world case studies and ambiguous problems engineers encounter. Without the constraints mentioned up front, TC can spend some time asking really good clarifying questions.
    • Design thinking open-endedness – candidates can get super creative with their approach ; there really is no "one-size-fits-all" strategy, meaning that there's room for an interviewer to possibly learn how to do a problem differently ( and maybe better ). My approach involves a root-left-right style with more parentheticals : it resembles (1 (3 (5) (6)) (2) (4)) ( see the sketch after this list ).
    • Real-world applicability – thinking about how to handle serialization and deserialization, or at least how to convert structures across multiple formats ( e.g. JSON to memory or in-memory to Protobuf ), is frequently encountered when transmitting payloads across environments.
    • Recursion – in general, most good interview problems stress-test recursive thinking.
    • String handling – TC needs to think about parsing strings – how to handle the different tokens ( integer values, '(', ')', ',' ). This may entail the use of regexes or a combination of library functions and delimiters. It also involves conjuring up a rules engine to advance a pointer through the string input, based on the token under processing.
    • Top-down/ bottom-up chunkification – TC needs to think about how to segment the input in their recursive-esque approach.
    • Compression – interviewers can gauge how well TC thinks about producing minimal serialization strings. This matters in real-world, at-scale settings, where smaller payloads translate into real savings.
    • Minimal data structures – the problem is solvable without auxiliary data structures ( recursion's implicit call stack ) or with an explicit stack. There's minimal "trip up" room from TC needing to think about too many data structures.
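
    For reference, here's a minimal sketch of the parenthetical, root-then-children style mentioned above. The Node class and helper names are my own hypothetical stand-ins, not Leetcode's exact interface :

    ```python
    class Node:
        def __init__(self, val, children=None):
            self.val = val
            self.children = children or []

    def serialize(root):
        """Pre-order walk : each node becomes (val child1 child2 ...)."""
        if root is None:
            return ""
        inner = " ".join(serialize(c) for c in root.children)
        return f"({root.val}{' ' + inner if inner else ''})"

    def deserialize(data):
        """Parse the parenthetical string back into a tree with a single pointer."""
        if not data:
            return None
        pos = 0

        def parse():
            nonlocal pos
            pos += 1                      # consume '('
            start = pos
            while data[pos] not in " )":  # read the integer token
                pos += 1
            node = Node(int(data[start:pos]))
            while data[pos] != ")":       # children until the matching ')'
                if data[pos] == " ":
                    pos += 1
                else:
                    node.children.append(parse())
            pos += 1                      # consume ')'
            return node

        return parse()

    # Round-trip check on the example tree from the bullet above.
    root = Node(1, [Node(3, [Node(5), Node(6)]), Node(2), Node(4)])
    assert serialize(deserialize(serialize(root))) == serialize(root)
    print(serialize(root))  # (1 (3 (5) (6)) (2) (4))
    ```
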
    Footnotes

    URL Link = https://leetcode.com/problems/serialize-and-deserialize-n-ary-tree/description/

  • BEHAVIORAL INTERVIEWING : OBSERVATIONS

    A Primer

    Hi all,

    I've been working hard on behavioral interviews for senior and senior-plus loops across companies, and there are a couple of personal pain points that I've observed ( note : n == 1 here, so this won't apply to everyone, and it's current as of this writing, Wednesday, November 12th, 2025 ).

    Let me begin!

    Currently Observed Struggles [ TC := The Candidate ][ TI := The Interviewer ]:

    • TC needs to communicate a stronger sense that they took on leadership roles : they led groups of people.
    • TC needs to communicate a strong sense of ownership. That they owned systems end-to-end.
    • TC can refine how they communicate their thoughts : they currently communicate in a non-structured, non-linear manner with tangents, and they can move toward a more structured, linear fashion.
    • TC needs TI prompting to highlight their specific roles and responsibilities.
    • TC needs TI prompting to highlight their impact across folks – within teams and across teams – not just themselves.

    Tips for "Leader of a Group" Settings :

    • The groups need not be "explicitly labeled" or permanent positions. They can form around multi-week or multi-quarter projects lasting 1-18 months, comprised of 5-15 individuals, within teams and across teams, and across functions : engineers, Product Managers, Designers, and Program Managers.

    Commonly Encountered Questions ( what's been asked a lot )

    • Tell me about the most challenging project you worked on?
    • Tell me about a time when you took on something significant outside your area of responsibility / went above-&-beyond expectations?
    • Tell me about a time you disagreed with someone?
    • Tell me about a time when you were able to deliver an important project under a tight deadline?
    • Tell me about a time you had to delegate a task?
    • Describe a time when you made a mistake. How did you handle it?

    The Practice Questions I Want to Target

    1. Tell me about a time you worked with someone difficult to get along with. What constructive steps did you take to address the situation?
    2. Tell me how you work with your PM.
    3. Tell me about a time you dealt with limited information and a fast-approaching deadline.
    4. Can you tell me about a time you gave constructive feedback to a coworker?
    5. Can you tell me about a time you delivered negative feedback to a coworker?
  • SENIOR BEHAVIORAL INTERVIEW PREP : Working with Limited Information and Making Wrong Decisions. And the lessons learned.

    Behavioral Interviewer : “Tell me about a time you had to deal with limited information and you made a wrong decision.”

    A Primer

    It's a stumper of a question for senior-level-and-above behavioral interviews; I'm still determining the most appropriate way to tell the story. But here I go 🙂 !!!

    Situation

    I'm working as a senior engineer at GEICO, where I'm taking the lead on an organizational challenge : achieve 100% compliance coverage and severely reduce regulatory risk for PII data across GEICO's enterprise data platforms.

    The stakes are high – I have to meet an aggressive six-month regulatory timeline; if unmet, the company would have to cease New York operations, resulting in millions in estimated annual losses.

    But it’s also incredibly ambiguous! I’m limited.

    I'm limited in what I know, in what data holds the PII, and in what the best long-term architecture should even resemble, given innumerable systems.

    Task

    Despite the uncertainties, I had to make a strategic call. So I set about a task : I developed a rules engine capable of sensitive data classification. And I made two key assumptions.

    1. Firstly, that the rules are constant.
    2. And secondly, that the data sources are constant.

    By doing so, I could quickly achieve the compliance milestone with a working prototype.
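
    To make the idea concrete, here's a minimal sketch of what a rules engine for sensitive data classification might look like. The rule names, patterns, and record fields are hypothetical illustrations, not GEICO's actual implementation :

    ```python
    import re

    # Hypothetical rule set : each rule maps a PII category to a regex.
    # My original ( later broken ) assumption was that this rule set and the data sources stay constant.
    RULES = {
        "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    }

    def classify(record: dict) -> dict:
        """Return the PII categories detected in each field of a record."""
        findings = {}
        for field, value in record.items():
            hits = [name for name, pattern in RULES.items() if pattern.search(str(value))]
            if hits:
                findings[field] = hits
        return findings

    print(classify({"note": "Reach me at jane@example.com", "id": 42}))  # {'note': ['email']}
    ```
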

    Actions

    But even though I delivered the first part of the ask in six months, I ran into unexpected issues later when extending the application cross-functionally. My assumptions broke. Workflows that worked with one input set immediately failed on alternative input sets, because the underlying rules and the datasets changed.

    Consequently, I had to take a few actions. I had to delay future feature extensions, partner with engineering leads and compliance leads, and rearchitect the application for modularity and extensibility. This effort took about 2-3 weeks, but it led to tool adoption scaling from the team level to the company level.

    Results

    Now despite these refactor efforts, I was still able to deliver the 100% compliance coverage ask while keeping systems scalable and adaptable, even past the initial six-month deadline. I also succeeded in setting the foundation for internal data governance infrastructure.

    Key Takeaways and Learnings

    If there’s anything I learnt from this experience, it’s to strongly invest upfront time in documenting key assumptions. From now on, I always flag these assumptions as “critical risk areas”; I design with a few caveats and extensions in mind to minimize future organizational pain points.

  • Data Engineering Interview Insights: Key Skills & Challenges

    A Primer

    Hi all,

    So I recently conducted a data engineering mock interview, and there’s a couple of things that I want to touch upon :

    Foremost, I want to point out that unlike most other interviews, data engineering really is the wild west : there are few books on the subject, and it's a highly variable process across companies. Organizations often emphasize one facet of data engineering over another ; some seek strong SQL skills, others covet strong ETL pipeline designers, some want folks who understand lambda ( hybrid batch + speed layers ) versus kappa ( streaming-only ) architectures, and a few really emphasize an understanding of performant distributed compute with platforms like Spark and Flink.

    Still, it’s a good idea to try to understand what folks look for at a high level, and luckily, data engineering has some overlap with general system design principles ( in fact, a question from Alex Xu’s Volume 2 book, Chapter 6, Ad-Click Event Aggregation, makes for a perfect data engineering interview question )

    Data Engineering Case Study Questions

    ( some my own, some adapted from online sources 🙂 ) :

    ETL Pipelines :

    Case Study : I want you to design me an end-to-end solution. Construct a data pipeline for near-RT ingestion of Netflix IoT data : clickstream data or playback data. It should be designed for ad-hoc monitoring of select metrics. It operates at Netflix scale, and the data is geographically distributed. Your pipeline should be able to populate analytics databases for personas such as BAs ( Business Analysts ) and DAs ( Data Analysts ).

    • The choice of metrics is up to you.
    • You can either focus on a general solution or delve into solutions focused on targeted tools, technologies, and platforms of your choice!

    Question source = https://www.youtube.com/watch?v=53tcAZ6Qda8&t=603s
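
    As a toy illustration of the ingestion-side shape I'd expect candidates to reason about, here's a minimal sketch that micro-batches clickstream events and flushes per-window aggregates to an analytics sink. The event fields, window size, and sink are hypothetical placeholders, not a Netflix-accurate design :

    ```python
    import time
    from collections import Counter, deque

    WINDOW_SECONDS = 60  # hypothetical near-real-time window

    def aggregate_window(events):
        """Collapse raw clickstream events into per-title play counts for one window."""
        counts = Counter(e["title_id"] for e in events if e["type"] == "playback")
        return {"window_end": time.time(), "plays_by_title": dict(counts)}

    def run_pipeline(source, sink):
        """Micro-batch loop : buffer events, flush one aggregate per window to the sink."""
        buffer, window_start = deque(), time.time()
        for event in source:                    # source : any iterable / consumer of events
            buffer.append(event)
            if time.time() - window_start >= WINDOW_SECONDS:
                sink(aggregate_window(buffer))  # sink : e.g. a writer into an OLAP store
                buffer.clear()
                window_start = time.time()
        if buffer:
            sink(aggregate_window(buffer))      # flush the tail

    # Toy usage with an in-memory source and print() as the sink.
    sample = [{"type": "playback", "title_id": "t1"}, {"type": "click", "title_id": "t2"},
              {"type": "playback", "title_id": "t1"}]
    run_pipeline(iter(sample), sink=print)
    ```
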

    Q2 : 

    Feedback ( Case #1 )
    1. TC made a flexible design and recognized the business context justifying specific metrics.
    2. TC recognized performance bottlenecks in their pipelines.
      1. They noticed them upstream ( on event collection ) and downstream ( on storage of results to staging databases ).
    3. TC provided different types of analytics for the OLAP / historical database – Athena.
    4. TC recognized use cases for storing events in a data lake.
    5. TC recognized separate paths – ingestion and analytics – and kept them separate to account for performance.
    6. TC developed a data pipeline with minimal components and minimal infrastructure.
    7. TC engaged in a solid discussion of the push model versus the pull model in their pipeline's data capture stage.
    8. TC engaged in data modeling.
    9. TC mentioned multiple different technologies across pipeline stages and their associated trade-offs.

    Feedback ( Case #2 )
    1. TC asked really good clarifying questions and understood which types of metrics they'd drill down into.
    2. TC thought of really good customer metrics : onboarded, retained, resurrected, and churn rates.
    3. TC solidly started with data modeling – fact and dimension tables.
      1. TC showed how to create a cumulative fact table to compute 30/90-day rolling averages ( see the sketch after this list ).
    4. TC made a good justification for an OLAP database, and I could easily segue into the pipeline/ingestion portion of the problem.
    5. TC understood how to employ distributed compute engines ( DCEs ) to solution the problem.
    6. TC built a multi-stage ingestion pipeline for IoT telemetry data and delved into different components upon my ask.
    7. TC justified how to extend architectures to real-time, not just batching.
    8. TC answered how to handle large-volume and large-event cases and how to employ strategies to reduce upstream ingestion.
    9. TC had really good discussions on log enrichment and canonical data in different pipeline phases.
    10. TC answered the remaining design deep dives solidly.
    11. TC really "drove the conversation" and the discussion ; I learned new ways of problem-solving and thinking from them.
    12. TC really thought about data quality and the internal dashboards they'd present for noticing discrepancies and metrics crossing thresholds.
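
    For illustration, here's a minimal pandas sketch of the cumulative-fact-table idea from item 3 above – computing 30/90-day rolling averages over a daily grain. The column names and metric are hypothetical :

    ```python
    import pandas as pd

    # Hypothetical daily fact table : one row per date with a plays measure.
    daily_fact = pd.DataFrame({
        "event_date": pd.date_range("2025-01-01", periods=90, freq="D"),
        "plays": range(90),
    })

    # Rolling metrics are cheap once the daily grain is materialized.
    daily_fact = daily_fact.sort_values("event_date").set_index("event_date")
    daily_fact["plays_30d_avg"] = daily_fact["plays"].rolling(window=30, min_periods=1).mean()
    daily_fact["plays_90d_avg"] = daily_fact["plays"].rolling(window=90, min_periods=1).mean()

    print(daily_fact.tail())
    ```
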
  • INTERVIEWING – System Design – What Does an Evaluation Resemble?
    A Primer

    There's a lot of solid material out there to teach folks how to interview candidates for data structures and algorithms – CTCI, Leetcode, Neetcode. But there's a dearth of material on how to evaluate candidates in higher-level interviews, such as system design.

    I wanted to share what I think feedback would resemble. I’ve added detailed explanations justifying why the feedback written out “is what it is”.

    I ported the question over from Alex Xu's System Design Interview, Volume 2, Chapter 1 : Proximity Service. Here, the candidate needs to design a system that lets users search for nearby businesses, restaurants, and other similar offerings within a radius of 5 kilometers to 25 kilometers. It's similar to searching for businesses on yelp.com, booking.com, or TripAdvisor.com.

    The Candidate’s Strengths/What They Did Well
    • TC recognized the need to develop for multiple personas – end customers and internal business administrators.
    • TC understands tradeoffs across geo-spatial approaches – two-dimensional longitude-latitude searches, quad trees, and geohashing ( see the sketch after this list ).
    • TC typed out more details in addition to verbal explanations – this built a stronger case.
    • TC understands primary-secondary write-read splits and database replication.
    • TC understands the CAP theorem and justified prioritizing high availability ( accepting eventual consistency ) for a large-user-base product.
    • TC asked thoughtful clarifying questions and downscoped the product to a first-iteration MVP.
    • TC justified how pre-computed geohashed tiles can fit on a single server and recognized that 5*10TB of tile data can easily fit across five servers.
    • TC understands object hydration and storing or passing only business IDs across caches and networks.
    • TC understands lazy loading and leveraging freshness to solution the problem.
    • TC recognized how they would change the design from a microservice mesh to a nightly cron job if the requirements changed from 5-10 minute updates of business data to a once-a-day nightly update.
    • TC justified why they would use two distinct API Gateways – to accommodate each persona with regards to security, scalability, and good pre-emptive design choices.
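
    To make the geohashing discussion concrete, here's a minimal sketch of encoding a point into a geohash and using shared prefixes as precomputed "tiles" for candidate lookup. The precision choice, coordinates, and in-memory bucket store are hypothetical simplifications ( a real design would also probe neighboring tiles and handle boundary cases ) :

    ```python
    from collections import defaultdict

    BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"  # standard geohash alphabet

    def geohash(lat, lon, precision=6):
        """Interleave longitude/latitude bisection bits, 5 bits per base32 character."""
        lat_rng, lon_rng = [-90.0, 90.0], [-180.0, 180.0]
        chars, bits, ch, use_lon = [], 0, 0, True
        while len(chars) < precision:
            rng, val = (lon_rng, lon) if use_lon else (lat_rng, lat)
            mid = (rng[0] + rng[1]) / 2
            if val >= mid:
                ch, rng[0] = (ch << 1) | 1, mid
            else:
                ch, rng[1] = ch << 1, mid
            use_lon = not use_lon
            bits += 1
            if bits == 5:
                chars.append(BASE32[ch])
                bits, ch = 0, 0
        return "".join(chars)

    # Hypothetical precomputed "tiles" : bucket business IDs by geohash prefix.
    businesses = {"b1": (37.7749, -122.4194), "b2": (37.7750, -122.4180), "b3": (40.7128, -74.0060)}
    tiles = defaultdict(list)
    for biz_id, (lat, lon) in businesses.items():
        tiles[geohash(lat, lon, precision=5)].append(biz_id)

    # Candidate lookup : businesses sharing the query point's tile.
    query = geohash(37.7751, -122.4190, precision=5)
    print(tiles[query])  # ['b1', 'b2'] – b3 ( NYC ) lands in a different tile
    ```
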
    Areas of Refinement
    • TC can spend less time on capacity planning.
      • TC can ask the interviewer whether capacity planning can be deferred to the end, and spend less focus on it.
    • TC can avoid "deep diving" too much :
      • Deep diving demonstrates knowledge, but if the interviewer stops you to ask a question, they want you to move forward in the direction of that question.

    Leveling Determination

    • Google L4 : Strong Hire
    • Google L5 : Hire
    • Google L6 ( staff+ ) : Leaning Hire
  • BEHAVIORAL – A Junior Level Story : Server Diff Testing
    A primer

    I've prepped for behavioral interviews in the past, primarily targeting senior-level or staff-level positions at companies, but I still think that it's a good exercise to work through written examples at lower engineering levels. What do stories resemble for entry-level new graduates or mid-level engineers? What should they communicate if they wind up in an interview loop at another tech company?

    Alright, let’s begin!!!

    The Situation

    Where to begin? What’s the situation?

    It's Q3 at Google, and I'm working in a junior-level role on the Google Analytics team ( termed GA for brevity ). I'm facilitating an ongoing migration of customers from the old product, Universal Analytics, to GA to help clientele maximize the effectiveness of their advertising campaigns.

    I'm paired up with a senior engineer, my tech lead, on a high-impact task. My tech lead noticed that external teams utilize GA's server diff testing code, which is hard-coded, space-consuming, and difficult to update. If we update the server diff tests, we not only can refactor 10,000+ lines of code down closer to 100, but we can also expedite developer velocity and bolster end-to-end infrastructure testing correctness.

    The Task

    I'm tasked with writing optimal code to do the refactoring from static, hard-coded requests and responses to dynamically updatable requests and responses. My tech lead helps me out from time to time – they identify locations to make changes or provide references to useful structures and libraries – but the implementation is my responsibility.

    And there are two challenges.

    The first challenge is development – we have to update both the requests and the responses. And while responses are intuitive, requests are harder.

    The second is that the requests and responses are variables held in a sister team's infrastructure components and workflows. This entails working with difficult lambda functions, moving components, and data flows in another team's codebase to accurately capture state.

    The Actions

    So I take a couple of actions.

    First, my tech lead and I chart a design course. We split a complex multi-week business ask into two development phases, each lasting 2-3 weeks. Phase one – updateGoldens – entailed updating only the responses. And phase two – useGoldens – would entail updating both requests and responses.

    During both phases, I’m writing code to capture state and write the variables out to an external file. By using a file, I can then read from, or write to, it later as state evolves over time.

    Once I confirm that file I/O works, I remove the long hard-coded strings and reconfirm that every test case scenario still passes the server diff tests. I checked feature correctness by verifying that pairs of expected and actual value comparisons stayed the same, before and after the refactor work.
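
    As a rough illustration of the golden-file pattern I'm describing ( the file path, flag, and payload are hypothetical stand-ins, not Google's actual tooling ) :

    ```python
    import json
    from pathlib import Path

    GOLDEN_PATH = Path("goldens/server_diff_response.json")  # hypothetical golden file

    def check_against_golden(actual: dict, update: bool = False) -> None:
        """In update mode, rewrite the golden; otherwise compare actual against the stored golden."""
        if update or not GOLDEN_PATH.exists():
            GOLDEN_PATH.parent.mkdir(parents=True, exist_ok=True)
            GOLDEN_PATH.write_text(json.dumps(actual, indent=2, sort_keys=True))
            return
        expected = json.loads(GOLDEN_PATH.read_text())
        assert actual == expected, f"Server diff mismatch against {GOLDEN_PATH}"

    # An updateGoldens-style run records the golden; useGoldens-style runs then just compare.
    check_against_golden({"status": 200, "body": {"hits": 3}}, update=True)
    check_against_golden({"status": 200, "body": {"hits": 3}})
    ```
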

    The Results

    In the end, I delivered a result : a successful working feature, titled useOrUpdateGoldens. Developers on my team could execute two paths dynamically in a command-line-esque style. The first command, updateGoldens, preserved requests but updated responses. The second command, useGoldens, updated both requests and responses.

    My team members strongly appreciated the deliverable; it reduced toil and the number of dev cycles needed to manually copy-paste and verify the integration testing code's correctness. The deliverable also helped us meet annual refactor goals.

    My biggest takeaway was learning the value of phase-based development. Before this project, I used to go in with an all-or-nothing mindset to shipping and delivering to production. But down-scoping an ambiguous, complex ask into smaller, simpler tasks enabled faster execution – end users could immediately test and share feedback on the first feature whilst the second feature was under development, enabling a faster feedback loop along the way.

  • INTERVIEWING – OPINE : Navigating Job Loss For Software Engineers.

    Because some of us have been here – once, twice, maybe a few times – either of our own volition or due to a cornucopia of external factors outside our locus of control.

    DISCLAIMER : This article may not be well-received by all audiences. Please do not misinterpret anything written here. Evaluate these writings with a heavy grain of salt, especially if you have familial commitments, visa constraints, or other extenuating challenges.

    An Intro

    Let’s imagine the situation.

    Okay, so you just got laid off and formally released from your role as a software engineer. You've been working hard at products, infrastructure, and upcoming development at your company for a while – a year, five, or ten. And out of nowhere, you see an e-mail titled Reduction in Force. If fortuity rests on your shoulders, you get a severance of a few months' base salary, unused PTO days, unused formal leaves, and some publicly-traded equity.

    But you still think to yourself “Oh no. I’ve been laid off, and I have to find a new job immediately.”

    Yep. It's a situation that easily lends itself to catastrophic thinking. But fret not, there are a couple of good ways to think this through. Tough times happen, and tough times pass too.

    What Other Cognitive Strategies Can I Leverage?
    • It’s a time to re-expand your network.
    • It’s a time to practice, brush up, and update your LinkedIn profiles.
    • You've worked for a year or more, and you've developed and enhanced your software engineering acumen : coding, DevOps, design, and others.
    • Perhaps you solutioned 100 Leetcode mediums, neetcode.io, or Gayle's CTCI problems during your past years of employment. Maybe you've interviewed across a couple of shops. These experiences engender a huge volume of deliberate practice hours : you never "unsolve" the problems you solved. The intuition and the pattern recognition stay etched as a permanent mental fixture ( which is a good thing ).
    What if I’m not busy enough whilst transiently unemployed?

    This also isn't exactly true.

    • Average sleep and awake times : Most human beings sleep for 8 hours and do their day-to-day activities for a buffer of 4 hours, leaving 12 hours of dedicated studying time.
    • Time in the loops : Suppose a dev needs to go through four interviewing loops to land their next role ( a very conservative case ), and let's assume that the on-sites emulate those of big tech companies. Each loop comprises the following chunks – a 15-minute recruiter conversation, a 1-hour phone screen or Online Assessment ( usually DS&A ), and 4 hours of dedicated interviewing.
    • Deliberate Practice Hours : Multiplying the time breakdowns renders roughly 5 hours * 4 loops = 20 hours of deliberate practice ( with DS&A, system design, and other topics ).
    • Hiring Trends : Companies don't interview on every calendar day : weekends, nights, and federal holidays are out. Expect slowdowns during periods such as Independence Day week or Thanksgiving break.
    • Q4/EOY Slowdowns : Q4/EOY is a slower quarter compared to Q1, Q2, and Q3. Hiring typically follows a company's FY ( Fiscal Year ).
    Let’s Circle Back to the Situation

    So let’s take our individual who faces three months of unemployment but does four technical onsites taking up four dedicated weekdays. These on-sites also don’t include the time folks can invest practicing their skillsets ( e.g. 30 minutes – 1 hour dedicated daily to their craft ). Let’s also tally up the time spent in “softer work” – networking events, LinkedIn updates, posting online job applications, and resume updates. I can imagine this person investing 10-15 minutes a day in this activity during their period of unemployment – activity which also wouldn’t occur during any dedicated focus time in a company’s onsite loop.

    What’re My Takeaways?

    In a very contrived sense, a lot of us on social media quickly catastrophize, but we forget how well-positioned we can be too.

  • INTERVIEWING – OPINE : How to Leverage Mock Interviews for Valuable Referrals
    A Primer

    Alright, this is another topic on my mind and something I want to cover.

    Networking.

    Network. Network. Network.

    Because experiences running through multiple mock interview sessions – as interviewee and as interviewer – have taught me that many of us struggle with, and under-utilize, our innate networking opportunities.

    And I’ll explain.

    Both the interviewer and the interviewee spend quality time with each other – at least 60 to 90 minutes of their waking hours ( that's roughly 1/16th of a standard day, assuming a person sleeps for 8 hours ). Occasionally, both sides extend their practice – upwards of 4 mock sessions ( putting us at 4-6 waking hours of our lives spent together ).

    Which has me seriously thinking – whether both parties are working a job ( or are transiently unemployed and actively looking ), the person on the other side fits the bill of someone you can definitely ask for a referral.

    It's understandable that asking anyone – co-workers or friends – for referrals is a hard skill. But asking a person you practiced interviewing with in real life should be easier. I can write up a warm referral ( one where I personally know the individual and can strongly attest to their skills in a thoughtfully written paragraph ) versus a cold referral ( one where all I can provide is a resume, contact info, and some inkling of skill ), which enables me to draft up a stronger case.

    Warm Referral : Case Study #1

    I want you to strongly consider <insert_interlocutor_name> as a prospective hire for position <insert_position_name> at company <insert_company_name> at level <insert_level_name>. I strongly vouch for their skillset and their hire-ability at the organization – we've spent time together in a 1:1 online setting engaged in deliberate practice on <algorithms/system design/insert_skill_name>. I definitely see strong capabilities in their skills across multiple domains : coding, problem solving, requirements gathering, solution-ing, and understanding word problems. I also learnt new ways to tackle problems and how to write up code and tests better by working with them.

    And Please Share LinkedIn/E-mails/Socials too !!!

    I'm also surprised how many of us seldom connect over LinkedIn ( and other social media platforms ) or get each other's e-mails to persist long-term communication. As interviewer or as interviewee, I aim to spend the session's final five minutes exchanging contact information.

    Conclusion

    So, to the folks out there who're potentially making life-long friends or professional contacts in their mock practice sessions :

    Keep in strong touch and reach out to each other. A year later. Five. Even Ten!

    Who knows what exciting developments lie ahead!!

  • INTERVIEWING – DS&A – How You, the Candidate, Can Help Your Interviewer

    A Primer

    Because oftentimes, the reverse direction is much harder. That, and candidates seldom communicate feedback to their interviewers.

    Alright, so I’ve conducted a fair number of DS&A interviews on interviewing.io, algoexpert.io, and pramp.io, and if there’s anything I’ve personally learnt, it’s that interviewees badly struggle to communicate feedback to their interviewers.

    And that's understandable. Typically, interviewers are more experienced; they summarize feedback and communicate their findings verbally or in writing.

    But a mock setting is a two-way street : both parties spend quality time to improve. Plus, interviewers ( and for that matter, interviewing ) are imperfect. Good, seasoned interviewers know that they should practice and hone their skills in mock settings to bolster their real-world interviewing skills. By doing so, they better ensure that their future candidates are "set up" for success.

    Feedback Case Study #1
    • TI1 asked a question at the right rigor, difficulty, and level.
    • TI focused on asking a real-world-relevant problem which stress-tested complex requirements gathering.
    • TI had a more conversational, dialogue-esque style in their session.
    • TI took extra time during their session to cogently and coherently explain complex concepts.
    Feedback Case Study #2
    • TI prepared level-appropriate questions : appropriate for Google L4-L5 candidates.
    • TI focused on high-level details and asked questions surrounding the code : the complexity analysis, the high-level approach, and the constraints.
    • TI prepared two leetcode-medium rigor questions in the event of TC2 answering the first question ahead of time.
    • TI provided solid hints to help TC think of alternative approaches and optimal approaches.
    • TI did a good job making TC feel comfortable during the session.
    • TI provided verbal feedback during the last 5 minutes.
    Footnotes
    1. For brevity’s sake, I’m using the acronym TI : The Interviewer. ↩︎
    2. TC – The Candidate ↩︎
  • INTERVIEWING – DS&A – Stress Testing Candidates and Gathering Stronger Signals

    A Primer

    Ok, it’s not as uncommon as you would think – for both interviewees and interviewers. Sometimes, the interviewee exceeds expectations and solutions the problem faster than expected. Now there’s a couple minutes remaining on the clock : 5 minutes to 10 minutes.

    So what do we do? Sure, as an interviewer, I can end the session early and return time back to the candidate, but one thing remains certain – the candidate clearly met a hiring bar’s “bare minimum”. They can successfully move forward to the next steps.

    In this case, ( really good ) interviewers will start asking stress-test questions to gather more signals and write up stronger cases on behalf of their candidates. Perhaps there are multiple candidates who met the hiring bar, and a single individual's outperformance intimates a stronger case. Or the candidate is interviewing for a higher-level position ( e.g. they're targeting mid-level/senior/staff+ levels ), and we as interviewers can ask level-setting questions to up-level or down-level candidates ( and if we're good, we'll up-level you ).

    Now don't get me wrong. Stress-test questions are harder to ask. Firstly, it's a skill interviewers need to practice, hone, and refine. Secondly, they're open-ended : there's no correct answer to these questions, since they're meant to probe a candidate's brain and inform interviewers how a prospective hire problem-solves.

    Question #1 :

    Prompt : Alright, we arrived at a working solution, but can you quickly walk me through how you would refactor your code for a production setting? Walk me through changes you would incorporate.

    Justification : Let's gauge how much experience a candidate has working with a large-scale industrial code base or microservice architectures.

    Example Answers :

    • Introducing class definitions and methods to support instantiation and method invocation across multiple modules
    • Leveraging CONSTANT variable expressions and default initialization values
    • Renaming variables for faster searching and readability
    • Incorporating unit testing and stubs to compare EXPECTED versus ACTUAL values
    • Leveraging CI ( e.g. GitHub Actions ) and allowing code to go to production IFF all unit tests pass
    • Using singleton patterns to avoid multiple instantiation on each execution – saving on CPU cycles and memory and optimizing performance ( see the sketch after this list )
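
    As a rough sketch of what a few of those answers could look like in code ( the class, constants, and test are hypothetical examples, not any specific production system ) :

    ```python
    import unittest

    MAX_RETRIES = 3           # CONSTANT expression instead of a magic number
    DEFAULT_TIMEOUT_SECS = 5  # default initialization value

    class RetryConfig:
        """Singleton-style holder : one instance reused across executions."""
        _instance = None

        def __new__(cls):
            if cls._instance is None:
                cls._instance = super().__new__(cls)
                cls._instance.max_retries = MAX_RETRIES
                cls._instance.timeout_secs = DEFAULT_TIMEOUT_SECS
            return cls._instance

    def allowed_attempts(config: RetryConfig) -> int:
        """Small, testable unit extracted from the interview solution."""
        return config.max_retries + 1

    class AllowedAttemptsTest(unittest.TestCase):
        def test_expected_vs_actual(self):
            expected, actual = 4, allowed_attempts(RetryConfig())
            self.assertEqual(expected, actual)

    if __name__ == "__main__":
        unittest.main()  # in CI, a failing run here would block promotion to production
    ```
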

    Question #2 :

    Prompt : Let’s suppose a Product Manager or a Stakeholder comes in and needs statistics on how often scenarios are encountered. They want to collect data to improve their business. What modifications would you make?

    Justification : Has the candidate interacted in the past with business personas who aren't engineers? Even if they haven't, do they theoretically know what they would want to show such individuals? Can they translate code to business asks? Can they generate and send metrics that inform how to obtain more business value?

    Example Answers :

    1. TC suggests leveraging production-grade logging microservices or analytical tools – GA ( Google Analytics ), Sentry, Titan, or Splunk – to capture statistics of scenarios for fuzzy grep/search.
      • TC extends this and mentions a trade-off analysis of internal 1st-party tooling versus external 3rd-party tooling.
    2. TC thinks of event tracking ( see the sketch after this list ) ; can they use a microservice as a producer, send data to a queue asynchronously, and have a consumer service listen to the queue to collect stats?
    3. TC thinks of storing or sending time series events : ( user_id, customer_segment_group, timestamp, scenario ) types of tuples.
    4. TC mentions analyzing customers by cohort groups.
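
    Here's a minimal single-process sketch of the producer / queue / consumer shape from item 2 above. A real system would use a managed queue ( e.g. Kafka or SQS ) and a separate consumer service; the event fields here are hypothetical :

    ```python
    import queue
    import threading
    import time
    from collections import Counter

    events = queue.Queue()       # stand-in for a managed message queue
    scenario_counts = Counter()  # the statistics the PM / stakeholder asked for

    def producer():
        """The serving path emits one event per handled scenario, asynchronously."""
        for scenario in ["cache_hit", "cache_miss", "cache_hit", "fallback"]:
            events.put({"scenario": scenario, "ts": time.time(), "user_id": "u123"})
        events.put(None)         # sentinel : no more events

    def consumer():
        """A separate worker drains the queue and aggregates counts."""
        while (event := events.get()) is not None:
            scenario_counts[event["scenario"]] += 1

    t = threading.Thread(target=consumer)
    t.start()
    producer()
    t.join()
    print(scenario_counts)  # Counter({'cache_hit': 2, 'cache_miss': 1, 'fallback': 1})
    ```
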

  • INTERVIEWING – Writing Feedback For DS&A Interviews – CASE STUDY #2

    A second example always helps!!!

    In this case, I'm sharing a case study 1 highlighting a unique set of strengths & weaknesses ( since all candidates 2 think differently ).

    Strengths & What Went Well :

    • TC arrived at a working solution.
    • TC has strong technical communication skills.
    • TC showed how to walk through their code and executed a dry run of unit tests of a few scenarios.
    • TC ran into logical bugs, but upon my ask to debug, demonstrated effective debugging skills.
    • TC demonstrates a thorough understanding of the question under ask.
    • TC thought about effective data structures to capture computational state.
    • TC understands Big-O complexity, and on my ask if the complexity could be improved, they strongly justified that they reached an optimal Big-O.
    • TC understood how to leverage problem sparsity, constants, and invariants ahead of time.
    • TC understood case decomposition
    • TC thought about the double counting problem.

    Areas for Improvement :

    • TC can solution faster ( e.g. save 5 minutes ) and dive into coding sooner.
    • TC struggled with off-by-one array indexing adjustments and calculations.
    • TC can refine their “time-tracking” skills – knowing when to spend less and spend more time on specific sections (e.g. the code, the unit tests )

    Advice for future interviews :

    • TC can type out more of their communication process to assist their interviewer in building a stronger case.

    Open Notes :

    • TC demonstrated a strong signal set in the last 5 minutes on the "stress-test" questions. Answers strongly intimate YOE ( years of experience ) working with large-scale industrial code bases.
    • TC shows how to evolve code upon shifting customer requirements.

    Leveling Determination :

    • Amazon L4 ( Entry ) : Strong Hire
    • Amazon L5 ( Mid-Level ) : Hire
    • Amazon L6+ ( Senior Level Plus ) : Leaning Hire
    1. Credit is due to interviewing.io for their template structure/headings here. ↩︎
    2. Credit to my mock interviewee for taking the time from his day – 1 hour and 5 minutes – to go through rigorous practice. I believe in journalistic integrity, and I've redacted all names and other Personally Identifiable Information. ↩︎