Crawl, walk, run emphasizes iterative development as a way to progressively build a solution rather than building it all at once. The phrase originates, as far as I can tell, from child development stages, though some cite (King n.d.) directly.
Take a hypothetical service built by your team over a year ago. The metrics for this product (latency, resource requirements, etc.) were manually calculated and recorded in a document when the product launched. Your team hasn’t updated these metrics since.
It’s clear that an automated system or dashboard would be better, as it would stay up to date over time without manual involvement. However, setting such a system up has up-front and ongoing time and resource costs.
An iterative development approach could go:
- Crawl: Re-compute the metrics manually
  - Fixes the immediate “our metrics are out of date” issue
  - Useful for evaluating whether additional work is necessary:
    - Have the metrics changed since they were last computed? Did we expect them to change? If we expect them to change again in the future, at what cadence?
    - How much benefit do we gain by having these metrics up to date? Do we get all the benefit by updating them once a quarter, or do we need them updated in real time?

Answering these follow-up questions ensures you and your team don’t burn time solving a problem that doesn’t need solving, or one that has a simple solution. Perhaps it’s enough to create a recurring reminder to refresh the metrics every quarter. A shared bash script could be all the “automation” necessary, in that it simplifies the manual update into a one-click process.
- Walk: Implement a monitoring system with a dashboard
- Run: Implement alerts to notify the team automatically if metrics cross meaningful thresholds
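The “crawl” step’s shared script could be as small as the following sketch. The log location, its format (one latency measurement in milliseconds per line), and the choice of metrics are all illustrative assumptions, not details from a real service:

```shell
#!/bin/sh
# Hypothetical one-click metrics refresh: recompute latency stats from a
# request log and emit a markdown table ready to paste into the doc.

# Stand-in sample data for a real request log (format is an assumption).
cat > /tmp/latencies.txt <<'EOF'
120
95
210
180
75
EOF

# Compute mean and max latency and render them as a markdown table.
awk '
  { sum += $1; if ($1 > max) max = $1; n++ }
  END {
    printf "| metric | value |\n|---|---|\n"
    printf "| mean latency (ms) | %.1f |\n", sum / n
    printf "| max latency (ms) | %d |\n", max
  }
' /tmp/latencies.txt > /tmp/metrics.md

cat /tmp/metrics.md
```

Running it regenerates the table in one step, which is often enough “automation” until the follow-up questions justify more.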
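The “run” step could start as a small threshold check like the sketch below. The metric names and threshold values are hypothetical, and a real system would fetch current values from the monitoring dashboard rather than a hardcoded dict:

```python
# Hypothetical "run"-stage check: compare freshly computed metrics against
# thresholds and collect an alert for every metric that crosses one.
THRESHOLDS = {"p95_latency_ms": 250, "error_rate": 0.01}

def alerts(metrics: dict) -> list[str]:
    """Return a human-readable alert for each metric over its threshold."""
    out = []
    for name, limit in THRESHOLDS.items():
        value = metrics.get(name)
        if value is not None and value > limit:
            out.append(f"{name}={value} exceeds threshold {limit}")
    return out

# In a real system these values would come from the monitoring system.
current = {"p95_latency_ms": 310, "error_rate": 0.004}
print(alerts(current))  # only the latency metric is over its threshold
```

The alert list would then be routed to wherever the team already looks (chat, pager, email), completing the crawl-to-run progression.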