Sixty to ninety days into their operation, most regional mobility operators open their dashboard to find dozens of metrics: total trips, cumulative revenue, registered drivers, unique passengers, kilometers covered, average rating. That volume of data doesn't produce better decisions; it produces selective paralysis. Operators end up looking at whatever numbers are easiest to find, not the ones that predict whether the operation will survive the next quarter.
This article is for operators already running — 30 or more active drivers and at least 60 days of data — who want to separate metrics that trigger action from metrics that just fill a screen. We'll cover six concrete indicators: those that diagnose passenger experience, supply-side health, financial sustainability, and the direction an operation is heading when no single metric explains it on its own.
Completion rate: the indicator nobody puts in the front row
The metric most operators overlook in the first months isn't sophisticated: it's the proportion of requests that end in a completed trip. A platform with 300 daily requests and 210 completed trips has a 70% completion rate. That number is the most sensitive thermometer in the operation because it captures driver supply health, product functionality and passenger price acceptance all at once. In LATAM regional markets, a healthy completion rate sits above 78%. Below 70%, something is broken — not necessarily everything, but something specific and diagnosable.
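The arithmetic is simple enough to sketch directly. A minimal version, using the 78% and 70% benchmarks cited above (function names and the "watch" band label are illustrative, not a standard):

```python
# Completion rate: completed trips divided by total requests.
# The 78% (healthy) and 70% (broken) cutoffs are the LATAM regional
# benchmarks from the text; everything else here is an assumption.

def completion_rate(completed: int, requested: int) -> float:
    """Share of requests that ended in a completed trip."""
    if requested == 0:
        return 0.0
    return completed / requested

def completion_status(rate: float) -> str:
    """Map a completion rate to the health bands described above."""
    if rate >= 0.78:
        return "healthy"
    if rate >= 0.70:
        return "watch"    # between the two benchmarks: monitor closely
    return "broken"       # something specific needs diagnosis

rate = completion_rate(210, 300)   # the worked example from the text
print(f"{rate:.0%} -> {completion_status(rate)}")   # 70% -> watch
```

The 70% example from the text lands exactly on the lower benchmark, which is why a band between the two thresholds is worth tracking separately rather than treating 78% as a binary pass/fail line.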
The four factors that move it most are driver coverage in high-demand zones, assignment speed (more than 90 seconds before acceptance causes a measurable rise in cancellations), current price versus what the local passenger expects, and app reliability on mid-range devices. That last point matters more than it looks: in cities of 80,000 to 250,000 residents, most passengers connect from mid- or low-range Android devices, and compatibility or performance issues produce drop-offs that never show up as cancellations in the logs but still depress the completion rate.
First-assignment time: the number that defines the passenger's first second
The time between a passenger confirming a request and a driver accepting it is the most direct indicator of real-time supply-demand balance. In mid-sized cities, the standard that keeps cancellation rates low runs between 45 and 75 seconds average assignment time. Above 90 seconds, pre-arrival cancellations rise non-linearly: between 90 and 120 seconds, 18% to 25% of passengers cancel rather than wait; above 120 seconds, that figure can exceed 40%.
What makes this KPI useful isn't the global average — it's the breakdown by zone and time slot. An average of 60 seconds can hide that the city center runs at 35 seconds while outer neighborhoods run at 140. That heterogeneity signals where coverage gaps exist before passengers in those zones stop trying. The operational goal isn't to lower the global number — it's to narrow the gap between the best-served and worst-served zones, and specifically raise the floor in areas with the most unmet demand.
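A sketch of that zone breakdown, assuming assignment events arrive as (zone, seconds) pairs; the zone names and sample values are made up to mirror the 35s/140s split described above:

```python
# Per-zone first-assignment time. The global mean can hide a fast
# center and a slow periphery; this computes the mean per zone and
# the gap between the best- and worst-served zones, which is the
# number the text says to narrow. Data below is illustrative.
from collections import defaultdict
from statistics import mean

def assignment_by_zone(samples):
    """samples: iterable of (zone, seconds_to_acceptance) pairs."""
    by_zone = defaultdict(list)
    for zone, seconds in samples:
        by_zone[zone].append(seconds)
    means = {zone: mean(vals) for zone, vals in by_zone.items()}
    gap = max(means.values()) - min(means.values())
    return means, gap

samples = [
    ("center", 35), ("center", 40),
    ("north_periphery", 130), ("north_periphery", 150),
]
means, gap = assignment_by_zone(samples)
print(means)          # per-zone averages
print(f"gap: {gap}s") # the coverage inequality to shrink
```

Tracking the gap as its own daily number keeps the goal honest: a falling global average achieved by making the center even faster does nothing for the zones where demand goes unmet.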
Driver active-hour earnings: the retention metric that isn't obvious
Driver retention has many predictors, but the most consistent one in Latin American markets is effective income per actual hour of work. Not total daily earnings, not per-trip income — earnings divided by the time the driver was online and active, including time between requests. Drivers internally calculate whether the platform is worth their time using exactly that division. If the result falls below $5 to $6 USD per hour in local equivalent, most start looking for alternatives within their first two or three weeks.
The healthy range for mid-cost LATAM cities runs between $7 and $12 USD per active hour. Above that range, drivers bring in contacts and the base grows organically; below it, the operation loses drivers faster than it can onboard them. If earnings per active hour fall without a drop in trip volume, the typical problem is too many drivers online in low-demand zones: they inflate the denominator without growing the numerator. Redistributing active availability toward the right zones fixes this before drivers diagnose the problem themselves and go offline.
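The division itself, with the USD bands from the two paragraphs above encoded as labels (the band names and function names are illustrative):

```python
# Earnings per active hour: total earnings divided by total time
# online, including idle time between requests, per the definition
# in the text. The $5-6 churn floor and $7-12 healthy band are the
# figures cited above for mid-cost LATAM cities.

def earnings_per_active_hour(total_earnings_usd: float,
                             online_minutes: float) -> float:
    if online_minutes <= 0:
        return 0.0
    return total_earnings_usd / (online_minutes / 60)

def retention_band(hourly_usd: float) -> str:
    if hourly_usd < 5.0:
        return "churn risk"    # drivers start looking for alternatives
    if hourly_usd < 7.0:
        return "marginal"
    if hourly_usd <= 12.0:
        return "healthy"       # drivers refer contacts, base grows
    return "above range"

# A driver who earned $45 over a 6-hour shift, idle time included:
hourly = earnings_per_active_hour(45.0, 360)
print(hourly, retention_band(hourly))   # 7.5 healthy
```

The key design choice is the denominator: using online minutes rather than trip minutes is what makes the metric match the calculation drivers run in their own heads.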
Asymmetric ratings: when it's a driver problem and when it's a product problem
Average driver rating is well understood but on its own says little. What says more is the relationship between the rating passengers give drivers and the rating drivers give passengers. When both are above 4.3, the operation is functioning well on both sides. When driver rating is low (below 4.0) while passenger rating is high (4.4 or above), there's a specific conduct issue passengers are detecting and scoring negatively: late request rejections, poor treatment or behavior outside protocol. When the reverse occurs — well-rated drivers, poorly rated passengers — the problem lives in user policy: late cancellations, false requests or vehicle mistreatment.
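Those rules encode directly. A minimal classifier using the 4.3 / 4.0 / 4.4 thresholds from the paragraph above (the diagnosis labels are shorthand, not platform terminology):

```python
# Two-sided rating diagnosis. driver_avg is what passengers give
# drivers; passenger_avg is what drivers give passengers. Thresholds
# are the ones stated in the text; the fallback label is an assumption.

def rating_asymmetry(driver_avg: float, passenger_avg: float) -> str:
    if driver_avg >= 4.3 and passenger_avg >= 4.3:
        return "healthy both sides"
    if driver_avg < 4.0 and passenger_avg >= 4.4:
        return "driver conduct issue"    # passengers flagging behavior
    if passenger_avg < 4.0 and driver_avg >= 4.4:
        return "passenger policy issue"  # cancellations, false requests
    return "mixed signal: inspect both sides"

print(rating_asymmetry(3.9, 4.5))   # driver conduct issue
```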
The most overlooked signal within ratings isn't the average — it's internal dispersion. A 4.3 average can hide that 8% of active drivers sit below 3.8 and account for a disproportionate share of complaints. Identifying that specific subgroup and intervening directly — a conversation, targeted training or removal from the platform — has a larger impact on passenger retention than its relative size suggests. A driver with a 3.7 rating doing 12 trips a day affects more passengers per week than almost any other technical variable you could optimize.
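Pulling that subgroup out of the average is a one-line filter. A sketch with made-up driver IDs and ratings, using the 3.8 floor mentioned above:

```python
# Dispersion check: a 4.3 fleet average can hide a subgroup below
# 3.8 that drives a disproportionate share of complaints. This
# returns that subgroup sorted worst-first, ready for direct
# intervention. The data and the 3.8 default reflect the text.
from statistics import mean

def low_rated_subgroup(ratings: dict[str, float],
                       floor: float = 3.8) -> list[tuple[str, float]]:
    """ratings: driver id -> current average rating."""
    flagged = [(d, r) for d, r in ratings.items() if r < floor]
    return sorted(flagged, key=lambda pair: pair[1])

ratings = {"d1": 4.6, "d2": 3.7, "d3": 4.4, "d4": 3.5, "d5": 4.5}
print(f"fleet average: {mean(ratings.values()):.2f}")
print(low_rated_subgroup(ratings))   # [('d4', 3.5), ('d2', 3.7)]
```

Weighting the flagged list by daily trip volume, as the 3.7-rating example in the text suggests, would rank interventions by how many passengers each driver actually touches per week.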
The reactivation ratio: how many passengers who tried the platform come back
Acquiring a new passenger in regional markets costs between $1.50 and $4.00 USD depending on the channel. Reactivating one who already used the platform and stopped costs three to seven times less. The metric that measures whether the platform retains what it acquires is the reactivation ratio: the proportion of passengers who completed at least one trip in the past 30 days against the total who have ever taken a trip. In operations with 6 to 12 months of history, a healthy ratio sits above 35% to 40%.
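The ratio as defined above, computed from each passenger's last completed trip date (the data shape and dates are illustrative):

```python
# Reactivation ratio: passengers with at least one completed trip in
# the last 30 days, divided by everyone who has ever taken a trip.
# The 35-40% healthy band and 25% alarm line come from the text.
from datetime import date, timedelta

def reactivation_ratio(last_trip_by_passenger: dict[str, date],
                       today: date, window_days: int = 30) -> float:
    """last_trip_by_passenger: passenger id -> date of last trip."""
    if not last_trip_by_passenger:
        return 0.0
    cutoff = today - timedelta(days=window_days)
    active = sum(1 for d in last_trip_by_passenger.values()
                 if d >= cutoff)
    return active / len(last_trip_by_passenger)

today = date(2024, 6, 30)
history = {
    "p1": date(2024, 6, 25),   # active in the window
    "p2": date(2024, 6, 10),   # active in the window
    "p3": date(2024, 2, 1),    # lapsed
    "p4": date(2023, 11, 15),  # lapsed
}
ratio = reactivation_ratio(history, today)
print(f"{ratio:.0%}")   # 50%
```

Note the denominator is all-time passengers, not last month's: that is what makes the metric expose churn hidden behind growing trip totals.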
Below 25%, the operation is losing passengers faster than it retains them. The total trip growth the dashboard shows can be a mirage: each month shows more trips, but from more new passengers offsetting the ones who left. That pattern isn't sustainable because acquisition cost scales while the recurring user base doesn't grow. The reactivation ratio surfaces that dynamic before it shows up in revenue. When it's falling, the most frequent cause in regional markets isn't pricing — it's wait time or a negative experience that wasn't resolved at the moment it happened.
The 10-minute dashboard: what to review each morning and what to ignore
A dashboard that tries to show everything at once teaches nothing at once. The daily review of a regional operation can be completed in 10 minutes if it's organized around four concrete questions. Each has a data source and a possible action — these aren't reporting metrics, they are decision metrics.
The four questions that anchor the daily operational review:
- Completed trips yesterday vs 7-day average — direct trend signal; a drop of 15% or more without a known cause calls for immediate supply and product review
- Active drivers in the past 24 hours vs weekly average — detects supply dropout before it shows up in wait times or completion rate
- Average first-assignment time from the prior day — supply-demand thermometer; any value above 90 seconds calls for zone coverage intervention
- Average driver rating over the past 7 days — early quality signal; a drop of 0.2 points in a week points to a subgroup that needs direct attention
What shouldn't be in that daily review: cumulative gross revenue (it's an output of the other indicators — if those are healthy, revenue follows), registered drivers without an activity signal attached (a headcount with no active denominator has no actionable implication), and app download volume (in regional markets, the download-to-first-trip conversion runs between 18% and 35%, so downloads don't predict operation). Those metrics belong in weekly or monthly growth reviews, not in the operational scan that enables same-day decisions.
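The four checks above reduce to a handful of comparisons. A sketch of the morning scan, using the thresholds from the list (the text gives no explicit threshold for the driver check, so any drop below the weekly average is flagged here as an assumption; all parameter names are illustrative):

```python
# Morning scan: the four daily questions as threshold checks.
# Cutoffs: -15% trip drop, 90s assignment time, -0.2 rating in a
# week, per the list above. The driver check flags any dip below
# the weekly average (an assumption; the text names no cutoff).

def daily_flags(trips_yday: int, trips_7d_avg: float,
                drivers_24h: int, drivers_wk_avg: float,
                assign_secs_yday: float,
                rating_7d: float, rating_prev_7d: float) -> list[str]:
    flags = []
    if trips_7d_avg and trips_yday < 0.85 * trips_7d_avg:
        flags.append("trips down >=15%: review supply and product")
    if drivers_wk_avg and drivers_24h < drivers_wk_avg:
        flags.append("active drivers below weekly average: dropout")
    if assign_secs_yday > 90:
        flags.append("assignment >90s: intervene on zone coverage")
    if rating_prev_7d - rating_7d >= 0.2:
        flags.append("rating fell 0.2 in a week: find the subgroup")
    return flags

# A bad morning: every check trips.
for flag in daily_flags(250, 310, 42, 45, 104, 4.1, 4.35):
    print(flag)
```

An empty list means the 10-minute review is over; anything printed is the day's first task.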
When I started I had over forty metrics on the dashboard. I'd open it every morning and leave without knowing what to do. Today I check five numbers — the same five — and in ten minutes I know whether there's something that needs attention.
The six indicators described here aren't the only useful ones in a mobility operation, but they're the ones that diagnose a problem before the passenger experiences it twice. Completion rate, first-assignment time, driver active-hour earnings, asymmetric ratings, passenger reactivation ratio and the four-question dashboard: each answers a part of the same central question, which is whether the operation is building something sustainable or just accumulating trips that don't predict the following month.
The most common mistake isn't failing to measure — it's measuring everything and reading nothing. An operation with six well-understood metrics reviewed every day makes better decisions than one with sixty metrics consulted when there's time. The gap between operators who reach financial sustainability in the first 12 to 18 months and those who don't traces more often to data-review habits than to any high-level strategic decision. The dashboard that works isn't the most complete one — it's the one the operator opens every day and understands in 10 minutes.


