

Your operation's dashboard: reports that change decisions vs the ones that accumulate unread

A dashboard with 20 active metrics is not more visibility — it is noise. Operators who identify the four or five metrics that map directly to actions finish the analysis in ten minutes with a plan.

9 min read · Cabgo Team · Mobility platform
[Illustration: on the left, an overwhelmed operator in front of a monitor packed with 16 overlapping chart panels; on the right, a composed operator with a clean dashboard of four high-contrast alert metrics and a phone notification]

A ride-hailing platform in active operation can generate between 20 and 40 distinct reports depending on which modules are enabled. Most operators running between 200 and 700 daily trips consult three to five of those reports consistently. The rest are available, updating in real time, and no one opens them unless a problem becomes visible enough that someone remembers the panel exists. That gap between available data and consulted data is not a discipline or time problem — it is the result of most platform reports answering a question the operator already knows the answer to: they confirm a state with no associated action, or present data at a granularity that doesn't map to any concrete decision the operator can actually make. The right question is not which metrics your dashboard makes available, but which of those metrics, when they change, produce a different decision the next day.

This article is for operators three to twelve months into active operation who already have real platform data but whose teams haven't established clarity about which reports deserve daily review, which deserve weekly attention, which are monthly, and which mostly document what happened without contributing anything that changes what is going to happen. The distinction is practical, not theoretical: an operator who spends 30 minutes each morning reviewing reports that confirm the prior day's status is spending that time not making any decision; one who checks four metrics that map directly to concrete actions finishes that same analysis in ten minutes with a plan for the day.

The full dashboard trap: more panels is not more visibility

Ride-hailing platforms generate data across the entire operation stack because that data can be useful for someone, at some point, for some specific question. Driver position every four seconds, duration of each trip stage, acceptance times by driver and zone, cancellation patterns by hour, rating distributions — all of that lives in the platform's records, and most dashboards make it available in some panel. When the operator gains access to all that data, they activate it because they don't yet know which metrics will matter in their specific operation. The result is a dashboard with 15 or 20 charts that the operations coordinator scans in four seconds before closing the panel and calling a driver directly to ask how the flow is looking.

The structural problem is that platform dashboards are designed to show that the operation exists and is active — not to signal that something needs a decision today. Status metrics (total trips for the month, cumulative average rating, registered active drivers) are necessary for management reports but don't direct attention toward any specific part of the operation that requires intervention. Signal metrics — average wait time in the last hour in the north zone, cancellation rate in the last 20 minutes by type, available drivers at the 18:00 window today compared with the same weekday last week — are what the operator needs to review at the start of the day, because they are the ones that tell you whether the day will end with the numbers you expected or with a problem that already started.

The four metrics that change what you do tomorrow

There is a simple test to identify whether a metric deserves daily review: if the number changes significantly today relative to yesterday, would you do something different tomorrow? If the answer is no, the metric may be useful for monthly analysis but doesn't deserve daily attention. In operations of 200 to 700 daily trips in LATAM markets, four metrics pass that test consistently. They are not four independent indicators — they are four readings of the same central problem: whether driver supply is where demand needs it, at the moment it needs it.

The four metrics that, when they move, produce an operational decision within the next few hours are:

  • Average wait time during the high-demand window: not the full-day average but the last two hours of active operation — the variation against the same window the prior day signals whether there is a supply problem still early enough to correct before the next peak
  • Driver cancellation rate during peak hour: the percentage of assigned trips that the driver cancels before reaching the passenger, separated from the passenger cancellation rate — when it rises more than 3 points in the same window on two consecutive days, something in that zone or window is producing systematic rejection worth investigating
  • Active drivers in the critical zone 30 minutes before peak: how many drivers are available in known high-demand zones before demand arrives — if the number is consistently low, there is still time for a positioning incentive to take effect before the peak starts
  • Percentage of requests with no driver available in the last hour: the most direct indicator that supply is not meeting demand in real time — in a healthy operation this should not exceed 8% during peak hours
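As an illustration, the last of these checks (the 8% ceiling on requests with no driver available during peak hours) is straightforward to automate. A minimal Python sketch, assuming a hypothetical request record with a timestamp and an assignment flag; neither the field names nor the structure come from any specific platform:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Request:
    requested_at: datetime
    driver_assigned: bool  # False when no driver was available

def unfulfilled_rate(requests, now, window_minutes=60):
    """Share of requests in the last hour that found no driver."""
    cutoff = now - timedelta(minutes=window_minutes)
    recent = [r for r in requests if r.requested_at >= cutoff]
    if not recent:
        return 0.0
    missed = sum(1 for r in recent if not r.driver_assigned)
    return missed / len(recent)

# Healthy-operation ceiling from the text: 8% during peak hours
PEAK_THRESHOLD = 0.08
```

Anything above `PEAK_THRESHOLD` during a peak window would trigger the coordinator's notification rather than wait for the next dashboard check.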

Driver reports that actually produce concrete actions

Driver reports are the second densest data block in any ride-hailing platform dashboard. The typical set includes the trip-volume ranking, rating distribution by driver, weekly connection times, and cancellation patterns by driver. Most of those reports produce a reaction of 'interesting' and no action, because the driver in position 1 already knows they are doing well and the one in position 40 rarely receives a call to find out why.

Driver reports that produce concrete actions share three characteristics: they identify drivers in a behavioral zone that will produce a problem in the coming days (not ones who already have a documented problem from weeks ago), they present the data in the context of trend (not yesterday's number but the direction over the last seven days), and they group drivers into segments that correspond to distinct interventions. A report showing five specific drivers whose connection hours dropped more than 30% over the last ten days is an action report because it flags the problem before those drivers go inactive. A report showing the rating ranking for all drivers is a status report that documents what is already known.
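The connection-hours report described above reduces to a single comparison per driver. A hedged sketch, assuming hypothetical per-driver hour totals for the two most recent ten-day periods; the 30% threshold is the one cited in the text, the data shape is illustrative:

```python
def flag_declining_drivers(hours_by_driver, drop_threshold=0.30):
    """hours_by_driver: {driver_id: (prior_10d_hours, last_10d_hours)}.
    Returns the drivers whose connected hours fell by more than
    drop_threshold — the ones to call before they go inactive."""
    flagged = []
    for driver_id, (prior, recent) in hours_by_driver.items():
        if prior > 0 and (prior - recent) / prior > drop_threshold:
            flagged.append(driver_id)
    return flagged
```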

Status metric vs alert metric: the distinction that changes how the team operates

Status metrics describe the historical average of the operation. They are useful for comparing periods, for management reports, for showing long-term trends. Alert metrics detect a deviation from the normal pattern that requires intervention before it compounds. The practical difference is direct: a status metric is the average wait time over the last four weeks; an alert metric is wait time in the last two hours compared against the historical average for that same weekday and same time window.
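That status-vs-alert distinction can be made concrete in a few lines. A sketch, assuming a hypothetical baseline table keyed by weekday and time window; the tolerance value is an illustrative choice, not a platform default:

```python
def wait_time_alert(recent_avg_min, history, weekday, window, tolerance=1.5):
    """Fires when the last-two-hours average wait exceeds the historical
    baseline for the same weekday and time window by more than
    `tolerance` minutes. history: {(weekday, window): baseline minutes}."""
    baseline = history.get((weekday, window))
    if baseline is None:
        return False  # no baseline yet: nothing to deviate from
    return recent_avg_min - baseline > tolerance
```

Note that the comparison is against the same weekday and window, not against a global average: a Friday 19:00 wait time only looks anomalous relative to other Fridays at 19:00.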

Well-configured operations separate those two metric types visually because they require different routines. Status metrics are reviewed in weekly or monthly sessions where the team evaluates whether the operation is moving in the right direction. Alert metrics are reviewed in real time or at a maximum interval of 30 to 60 minutes during operation peaks, because their value decreases exponentially with time: a high wait time alert that the operator sees two hours after it occurred is historical information, not an action signal. A coordinator who configures three or four alert metrics with explicit thresholds and receives a notification when any of them is breached doesn't need to check the dashboard every 20 minutes — the dashboard calls them when something deserves attention, and the rest of the time they can focus on the operation instead of the operation's data.

Weekly reports worth automating

There is a set of analyses that produces value when reviewed weekly but that no operator generates manually with any consistency because it requires back-office time that daily operations don't leave. Automating those reports — which in most modern platforms means configuring a scheduled delivery rule, not additional technical development — produces returns for months without further maintenance. The three that generate the most value in operations of 300 to 700 daily trips are: the report of drivers with reduced activity compared to the prior week (to intervene before they go inactive), the analysis of zones with consistently higher wait times than the city average (to evaluate whether a positioning geofence is producing the expected effect), and the summary of failed trips by time window (to identify unmet demand the operator can resolve with additional availability during that window).
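The second of those weekly reports (zones with wait times consistently above the city average) is a one-pass comparison. A sketch with hypothetical zone names and an illustrative margin; a real version would average over the full week rather than take a single snapshot:

```python
def hotspot_zones(zone_waits, margin_min=1.0):
    """zone_waits: {zone: weekly average wait in minutes}.
    Flags zones whose average exceeds the city-wide average by more
    than margin_min — candidates for a positioning geofence review."""
    city_avg = sum(zone_waits.values()) / len(zone_waits)
    return [z for z, w in zone_waits.items() if w - city_avg > margin_min]
```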

The weekly report most often configured and least often used for decisions is total trip volume by weekday — a status report that confirms Monday has fewer trips than Friday, information the operator has already internalized from experience. The one most often skipped, and most often the source of concrete actions, is the cancellation trend by type (driver, passenger, timeout) compared against the same week the prior month. A change in that number typically signals a shift in driver or passenger behavior that the operator can verify quickly and correct before it becomes established as a pattern — the type of problem that is relatively straightforward to resolve in week one and significantly harder to reverse in week eight.

When data volume produces worse decisions

When 15 charts are available and there is no prior criterion for which ones matter in that moment, the brain searches for consistency across the data before acting — it waits for several charts to confirm the same problem before intervening. In a situation with 15 available drivers facing 35 incoming requests over the next 20 minutes, that search for confirmation consumes precisely the time window in which activating an incentive and moving drivers to the right location was still possible. Defining three or four metrics with clear thresholds in advance — if wait time exceeds X minutes in window Y, activate zone Z incentive — produces better operational outcomes than any exhaustive dashboard review because it eliminates the cost of deciding which information matters at the moment it matters most.
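The pre-agreed rules described above can be written down as data rather than left as judgment calls under pressure. A sketch with illustrative metric names, windows, thresholds, and action labels; none of these are platform features, just a way of fixing the criteria in advance:

```python
# Each rule maps a pre-agreed condition to a single action, so the
# coordinator decides the criteria once, not during the peak itself.
RULES = [
    # (metric, window, threshold, action) — all values illustrative
    ("avg_wait_min", "18:00-20:00", 6.0, "activate_north_zone_incentive"),
    ("no_driver_rate", "18:00-20:00", 0.08, "push_positioning_notification"),
]

def actions_for(metrics, window):
    """metrics: {metric_name: current value} for the active window.
    Returns the actions whose thresholds were breached, in rule order."""
    return [action for metric, w, threshold, action in RULES
            if w == window and metrics.get(metric, 0) > threshold]
```

The point is not the code but the shape: the decision of which information matters is made once, ahead of time, and the peak window is spent executing instead of interpreting charts.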

The same 40 data points that overwhelm attention when reviewed all together each morning are perfectly manageable when organized across three levels: a small set of real-time alert metrics for the coordinators making operational decisions during the day, a set of automated weekly reports for identifying trends before they become problems, and a complete monthly analysis for platform structure decisions. That organization doesn't change the available data — it changes the rhythm and context in which it is consumed, which is what determines whether it produces better decisions or simply more time reviewing dashboards.

In the first year I checked the dashboard like a speedometer: to know if the operation was alive. A year and a half in I simplified it to four metrics with automatic alerts. I didn't reduce the available data — the same reports are still there. But I stopped reviewing the full panel every morning and started receiving a notification when something specific left the normal range. In the following three months, average wait time dropped from 7.2 to 5.8 minutes because interventions arrived on time, not two hours after the peak.
Mobility platform operator with four active cities in northern Mexico

The most useful dashboard in a regional ride-hailing operation is not the one that shows the most metrics — it is the one that has removed everything that doesn't produce a different decision when it changes. That elimination process doesn't happen automatically or through any platform's default configuration: it requires identifying, during the first weeks of real data, which numbers you actually reviewed before acting and which ones you consulted after the fact to confirm what had already happened. The first category belongs at the front of the dashboard. The second belongs in a monthly report the team reviews in a meeting, not at the moment a coordinator is making an operational decision.

An operator six months in who can answer in under two minutes which four or five metrics they review daily and what decision each would change if it moved away from its normal value has a concrete operational advantage over one who says they check 'the general dashboard' each morning. Not because the second operator has bad data — in most cases both have access to the same reports — but because the first has a clear model of their operation that separates signals requiring action from those that document status. That separation is not configured in the platform panel: it is built through weeks of reviewing real data, being wrong about which metrics matter, and adjusting until the set of metrics you review consistently matches exactly the set that produces real decisions.

Topics: ride-hailing platform dashboard metrics · taxi app operations reports LATAM · key metrics regional mobility operation · real-time KPIs transport platform · operational alerts taxi driver app · driver reports ride-hailing dashboard · daily dashboard review mobility operation