By Hritika Mishra
Abstract
Digital labour platforms increasingly govern gig work through rating algorithms, shaping workers’ income, working hours, and continued access to jobs. This article examines how platforms such as Uber, Swiggy, and Zomato use customer ratings and performance metrics as tools for algorithmic management, thereby exerting employer-like control without the corresponding responsibilities. It demonstrates that ratings operate not merely as feedback but as disciplinary mechanisms facilitating opaque deactivations, prejudicial evaluations, and conditional access to incentives and welfare benefits. The article further discusses emerging regulatory responses in India while foregrounding the urgent need for transparency, due process, and accountability in algorithmic decision-making.
Introduction
Selecting “4 stars” instead of “5” after receiving a delivery feels like a harmless opinion. For gig workers on food-delivery platforms, however, that rating behaves less like feedback and more like a tool that quietly decides their income, working hours, and even whether they can keep working at all.
Quick-service platforms often describe themselves as digital intermediaries that connect consumers with service providers through mobile applications and algorithmic systems. Platforms such as Uber, Swiggy, and Zomato allocate tasks, set prices or pay structures, monitor performance, and enforce company standards mainly through automated systems rather than human supervisors. Although these platforms classify workers as ‘independent contractors’, they exercise significant control over them through rating systems, incentive structures, and algorithmic decision-making, which together dictate workers’ access to work, earnings, and continued participation.
Even though these workers do not have the legal status of regular employees, the apps function much as a boss traditionally does: assigning work, monitoring performance, and rewarding or penalizing behaviour. The International Labour Organization describes this as ‘algorithmic management,’ a system which uses “tracked data and other information to organize, assign, monitor, supervise and evaluate work.”
How Ratings Decide Who Gets Work
Platforms themselves acknowledge that consumer ratings are not just informational or for feedback. Uber, for example, states that drivers can face permanent deactivation if their ratings fall below the city-specific minimum. Ratings expose workers to constant evaluation while granting platforms the unilateral power to exclude them from work. This functions very similarly to a termination, even though these platforms do not classify the workers as ‘employees’.
In practice, this means that even workers who do everything “right” can be disproportionately affected by a few customers’ biases or misunderstandings, because ratings are not a neutral measurement. For an Uber driver, a ride can be influenced by factors beyond the driver’s control, such as traffic or app map errors, which may degrade the customer’s experience and prompt a negative review. While Uber states that it may exclude ratings that are negative, biased, or beyond a driver’s control, the key argument still stands: the platforms set these rules unilaterally, determining both the minimum threshold and the appeal process. The end result of this process may be deactivation, which can cut off a gig worker’s entire income stream without any of the safeguards that a termination of employment usually requires.
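The threshold mechanism described above can be reduced to a very small rule. The sketch below is purely illustrative: the numbers, the function names, and the rating window are invented for this article, not drawn from any platform’s actual system. What the sketch makes visible is how little the rule itself contains: there is no appeal step, no human review, and no explanation owed to the worker.

```python
# Hypothetical sketch of a city-specific rating-threshold rule.
# All values and names here are invented for illustration only.

def average_rating(ratings):
    """Mean of a worker's recent star ratings (1-5)."""
    return sum(ratings) / len(ratings)

def deactivation_decision(ratings, city_minimum):
    """Return 'deactivate' if the average falls below the city's
    minimum threshold, else 'active'. Note what is absent: no
    appeal, no human review, no explanation to the worker."""
    if average_rating(ratings) < city_minimum:
        return "deactivate"
    return "active"

# A worker whose recent rides include two unlucky low ratings
# (say, from traffic delays the driver could not control):
recent = [5, 5, 4, 3, 5, 2, 5]
print(deactivation_decision(recent, city_minimum=4.3))
```

Here the average works out to roughly 4.14, so two bad rides out of seven are enough to cross the (hypothetical) 4.3 cutoff, despite a record that is otherwise all fours and fives.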
Not All Work Is Equal: How Ratings Sort Workers
While bad reviews do not always lead to deactivation, they do affect how much workers earn by shaping their access to incentives, preferred time slots, higher-tip areas, platform “tiers” with benefits, and so on. A clear example is Swiggy’s tiered ranking system, in which workers are classified as gold, silver, or bronze. Tiers are updated weekly based on performance, and higher-ranking workers reportedly receive advantages such as early access to book the following week’s shifts, enabling better earnings.
Even if the platforms classify these workers as independent, they continue to maintain significant control over working hours, which directly impacts the earnings of the workers. If your tier determines whether you can book the best slots first, your “flexibility” becomes conditional on keeping the algorithm happy.
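The tier mechanism can likewise be sketched in a few lines. Everything in the snippet below is hypothetical: the cutoff scores, the benefit table, and the idea of a single numeric “weekly score” are assumptions made for illustration, since the real criteria are not public. The point is structural: once a tier gates both slot booking and (as discussed in the next section) insurance coverage, the opaque score quietly governs far more than feedback.

```python
# Hypothetical sketch of a weekly gold/silver/bronze tier assignment.
# Cutoffs, scores, and benefits are invented for illustration; the
# actual criteria used by any platform are not publicly documented.

def assign_tier(weekly_score):
    """Map an opaque internal performance score (0-100) to a tier."""
    if weekly_score >= 85:
        return "gold"
    if weekly_score >= 60:
        return "silver"
    return "bronze"

BENEFITS = {
    # Illustrative only: higher tiers unlock earlier slot booking
    # and, per the reports discussed in this article, broader cover.
    "gold":   {"early_slot_booking": True,  "insurance": "broad"},
    "silver": {"early_slot_booking": False, "insurance": "reduced"},
    "bronze": {"early_slot_booking": False, "insurance": "reduced"},
}

tier = assign_tier(72)
print(tier, BENEFITS[tier])
```

In this sketch a score of 72 lands in “silver”: the worker keeps working but loses early booking access, which is precisely how “flexibility” becomes conditional on keeping the score up.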
How Ratings Become Coercive
Often, these ratings are tied not only to earnings but also to social protection. In Swiggy’s case, health insurance access has also been tied to the tiered rankings. Gold-rated workers often get broader coverage, while silver and bronze workers get reduced coverage. When access to welfare is tied to algorithmic review of performance, workers are at an even higher risk because the score can now also determine basic security.
Further, these ratings rarely operate alone. They are often tied into a larger system in which platforms constantly track workers, measure acceptance and cancellation patterns, determine “efficiency,” and compute an internal score. Research on India’s food-delivery sector describes how platforms use algorithmic management to maintain employer-like control while avoiding employer responsibilities.
Workers often feel pressured by this algorithmic management. If you do not accept enough orders, or if you deliver “late” because of a delay at the restaurant or other factors beyond your control, your future access to work can suffer.
Who Holds the Algorithm Accountable?
Presently, accountability for algorithmic management systems is fragmented and underdeveloped. Oversight rests primarily on internal platform policies, which give companies broad discretion over how ratings are calculated, how performance thresholds are set, and when workers are deactivated. While platforms such as Uber, Swiggy, and Zomato offer in-app grievance systems and customer support, workers commonly report that these channels lack transparency, relying on automated replies with little room for real human review. Suspensions or deactivations are often justified by vague appeals to “quality standards” or “policy violation,” with no explanation given to the worker, leaving them without any real remedy.
Beyond the platforms, collective action and unionization have emerged as crucial, if limited, forms of accountability. Groups like the Indian Federation of App-Based Transport Workers have raised alarms time and again over unfair deactivations, arbitrary rating cutoffs, and a lack of due process. Through protests, petitions, and dialogue with policymakers, these groups push for recognition of gig workers’ exposure to opaque algorithmic control. Yet, without strong statutory backing, their leverage depends on political pressure rather than on enforceable rights.
More recently, state governments have begun to intervene through specific legislation. Laws such as the Rajasthan Platform-Based Gig Workers Act and the Karnataka Platform-Based Gig Workers Act mark the beginning of treating gig workers as a separate category of labour requiring welfare protection. Significantly, Karnataka has gone beyond welfare registration and imposed transparency obligations on automated monitoring and decision-making. These laws do not prohibit rating-based deactivations, but they represent an early attempt to bring algorithmic governance under public regulation rather than leaving it to corporate discretion.
Even so, these interventions are limited. Indian law still guarantees no right to explanation, no mandatory human review of automated decisions, and no obligation to independently audit rating systems for bias or arbitrariness. The consequence is that platform algorithms remain private governance tools shaping livelihoods across the economy, with limited external oversight. Whether the state-level initiatives mature into robust, enforceable accountability structures will determine whether algorithmic management in India remains an untouchable exercise of private power or comes under meaningful democratic oversight.
Conclusion
Ratings began as a way to build trust between strangers. However, in the gig economy, they have evolved into a mechanism that governs workers with very little accountability. So, when we ask, “who is accountable,” we’re really asking: who has the power to set the rules of work? Right now, the answer is largely the platform, because the platform unilaterally determines the metrics, the thresholds, the incentives, the explanations, etc.
As India begins to adopt state-level protections and algorithmic transparency requirements, the next phase will determine whether these systems remain privately managed or become publicly accountable infrastructure for the millions of workers whose jobs now depend on an invisible score. Given that algorithmic management directly affects livelihoods, there is a need for a fair system that guarantees a right to explanation, a right to contest automated decisions, clear evidentiary standards, and independent audits, so that arbitrariness and bias are prevented regardless of who is ultimately held accountable.
About the Author
Hritika Mishra is a second-year student in the five-year integrated law programme at Jindal Global Law School. Her research interests lie in law, policy, and social justice, with a particular focus on gender rights, child and youth justice, and the role of legal systems in promoting equitable outcomes.

