How to Get More App Store Reviews (Without Breaking the Rules)
An app with a 4.7-star rating and 2,000 reviews will outperform an identical app with a 4.7-star rating and 50 reviews in almost every measurable way: higher search ranking, better conversion rate, more trust from users who have never heard of either product. Reviews are the closest thing the App Store has to a compounding asset. Each new review makes the next download slightly easier to earn. If you are still building your overall ASO strategy from the ground up, reviews should be a central pillar -- not an afterthought.
Yet most indie developers treat review acquisition as an afterthought. They ship their app, maybe add a generic "Rate us!" alert on day one, and wonder why their review count barely moves. The problem is not that users refuse to leave reviews. The problem is that developers ask at the wrong time, in the wrong way, or not at all.
This guide covers the mechanics of review acquisition on both iOS and Android: what the platform rules actually prohibit, how to time your prompts for maximum impact, how to handle the negative reviews that inevitably arrive, and how to build a sustainable review velocity that compounds over months rather than spiking once and flattening.
Why Reviews Matter: The Triple Impact
Reviews affect your app in three distinct ways, and understanding all three explains why they deserve strategic attention rather than a single line of code bolted on before launch.
Ranking signal. Both Apple and Google use review count and average rating as inputs to their search ranking algorithms. The exact weighting is not public, but the pattern is clear from observation: apps with more reviews and higher ratings consistently rank higher for competitive keywords than apps with fewer reviews and lower ratings, all else being equal. A 2023 study by Phiture found that apps in the top 10 search results for competitive keywords had an average of 8x more reviews than apps ranked 11-50.
Conversion multiplier. Your star rating appears on search result cards before users even tap into your listing. Users scanning a list of search results make split-second decisions based on three things: icon, title, and star rating. A 3.8-star app sitting next to a 4.6-star app loses that decision almost every time. Research from Apptentive found that the difference between a 3-star and a 4-star rating can increase conversion by up to 89%. Between 4 stars and 5 stars, the effect is smaller but still meaningful -- roughly 15-20% depending on the category.
Product intelligence. Reviews are unfiltered user feedback written in the user's own language. Negative reviews surface bugs your crash reporting missed. Positive reviews reveal which features users actually value (often different from what you expected). And the specific words users choose when describing your app are keyword research gold -- they tell you how real people talk about the problem your app solves.
These three effects compound. More reviews improve ranking. Better ranking increases impressions. Higher conversion from a strong rating turns more impressions into downloads. More downloads produce more reviews. The flywheel is real, but it only spins if you actively generate review velocity.
The Rules: What Apple and Google Actually Prohibit
Before implementing any review strategy, you need to understand the boundaries. Both platforms have explicit policies, and the consequences of violation range from review removal to account termination. The rules are not identical, so strategies must be platform-aware.
What is banned on both platforms:
- Incentivized reviews: offering in-app currency, premium features, discounts, or anything else in exchange for a review. "Rate us 5 stars and unlock a free theme" will get your app removed.
- Purchased reviews: buying fake reviews from third-party services. Both Apple and Google have detection systems, and the consequences extend beyond review removal to developer account bans.
- Review gating: showing a "Do you like our app?" pre-screen, then only directing users who say "yes" to the actual review prompt while routing unhappy users to a feedback form. Apple banned this explicitly in 2017, and Google followed. The rationale is that it manipulates the rating by filtering out negative sentiment before it reaches the store.
- Custom review prompts that mimic the system dialog: on iOS, you must use Apple's SKStoreReviewController API. Building your own review dialog that looks like the system prompt is a guideline violation.
What is allowed:
- Using the official platform APIs to trigger review prompts at appropriate moments.
- Asking users to leave a review through in-app messaging, as long as the message does not gate based on sentiment and does not offer incentives.
- Responding to reviews publicly through App Store Connect and Google Play Console.
- Linking to your App Store page from your website or emails with a "Leave a review" call to action.
The key principle is that you can ask users to review your app, but you cannot filter who gets asked based on their likely sentiment, and you cannot offer anything in return.
iOS: SKStoreReviewController and the Three-Prompt Limit
Apple provides one official method for prompting reviews: SKStoreReviewController.requestReview() (or the SwiftUI equivalent requestReview from the environment). This API displays Apple's native review prompt within your app. You call the API; Apple decides whether to actually show the prompt based on its own frequency logic.
The hard limit: Apple will show the prompt a maximum of three times per app, per user, per 365-day period. Your code can call requestReview() more than three times, but Apple silently ignores calls beyond the limit. This makes each prompt precious. You have three shots per year per user to generate a review. Wasting one on a bad moment -- during onboarding, after a crash, or when the user is clearly frustrated -- is a strategic error you cannot undo for that user until next year.
There is a subtlety here that many developers miss: Apple does not guarantee the prompt will appear every time you call the API, even if you are under the three-call limit. Apple applies its own heuristics about user context and timing. This means you should place your requestReview() calls at the three best moments in your user journey, knowing that Apple may suppress some of them. More on timing in the next section.
One important behavior: in development builds, the prompt appears every time you call the API; in TestFlight builds, it never appears; in production, it follows the frequency rules. Validate your timing logic against production behavior, not just debug builds.
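For reference, here is a minimal sketch of the call itself, assuming a UIKit app targeting iOS 14 or later (a SwiftUI app can use the requestReview environment value mentioned above instead):

```swift
import StoreKit
import UIKit

/// Ask StoreKit for the rating prompt. Apple decides whether it actually
/// appears, so treat this as a request, not a guarantee.
func requestReviewIfAppropriate() {
    // Find the foreground-active window scene (UIKit lifecycle assumed).
    let scene = UIApplication.shared.connectedScenes
        .compactMap { $0 as? UIWindowScene }
        .first { $0.activationState == .foregroundActive }

    if let scene {
        SKStoreReviewController.requestReview(in: scene)  // iOS 14+
    }
}
```

The function name requestReviewIfAppropriate is a placeholder, not an Apple API; the important part is that nothing about the call guarantees a visible prompt.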
Android: The In-App Review API
Google's In-App Review API serves a similar purpose but works differently in practice. The review flow appears as a bottom sheet overlay within your app rather than as a system dialog, and users can complete their review without leaving the app. This lower friction generally produces higher completion rates per prompt shown.
The critical difference: Google does not publicly document its frequency limit. Apple tells you "three per year." Google says effectively "we will show it when we think it is appropriate." Your code requests a review flow, and Google's system decides whether to show it based on undisclosed quotas and frequency logic. This means you cannot plan around a specific number of annual prompts on Android.
In practice, Google's system is more generous than Apple's for engaged users -- developers report the prompt appearing more than three times per year for active users. But the unpredictability means your timing strategy matters even more. Place your review triggers at high-satisfaction moments, and let Google's system manage the frequency.
Another Android-specific consideration: the ReviewManager API returns a ReviewInfo object that is only valid for a limited time. You cannot request it far in advance and hold onto it for later; request it only when you are ready to launch the review flow, as a tight request-then-launch sequence. Plan your UX around this constraint.
When to Ask: The Timing Formula
Timing is the single most controllable variable in your review strategy, and it has the largest impact on both the quantity and quality of reviews you receive. The goal is to catch users at their most satisfied -- the moment when they are most likely to leave a review and most likely to leave a positive one.
The best approach is a multi-condition trigger that requires several criteria to be true simultaneously before firing the review prompt:
Condition 1: Minimum session count. The user has opened your app at least N times. This ensures they have enough experience to form a genuine opinion. For a daily-use app like a habit tracker, N might be 5. For a weekly-use app like an expense report tool, N might be 3.
Condition 2: Minimum days since install. At least M days have passed since the user installed the app. This prevents prompting users who are still in the novelty phase and have not yet determined whether the app provides lasting value. A reasonable default is 7 days, adjusted based on your app's usage cadence.
Condition 3: Positive action trigger. The prompt fires immediately after the user completes a core action successfully. Not when they open the app. Not when they browse. After they accomplish something. For a fitness app, after completing a workout. For a note-taking app, after saving a note. For a photo editor, after exporting an edited image. The user should be feeling accomplishment at the exact moment the prompt appears.
All three conditions must be true simultaneously. This triple gate ensures you are prompting an engaged user (session count), who has had time to evaluate the app (days since install), at a moment of satisfaction (positive action). Implementing this requires tracking session count and install date locally, then checking both values before calling the review API at a positive action moment.
In Swift, the gate might look like the following (the 5-session and 7-day thresholds are the example values from above; the same check applies on Android):

```swift
// Returns true only when every condition of the triple gate holds.
func shouldRequestReview(sessionCount: Int,
                         daysSinceInstall: Int,
                         justCompletedCoreAction: Bool,
                         promptedRecently: Bool) -> Bool {
    return sessionCount >= 5 &&
           daysSinceInstall >= 7 &&
           justCompletedCoreAction &&
           !promptedRecently
}
```

When it returns true, call the platform review API.
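Tracking those inputs locally is straightforward. Here is one possible sketch using UserDefaults on iOS; the ReviewTracker type, the key names, and the 120-day cooldown are illustrative choices, not platform requirements:

```swift
import Foundation

// Illustrative local tracking for the review gate (names and thresholds are examples).
enum ReviewTracker {
    private static let sessionKey = "review.sessionCount"
    private static let installKey = "review.firstLaunchDate"
    private static let promptKey  = "review.lastPromptDate"

    /// Call once per app launch.
    static func recordSession() {
        let defaults = UserDefaults.standard
        defaults.set(defaults.integer(forKey: sessionKey) + 1, forKey: sessionKey)
        if defaults.object(forKey: installKey) == nil {
            defaults.set(Date(), forKey: installKey)
        }
    }

    /// Call whenever you actually trigger the review API.
    static func recordPrompt() {
        UserDefaults.standard.set(Date(), forKey: promptKey)
    }

    static var sessionCount: Int {
        UserDefaults.standard.integer(forKey: sessionKey)
    }

    static var daysSinceInstall: Int {
        days(since: installKey)
    }

    /// "Recently" here means within the last 120 days (roughly one prompt per
    /// third of the year, matching Apple's three-per-365-days budget).
    static var promptedRecently: Bool {
        UserDefaults.standard.object(forKey: promptKey) != nil && days(since: promptKey) < 120
    }

    private static func days(since key: String) -> Int {
        guard let date = UserDefaults.standard.object(forKey: key) as? Date else { return 0 }
        return Calendar.current.dateComponents([.day], from: date, to: Date()).day ?? 0
    }
}
```

At the positive-action moment, feed ReviewTracker.sessionCount, ReviewTracker.daysSinceInstall, and ReviewTracker.promptedRecently into the gate above, and call ReviewTracker.recordPrompt() whenever you actually invoke the review API.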
Moments to Avoid
Never trigger a review prompt in these situations:
- During onboarding or the first session. The user has not experienced value yet, and early prompts feel like spam.
- After a crash or error recovery. The user is frustrated. A review prompt at this moment is tone-deaf.
- During a paywall or upgrade prompt. Mixing monetization with review solicitation creates resentment.
- Immediately after a push notification opens the app. The user arrived with a specific intent; a review prompt is an interruption.
- After the user failed at something. Failed logins, validation errors, or task failures put users in a negative mental state.
The principle is simple: the user's emotional state at the moment of the prompt determines the review's tone. Engineer that moment to be positive.
Handling Negative Reviews
Negative reviews are inevitable. Even apps with 4.8-star averages receive 1-star reviews regularly. How you respond to them affects both your reputation with potential users and your relationship with the reviewer.
Respond to every negative review. Your responses are public, visible to anyone browsing your reviews. A listing where negative reviews go unanswered looks abandoned. A listing where the developer responds promptly and constructively to every complaint signals that someone cares about the product and its users.
Structure your responses consistently:
- Acknowledge the specific issue the user raised. Do not use generic copy-paste responses.
- Apologize for the experience without being defensive.
- Explain what you are doing about it (if it is a bug, say you are investigating; if it is a missing feature, say it is on your roadmap -- but only if it actually is).
- Invite them to contact you directly through your support email for further help.
When a negative review identifies a genuine bug that you subsequently fix, go back and update your response: "This issue was fixed in version 2.3.1. Please update and let us know if you experience any further issues." This creates a visible resolution narrative. Potential users who read the review see not just the complaint but the fix. Some reviewers will update their rating after seeing that you addressed their issue.
What not to do in review responses:
- Do not argue with the reviewer or suggest the problem is their fault.
- Do not be dismissive ("Works fine for everyone else").
- Do not use your response as a marketing pitch.
- Do not copy-paste identical responses to different reviews. Users notice, and it signals that you are going through the motions rather than actually reading feedback.
Rating Velocity: Why Steady Beats Spiky
Rating velocity -- the rate at which new reviews arrive -- is a distinct signal from total review count. Both Apple and Google weight recent reviews more heavily than historical ones in their ranking algorithms. An app receiving 30 new reviews per week outranks an app with 50,000 total reviews but only 2 new reviews per week, all else being equal.
This has practical implications for your strategy. A burst of reviews from a Product Hunt launch or a press mention is valuable, but it is temporary. Within weeks, your velocity drops back to baseline, and the ranking boost fades. Sustainable review velocity -- a steady flow of reviews week after week -- provides a consistent ranking advantage.
Apple's App Store compounds this effect through its summary rating reset option. When you push an update, App Store Connect lets you reset your rating so that only ratings from the new version onward count. That can be a useful tool after major fixes, but it also means your visible rating count drops to near zero unless you have consistent review velocity to rebuild it. If your last burst of reviews was three months ago and you reset alongside an update, your listing may show "Not Enough Ratings" until new ratings accumulate.
Design your review strategy for steady-state performance, not spikes. The timing formula described above naturally produces consistent velocity because it triggers for every user who meets the criteria, not just users who arrive during a marketing push.
Recovering from a Low Rating
If your app currently sits below 4.0 stars, you have a conversion problem that no amount of keyword optimization can fix. A low rating compounds other ASO mistakes that silently kill your downloads, making every other optimization less effective. Users filter by rating, either consciously or instinctively. Below 4.0, your listing bleeds potential downloads.
Recovery requires two parallel efforts: fixing the issues that caused the low rating, and generating enough new positive reviews to shift the average.
Step 1: Diagnose. Read every negative review from the past six months. Group complaints by theme: crashes, missing features, confusing UX, pricing objections, performance issues. Rank these themes by frequency. The top two or three themes are your fix priorities.
Step 2: Fix. Ship an update that addresses the most common complaints. Do not try to fix everything at once. Target the issues that generated the most negative reviews. In your What's New text, explicitly mention the fixes so that returning users know the problems have been addressed.
Step 3: Solicit new reviews. With the fixes shipped, your review prompt timing formula begins collecting reviews from users who are having a better experience. The new reviews coming in should be predominantly positive because the issues that were generating negatives have been resolved.
The math of recovery. If you have 500 reviews averaging 3.5 stars, reaching a 4.0 average requires approximately 250 new 5-star reviews (or around 500 new reviews averaging 4.5 stars). Reaching 4.5 from 3.5 with 500 existing reviews requires roughly 1,000 new reviews at 5 stars. This is not fast. Depending on your download volume, recovery can take months. But the trajectory matters as much as the destination -- users can see that your recent reviews are positive even while your overall average is still climbing.
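Those figures fall out of simple weighted-average arithmetic. A quick sketch (the function and its parameter names are just for illustration):

```swift
// New reviews at `newAverage` stars needed to move `existingCount` reviews
// from `currentAverage` up to `targetAverage` (assumes newAverage > targetAverage).
func reviewsNeeded(existingCount: Double, currentAverage: Double,
                   targetAverage: Double, newAverage: Double) -> Double {
    existingCount * (targetAverage - currentAverage) / (newAverage - targetAverage)
}

print(reviewsNeeded(existingCount: 500, currentAverage: 3.5, targetAverage: 4.0, newAverage: 5.0)) // 250.0
print(reviewsNeeded(existingCount: 500, currentAverage: 3.5, targetAverage: 4.5, newAverage: 5.0)) // 1000.0
```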
On iOS, you can use the rating reset strategically. When you ship the update that fixes the major issues, resetting your summary rating in App Store Connect means the listing shows only ratings from users on the fixed version. If the fixes are effective, the post-reset rating should be noticeably higher than the average you were showing before, signaling improvement to potential users. The trade-off is the temporary "Not Enough Ratings" state described earlier, so reset only when your review velocity can refill the listing quickly.
Review Monitoring and Response Workflow
At scale, manual review monitoring becomes unsustainable. Even at modest volumes -- 10-20 reviews per week across two platforms -- the overhead of reading, categorizing, and responding to each review adds up.
App Store Connect and Google Play Console both provide native review management interfaces, but they lack alerting, trend analysis, and cross-platform aggregation. If your app is on both iOS and Android, you are checking two separate dashboards daily.
Set up alerts for:
- All 1-star and 2-star reviews (respond within 24 hours).
- Reviews containing words like "crash," "bug," "broken," "freeze," or "scam" (these indicate urgent issues).
- Any review longer than 200 characters (detailed reviews, positive or negative, often contain the most actionable feedback).
Third-party tools like AppFollow, Appfigures, or App Radar provide email or Slack alerts for new reviews matching your criteria. The investment is modest and the time savings are significant, especially as your review volume grows.
Build a weekly habit: every Monday, review the past week's new reviews. Respond to any you missed. Identify recurring themes. Update your bug tracker with issues surfaced in reviews. This 30-minute weekly routine keeps you responsive and informed without consuming your development time.
Using StoreLit's Rating Breakdown for Competitive Intelligence
Knowing your own review metrics is necessary but not sufficient. You also need to know how you compare to your competitors. An app with 500 reviews and a 4.3 rating might feel adequate until you realize the top five competitors in your category all have 5,000+ reviews and 4.7+ ratings.
StoreLit's ASO Audit includes a rating breakdown analysis that goes beyond your star average. It examines your rating distribution across 1-5 stars, compares your review velocity to your direct competitors, and identifies whether your rating is a competitive advantage or a liability relative to your category. This context transforms a raw number into an actionable insight: not just "your rating is 4.3" but "your rating is 0.4 points below your category's top 5 average, and your review velocity is 3x slower than the category median."
That kind of competitive context is what turns a review strategy from guesswork into a targeted effort with measurable goals.
The Review Strategy Checklist
If you are starting from scratch or overhauling a neglected review strategy, here is the concrete sequence:
- Implement the platform review API using the timing formula: session count + days since install + positive action trigger. Ideally, this should be part of your pre-launch ASO checklist rather than something you add months later.
- Remove any existing review gating. If you have a "Do you like our app?" pre-screen, remove it immediately. It violates the guidelines on both platforms.
- Respond to your last 20 negative reviews. Even old responses signal to browsing users that you are an active, responsive developer.
- Set up review alerts for 1-2 star reviews and reviews mentioning critical keywords.
- Build a weekly review cadence. Every Monday, review, respond, categorize, and extract insights from the past week's reviews.
- Track your velocity. Monitor reviews per week over time, not just your cumulative total. Velocity is the leading indicator; total count is the lagging one.
- Iterate your timing. After 4-6 weeks, evaluate which trigger moments produce the most reviews and the highest average rating. Adjust your thresholds and trigger points based on real data.
Reviews are not a feature you ship once. They are a system you maintain -- and the developers who maintain it systematically are the ones whose ratings climb while their competitors stagnate.
