You're on Zillow at 11pm. You found a house you can almost afford in a neighborhood you've never been to. You click the GreatSchools rating: 8/10. Okay, good school. You move on.
But what does 8/10 actually mean? What's it based on? And who decided that number should be sitting right there next to the listing price, quietly shaping one of the biggest decisions you'll ever make?
We started asking those questions. The answers bothered us enough to build something different. But building something different comes with an obligation: to be honest about what we can actually fix, and what we can't.
The incentive problem nobody talks about
GreatSchools is a nonprofit that partners with Zillow and other real estate platforms, where its ratings appear alongside property listings. This distribution model is powerful — but it also means the ratings serve a dual purpose: helping parents and helping sell houses.
Niche generates revenue in part from schools and colleges that pay for enhanced profiles on the platform. This is disclosed, but it's worth knowing: the schools you're evaluating may also be Niche's paying customers.
Neither platform is funded by parents. We think that matters.
SchoolScope is built to be funded by the people who use it — parents. No real estate partnerships. No school advertising. When we add paid features, they'll be paid for by the people making the decisions, not the people trying to influence them.
What the data hides — and what we surface
Here's the thing that made us build this.
California publishes detailed CAASPP test results every year. Most rating sites take those results and report a single number: the percentage of students who "met or exceeded" the state standard. Sounds reasonable. It's not.
Picture two California elementary schools. Both report that 75% of students met or exceeded the state standard in math. On most rating sites, they look identical. On SchoolScope, they look nothing alike — because one has 54% of students exceeding the standard while the other has only 11%.
School A has more than half its students clearing the bar by a wide margin. School B has most students barely clearing it. These are different schools serving students very differently, and that difference disappears when you collapse everything to a single proficiency rate.
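Here's that collapse as a minimal sketch. The four levels are CAASPP's actual reporting categories; the not-met and nearly-met numbers for these two hypothetical schools are filler we chose so each distribution sums to 100:

```python
# Two hypothetical schools with the same headline number.
# CAASPP reports four levels: Standard Not Met, Standard Nearly Met,
# Standard Met, Standard Exceeded (percent of students at each).
school_a = {"not_met": 10, "nearly_met": 15, "met": 21, "exceeded": 54}
school_b = {"not_met": 10, "nearly_met": 15, "met": 64, "exceeded": 11}

def headline_proficiency(levels: dict) -> int:
    """The single number most rating sites report: met or exceeded."""
    return levels["met"] + levels["exceeded"]

print(headline_proficiency(school_a))  # 75
print(headline_proficiency(school_b))  # 75 -- identical on most sites
print(school_a["exceeded"], school_b["exceeded"])  # 54 11 -- the hidden split
```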
We haven't found this split surfaced prominently in other rating sites' public-facing scores. Whether other tools account for it internally, we can't say — their methodologies aren't fully public. What we can say is that we surface it explicitly, because we believe the ceiling matters, not just the floor.
For the full breakdown of why this split changes everything — including what each level actually means and how to read it at your school — read our deep dive on the exceeded vs. met distinction.
Beyond the exceeded split, we also surface the grade 3-to-5 growth trajectory: whether 5th graders are outperforming 3rd graders at the same school (sketched in code below). A school where 40% of 3rd graders exceed the standard but 55% of 5th graders do is adding value. A school where scores decline from 3rd to 5th, despite high overall proficiency, may be coasting on kids who arrived already ahead.
We also show chronic absenteeism (the percentage of students missing 10% or more of school days, one of the stronger predictors of academic outcomes) and suspension rates (a window into a school's discipline philosophy). Neither appears in most public-facing rating scores.
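This is the trajectory comparison as a minimal sketch. One caveat we return to below: the data doesn't follow individual students, so this compares this year's 3rd-grade cohort with this year's 5th-grade cohort at the same school. The names are ours, not SchoolScope internals:

```python
def growth_signal(grade3_exceeded: float, grade5_exceeded: float) -> float:
    """Percentage-point gap between 5th- and 3rd-grade exceeded rates.
    Positive suggests the school is adding value; negative suggests it
    may be coasting on kids who arrived ahead."""
    return grade5_exceeded - grade3_exceeded

print(growth_signal(40.0, 55.0))  # 15.0 -> adding value (the example above)
print(growth_signal(70.0, 58.0))  # -12.0 -> high proficiency, declining trajectory
```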
We have opinions. Here they are.
The Scope Score isn't a neutral aggregation of everything California reports. It reflects specific judgments about what matters.
We believe exceeded scores deserve extra weight. A school where half the kids are exceeding the state standard is doing something different than a school where half are barely meeting it — even if the headline number looks the same. The exceeded rate gets 43% of the Scope Score for elementary schools, the highest weight of any single dimension. Our reasoning: it measures what a school does for kids who are already keeping up, and it's much harder to fake than raw proficiency.
We believe trajectory matters more than raw scores. We weight grade 3-to-5 growth at 15%. We'd weight it higher if the data let us track individual students over time. It doesn't, so we're transparent about that limit.
We believe chronic absenteeism is a culture signal, not just a statistic. A school where 20% of students miss 10% or more of school days has a problem that test scores alone won't show you. That problem might be health, might be safety, might be that kids don't want to come. Any of those possibilities matters. Chronic absenteeism gets 10% of the elementary Scope Score, inverted so that lower is better.
We believe suspension rates reveal discipline philosophy. A school with high suspension rates is making choices about how it responds to kids who struggle. Those choices matter for everyone in the building, not just the ones getting suspended. We include suspension rate at 5%, inverted. We also include ELPAC proficiency — the rate of English learners making progress toward fluency — at 5%, because language development is both an academic and a climate signal.
These are opinions. Reasonable people could weight things differently. You can read the exact weights and all our reasoning at /methodology. We'll soon let you adjust the weights yourself.
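If you want to see the shape of the arithmetic, here's a back-of-the-envelope sketch, not our production code. The five weights named above come from this post and sum to 78%; assigning the remaining 22% to the met-or-exceeded rate is a simplifying assumption we're making here (the exact allocation lives at /methodology), and so is the idea that every input arrives pre-normalized to a 0-100 scale:

```python
# Sketch of an elementary Scope Score. Weights marked "stated" come from
# this post; the met-or-exceeded weight is an ASSUMPTION chosen to make
# the weights sum to 1.0. All inputs assumed pre-normalized to 0-100.
WEIGHTS = {
    "exceeded_rate":       0.43,  # stated: highest-weighted dimension
    "met_or_exceeded":     0.22,  # ASSUMED remainder
    "growth_3_to_5":       0.15,  # stated
    "chronic_absenteeism": 0.10,  # stated, inverted (lower is better)
    "suspension_rate":     0.05,  # stated, inverted
    "elpac_progress":      0.05,  # stated
}
INVERTED = {"chronic_absenteeism", "suspension_rate"}

def scope_score(metrics: dict) -> float:
    """Weighted combination of 0-100 metrics into one 0-100 score."""
    total = 0.0
    for name, weight in WEIGHTS.items():
        value = metrics[name]
        if name in INVERTED:
            value = 100.0 - value  # flip so lower raw values score higher
        total += weight * value
    return round(total, 1)

example = {
    "exceeded_rate": 54, "met_or_exceeded": 75, "growth_3_to_5": 60,
    "chronic_absenteeism": 12, "suspension_rate": 2, "elpac_progress": 50,
}
print(scope_score(example))  # 64.9 under these made-up inputs
```

Letting you adjust the weights yourself, as promised above, amounts to swapping in a different WEIGHTS table and renormalizing.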
Three lenses, not one verdict
The Scope Score is built around three lenses:
Academic Performance — exceeded rates, met-or-exceeded rates, grade 3-to-5 growth trajectory, and (for high school) graduation and college readiness. Growth is part of academic performance in our view: a school that moves kids forward is doing something different from one that maintains a level.
School Climate — chronic absenteeism, suspension rates, and ELPAC proficiency. This measures whether the school is a place kids show up for, a place that treats them with care when they fall short, and a place that actively supports its multilingual students.
Community Profile — demographics, equity gaps across student groups, and per-pupil spending. This lens is context, not a scored input. We show it alongside the score so you can interpret what the numbers mean given who walks in the door.
On per-pupil spending specifically: California districts spend anywhere from $12,000 to over $30,000 per student in current operating costs (NCES, FY2023). That range is enormous — and it doesn't correlate with scores the way you'd expect. Some of the highest-scoring districts in LA County spend below the state average of ~$17,500 per student. A district spending $23,000 per student on a school scoring 40 is a different story from a district spending $14,000 per student on a school scoring 40. Spending data isn't a judgment — it's context.
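To show what "context, not a judgment" means in practice, here's a toy formatter for the comparison above. The ~$17,500 state average comes from this paragraph; everything else is illustrative:

```python
def spending_context(score: float, per_pupil: float, state_avg: float = 17_500) -> str:
    """Frame per-pupil spending relative to the state average.
    Displayed alongside the score as context, never folded into it."""
    delta = per_pupil - state_avg
    direction = "above" if delta >= 0 else "below"
    return (f"Scope Score {score:.0f}; spends ${per_pupil:,.0f}/student "
            f"(${abs(delta):,.0f} {direction} the ~${state_avg:,.0f} state average)")

print(spending_context(40, 23_000))  # same score, very different story...
print(spending_context(40, 14_000))  # ...when spending is this different
```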
The Scope Score combines the first two lenses into a single number because single numbers are useful for comparison. But the number should always be a doorway, not a verdict. Every school profile shows the component breakdown because we don't want the summary to hide the story underneath.
What we can't see
Here's what the data genuinely doesn't capture, and where our score will mislead you if you rely on it alone.
Teaching quality. The single most important variable in a child's school experience is whether their specific teacher is skilled, warm, and good at reaching kids like them. Test scores correlate loosely with average teaching quality over time, but they say nothing about the teacher your kid will actually have. There is no substitute for talking to parents whose kids are already at the school.
Arts, music, athletics, and extracurriculars. These don't show up in CAASPP data. A school with a strong arts program or a legendary running team offers something real that our score can't see. If these matter to your family — and for some kids they matter enormously — the Scope Score won't help you here.
IEP support and special education quality. For families with kids who have IEPs or need learning accommodations, the difference between schools is often enormous in ways that are completely invisible to test-based scoring. Special education outcomes aren't captured in CAASPP at the school level the way we'd need to use them. We don't have a good answer here. We're honest about that.
Parent and community culture. Whether the PTA is engaged, whether new families get welcomed, whether the school community feels warm or competitive — this is learnable through talking to current parents and attending a school event. It doesn't fit in a database.
Your specific kid. A school that's exceptional for a kid who needs structure might be wrong for a kid who needs freedom to explore. A school with high exceeded scores might have an intense, high-pressure culture. A school with middling scores might have a warm, unhurried classroom culture that's exactly right for your child. Only you know your kid, and only a visit will show you the fit.
How to use the data well
The Scope Score is most useful as a starting point that narrows the field, not a destination.
Look at patterns, not snapshots. A school that has been in the 75th percentile for multiple years is telling you something more reliable than a school that spiked this year after a run of average scores.
Compare schools that are comparable. A Scope Score of 65 in a high-income district where most schools score 70-80 means something different from a Scope Score of 65 in a district where most schools score 45-55. We show district and state percentile ranks for exactly this reason; use those, not just the raw number (there's a sketch of the computation just after these tips).
Use the data to generate visit questions, not eliminate visits. If the absenteeism rate is high, ask current parents why. If growth is strong but exceeded scores are low, ask what the school does for kids who are already ahead. If suspension rates are high, ask how the school handles conflict. Let the data make you a smarter visitor, not a more anxious one.
Trust your tour. When you walk into a school, you're picking up information no database holds: how the front office staff greet you, how kids move through hallways, whether classrooms look alive or exhausted. That information is real, and our score is not more reliable than it.
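Here's the percentile sketch promised in the second tip. The peer distributions are invented, and our production computation may differ in details like tie handling:

```python
def percentile_rank(score: float, peer_scores: list) -> float:
    """Percent of peer schools scoring at or below this school."""
    at_or_below = sum(1 for s in peer_scores if s <= score)
    return 100.0 * at_or_below / len(peer_scores)

# Invented district score distributions, per the tip above.
high_scoring_district = [70, 72, 74, 76, 78, 80]
lower_scoring_district = [45, 47, 50, 52, 55]

print(percentile_rank(65, high_scoring_district))   # 0.0   -- bottom of the pack
print(percentile_rank(65, lower_scoring_district))  # 100.0 -- top of the pack
```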
Why we start with one state, not fifty
National rating sites face an impossible task: compare schools across 50 states with 50 different testing systems. The only way to do that is to flatten everything into a single proficiency threshold. That works for broad comparisons but loses the detail that matters when you're choosing between two schools in your zip code.
We take the opposite approach — we show you the most useful data your state actually produces, in its native form, without distilling it into a number that's comparable to a school in Ohio but meaningless for your decision.
California's CAASPP data is unusually rich. The state reports four performance levels, not just pass/fail, broken out by grade, by subgroup, and by school. That's what lets us show you the exceeded vs. met split, the grade 3-to-5 growth trajectory, and all the detail that disappears when you compress everything into a score designed to work in every state at once.
Starting with one state isn't a limitation we're embarrassed about. It's a choice that lets us go deeper. When we expand, we'll do the same thing: show you what that state's data actually says, using the metrics most meaningful locally.
Hold us accountable
We checked something while building this that we want to share honestly: we looked at whether the exceeded-vs-met split and the growth trajectory were correlated so strongly that weighting them separately would be redundant. They're correlated (schools with high exceeded rates also tend to have stronger growth), but not perfectly. There is a meaningful number of schools in our data where growth is strong and exceeded is weak, and vice versa. That pattern is what justifies tracking them as distinct signals rather than collapsing them into one.
We also looked at whether absenteeism was meaningfully independent of test scores. It is, though not as independent as we'd hoped: schools with very high test scores tend toward lower absenteeism. But the variation is real. There are high-scoring schools with serious absenteeism problems, and lower-scoring schools with strikingly strong attendance. Those are the cases where absenteeism adds the most information.
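For the statistically curious, the shape of both checks is a plain Pearson correlation over school-level metrics. The numbers below are invented stand-ins, not our dataset:

```python
from statistics import correlation  # Pearson's r; Python 3.10+

# Invented school-level metrics (one entry per school), for illustration.
exceeded    = [54, 31, 11, 47, 22, 38]   # % exceeding standard
growth      = [15,  8, -4,  3, 12, -2]   # grade 3-to-5 gap, points
scope_like  = [82, 65, 40, 74, 55, 68]   # a test-score-heavy composite
absenteeism = [ 8, 22, 18,  9,  7, 12]   # % chronically absent

# A near-perfect |r| would argue for collapsing two signals into one;
# a moderate r with real exceptions argues for keeping them separate.
print(correlation(exceeded, growth))         # moderately positive here (~0.4)
print(correlation(scope_like, absenteeism))  # negative, but nowhere near -1
```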
We're not claiming our weights are optimal. We're claiming they're reasonable and transparent.
The biggest platforms in education data are funded by real estate companies or by schools themselves. We think parents deserve a tool funded by parents. If our score doesn't match what you know about a school from experience — if the data says one thing and parents in the neighborhood say another — we want to know. That divergence is information. It might mean our methodology is missing something.
If you're a parent trying to understand California schools, start here. If you want to understand exactly how we score schools, read our methodology. If you think we're wrong about something, tell us. We mean it.
Data source: California Department of Education (CAASPP 2024-25, Chronic Absenteeism 2024-25, Suspension Rates 2024-25). Scope Score methodology described in full at /methodology.
By SchoolScope — Published March 26, 2026
