The short answer
The Growth Readiness Scorecard is a thirteen-question diagnostic that measures a solo professional practice against the five phases of the Growth Infrastructure Method: scaling product design, positioning, infrastructure, launch, and ongoing operation. It scores each phase individually, finds the weakest one, and returns a personalised report naming the single gap to close first. It is designed to reject rather than qualify. Most practices that land on it hear the same thing: you are not ready yet, here is why, here is what to fix first. That honesty is the point.
Why this matters
Every growth consultancy selling to solo professionals has a scorecard, an audit, or a quiz. Almost all of them are lead-generation tricks. You answer seven questions. The quiz tells you your practice is in desperate need of help. The help happens to be the service the consultancy sells. The same result would have shown up regardless of what you answered.
The Growth Readiness Scorecard exists for the opposite reason. We take two new clients a month. When we are full we waitlist. We cannot afford to run fifty discovery calls a week with practices that are not ready for a thirty-day build. The scorecard is the filter that saves our time and the prospect's. The honest output matters more than the conversion rate.
This essay explains what the five phases are, what the thirteen questions actually measure, what a good score looks like for each, and what the report does with the signal.
The five phases and what each one measures
A Growth Engine for a solo professional is five phases shipped in order. Each phase produces a specific asset that the next phase depends on. The scorecard measures whether the inputs to each phase are ready.
Phase one measures whether the practice has a scaling product that is ready to build. Not a wish, not a slide. A defined outcome, a named audience segment, and a price point the market has already validated in conversation or in a test. A practice that cannot name the product, the audience, or the price is not at the start of a thirty-day build. It is at the start of a week of product design before the build can begin. That is fine, but the report will say so.
Phase two measures voice clarity and positioning. Can you name your ideal client in one sentence without hedging? Do you have a point of view on the core problem that is different from the three competitors next to you? Is there a body of writing, speaking, or client content we can train a voice file from? Most practices score low here. The asset exists, but it is fragmented across twenty client emails, eight podcast appearances, and a LinkedIn archive nobody has read in two years. The diagnostic checks whether the raw material is rich enough to extract from.
Phase three measures current infrastructure and what will have to come out. Most practices have a website nobody touches, a mailing list half-exported to Mailchimp, and an ad account that ran for three weeks in 2022. The scorecard asks specifically what exists, who owns it, and whether it is worth keeping. Sometimes the existing stack accelerates a build. More often it is a tax on the build because the previous tenant left data scattered.
Phase four measures launch readiness, not launch capability. The question is whether the practice has the time, the capacity to approve at speed, and the willingness to put something public that is alive before it is finished. Practices that insist the launch must be perfect score poorly here, not because perfectionism is wrong but because thirty days to live is a discipline and perfectionism is its opposite.
Phase five measures ongoing-operation fit. Is the practice set up to run the machine after it ships, or will it need a Track B managed arrangement. This is not a failure mode. It is a structural question. Some practitioners want accounts in their name and will not touch them after week eight. Fine. The scorecard flags this so the proposal matches the preference rather than arguing against it.
What the thirteen questions actually do
There are thirteen questions across the five phases. Each question targets one specific signal the build needs to calibrate. Most questions are three options, not five, because a three-way forced choice extracts a more honest answer than a five-point scale where everyone picks the middle.
Three questions cover the scaling product. Three cover positioning. Two cover infrastructure state. Two cover launch readiness. Two cover ongoing operation. One final question captures budget and timeline so the report can match tier without a separate conversation.
The scoring is weighted. The scaling product phase carries the heaviest weight because nothing downstream works without it. A practice that scores zero on the scaling product questions cannot be saved by a great website. The scoring enforces this truth rather than hiding it.
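The weighting logic above can be sketched in code. This is a minimal illustration, not the production scoring: the specific weights, the 0–2 answer scale, and the phase keys are all assumptions; the source says only that there are thirteen three-way questions across five phases and that the scaling product phase weighs most.

```python
# Hypothetical phase specification: question counts follow the essay's
# breakdown (3/3/2/2/2 plus one budget question scored separately);
# the weight values are illustrative assumptions.
PHASES = {
    "scaling_product":   {"questions": 3, "weight": 2.0},
    "positioning":       {"questions": 3, "weight": 1.5},
    "infrastructure":    {"questions": 2, "weight": 1.0},
    "launch_readiness":  {"questions": 2, "weight": 1.0},
    "ongoing_operation": {"questions": 2, "weight": 1.0},
}

def score_phases(answers: dict[str, list[int]]) -> dict[str, float]:
    """Each answer is 0, 1, or 2 (a three-way forced choice).
    Returns a weighted score per phase."""
    scores = {}
    for phase, spec in PHASES.items():
        raw = sum(answers[phase])
        max_raw = 2 * spec["questions"]
        scores[phase] = spec["weight"] * raw / max_raw
    return scores

def weakest_phase(scores: dict[str, float]) -> str:
    """Find the weakest phase on readiness alone, so a heavily
    weighted phase is not unfairly ranked above a lighter one."""
    return min(scores, key=lambda p: scores[p] / PHASES[p]["weight"])
```

The weight is applied for the overall score but divided back out when naming the weakest phase, so the report's "single gap to close first" reflects readiness rather than importance.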
What a good score looks like for each band
The report places each practice into one of four bands. Each band has a specific next step attached.
Not ready means two or more phases scored in the bottom third. The honest advice is not to hire anyone yet. The practice needs to spend two to four weeks on pre-work before a build is worth commissioning. The report names the specific pre-work. No discovery call is offered. We would rather lose you to good timing than sign you into a build that will stall at week two.
Foundation means one or two phases are weak but the practice has the raw material to catch them up during the build. The thirty-day build is in scope at Foundation tier, Track A only, with a three-month minimum. The typical Foundation practice is a solo professional in year five or six with a defined scaling product and weak positioning.
Growth Engine means all five phases are passable and the build can ship in thirty days without pre-work. Growth Engine is the sweet spot for the Imperium Growth Partners product. Most practices that fit the Ideal Client Profile score in this band. Both Track A and Track B are available.
Signature means the practice is past the point where a standard build is enough. Multiple scaling products, a complex product ladder, a mature voice, a large existing audience. These builds take a twelve-month minimum and sit on Track B only because the complexity requires operator-managed infrastructure. A small portion of practices score this high, and most of those came from Linda Paige's US network.
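The four bands above can be expressed as classification rules. Only the "two or more phases in the bottom third means not ready" rule comes directly from the text; the remaining thresholds, including using uniformly high scores as a proxy for the complexity signals that indicate Signature, are illustrative assumptions.

```python
def assign_band(phase_scores: dict[str, float]) -> str:
    """phase_scores are normalised to 0..1 per phase.
    Thresholds other than the bottom-third rule are assumed."""
    # Bottom third of the range marks a phase as weak (from the text).
    weak = [p for p, s in phase_scores.items() if s < 1 / 3]
    if len(weak) >= 2:
        return "Not Ready"        # pre-work before any build
    if weak:
        return "Foundation"       # one weak phase: catch up during the build
    # Assumed proxy: uniformly strong scores suggest a practice past
    # the standard build (multiple products, mature voice, large audience).
    if all(s >= 0.8 for s in phase_scores.values()):
        return "Signature"
    return "Growth Engine"        # all phases passable, ship in thirty days
```

In practice the Signature signal would likely come from dedicated questions about product count and audience size rather than from score thresholds alone; the sketch keeps it to one input for brevity.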
Why the scorecard rejects most practices
The reject rate is not a marketing stance. It is a design choice. Two new clients a month means roughly twenty-four new clients a year across both SA and US entities. The scorecard receives more submissions than that in a bad week. Without a filter that genuinely rejects, the discovery-call pipeline becomes a triage queue and the build quality suffers.
The rejections are honest. A practice that is ten years in but has no scaling product gets told so. A practice that cannot name its ideal client after two sessions gets told so. A practice whose all-in budget is under fifteen thousand rand or nine hundred dollars a month gets told so. The report does not soften the message, because the message needs to land for the practice to act on it.
About six in ten scorecards return a not-ready or a pre-work verdict. About three in ten return Foundation or Growth Engine. About one in ten returns Signature or an adjacent recommendation that does not match a standard tier and triggers a custom conversation. Those ratios are the honest output of a diagnostic built as a filter.
Where to take this next
The scorecard is live at imperiumgrowthpartners.com/scorecard. It takes three minutes. The report is generated immediately and delivered by email. There is no sales call unless you ask for one. There is no pressure sequence. If it says you are not ready, it says so directly, with the specific gap and the specific pre-work. If it says you are in the Growth Engine band, the next step is an intake that drafts your Statement of Work against your answers. Either way, three minutes is the most efficient thing you will do this week if you are wondering whether your practice is at the ceiling the Growth Infrastructure Method is built to break.
