Weighted scoring translates strategy into numbers so portfolios can be ranked transparently. Learn how to set criteria, weights, and scales without bias, and when to combine scoring with other tools.
In a weighted scoring model, what must be true of the criteria weights before scoring projects?
They must change after scoring to fit the ranking
They should be equal unless budgets are unlimited
They are optional if a GE‑McKinsey matrix is used
They sum to 100% (or 1.0) to reflect relative importance
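The correct option above can be checked mechanically: before any project is scored, the weights should sum to 1.0 (or 100%). A minimal sketch in Python, with hypothetical criteria names and weights chosen for illustration:

```python
# Hypothetical criteria; weights express relative importance and
# must sum to 1.0 before any project is scored.
weights = {
    "strategic_fit": 0.40,
    "financial_return": 0.35,
    "technical_risk": 0.25,
}

def validate_weights(weights, tol=1e-9):
    """Raise if the weights do not sum to 1.0 within tolerance."""
    total = sum(weights.values())
    if abs(total - 1.0) > tol:
        raise ValueError(f"Weights sum to {total}, expected 1.0")
    return True

validate_weights(weights)  # passes: 0.40 + 0.35 + 0.25 == 1.0
```

Fixing the weights up front (with governance sign-off) is what prevents the first distractor above: adjusting weights after scoring to fit a preferred ranking.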
What scoring practice helps reduce bias and makes results easier to audit?
Hide raw scores and publish only final ranks
Score only on a pass/fail basis without weights
Switch scales per criterion to maximize spread
Use a consistent numeric scale (e.g., 1–10) for each criterion before applying weights
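Putting every criterion on the same numeric scale often means rescaling raw metrics (revenue, headcount, risk ratings) into a shared band before weights are applied. A sketch of min-max rescaling, with invented raw values for illustration:

```python
# Hypothetical raw values (e.g., projected revenue in $M) rescaled to a
# common 1-10 band so one set of weights applies consistently.
def to_scale(value, lo, hi, out_lo=1.0, out_hi=10.0):
    """Min-max rescale a raw value into the shared scoring scale."""
    return out_lo + (value - lo) * (out_hi - out_lo) / (hi - lo)

revenues = {"Project A": 2.0, "Project B": 12.0, "Project C": 7.0}
lo, hi = min(revenues.values()), max(revenues.values())
scores = {p: round(to_scale(v, lo, hi), 1) for p, v in revenues.items()}
# Lowest raw value maps to 1.0, highest to 10.0, others in between.
```

Keeping the raw values alongside the rescaled scores is what makes the results auditable, in contrast to the "hide raw scores" distractor above.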
Which pair defines the axes of the GE‑McKinsey nine‑box used alongside scoring models?
Employee count and asset turnover
Market share and absolute revenue
Industry attractiveness and business unit strength
Cost of capital and payback period
When should you add a qualitative gate after the numeric ranking?
Never—numbers must decide everything
Only for projects from external vendors
Only when all projects tie on total score
When strategic fit or risk is not captured well by the chosen criteria
What is a common pitfall when creating criteria for a weighted scoring model?
Double‑counting the same effect across correlated criteria
Using no more than five criteria
Documenting definitions and examples
Normalizing scores to a common scale
How are project scores typically aggregated in a weighted scoring model?
Use only qualitative MoSCoW labels
Multiply each criterion score by its weight and sum to a total
Pick the maximum single‑criterion score
Divide cost of delay by job size for each item
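The weighted-sum aggregation named in the correct option can be sketched directly; the project names, scores, and weights below are illustrative only:

```python
# Illustrative weights (summing to 1.0) and 1-10 criterion scores.
weights = {"strategic_fit": 0.40, "financial_return": 0.35, "risk": 0.25}
projects = {
    "Project A": {"strategic_fit": 8, "financial_return": 6, "risk": 7},
    "Project B": {"strategic_fit": 5, "financial_return": 9, "risk": 6},
}

def total_score(scores, weights):
    """Multiply each criterion score by its weight and sum to a total."""
    return sum(weights[c] * scores[c] for c in weights)

ranking = sorted(projects,
                 key=lambda p: total_score(projects[p], weights),
                 reverse=True)
```

Here Project A totals 7.05 against Project B's 6.65, so it ranks first despite the lower financial score, because strategic fit carries the largest weight.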
What role does scenario analysis play in portfolio scoring?
It replaces weighting entirely
It guarantees the ranking will never change
It forces all weights to be equal
It tests weight and score assumptions under different strategies or budgets
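Scenario analysis, as the correct option describes, re-scores the same projects under alternative weight sets and checks whether the ranking holds. A sketch with invented scenarios and numbers:

```python
# Same illustrative projects scored under two strategy scenarios,
# each expressed as a different weight set (each summing to 1.0).
projects = {
    "Project A": {"strategic_fit": 8, "financial_return": 5, "risk": 7},
    "Project B": {"strategic_fit": 5, "financial_return": 9, "risk": 6},
}
scenarios = {
    "growth":        {"strategic_fit": 0.5, "financial_return": 0.3, "risk": 0.2},
    "cost_pressure": {"strategic_fit": 0.2, "financial_return": 0.6, "risk": 0.2},
}

def rank(projects, weights):
    """Order projects by weighted total score, best first."""
    score = lambda p: sum(weights[c] * projects[p][c] for c in weights)
    return sorted(projects, key=score, reverse=True)

rankings = {name: rank(projects, w) for name, w in scenarios.items()}
# If the top project flips between scenarios, the ranking is sensitive
# to the weight assumptions and deserves closer scrutiny.
```

In this toy case the ranking does flip (Project A leads under "growth", Project B under "cost_pressure"), which is exactly the signal scenario analysis is meant to surface.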
Which step keeps the scoring model aligned with company strategy as it evolves?
Lock criteria for three years to avoid drift
Remove strategy‑linked criteria in downturns
Revisit criteria and weights on a cadence (e.g., quarterly) with governance approval
Let teams edit weights per project
What is the primary use of weighted scoring in R&D and innovation pipelines per 2025 best practices?
Replace market testing entirely
Set salaries for product managers
Estimate tax credits from patents
Objectively prioritize ideas and projects against strategic criteria before funding
When comparing weighted scoring to the GE‑McKinsey matrix, which is true?
Weighted scoring offers granular criteria, while GE‑McKinsey summarizes position on two axes
Both require market share on the X‑axis
GE‑McKinsey is only for single products, not SBUs
Weighted scoring cannot be used with financial metrics
Starter
You know the basics—tighten criteria definitions and be wary of double‑counting.
Solid
Good structure—now iterate weights with governance and scenario tests.
Expert!
Excellent—your model stays strategy‑true and audit‑ready through change.