Test how predictive CLV models turn historical behavior into forward value estimates. From leakage control to discounting, see what separates a demo from production.
In predictive CLV, a common target is discounted cumulative margin over a fixed horizon; the discount factor is usually based on ______.
gross revenue growth rate
a per‑period cost of capital or hurdle rate
impression share
ad platform ROAS
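A minimal worked sketch of that target: monthly margins are discounted with a per‑period rate derived from an annual cost of capital. The 12% hurdle rate, 12‑month horizon, and margin stream below are illustrative assumptions, not values from the text.

```python
# Sketch: discounted cumulative margin over a fixed horizon.
# The 12% annual hurdle rate and 12-month horizon are illustrative assumptions.
annual_hurdle = 0.12
monthly_rate = (1 + annual_hurdle) ** (1 / 12) - 1  # per-period cost of capital

# Hypothetical per-month margin for one customer (zeros are inactive months).
monthly_margin = [40.0, 35.0, 0.0, 50.0, 45.0, 0.0,
                  60.0, 0.0, 55.0, 40.0, 0.0, 30.0]

# Each month's margin is discounted back to the prediction date, then summed.
discounted_clv = sum(
    m / (1 + monthly_rate) ** t
    for t, m in enumerate(monthly_margin, start=1)
)
print(f"12-month discounted margin: {discounted_clv:.2f}")
```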
To avoid leakage when training a CLV model, features should be derived from data available ______.
after the first renewal
at the end of the horizon
whenever data pipelines finish
no later than the model’s prediction timestamp
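A sketch of enforcing that cutoff in pandas: only events observed at or before each customer's prediction timestamp feed the features. The events and cutoffs tables and their column names are hypothetical.

```python
import pandas as pd

# Hypothetical raw event log and one prediction timestamp per customer.
events = pd.DataFrame({
    "customer_id": [1, 1, 2, 2],
    "event_ts": pd.to_datetime(["2024-01-05", "2024-07-01", "2024-02-10", "2024-09-15"]),
    "margin": [20.0, 35.0, 15.0, 50.0],
})
cutoffs = pd.DataFrame({
    "customer_id": [1, 2],
    "prediction_ts": pd.to_datetime(["2024-06-30", "2024-06-30"]),
})

# Keep only events no later than the prediction timestamp,
# then aggregate the surviving history into leakage-safe features.
history = events.merge(cutoffs, on="customer_id")
history = history[history["event_ts"] <= history["prediction_ts"]]
features = history.groupby("customer_id")["margin"].agg(["count", "sum"])
print(features)
```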
For a gradient‑boosted CLV model, which validation scheme best reflects deployment?
out‑of‑time folds that respect customer start dates
shuffle split with replacement
stratified by product only
random k‑fold across transactions
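A sketch of an out‑of‑time split, assuming each customer row carries a cohort or prediction date; the split date and toy labels are illustrative.

```python
import pandas as pd

# Hypothetical scoring table: one row per customer with a cohort/prediction date.
df = pd.DataFrame({
    "customer_id": range(6),
    "prediction_date": pd.to_datetime(
        ["2023-01-01", "2023-03-01", "2023-06-01",
         "2023-09-01", "2023-11-01", "2024-01-01"]
    ),
    "clv_label": [120.0, 80.0, 200.0, 60.0, 150.0, 90.0],
})

# Train on earlier cohorts, validate on later ones: this mirrors deployment,
# where the model scores customers from periods it has never seen.
cutoff = pd.Timestamp("2023-08-01")  # illustrative split date
train = df[df["prediction_date"] < cutoff]
valid = df[df["prediction_date"] >= cutoff]
print(len(train), "train rows,", len(valid), "validation rows")
```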
Which loss is most aligned with dollar‑denominated CLV error?
MAE on discounted margin (an L1 loss)
AUC
cosine similarity
logloss on churn
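A sketch of keeping both the training objective and the reported error in dollar units via an L1 loss; scikit‑learn's GradientBoostingRegressor with loss="absolute_error" is one way to do this, and the synthetic data is purely illustrative.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))                                  # stand-in customer features
y = np.maximum(0, 80 + 40 * X[:, 0] + rng.normal(scale=25, size=500))  # discounted margin ($)

# L1 objective keeps the training signal in the same units as the dollar error we report.
model = GradientBoostingRegressor(loss="absolute_error", random_state=0)
model.fit(X[:400], y[:400])
print("Holdout MAE ($):", round(mean_absolute_error(y[400:], model.predict(X[400:])), 2))
```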
Monotonic constraints in gradient boosting are useful when ______.
class labels are imbalanced
we need faster SQL extraction
domain knowledge demands a one‑direction effect (e.g., higher tenure → higher CLV)
we must remove multicollinearity
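A sketch of encoding a one‑direction effect with scikit‑learn's monotonic_cst; the synthetic data and the "higher tenure never lowers predicted CLV" rule are assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import HistGradientBoostingRegressor

rng = np.random.default_rng(1)
tenure = rng.uniform(0, 60, size=1000)              # months as a customer
other = rng.normal(size=1000)                       # unconstrained feature
X = np.column_stack([tenure, other])
y = 5.0 * tenure + rng.normal(scale=30, size=1000)  # CLV rises with tenure

# monotonic_cst: +1 forces a non-decreasing effect of tenure on the prediction,
# encoding the domain rule that higher tenure should never lower predicted CLV.
model = HistGradientBoostingRegressor(monotonic_cst=[1, 0], random_state=0)
model.fit(X, y)
print(model.predict([[6, 0.0], [48, 0.0]]))  # the later-tenure prediction cannot be lower
```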
Calibrating a churn model before converting to CLV helps because ______.
well‑calibrated survival probabilities improve expected value aggregation
it increases tree depth automatically
it removes the need for discounting
it guarantees lower RMSE
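A sketch of calibrating a churn classifier before expected‑value aggregation, using scikit‑learn's CalibratedClassifierCV; the synthetic data and the flat per‑period margin are illustrative assumptions.

```python
import numpy as np
from sklearn.calibration import CalibratedClassifierCV
from sklearn.ensemble import HistGradientBoostingClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(2000, 4))
churn = (rng.random(2000) < 1 / (1 + np.exp(-X[:, 0]))).astype(int)

# Isotonic calibration pushes predicted churn probabilities toward observed
# frequencies; only then does the expected-value product (survival x margin)
# aggregate sensibly into a CLV estimate.
clf = CalibratedClassifierCV(HistGradientBoostingClassifier(random_state=0),
                             method="isotonic", cv=3)
clf.fit(X, churn)

p_churn = clf.predict_proba(X[:5])[:, 1]
margin_if_retained = 120.0                      # illustrative per-period margin
expected_value = (1 - p_churn) * margin_if_retained
print(expected_value.round(2))
```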
A practical way to blend purchase frequency and value in CLV features is ______.
dropping value to avoid skew
one‑hot encoding every product image
recency‑frequency‑monetary embeddings or aggregates at customer level
using only last click channel
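A sketch of customer‑level recency‑frequency‑monetary aggregates in pandas; the transaction table, column names, and as‑of date are assumptions.

```python
import pandas as pd

# Hypothetical transaction log; column names are assumptions.
tx = pd.DataFrame({
    "customer_id": [1, 1, 1, 2, 2],
    "order_ts": pd.to_datetime(["2024-01-10", "2024-03-02", "2024-05-20",
                                "2024-02-14", "2024-04-01"]),
    "margin": [30.0, 45.0, 25.0, 80.0, 60.0],
})
as_of = pd.Timestamp("2024-06-30")  # feature / prediction timestamp

# One row per customer blending how recently, how often, and how much they buy.
rfm = tx.groupby("customer_id").agg(
    recency_days=("order_ts", lambda s: (as_of - s.max()).days),
    frequency=("order_ts", "count"),
    monetary=("margin", "sum"),
)
print(rfm)
```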
For sparse high‑cardinality categories (e.g., product IDs), gradient boosting commonly uses ______.
full one‑hot of every item
dropping the feature entirely
raw IDs as integers
target encoding with out‑of‑fold leakage control
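A sketch of out‑of‑fold target encoding for a high‑cardinality product_id column; the synthetic data and fold count are illustrative.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import KFold

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "product_id": rng.choice([f"p{i}" for i in range(50)], size=1000),
    "clv": rng.gamma(shape=2.0, scale=60.0, size=1000),
})

# Out-of-fold target encoding: each row's encoding is computed only from the
# other folds, so a product's own label never leaks into its feature value.
global_mean = df["clv"].mean()
df["product_te"] = np.nan
for train_idx, val_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(df):
    fold_means = df.iloc[train_idx].groupby("product_id")["clv"].mean()
    df.loc[df.index[val_idx], "product_te"] = (
        df.iloc[val_idx]["product_id"].map(fold_means).fillna(global_mean).to_numpy()
    )
print(df[["product_id", "product_te"]].head())
```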
When deploying, prediction freshness matters because ______.
the optimizer needs GPU RAM
paid search CPCs always fall over time
CSV exports get larger
CLV estimates drift as behavior changes; stale features degrade targeting yield
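A sketch of a freshness guard run before scoring; the seven‑day cadence and the snapshot timestamp are assumptions, not prescribed values.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical guard: if the feature snapshot is too old, refresh it
# instead of targeting on stale CLV estimates.
MAX_FEATURE_AGE = timedelta(days=7)                               # illustrative cadence
feature_snapshot_ts = datetime(2024, 6, 20, tzinfo=timezone.utc)  # assumed pipeline metadata

age = datetime.now(timezone.utc) - feature_snapshot_ts
if age > MAX_FEATURE_AGE:
    print(f"Features are {age.days} days old; refresh before scoring.")
else:
    print("Feature snapshot is fresh enough to score.")
```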
Which statement about horizons is correct?
Horizon choice doesn’t affect features
Open‑ended horizons eliminate the need for discounting
Using a fixed horizon (e.g., 12 months) eases validation and aligns to budgeting cycles
Shorter horizons always raise ROI
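A sketch of building a fixed 12‑month label window per customer in pandas; table and column names are assumptions, and the window simply sums margin between the prediction timestamp and the horizon end.

```python
import pandas as pd

# Hypothetical event log and per-customer prediction timestamps.
events = pd.DataFrame({
    "customer_id": [1, 1, 1, 2],
    "event_ts": pd.to_datetime(["2023-02-01", "2023-08-15", "2024-03-01", "2023-05-10"]),
    "margin": [40.0, 55.0, 70.0, 30.0],
})
cutoffs = pd.DataFrame({
    "customer_id": [1, 2],
    "prediction_ts": pd.to_datetime(["2023-01-01", "2023-01-01"]),
})

# Label = margin earned in (prediction_ts, prediction_ts + 12 months];
# a fixed window keeps labels comparable across customers and lines up with annual budgets.
labeled = events.merge(cutoffs, on="customer_id")
horizon_end = labeled["prediction_ts"] + pd.DateOffset(months=12)
in_window = (labeled["event_ts"] > labeled["prediction_ts"]) & (labeled["event_ts"] <= horizon_end)
labels = labeled[in_window].groupby("customer_id")["margin"].sum()
print(labels)
```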
Starter
Revisit leakage checks, label definitions, and discounting basics.
Solid
Nice momentum; tune your validation scheme and monotonic constraints next.
Expert!
Stellar grasp of CLV modeling tradeoffs and production controls.