Segmentation-Targeting-Positioning (STP)

K-Means vs. Hierarchical

Test how well you can distinguish popular clustering algorithms. See if you know when to use centroid‑based versus tree‑based approaches.

K‑means clustering requires the analyst to pre‑specify ______.

the distance metric

a dendrogram

the number of clusters

the linkage method

K‑means partitions data around K centroids, so K must be specified beforehand; the value of K fixes how many centroids the optimisation loop updates.
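
To make that concrete, here is a minimal scikit‑learn sketch; the random feature matrix X and the choice of three clusters are illustrative assumptions, not part of any particular dataset:

    import numpy as np
    from sklearn.cluster import KMeans

    # Illustrative data: 200 observations, 2 segmentation variables.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))

    # n_clusters (K) must be supplied up front; the algorithm cannot infer it.
    km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
    print(km.labels_[:10])      # cluster assignment for the first 10 points
    print(km.cluster_centers_)  # the 3 fitted centroids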

Hierarchical clustering produces a ______ that visualises cluster merges at each step.

dendrogram

elbow plot

heatmap

scree plot

A dendrogram shows the nested structure as observations are merged or split, revealing cluster hierarchy.
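
A minimal SciPy sketch of building and plotting a dendrogram, again assuming an arbitrary numeric matrix X:

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.cluster.hierarchy import linkage, dendrogram

    rng = np.random.default_rng(0)
    X = rng.normal(size=(30, 2))   # small sample keeps the tree readable

    Z = linkage(X, method='ward')  # linkage matrix records every merge
    dendrogram(Z)                  # height of each join = merge distance
    plt.xlabel('observation index')
    plt.ylabel('merge distance')
    plt.show()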

Unlike agglomerative clustering, divisive hierarchical methods start with ______.

a fixed partition

random seed centroids

all data in one cluster

each point as its own cluster

Divisive algorithms recursively split a single root cluster, the opposite of bottom‑up agglomeration.
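
SciPy and scikit‑learn cluster agglomeratively by default, but scikit‑learn 1.1+ ships BisectingKMeans, which illustrates the top‑down idea: it starts with every point in one cluster and recursively splits. A hedged sketch:

    import numpy as np
    from sklearn.cluster import BisectingKMeans  # requires scikit-learn >= 1.1

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))

    # Starts with all points in one cluster, then repeatedly bisects
    # clusters until n_clusters partitions remain.
    bkm = BisectingKMeans(n_clusters=4, random_state=0).fit(X)
    print(np.bincount(bkm.labels_))  # size of each resulting cluster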

The popular elbow method in K‑means examines variation explained versus ______.

iteration count

the number of clusters K

sample size

silhouette score

Plotting total within‑cluster SSE against K often shows an 'elbow' where adding clusters yields diminishing returns.
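
A minimal elbow‑plot sketch, under the same assumption of a generic numeric matrix X:

    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 2))

    ks = range(1, 9)
    sse = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
           for k in ks]            # inertia_ = total within-cluster SSE

    plt.plot(list(ks), sse, marker='o')
    plt.xlabel('number of clusters K')
    plt.ylabel('total within-cluster SSE')
    plt.show()                     # look for the bend (the "elbow")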

Ward’s method in hierarchical clustering minimises increases in ______ within clusters.

total squared error

Euclidean distance

average silhouette width

entropy

Ward linkage merges the pair of clusters whose union has the smallest increase in SSE, preferring compact groups.
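
In SciPy, Ward linkage is a keyword argument, and fcluster cuts the resulting tree into a flat segmentation; cutting at three clusters below is an arbitrary illustrative choice:

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 2))

    Z = linkage(X, method='ward')                    # merge order minimises SSE growth
    labels = fcluster(Z, t=3, criterion='maxclust')  # cut tree into 3 clusters
    print(np.bincount(labels))                       # labels run 1..3, so index 0 is empty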

K‑means optimises cluster assignments by minimising ______.

Manhattan distance

cosine similarity

within‑cluster sum of squares

between‑cluster distance

The algorithm iteratively assigns points to the nearest centroid to minimise aggregated squared Euclidean distance.
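
This objective is easy to verify by hand: recomputing the squared distances from each point to its assigned centroid reproduces scikit‑learn's inertia_ attribute. A small sketch with assumed data:

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))

    km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

    # Within-cluster sum of squares, computed manually from the fitted model.
    wcss = ((X - km.cluster_centers_[km.labels_]) ** 2).sum()
    print(wcss, km.inertia_)  # the two values agree (up to float rounding)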

K‑means struggles with clusters that are ______.

balanced

numeric

non‑spherical

small

Centroid‑based partitioning assumes convex, spherical structures; elongated or concentric shapes violate this assumption.
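
The classic demonstration uses scikit‑learn's two‑moons generator; K‑means cuts the crescents with a straight boundary instead of following their shape:

    from sklearn.datasets import make_moons
    from sklearn.cluster import KMeans
    from sklearn.metrics import adjusted_rand_score

    # Two interleaving crescents: connected but decidedly non-spherical.
    X, y_true = make_moons(n_samples=300, noise=0.05, random_state=0)

    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    print(adjusted_rand_score(y_true, labels))  # well below 1.0: the moons get split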

Standardising variables before K‑means prevents dominance by ______.

noise points

missing values

categorical fields

large‑scale features

Variables with greater numeric range can outweigh others in distance calculations; scaling evens their influence.
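
A sketch of the usual remedy: StandardScaler rescales every column to mean 0 and variance 1 before distances are computed. The income and age columns below are made‑up illustrations of a large‑scale and a small‑scale feature:

    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    # Hypothetical segmentation data: income in dollars dwarfs age in years.
    income = rng.normal(60_000, 15_000, size=(200, 1))
    age = rng.normal(40, 12, size=(200, 1))
    X = np.hstack([income, age])

    X_scaled = StandardScaler().fit_transform(X)  # each column: mean 0, variance 1
    km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_scaled)
    print(X_scaled.std(axis=0))                   # ~[1. 1.]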

Hierarchical clustering can use complete, single, or ______ linkage definitions.

k‑median

average

silhouette

random

Average linkage measures the mean distance between all observation pairs across two clusters when deciding merges.
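
All three definitions are interchangeable keywords in SciPy's linkage function; this sketch compares the height of the final merge each produces on the same data:

    import numpy as np
    from scipy.cluster.hierarchy import linkage

    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 2))

    for method in ('single', 'complete', 'average'):
        Z = linkage(X, method=method)
        # Column 2 of the linkage matrix holds the distance at each merge.
        print(method, Z[-1, 2])  # height of the final (root) merge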

The computational cost of agglomerative hierarchical clustering is roughly ______.

O(n)

O(n^3)

O(n log n)

O(k n)

In the naive implementation, updating pairwise distances across all merges produces cubic‑time growth, limiting scalability for large n.
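
A rough timing sketch on synthetic data. One caveat: SciPy's nearest‑neighbour‑chain implementations beat the naive cubic bound for several linkages, so the numbers only illustrate clearly superlinear growth, not an exact O(n^3) curve:

    import time
    import numpy as np
    from scipy.cluster.hierarchy import linkage

    rng = np.random.default_rng(0)
    for n in (500, 1000, 2000, 4000):
        X = rng.normal(size=(n, 2))
        t0 = time.perf_counter()
        linkage(X, method='average')
        print(n, round(time.perf_counter() - t0, 3), 'seconds')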

Starter

Good start. Review the clustering basics.

Solid

Strong grasp. Polish a few finer points.

Expert!

You’ve mastered K‑means and hierarchical segmentation.
