What is "Dimension Reduction"?
- Gamze Bulut
- Mar 17
- 3 min read

Today I am trying to understand the logic behind dimension reduction (DR). I wanted to read up on some simple examples and write them down, both to illustrate the idea for myself and maybe to help others learn as well.
The first example that comes to my mind is taking a photograph. In a broad, physical sense, when you take a photograph, you map a three-dimensional scene onto a two-dimensional image. This is essentially a projection from 3D to 2D, so in that sense it is a form of dimensionality reduction: you reduce reality’s three spatial dimensions down to two.
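To make the projection idea concrete, here is a toy Python sketch of a pinhole-style perspective projection. The focal length and the sample points are made up for illustration; the point is simply that each 3D point becomes a 2D point, and the depth is what gets dropped.

```python
import numpy as np

# Toy pinhole-camera projection: map 3D points (x, y, z) onto a 2D image plane.
# The focal length f and the sample points are invented for illustration.
f = 1.0
points_3d = np.array([
    [1.0, 2.0, 5.0],
    [0.5, -1.0, 3.0],
    [-2.0, 0.0, 10.0],
])

# Each 3D point becomes (f*x/z, f*y/z); the depth z is lost in the projection.
points_2d = f * points_3d[:, :2] / points_3d[:, 2:3]
print(points_2d)
```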
Other daily-life examples (AI curated) convey the same core idea: take something with many “dimensions” (or aspects) and represent it in fewer dimensions while preserving the essential information. Here are a few:
Flattening a Globe into a Map
What happens: A 3D Earth is “projected” onto a 2D map.
Why it’s dimensionality reduction: We lose one dimension going from the globe to the flat map, but we keep key relationships like the continents’ relative positions.
Caveat: Maps can distort distance or area (like Greenland appearing huge), which parallels how some DR methods distort global relationships while preserving local ones.
Summarizing a Restaurant’s Quality into a Single Rating
What happens: You might consider multiple dimensions of a restaurant’s quality—food taste, service, ambiance, cleanliness, price, etc.—but you often see a single star rating (like 4.5 out of 5).
Why it’s dimensionality reduction: Many different criteria (dimensions) get “compressed” into one number.
Caveat: You lose which specific aspect is strong or weak—just like DR methods sometimes lose nuance about individual features.
Compressing an Image to a Smaller Size
What happens: Suppose you have a photo with a resolution of 4000×3000 pixels (12 million dimensions if each pixel is one dimension!). If you resize it to 400×300, that’s only 120,000 pixels.
Why it’s dimensionality reduction: You’ve drastically reduced the number of “features” while still preserving a recognizable version of the image.
Caveat: You lose details and can’t zoom in without seeing pixelation.
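Here is a minimal sketch of that resizing step in Python, using a random array as a stand-in for a real photo; the 4000×3000 and 400×300 sizes follow the example above.

```python
import numpy as np

# Stand in for a real photo with a random grayscale array of the sizes above.
high_res = np.random.rand(3000, 4000)  # 12,000,000 pixel "features"

# Reduce resolution by averaging non-overlapping 10x10 pixel blocks.
low_res = high_res.reshape(300, 10, 400, 10).mean(axis=(1, 3))

print(high_res.size)  # 12000000 dimensions before reduction
print(low_res.size)   # 120000 dimensions after reduction
```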
Using Principal Components to Summarize Test Scores
What happens: Imagine each student has scores in 10 subjects (Math, English, History, etc.). You could run Principal Component Analysis (PCA) on those scores and reduce the 10 dimensions to just 2 or 3.
Why it’s dimensionality reduction: You capture the main “patterns” (e.g., a “STEM aptitude” axis vs. a “Humanities aptitude” axis), letting you plot students in 2D.
Caveat: You lose the specific breakdown per subject, but you gain a simpler, bird’s-eye view of performance.
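Below is a minimal sketch of that idea using scikit-learn’s PCA on made-up scores for 100 students; the data and the choice of 2 components are just for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA

# Made-up data: 100 students, each with scores in 10 subjects (0-100 scale).
rng = np.random.default_rng(0)
scores = rng.uniform(40, 100, size=(100, 10))

# Reduce the 10 subject dimensions to 2 principal components.
pca = PCA(n_components=2)
coords_2d = pca.fit_transform(scores)

print(coords_2d.shape)                # (100, 2): each student as a point in 2D
print(pca.explained_variance_ratio_)  # how much variance each component keeps
```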
Buying a Laptop: Distilling Many Specs into One Decision
What happens: When shopping for a laptop, you look at many specs—CPU speed, RAM, screen size, weight, battery life, price.
Why it’s dimensionality reduction: Often, a reviewer or your own personal assessment might roll all of those into a single overall rating or “best value pick.”
Caveat: You lose the nuance of each spec’s details, but gain an easy decision metric.
Just like the restaurant rating or the laptop buying example, we compress multiple measures into fewer measures or a single score. The key is doing so in a way that preserves the most important relationships—whether that’s the overall “quality,” the essential structure of an image, or the local clustering of data points in a complex dataset.
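As a tiny sketch of that compression, here is a weighted average that collapses five restaurant criteria into one overall rating; the criteria, scores, and weights are invented for illustration.

```python
import numpy as np

# Five made-up quality dimensions for one restaurant, and made-up weights
# for how much each criterion matters (weights sum to 1).
criteria = ["taste", "service", "ambiance", "cleanliness", "price"]
scores = np.array([4.8, 4.2, 3.9, 4.5, 3.5])
weights = np.array([0.35, 0.25, 0.15, 0.15, 0.10])

# Five dimensions collapse into a single overall rating: a 5D -> 1D reduction.
overall = float(scores @ weights)
print(round(overall, 2))  # 4.34: one number summarizing five criteria
```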
Any time you reduce the number of descriptors while trying to retain essential information, you’re performing a kind of dimensionality reduction.
I hope this was helpful! Next, I will write about popular dimension reduction techniques.