- An approach to non-linear dimensionality reduction (DR) tasks.
- Basic idea: in many cases, data dimensionality is *artificially* high, which adds to visualization difficulty.
- The simplest method is to take a *random projection* of the data. This may aid visualization, but you're likely losing interesting structures within the data.
- **Manifold learning** is an extension of *linear* DR techniques (e.g. PCA) to *nonlinear* data.

In [1]:

```
from time import time
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import offsetbox
from sklearn import (manifold, datasets, decomposition, ensemble,
                     discriminant_analysis, random_projection, neighbors)
```

In [2]:

```
digits = datasets.load_digits(n_class=6)
X, y = digits.data, digits.target
n_samples, n_features = X.shape
n_neighbors = 30
```

In [3]:

```
def plot_embedding(X, title=None):
    x_min, x_max = np.min(X, 0), np.max(X, 0)
    X = (X - x_min) / (x_max - x_min)

    plt.figure()
    ax = plt.subplot(111)
    for i in range(X.shape[0]):
        plt.text(X[i, 0], X[i, 1], str(y[i]),
                 color=plt.cm.Set1(y[i] / 10.),
                 fontdict={'weight': 'bold', 'size': 9})

    if hasattr(offsetbox, 'AnnotationBbox'):
        # only print thumbnails with matplotlib > 1.0
        shown_images = np.array([[1., 1.]])  # just something big
        for i in range(X.shape[0]):
            dist = np.sum((X[i] - shown_images) ** 2, 1)
            if np.min(dist) < 4e-3:
                continue  # don't show points that are too close
            shown_images = np.r_[shown_images, [X[i]]]
            imagebox = offsetbox.AnnotationBbox(
                offsetbox.OffsetImage(digits.images[i],
                                      cmap=plt.cm.gray_r),
                X[i])
            ax.add_artist(imagebox)
    plt.xticks([]), plt.yticks([])
    if title is not None:
        plt.title(title)
```

In [4]:

```
# Plot images of the digits
n_img_per_row = 20
img = np.zeros((10 * n_img_per_row, 10 * n_img_per_row))
for i in range(n_img_per_row):
    ix = 10 * i + 1
    for j in range(n_img_per_row):
        iy = 10 * j + 1
        img[ix:ix + 8, iy:iy + 8] = X[i * n_img_per_row + j].reshape((8, 8))

plt.imshow(img, cmap=plt.cm.binary)
plt.xticks([]); plt.yticks([]); plt.title('From the 64-D digits dataset')
```

Out[4]:

Text(0.5, 1.0, 'From the 64-D digits dataset')

In [10]:

```
t0 = time()
rp = random_projection.SparseRandomProjection(n_components=2, random_state=42)
X_projected = rp.fit_transform(X)
plot_embedding(X_projected, "Sparse Random Projection (time %.3fs)" % (time()-t0))
```

In [8]:

```
t0 = time()
X_pca = decomposition.TruncatedSVD(n_components=2).fit_transform(X)
plot_embedding(X_pca,
               "Principal Components (Truncated SVD) (time %.3fs)" %
               (time() - t0))
```

In [11]:

```
X2 = X.copy(); X2.flat[::X.shape[1] + 1] += 0.01 # Make X invertible
t0 = time()
X_lda = discriminant_analysis.LinearDiscriminantAnalysis(n_components=2).fit_transform(X2, y)
plot_embedding(X_lda, "LDA (time %.3fs)" % (time() - t0))
```

- Can be viewed as an extension of MDS (Multi-Dimensional Scaling) or Kernel PCA.
- Isomap finds a lower-D embedding which *maintains geodesic distances between all points*.
- Three algorithm steps:
  - **Nearest neighbor search**, using a BallTree.
  - **Shortest-path graph search**: `path_method` selects either *Dijkstra's* or *Floyd-Warshall* algorithm. If not specified, the code chooses a method based on the dataset.
  - **Partial eigenvalue decomposition**: the embedding is stored in the eigenvectors corresponding to the $d$ largest eigenvalues of the $N \times N$ isomap kernel. `eigen_solver` controls the solver method and, if not specified, picks the best method based on the data. The computational cost can be reduced by using `ARPACK`.
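As a minimal sketch (assuming scikit-learn's `Isomap` API), the `path_method` and `eigen_solver` options mentioned above can be set explicitly instead of being left on `'auto'`; the fitted estimator also exposes the geodesic distance matrix:

```python
from sklearn import datasets, manifold

X, _ = datasets.load_digits(n_class=6, return_X_y=True)

# Force Dijkstra's algorithm ('D') and the ARPACK eigensolver
iso = manifold.Isomap(n_neighbors=30, n_components=2,
                      path_method='D', eigen_solver='arpack')
X_iso = iso.fit_transform(X)

print(X_iso.shape)             # one 2-D coordinate per sample
print(iso.dist_matrix_.shape)  # the N x N geodesic distance matrix
```

`path_method='FW'` would select Floyd-Warshall instead; `'auto'` leaves the choice to the estimator.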

In [12]:

```
t0 = time()
X_iso = manifold.Isomap(n_neighbors=n_neighbors, n_components=2).fit_transform(X)
plot_embedding(X_iso, "Isomap (time %.3fs)" % (time() - t0))
```

- Finds a lower-D projection of the data while *preserving distances within local neighborhoods*.
- Algorithm description:
  - **Nearest neighbor search**: same as above.
  - **Local neighborhood weight matrix construction**: $k \times k$, for each of $N$ local neighborhoods.
  - **Partial eigenvalue decomposition**: same as above.

In [13]:

```
clf = manifold.LocallyLinearEmbedding(n_neighbors=n_neighbors, n_components=2,
                                      method='standard')
t0 = time()
X_lle = clf.fit_transform(X)
plot_embedding(X_lle, "LLE (time %.3fs)" % (time() - t0))
print("Reconstruction error: %g" % clf.reconstruction_error_)
```

Reconstruction error: 2.11987e-06

- LLE's known problem is regularization: when #neighbors > #dimensions, the matrix defining each local neighborhood is rank-deficient.
- Standard LLE addresses this with a regularization parameter $r$, chosen relative to the *trace of the local weight matrix*. The solution *should* converge to the desired embedding as $r \to 0$, but there is no guarantee that it will, which can result in distortions.
- MLLE uses *multiple weight vectors* in each neighborhood to overcome this weakness. `method="modified"` enables this feature. It requires `n_neighbors` > `n_components`.
- Algorithm description:
  - **Nearest neighbor search**: same as above.
  - **Local neighborhood weight matrix construction**: similar to above, except the weight matrix is built from multiple weight vectors.
  - **Partial eigenvalue decomposition**: same as above.
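The regularization parameter discussed above is exposed as `reg` in scikit-learn's `LocallyLinearEmbedding` (default `1e-3`); a small sketch of how varying it affects the reconstruction error:

```python
from sklearn import datasets, manifold

X, _ = datasets.load_digits(n_class=6, return_X_y=True)

# Larger reg values improve numerical stability of the rank-deficient
# local Gram matrices, at the cost of some embedding fidelity
for reg in (1e-3, 1e-1):
    lle = manifold.LocallyLinearEmbedding(n_neighbors=30, n_components=2,
                                          method='standard', reg=reg)
    X_lle = lle.fit_transform(X)
    print(reg, lle.reconstruction_error_)
```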

In [14]:

```
clf = manifold.LocallyLinearEmbedding(
    n_neighbors=n_neighbors, n_components=2, method='modified')
t0 = time()
X_mlle = clf.fit_transform(X)
plot_embedding(X_mlle, "LLE (modified) (time %.2fs)" % (time() - t0))
print("Reconstruction error: %g" % clf.reconstruction_error_)
```

Reconstruction error: 0.360724

- Another method of solving the LLE regularization problem.
- Uses a *Hessian-based quadratic form* at each neighborhood to recover locally linear structure. `method="hessian"` enables this feature. It requires `n_neighbors` > `n_components * (n_components + 3) / 2`.
- Algorithm description:
  - **Nearest neighbor search**: same as above.
  - **Local neighborhood weight matrix construction**: similar to standard LLE, plus a *QR decomposition* of a local Hessian estimator.
  - **Partial eigenvalue decomposition**: same as above.

In [15]:

```
clf = manifold.LocallyLinearEmbedding(
    n_neighbors=n_neighbors, n_components=2, method='hessian')
t0 = time()
X_hlle = clf.fit_transform(X)
plot_embedding(X_hlle, "LLE (Hessian) (time %.3fs)" % (time() - t0))
print("Reconstruction error: %g" % clf.reconstruction_error_)
```

Reconstruction error: 0.212673

- reference (2004)
- Algorithmically similar to LLE, but technically not a variant of it.
- LTSA describes each neighborhood's local geometry via its tangent space, then aligns these local tangent spaces to learn the low-D embedding.
- Algorithm description:
  - **Nearest neighbor search**: same as above.
  - **Weight matrix construction**: (TBD)
  - **Partial eigenvalue decomposition**: same as above.

In [22]:

```
clf = manifold.LocallyLinearEmbedding(n_neighbors=n_neighbors, n_components=2, method='ltsa')
t0 = time()
X_ltsa = clf.fit_transform(X)
plot_embedding(X_ltsa, "LTSA (time %.3fs)" % (time() - t0))
print("Reconstruction error: %g" % clf.reconstruction_error_)
```

Reconstruction error: 0.212677

- wikipedia
- A technique for analyzing data similarity, modeling similarities as distances in a geometric space. The data can be ratings, interaction frequencies, or other similarity indices.
- $S$ is the similarity matrix; $X$ holds the coordinates of the $n$ input points. The *disparities* $\hat{d}_{ij}$ are transforms of the similarities, chosen in an optimal manner.
- The objective (aka "**stress**") is defined as $\sum_{i < j} \left( d_{ij}(X) - \hat{d}_{ij}(X) \right)^2$.
- Two variants:
  - **metric**: distances between pairs of output points are set as close as possible to the corresponding similarity values.
  - **non-metric**: the algorithm tries to preserve the *order* of the distances, i.e. it looks for a *monotonic relation* between the embedded-space distances and the disparities.
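A minimal sketch of the non-metric variant, which in scikit-learn's `MDS` is selected with `metric=False` (the metric variant is the default):

```python
from sklearn import datasets, manifold

X, _ = datasets.load_digits(n_class=6, return_X_y=True)

# metric=False switches to non-metric MDS: only the rank order of the
# dissimilarities is preserved, not their magnitudes
nmds = manifold.MDS(n_components=2, metric=False, n_init=1, max_iter=100)
X_nmds = nmds.fit_transform(X)
print(X_nmds.shape)
```

Because non-metric MDS only constrains the ordering, its embedding is determined up to any monotonic rescaling of the distances.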

In [17]:

```
clf = manifold.MDS(n_components=2, n_init=1, max_iter=100)
t0 = time()
X_mds = clf.fit_transform(X)
plot_embedding(X_mds, "MDS (time %.2fs)" % (time() - t0))
print("Stress: %f" % clf.stress_)
```

Stress: 140185087.993700

In [23]:

```
# metric vs non-metric MDS - reconstructed points on noisy data
# (plots slightly shifted to avoid complete overlap)
import numpy as np
from matplotlib import pyplot as plt
from matplotlib.collections import LineCollection
from sklearn import manifold
from sklearn.metrics import euclidean_distances
from sklearn.decomposition import PCA
```