LDA and PCA are linear dimensionality reduction algorithms. T-distributed stochastic neighbor embedding (t-SNE), by contrast, is a form of non-linear dimensionality reduction (or manifold learning). Because LDA and PCA principally aim to preserve variance, they focus on keeping dissimilar datapoints far apart in their lower-dimensional representations. t-SNE instead aims to preserve local data structure while reducing the dimensionality. t-SNE further differs from LDA and PCA in that the latter two may produce representations with more than three dimensions, so long as the result has fewer dimensions than the original data; t-SNE, being primarily a visualization technique, embeds every dataset in two or three dimensions.
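The contrast in output dimensionality can be sketched with scikit-learn; the dataset here is synthetic and the specific component counts are illustrative assumptions, not requirements:

```python
# Sketch: PCA may reduce to any dimensionality below the original,
# while t-SNE is conventionally run with n_components of 2 or 3.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

rng = np.random.RandomState(0)
X = rng.rand(100, 10)  # 100 samples with 10 original features (synthetic)

# PCA: any target below the original 10 dimensions is valid.
X_pca = PCA(n_components=5).fit_transform(X)

# t-SNE: two dimensions, suitable for plotting.
X_tsne = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)

print(X_pca.shape)   # 100 samples in 5 dimensions
print(X_tsne.shape)  # 100 samples in 2 dimensions
```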
As a non-linear transformation method, t-SNE forgoes a linear projection matrix. Instead, t-SNE uses a Gaussian kernel to compute pairwise similarities between datapoints: points that lie near one another in the original dataset receive a higher probability of being neighbors than points that lie further apart. t-SNE then maps all of the datapoints onto a two- or three-dimensional space while attempting to preserve these pairwise similarities.9
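A minimal sketch of the Gaussian-kernel similarity step is below. Note one simplifying assumption: `sigma` is fixed here, whereas t-SNE actually tunes a per-point bandwidth to match a target perplexity.

```python
# Sketch of the high-dimensional similarities t-SNE starts from:
# a Gaussian kernel over pairwise Euclidean distances, normalized
# so each row forms a probability distribution over neighbors.
import numpy as np

def gaussian_similarities(X, sigma=1.0):
    """Row-normalized Gaussian similarities (fixed sigma for illustration)."""
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    P = np.exp(-sq_dists / (2 * sigma ** 2))
    np.fill_diagonal(P, 0.0)                  # a point is not its own neighbor
    return P / P.sum(axis=1, keepdims=True)   # each row sums to 1

rng = np.random.RandomState(0)
X = rng.rand(5, 3)                            # 5 synthetic points in 3-D
P = gaussian_similarities(X)
print(P.sum(axis=1))                          # rows are probability distributions
```

Nearby points end up with larger entries in `P`, which is the structure the low-dimensional embedding then tries to reproduce.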
There are a number of additional dimensionality reduction methods, such as kernel PCA, factor analysis, random forests, and singular value decomposition (SVD), but PCA, LDA, and t-SNE are among the most widely used and discussed. Note that several packages and libraries, such as scikit-learn, come preloaded with functions for implementing these techniques.
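As a quick illustration, most of the techniques named above are available directly in scikit-learn; the Iris dataset and the choice of two components here are arbitrary assumptions for demonstration:

```python
# Sketch: the dimensionality reduction techniques discussed in this
# section, each applied via its scikit-learn implementation.
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA, KernelPCA, TruncatedSVD, FactorAnalysis
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.manifold import TSNE

X, y = load_iris(return_X_y=True)  # 150 samples, 4 features, 3 classes

X_pca = PCA(n_components=2).fit_transform(X)
X_lda = LinearDiscriminantAnalysis(n_components=2).fit_transform(X, y)  # supervised
X_kpca = KernelPCA(n_components=2, kernel="rbf").fit_transform(X)
X_svd = TruncatedSVD(n_components=2).fit_transform(X)
X_fa = FactorAnalysis(n_components=2).fit_transform(X)
X_tsne = TSNE(n_components=2, random_state=0).fit_transform(X)

for name, Z in [("PCA", X_pca), ("LDA", X_lda), ("Kernel PCA", X_kpca),
                ("SVD", X_svd), ("Factor analysis", X_fa), ("t-SNE", X_tsne)]:
    print(name, Z.shape)
```

Note that LDA is the only supervised method here: it requires the class labels `y`, and its output dimensionality is capped at one less than the number of classes.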