Monday, 29 June 2015

What are the advantages of different classification algorithms?

For instance, if we have a large training data set with more than 10,000 instances and more than 100,000 features, which classifier would be the best choice?

Answer:                                                                                                                                                      
There are a number of dimensions you can look at to give you a sense of what will be a reasonable algorithm to start with, namely:
  1. Number of training examples
  2. Dimensionality of the feature space
  3. Do I expect the problem to be linearly separable?
  4. Are features independent?
  5. What does the data look like when plotted?
  6. Are features expected to be linearly related to the target variable?
  7. Is overfitting expected to be a problem?
  8. What are the system's requirements in terms of speed/performance/memory usage?
  9. In which domain is the problem being solved, and what accuracy is expected of the model?
  10. Does it require variables to be normally distributed?
  11. Does it suffer from multicollinearity?
  12. Does it handle categorical variables as well as continuous variables?
  13. Does it provide confidence intervals (CI) without cross-validation (CV)?
  14. Does it perform variable selection without stepwise procedures?
  15. Does it apply to sparse data?

This list can go on with further constraints on the approach. The best way to decide is to start with a simple, small model and move towards more complicated approaches only as needed.


Logistic Regression

As a general rule of thumb, I would recommend starting with Logistic Regression. Logistic regression is a pretty well-behaved classification algorithm that can be trained as long as you expect your features to be roughly linear and the problem to be linearly separable. You can do some feature engineering to turn most non-linear features into linear ones pretty easily. It is also pretty robust to noise, and you can avoid overfitting, and even do feature selection, by using l2 or l1 regularization. Logistic regression can also be used in Big Data scenarios since it is pretty efficient and can be distributed using, for example, ADMM. A final advantage of LR is that the output can be interpreted as a probability. This is something that comes as a nice side effect since you can use it, for example, for ranking instead of classification. Even in a case where you would not expect Logistic Regression to work perfectly, do yourself a favor and run a simple l2-regularized LR to come up with a baseline before you go into using "fancier" approaches.
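
To make the baseline advice concrete, here is a minimal sketch of such an l2-regularized LR using scikit-learn; the synthetic dataset and parameter choices below are illustrative, not tuned for any real problem:

    # A minimal l2-regularized LR baseline (a sketch; sizes and
    # parameters are placeholders).
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=10_000, n_features=200,
                               n_informative=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

    clf = LogisticRegression(penalty="l2", C=1.0, solver="liblinear")
    clf.fit(X_tr, y_tr)

    # The probabilistic output is what makes LR usable for ranking too.
    probs = clf.predict_proba(X_te)[:, 1]
    print("baseline AUC:", roc_auc_score(y_te, probs))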

Ok, so now that you have set your baseline with Logistic Regression, what should be your next step? I would basically recommend two possible directions: (1) SVMs, or (2) Tree Ensembles. If I knew nothing about your problem, I would definitely go for (2), but I will start by describing why SVMs might be worth considering.

Support Vector Machines

Support Vector Machines (SVMs) use a different loss function (hinge) from LR. They are also interpreted differently (maximum margin). However, in practice, an SVM with a linear kernel is not very different from a Logistic Regression. The main reason you would want to use an SVM instead of a Logistic Regression is that your problem might not be linearly separable. In that case, you will have to use an SVM with a non-linear kernel (e.g., RBF). The truth is that a Logistic Regression can also be used with a different kernel, but at that point you might be better off going for SVMs for practical reasons.
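
As a quick illustration of the separability point, here is a small sketch on scikit-learn's concentric-circles toy data, where a linear model struggles and an RBF-kernel SVM does well (sizes and parameters are illustrative):

    # Concentric circles: not linearly separable, so the linear model is
    # near chance while the RBF-kernel SVM fits the structure.
    from sklearn.datasets import make_circles
    from sklearn.linear_model import LogisticRegression
    from sklearn.svm import SVC

    X, y = make_circles(n_samples=1_000, noise=0.1, factor=0.4, random_state=0)
    print("linear LR accuracy:", LogisticRegression().fit(X, y).score(X, y))
    print("RBF SVM accuracy:  ", SVC(kernel="rbf", gamma="scale").fit(X, y).score(X, y))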

Another related reason to use SVMs is if you are in a very high-dimensional space. For example, SVMs have been reported to work better for text classification.

Unfortunately, the major downside of SVMs is that they can be painfully inefficient to train. So, I would not recommend them for any problem where you have many training examples. I would actually go even further and say that I would not recommend SVMs for most "industry scale" applications. Anything beyond a toy/lab problem might be better approached with a different algorithm.

Tree Ensembles

This brings me to the third family of algorithms: Tree Ensembles. This basically covers two distinct algorithms: Random Forests and Gradient Boosted Trees. I will talk about the differences later, but for now let me treat them as one for the purpose of comparing them to Logistic Regression.

Tree Ensembles have different advantages over LR. One main advantage is that they do not expect linear features or even features that interact linearly. Something I did not mention about LR is that it can hardly handle categorical features. Tree Ensembles, because they are nothing more than a bunch of Decision Trees combined, handle this very well. The other main advantage is that, because of how they are constructed (using bagging or boosting), these algorithms handle high-dimensional spaces as well as large numbers of training examples very well.

As for the difference between Random Forests (RF) and Gradient Boosted Decision Trees (GBDT), I won't go into many details, but one easy way to understand it is that GBDTs will usually perform better, but they are harder to get right. More concretely, GBDTs have more hyper-parameters to tune and are also more prone to overfitting. RFs can almost work "out of the box" and that is one reason why they are very popular.
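
As a rough sketch of that trade-off, here is how the two might be compared in scikit-learn; the GBDT hyper-parameters shown (number of trees, learning rate, depth) are exactly the knobs that need tuning, and the values here are just placeholders:

    # RF works largely out of the box; the GBDT's extra knobs are
    # illustrative starting points, not tuned values.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=5_000, n_features=40,
                               n_informative=15, random_state=0)
    rf = RandomForestClassifier(n_estimators=200, random_state=0)
    gbdt = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05,
                                      max_depth=3, random_state=0)
    print("RF   CV accuracy:", cross_val_score(rf, X, y, cv=5).mean())
    print("GBDT CV accuracy:", cross_val_score(gbdt, X, y, cv=5).mean())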

Deep Learning

Last but not least, this answer would not be complete without at least a minor reference to Deep Learning. I would definitely not recommend this approach as a general-purpose technique for classification. But you have probably heard how well these methods perform in some cases such as image classification. If you have gone through the previous steps and still feel you can squeeze something out of your problem, you might want to use a Deep Learning approach. The truth is that if you use an open source implementation such as Theano, you can get an idea of how some of these approaches perform on your dataset pretty quickly.

Summary

So, recapping, start with something simple like Logistic Regression to set a baseline and only make it more complicated if you need to. At that point, tree ensembles, and in particular Random Forests since they are easy to tune, might be the right way to go. If you feel there is still room for improvement, try GBDT or get even fancier and go for Deep Learning.

You can also take a look at the Kaggle Competitions. If you search for the keyword "classification" and select those that are completed, you will get a good sense of what people used to win competitions that might be similar to your problem at hand. At that point you will probably realize that using an ensemble is always likely to make things better. The only problem with ensembles, of course, is that they require maintaining all the independent methods working in parallel. That might be your final step to get as fancy as it gets.
  
Questions to Answer Before Applying Machine Learning

How large is your training set?

If your training set is small, high-bias/low-variance classifiers (e.g., Naive Bayes) have an advantage over low-bias/high-variance classifiers (e.g., kNN or logistic regression), since the latter will overfit. But low-bias/high-variance classifiers start to win out as your training set grows (they have lower asymptotic error), since high-bias classifiers aren't powerful enough to provide accurate models.
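
A small sketch of this bias/variance effect on a purely synthetic dataset (sizes are illustrative): Naive Bayes tends to be relatively stronger at small n, while logistic regression catches up and overtakes as n grows.

    # Accuracy on a fixed test set as the training set grows.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB

    X, y = make_classification(n_samples=20_000, n_features=50,
                               n_informative=10, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

    for n in (50, 500, 5_000):
        nb = GaussianNB().fit(X_tr[:n], y_tr[:n])
        lr = LogisticRegression(max_iter=1_000).fit(X_tr[:n], y_tr[:n])
        print(f"n={n}: NB={nb.score(X_te, y_te):.3f}  LR={lr.score(X_te, y_te):.3f}")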

Advantages of some particular algorithms

Advantages of Naive Bayes: Super simple, you're just doing a bunch of counts. If the NB conditional independence assumption actually holds, a Naive Bayes classifier will converge quicker than discriminative models like logistic regression, so you need less training data. And even if the NB assumption doesn't hold, a NB classifier still often performs surprisingly well in practice. A good bet if you want to do some kind of semi-supervised learning, or want something embarrassingly simple that performs pretty well.
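
To illustrate the "bunch of counts" view, here is a toy multinomial Naive Bayes spam sketch; the texts and labels are made up purely for illustration:

    # Bag-of-words counts in, class-conditional counts out.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    texts = ["cheap pills buy now", "meeting at noon",
             "buy cheap now", "lunch meeting today"]
    labels = [1, 0, 1, 0]  # 1 = spam, 0 = ham (made-up toy labels)

    vec = CountVectorizer()
    clf = MultinomialNB().fit(vec.fit_transform(texts), labels)
    print(clf.predict(vec.transform(["buy pills cheap"])))  # -> [1]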

Advantages of Logistic Regression: Lots of ways to regularize your model, and you don't have to worry as much about your features being correlated, like you do in Naive Bayes. You also have a nice probabilistic interpretation, unlike decision trees or SVMs, and you can easily update your model to take in new data (using an online gradient descent method), again unlike decision trees or SVMs. Use it if you want a probabilistic framework (e.g., to easily adjust classification thresholds, to say when you're unsure, or to get confidence intervals) or if you expect to receive more training data in the future that you want to be able to quickly incorporate into your model.
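
As a sketch of the online-update point, scikit-learn's SGDClassifier with logistic loss supports incremental partial_fit updates; the data here is random noise, purely to show the mechanics:

    # Logistic loss trained by SGD; partial_fit folds in new batches
    # later without retraining from scratch.
    import numpy as np
    from sklearn.linear_model import SGDClassifier

    rng = np.random.default_rng(0)
    clf = SGDClassifier(loss="log_loss", random_state=0)  # "log" in older sklearn

    # First batch: partial_fit needs the full set of classes up front.
    X0, y0 = rng.normal(size=(1_000, 20)), rng.integers(0, 2, 1_000)
    clf.partial_fit(X0, y0, classes=np.array([0, 1]))

    # Later: update the same model with fresh data as it arrives.
    X1, y1 = rng.normal(size=(200, 20)), rng.integers(0, 2, 200)
    clf.partial_fit(X1, y1)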

Advantages of Decision Trees: Easy to interpret and explain (for some people -- I'm not sure I fall into this camp). Non-parametric, so you don't have to worry about outliers or whether the data is linearly separable (e.g., decision trees easily take care of cases where you have class A at the low end of some feature x, class B in the mid-range of feature x, and A again at the high end). Their main disadvantage is that they easily overfit, but that's where ensemble methods like random forests (or boosted trees) come in. Plus, random forests are often the winner for lots of problems in classification (usually slightly ahead of SVMs, I believe), they're fast and scalable, and you don't have to worry about tuning a bunch of parameters like you do with SVMs, so they seem to be quite popular these days.

Advantages of SVMs: High accuracy, nice theoretical guarantees regarding overfitting, and with an appropriate kernel they can work well even if your data isn't linearly separable in the base feature space. Especially popular in text classification problems where very high-dimensional spaces are the norm. Memory-intensive and kind of annoying to run and tune, though, so I think random forests are starting to steal the crown.

To go back to the particular question of logistic regression vs. decision trees (which I'll assume to be a question of logistic regression vs. random forests) and summarize a bit: both are fast and scalable, random forests tend to beat out logistic regression in terms of accuracy, but logistic regression can be updated online and gives you useful probabilities. And since you're at Square (not quite sure what an inference scientist is, other than the embodiment of fun) and possibly working on fraud detection: having probabilities associated with each classification might be useful if you want to quickly adjust thresholds to change false positive/false negative rates, and regardless of the algorithm you choose, if your classes are heavily imbalanced (as often happens with fraud), you should probably resample the classes or adjust your error metrics to make the classes more equal.
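
A small sketch of that threshold-adjustment idea on a synthetic imbalanced dataset (the class weights and thresholds below are illustrative):

    # Lowering the threshold on the predicted probability catches more
    # positives at the cost of more false alarms.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import confusion_matrix
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=10_000, weights=[0.95, 0.05], random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    probs = LogisticRegression(max_iter=1_000).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
    for threshold in (0.5, 0.2, 0.05):
        preds = (probs >= threshold).astype(int)
        print(f"threshold={threshold}:\n{confusion_matrix(y_te, preds)}")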

The Crux

Recall, though, that better data often beats better algorithms, and designing good features goes a long way. And if you have a huge dataset, your choice of classification algorithm might not really matter so much in terms of classification performance (so choose your algorithm based on speed or ease of use instead).
 
And if you really care about accuracy, you should definitely try a bunch of different classifiers and select the best one by cross-validation. Or, to take a lesson from the Netflix Prize and Middle Earth, just use an ensemble method to choose them all!
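
A minimal sketch of that select-by-cross-validation recipe; the candidate models and dataset are illustrative:

    # Score each candidate with 5-fold cross-validation, keep the best.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB

    X, y = make_classification(n_samples=3_000, n_features=30, random_state=0)
    candidates = {
        "logistic regression": LogisticRegression(max_iter=1_000),
        "naive bayes": GaussianNB(),
        "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    }
    scores = {name: cross_val_score(clf, X, y, cv=5).mean()
              for name, clf in candidates.items()}
    print(scores, "-> best:", max(scores, key=scores.get))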
  

Random Forests:
1. Almost always have lower classification error and better F-scores than decision trees.
2. Almost always perform as well as or better than SVMs, but are far easier for humans to understand.
3. Deal really well with uneven data sets that have missing variables.
4. Give you a really good idea, for free, of which features in your data set are the most important (see the sketch after this list).
5. Generally train faster than SVMs (though this obviously depends on your implementation).
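
A quick sketch of point 4, the "free" feature importances (synthetic data, illustrative sizes):

    # Impurity-based importances come for free after fitting the forest.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    X, y = make_classification(n_samples=2_000, n_features=20,
                               n_informative=5, random_state=0)
    rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
    top = np.argsort(rf.feature_importances_)[::-1][:5]
    print("top features:", top, rf.feature_importances_[top])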


Logistic regression:
No distribution requirement.
Performs well with categorical variables that have few categories.
Computes class probabilities via the logistic function.
Easy to interpret.
Provides confidence intervals, but suffers from multicollinearity.

Decision Trees:
Decision trees and rule-based algorithms are good because you can understand the model.
No distribution requirement.
Heuristic, and good for categorical variables with few categories.
Does not suffer from multicollinearity (it simply picks one of the correlated variables).
Decision Trees are fast to train and easy to evaluate and interpret.

Naïve Bayes:
Generally no requirements.
High bias and low variance, so good when training data is limited.
Good for categorical variables with few categories.
Computes the product of independent per-feature distributions.
Suffers from multicollinearity (it assumes features are independent, so highly correlated features hurt it).
Bayesian classifiers are easy to understand.
The Naive Bayes mechanism is very simple, performs well, and is easy to implement.


LDA (linear discriminant analysis, not latent Dirichlet allocation):
Requires normally distributed variables, and is not good for categorical variables with few categories.
Models the classes with multivariate normal distributions.
Provides confidence intervals, but suffers from multicollinearity.

  

SVM:
No distribution requirement.
Gives good accuracy, with power and flexibility coming from kernels.
Linear SVMs are very easy to interpret; it is the non-linear case that is a bit tricky, since interpreting a non-linear model requires finding a compact basis set to represent the non-linearity. (If you know the basis set a priori, you can just project your data onto it, run a linear SVM, and the problem becomes trivial.)
Computes the hinge loss.
Flexible selection of kernels for non-linear correlations.
Does not suffer from multicollinearity.
Hard to interpret the results in the non-linear case.
Works very well in many circumstances and performs very well with large amounts of data.


Lasso:
No distribution requirement.
Uses an L1 penalty.
Performs variable selection (see the sketch below).
Suffers from multicollinearity (it tends to pick one variable from each correlated group).
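
A minimal sketch of that variable selection in action, using scikit-learn's Lasso on synthetic regression data (for classification, an L1-penalized logistic regression plays the same role); the sizes loosely echo the many-more-features-than-informative setting of the original question:

    # With an L1 penalty most coefficients are driven exactly to zero,
    # which is the variable selection.
    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.linear_model import Lasso

    X, y = make_regression(n_samples=500, n_features=5_000,
                           n_informative=10, noise=1.0, random_state=0)
    lasso = Lasso(alpha=1.0, max_iter=5_000).fit(X, y)
    print("non-zero coefficients:", int(np.sum(lasso.coef_ != 0)), "of", X.shape[1])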


Comment

Above all, Logistic Regression is still the most widely used method thanks to its good properties. But if the variables are normally distributed and the categorical variables all have 5+ categories, you may be surprised by the performance of LDA; if the correlations are mostly non-linear, you can't beat a good SVM; and if sparsity and multicollinearity are a concern, I would recommend Adaptive Lasso with Ridge(weights) + Lasso, which would suffice for most scenarios without much tuning. In the end, if you need one finely tuned model, go for ensemble methods. PS: Coming back to the sub-question, with 10,000 instances and more than 100,000 features, the quick answer is Lasso. The generally accepted high-performance methods are SVMs, Random Forests, and Boosted Trees. Neural networks are slow to converge and their parameters are hard to set, but handled with care they work well.


Important References

An Empirical Comparison of Supervised Learning Algorithms (Caruana & Niculescu-Mizil, ICML 2006)

An Empirical Comparison of Supervised Learning Algorithms Using Different Performance Metrics (Caruana & Niculescu-Mizil)

and the video lecture
Which Supervised Learning Method Works Best for What? An Empirical Comparison of Learning Methods and Metrics


MultiClass or MultiLabel:
SVMs are very good at binary and multiclass classification, and Random Forests are probably better for multilabel classification.

There are specific SVM implementations for multiclass (the Crammer & Singer algorithm) and structural (SVMlight) problems, and even multilabel SVMs (M3L). These are a bit esoteric, but it is important to be able to solve more than just binary classification problems.

Multi-Label Learning with Millions of Labels: Recommending Advertiser Bid Phrases for Web Pages

Appendix for the Discussion

Incomplete Information:
Machine Learning methods do very poorly when we don't have enough labels for our data, or when the labels are wrong.
SVMs can even be applied in situations where the labels are only partially or weakly known but we have additional information about the global statistics. This works particularly well for text classification when choosing an extended basis set, such as word2vec or GloVe embeddings.

Kernels and your nature features:
Using an SVM or any kernel method requires choosing a regularizer, and maybe a kernel. Generally you should not use an RBF kernel unless the underlying problem is like a signal processing problem.

A recent extension to SVM kernels is Factorization Machines (FMs). While this method is typically thought of as a recommender-system technique, it can be used for SVM-like classification as well. FMs are useful for picking up very weak correlations between sparse, discrete features.

Regularization:
Most modern methods include some basic regularizer like L1, L2, or some combination of the two (elastic net). Why would one choose, say, an L1 SVM over an L2 SVM? The L1 SVM presupposes that the features are very, very sparse. For example, I have used an L1 SVM to reduce a model with 50K features to a few hundred. An L2 SVM can capture models that require many tiny features.

Numerical performance:
SVMs, like many convex methods, have been optimized for the past 15 years and these days scale very well. I routinely use linear SVMs to solve classification problems with tens of millions of instances and, say, half a million features. I can do this on my laptop. I am a bit puzzled why others would claim these are hard to train (unless they are trying to use a non-linear/RBF kernel with a poor implementation). In my experience using them in production over 15 years, they are very easy to train at very large scale for most commercial applications. They are convex methods with only 1-2 adjustable parameters.

I have routinely trained linear SVMs with nearly 10M instances and 1M features on my laptop in near real time.  

For more than, say, 10M instances and 1M features, the training can be done in parallel using lock-free stochastic coordinate descent, or in online mode. Modern open source implementations include libsvm/liblinear, GraphLab/Dato, and Vowpal Wabbit. There are even GPU-accelerated versions like BidMach, and there are distributed-memory, parallel implementations running on up to 1,000 nodes.
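
For a feel of the scale argument, here is a sketch of training a linear SVM on sparse, high-dimensional data with scikit-learn's liblinear wrapper; the sizes and the hidden linear model are illustrative, and can be scaled down:

    # A sparse synthetic problem: ~5 non-zeros per row across 50K features.
    import numpy as np
    import scipy.sparse as sp
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)
    X = sp.random(100_000, 50_000, density=1e-4, format="csr", random_state=0)
    w = rng.normal(size=50_000)          # a hidden linear model
    y = (X @ w > 0).astype(int)          # labels from its sign

    clf = LinearSVC(C=1.0).fit(X, y)     # liblinear under the hood
    print("train accuracy:", clf.score(X, y))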

Interpretation
I'm currently working on these kinds of algorithms for newswire classification into about 10 categories. I'm comparing kNN, a tweaked Naive Bayes, and Rocchio's algorithm. I wanted very simple algorithms since my dataset is essentially unlimited and because an SVM, for instance, seems a pain to implement myself.

- kNN should be avoided in my case, since evaluation is quite heavy when the training dataset contains several thousand elements, although it gives really good results.

- Naive Bayes is very simple and quick to evaluate, but I had to tweak it to handle unbalanced classes.

- Rocchio seems very naive but it works surprisingly well and is very efficient.

Finally, I use a combination of Naive Bayes and Rocchio to gain accuracy, on the same principle as boosting (a linear mixture obtained by cross-validation). You can also use EM on NB or Rocchio, since the formulation is very simple in these cases. This could help.

All in all, I'd say that this is very data-dependent. The best technique depends on the data and on the accuracy/efficiency trade-off you are expecting.
  
  
  

Sunday, 28 June 2015

Important Machine Learning Algorithms and their specifications

A decision tree is a decision support tool that uses a tree-like graph or model of decisions and their possible consequences, including chance event outcomes, resource costs, and utility. It is one way to display an algorithm.
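
A minimal sketch that trains a small tree on the classic iris dataset and prints the learned rules, so the tree-like structure is visible (the depth cap is just for readability):

    # Train a shallow tree on iris and print its rules.
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, export_text

    iris = load_iris()
    tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(iris.data, iris.target)
    print(export_text(tree, feature_names=iris.feature_names))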

Overfitting and its Solution

Overfitting is when you're trying to model how a system works, and you build the model using some data.  But then you realize your model was so tuned to the quirks of your data that it didn't really explain anything about the system.

Suppose you have some data that's linearly related but has a little bit of noise in it. You try to fit your sample to a linear model, and you see that it doesn't match each point exactly.


But you really want to match each data point perfectly, so you end up with a highly flexible model that snakes through every point.


This model performs great on the data you have, but when you ask it to make predictions about the rest of the data, it falls flat on its face.  That's because you made your model so flexible that it tuned itself to the random noise in your dataset.  It would have been better to take the linear model, which didn't perform as well on your small sample but would have made better predictions on new data points.
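
A small sketch of exactly this scenario: noisy linear data, a straight-line fit versus a needlessly flexible polynomial fit, compared on held-out points (all numbers are illustrative):

    # The flexible fit typically wins on training error and loses on
    # test error, because it chases the noise.
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 40)
    y = 2 * x + 1 + rng.normal(scale=0.1, size=x.size)

    x_tr, y_tr = x[::2], y[::2]      # even indices: training sample
    x_te, y_te = x[1::2], y[1::2]    # odd indices: held out

    for degree in (1, 9):
        coeffs = np.polyfit(x_tr, y_tr, degree)
        tr = np.mean((np.polyval(coeffs, x_tr) - y_tr) ** 2)
        te = np.mean((np.polyval(coeffs, x_te) - y_te) ** 2)
        print(f"degree {degree}: train MSE={tr:.4f}  test MSE={te:.4f}")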