Feature selection

Feature selection is the process of choosing a subset of the original features so that the feature space is optimally reduced according to some evaluation criterion. It is used when the number of descriptors is very large in comparison to the number of compounds, and a learning algorithm is therefore faced with the problem of selecting a relevant subset of features (or descriptors).

Feature selection algorithms are broadly categorised into two classes:

  1. The filter model relies on general characteristics of the training data to select features without involving any learning algorithm; therefore, it does not inherit the bias of any learning algorithm. When the number of features becomes very large, the ‘filter’ model is usually the preferred choice because of its computational efficiency (see the first sketch after this list).
  2. The wrapper model requires a pre-determined learning algorithm for feature selection and uses its performance to evaluate and determine which features are selected. The ‘wrapper’ model tends to give superior performance because it finds features better suited to the pre-determined learning algorithm, but it also tends to be computationally more expensive (see the second sketch below).
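
A minimal sketch of a filter-style selector, assuming NumPy and a purely illustrative criterion (absolute Pearson correlation between each descriptor and the target); the data and descriptor counts are hypothetical placeholders:

 import numpy as np
 
 def filter_select(X, y, k):
     """Filter model: rank descriptors by a simple data statistic
     (absolute correlation with the target) and keep the top k,
     without consulting any learning algorithm."""
     Xc = X - X.mean(axis=0)
     yc = y - y.mean()
     denom = np.sqrt((Xc ** 2).sum(axis=0) * (yc ** 2).sum()) + 1e-12
     corr = np.abs(Xc.T @ yc) / denom        # one score per descriptor
     return np.argsort(corr)[::-1][:k]       # indices of the k best descriptors
 
 # Toy data: 20 compounds described by 100 descriptors (hypothetical values)
 rng = np.random.default_rng(0)
 X = rng.normal(size=(20, 100))
 y = X[:, 3] + 0.1 * rng.normal(size=20)     # target driven mainly by descriptor 3
 print(filter_select(X, y, k=5))             # descriptor 3 should rank near the top

Because the score depends only on the data, each descriptor is evaluated once, which is what makes the filter model cheap when the descriptor count is large.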
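A minimal sketch of a wrapper-style selector, assuming scikit-learn is available; greedy forward selection is used here only as one common wrapper strategy, and the logistic-regression learner and toy data are illustrative assumptions, not a prescribed method:

 import numpy as np
 from sklearn.linear_model import LogisticRegression
 from sklearn.model_selection import cross_val_score
 
 def wrapper_forward_select(X, y, k, cv=3):
     """Wrapper model: greedy forward selection that scores each candidate
     subset with the pre-determined learner's cross-validated accuracy."""
     selected, remaining = [], list(range(X.shape[1]))
     for _ in range(k):
         best_feat, best_score = None, -np.inf
         for f in remaining:
             cols = selected + [f]
             score = cross_val_score(LogisticRegression(max_iter=1000),
                                     X[:, cols], y, cv=cv).mean()
             if score > best_score:
                 best_feat, best_score = f, score
         selected.append(best_feat)
         remaining.remove(best_feat)
     return selected
 
 # Toy data: the learner is retrained for every candidate descriptor at every
 # step, which is why the wrapper model is computationally more expensive.
 rng = np.random.default_rng(0)
 X = rng.normal(size=(60, 30))
 y = (X[:, 0] + X[:, 7] > 0).astype(int)      # class depends on descriptors 0 and 7
 print(wrapper_forward_select(X, y, k=2))

The selected subset is tailored to the chosen learner, which is the source of the wrapper model's typically superior (but costlier) performance.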