Please use this identifier to cite or link to this item: http://hdl.handle.net/10525/2434

Title: A Taxonomy of Big Data for Optimal Predictive Machine Learning and Data Mining
Authors: Fokoue, Ernest
Keywords: Massive Data; Taxonomy; Parsimony; Sparsity; Regularization; Penalization; Compression; Reduction; Selection; Kernelization; Hybridization; Parallelization; Aggregation; Randomization; Sequentialization; Cross Validation; Subsampling; Bias-Variance Trade-off; Generalization; Prediction Error
Issue Date: 2014
Publisher: Institute of Mathematics and Informatics Bulgarian Academy of Sciences
Citation: Serdica Journal of Computing, Vol. 8, No. 2 (2014), pp. 111-136
Abstract: Big data comes in many types, shapes, forms, and sizes. Indeed, almost all areas of science, technology, medicine, public health, economics, business, linguistics, and social science are bombarded by ever-increasing flows of data begging to be analyzed efficiently and effectively. In this paper, we propose a rough idea of a possible taxonomy of big data, along with some of the most commonly used tools for handling each particular category of bigness. The dimensionality p of the input space and the sample size n are usually the main ingredients in the characterization of data bigness. The specific statistical machine learning technique used to handle a particular big data set will depend on which category of the bigness taxonomy it falls into. Large p, small n data sets, for instance, require a different set of tools from the large n, small p variety. Among other tools, we discuss Preprocessing, Standardization, Imputation, Projection, Regularization, Penalization, Compression, Reduction, Selection, Kernelization, Hybridization, Parallelization, Aggregation, Randomization, Replication, and Sequentialization. It is important to emphasize right away that the so-called no free lunch theorem applies here, in the sense that there is no universally superior method that outperforms all other methods on all categories of bigness. It is also important to stress that simplicity, in the sense of Ockham's razor non-plurality principle of parsimony, tends to reign supreme when it comes to massive data. We conclude with a comparison of the predictive performance of some of the most commonly used methods on a few data sets.
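For illustration only (this sketch does not appear in the paper): a minimal Python example, assuming NumPy and scikit-learn, of the large p, small n setting the abstract describes. Ordinary least squares is ill-posed when p > n, whereas an L1 (lasso) penalty with a cross-validated regularization strength yields a stable, sparse fit. The dimensions, synthetic data, and library choices are assumptions made for the sketch.

    # Large p, small n regression handled by penalization (lasso)
    # with the penalty strength chosen by cross validation.
    import numpy as np
    from sklearn.linear_model import LassoCV
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n, p = 100, 1000                       # large p, small n
    X = rng.standard_normal((n, p))
    beta = np.zeros(p)
    beta[:10] = 2.0                        # sparse truth: only 10 active inputs
    y = X @ beta + rng.standard_normal(n)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = LassoCV(cv=5).fit(X_tr, y_tr)  # penalty picked by 5-fold CV
    print("test R^2:", model.score(X_te, y_te))
    print("nonzero coefficients:", int(np.sum(model.coef_ != 0)))

On such synthetic data the cross-validated lasso typically retains close to the ten truly active coefficients, illustrating the parsimony and sparsity theme the abstract emphasizes.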
URI: http://hdl.handle.net/10525/2434
ISSN: 1312-6555
Appears in Collections: Volume 8 Number 2

Files in This Item:

File                                Size       Format
sjc-vol8-num2-2014-p111-p136.pdf    349.04 kB  Adobe PDF

 


