IEEE Computational Intelligence Magazine, November 2016
Special Issue on: “Model Complexity, Regularization and Sparsity”

The effective management of solution complexity is one of the most important issues in addressing Computational Intelligence problems. Regularization techniques control model complexity by exploiting prior information about the problem at hand, expressed as penalty terms that impose the desired properties on the solution. Over the past few years, one of the most prominent and successful forms of regularization has been based on the sparsity prior, which promotes solutions that can be expressed as a linear combination of a few atoms from a dictionary. Sparsity can, in some sense, be regarded as a “measure of simplicity” and, as such, is compatible with many nature-inspired principles of Computational Intelligence. Sparsity has now become one of the leading approaches for learning adaptive representations for both descriptive and discriminative tasks, and has proven particularly effective when dealing with structured, complex and high-dimensional data.
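For illustration, the sparsity prior is commonly encoded through an ℓ1 penalty, as in the standard sparse-coding objective sketched below; here y denotes an observed signal, D a dictionary of atoms, α the coefficient vector and λ > 0 the regularization weight (generic notation used only as an example, not tied to any particular method in this issue):

  \min_{\alpha} \; \tfrac{1}{2}\,\| y - D\alpha \|_2^2 \; + \; \lambda \,\| \alpha \|_1

The ℓ1 term penalizes non-zero coefficients, so the minimizer tends to combine only a few dictionary atoms, which is precisely the notion of simplicity that the sparsity prior promotes.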

Regularization, whether through sparsity or other priors that control model complexity, is often the key ingredient in the successful solution of difficult problems. It is therefore not surprising that these techniques have also recently attracted considerable attention in big-data processing, which poses unprecedented challenges stemming from massive data streams that are often high-dimensional and organized in complex structures.

This special issue aims to present the most relevant regularization techniques and approaches for controlling model complexity in Computational Intelligence. Submissions of papers presenting regularization methods for Neural Networks, Evolutionary Computation or Fuzzy Systems are welcome. Submissions presenting advanced regularization techniques in specific but relevant application fields, such as data and data-stream mining, classification, big-data analytics, image and signal analysis, and natural-language processing, are also encouraged.

Topics of Interest

  • Regularization methods for big and high-dimensional data;
  • Regularization methods for supervised and unsupervised learning;
  • Regularization methods for ill-posed problems in Computational Intelligence;
  • Techniques to control model complexity;
  • Sparse representations in Computational Intelligence;
  • Managing model complexity in data analytics;
  • Effective priors for solving Computational Intelligence problems;
  • Multiple prior integration;
  • Regularization in kernel methods and support vector machines.
