IEEE Computational Intelligence Magazine:
Special Issue on Model Complexity, Regularization and Sparsity


One of the most important issues in computational intelligence is how to effectively control the complexity of the solution. Regularization techniques control model complexity by taking advantage of a priori information about the problem at hand: priors are converted into expressions that steer the computation towards solutions enjoying desirable properties. Over the past few years, one of the most prominent and successful types of regularization has been based on the sparsity prior, which promotes solutions that can be expressed by means of a few atoms belonging to a dictionary. In a way, sparsity can be considered a measure of simplicity and, as such, is compatible with many nature-inspired principles characteristic of computational intelligence. Nowadays, sparsity has become one of the leading approaches for learning adaptive representations for both descriptive and discriminative tasks, in particular when dealing with structured, complex, high-dimensional datasets.
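As a purely illustrative sketch of the sparsity prior described above (the symbols below are generic and not quantities defined in this call), a standard sparse-coding formulation approximates a signal y with a few atoms of a dictionary D by penalizing the l1 norm of the coefficient vector x:

% Sparse coding: reconstruct y from few columns (atoms) of D;
% the l1 penalty with weight lambda promotes sparse coefficients x.
\begin{equation*}
  \min_{\mathbf{x}} \;\; \tfrac{1}{2}\,\lVert \mathbf{y} - \mathbf{D}\mathbf{x} \rVert_2^2 \;+\; \lambda\,\lVert \mathbf{x} \rVert_1
\end{equation*}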

Regularization, including sparsity and other priors that control model complexity, is often the key ingredient for successfully solving complex problems. It is therefore not surprising that these aspects have recently gained considerable attention in big-data processing as well, due to the unprecedented challenges of handling massive datastreams that may be high-dimensional and organized in complex structures.

This special issue aims at presenting the most relevant regularization techniques and approaches for controlling model complexity in computational intelligence. Submissions presenting regularization methods for Neural Networks, Evolutionary Computation, or Fuzzy Systems are welcome. Submissions presenting advanced regularization techniques in specific but relevant application fields, such as data/datastream mining, classification, big-data analytics, image/signal processing, and natural-language processing, are also encouraged. Topics of interest include, but are not limited to:
  • Regularization methods for big and high-dimensional data;
  • Regularization methods for supervised and unsupervised learning;
  • Regularization methods for ill-posed problems in computational intelligence;
  • Techniques to control the model complexity;
  • Sparse representations in computational intelligence;
  • Managing model complexity in data analytics;
  • Effective priors for solving computational-intelligence problems;
  • Multiple prior integration;
  • Regularization in kernel methods and support vector machines.