The workshop will cover original contributions, both theory and applications, on optimizing, combining, and transferring machine learning models. Its main objective is to provide an opportunity for inspiring discussion of recent progress and future directions. For example, learning models based on different paradigms can be combined and jointly optimized to improve their accuracy. Each learning method imposes a specific data-driven modeling choice, which translates into a set of assumptions and constraints; when these assumptions are not fulfilled, they may lead to weak, poorly adapted learners. In many cases, the ill-posedness of learning processes and the partiality of observations cause optimization methods to converge to different solutions and subsequently fail under various circumstances.
The workshop will be a good opportunity to discuss recent advances in optimizing and combining learning models. The effectiveness of these methods will also be discussed with respect to the diversity and selection of the approaches involved.
Relevant topics
The following is a partial list of relevant topics for the workshop (submissions are not limited to these):
Transfer learning, metric learning, and domain adaptation
Optimization of cost functions for ML
Bagging and boosting techniques
Collaborative clustering and learning
Mixtures of distributions or experts
Modular approaches
Multi-task learning
Multi-view learning
Task decomposition …
Guidelines
Submitted papers will not be published in the WCCI proceedings but will instead appear in dedicated ATLM proceedings for the workshop on Advances in Optimizing and Transfer Learning Models. Authors of selected papers will be invited to submit an extended version of their work to a Special Issue of the Computational Intelligence journal.