[Comp-neuro] CFP: ICML/UAI/COLT 2008 workshop on Sparse Optimization and Variable Selection

Irina Rish rish at us.ibm.com
Wed Mar 26 17:39:51 CET 2008


                  Call for Papers:
ICML/UAI/COLT 2008  workshop on Sparse Optimization and Variable Selection

                  July 9, 2008, Helsinki, Finland
            http://irina.rish.googlepages.com/icml08sparse
                  Submission deadline: May 5

------------------------

Overview:

Variable selection is an important issue in many applications of machine
learning and statistics where the main objective is discovering predictive
patterns in data that would enhance our understanding of underlying
physical, biological and other natural processes, beyond just building
accurate 'black-box' predictors. Common examples include biomarker
selection in biological applications [1], finding brain areas predictive
of 'brain states' based on fMRI data [2], and identifying network
bottlenecks that best explain end-to-end performance [3,4], to name just a
few.

Recent years have witnessed a flurry of research on algorithms and theory
for variable selection and estimation involving sparsity constraints.
Various types of convex relaxation, particularly L1-regularization, have
proven very effective: examples include the LASSO [5], boosted LASSO [6],
Elastic Net [1], L1-regularized GLMs [7], sparse classifiers such as sparse
(1-norm) SVM [8,9], as well as sparse dimensionality reduction methods
(e.g. sparse component analysis [10], and particularly sparse PCA [11,12]
and sparse NMF [13,14]). Applications of these methods are wide-ranging,
including computational biology, neuroscience, graphical model selection
[15], and the rapidly growing area of compressed sensing [16-19].
Theoretical work has established conditions under which various relaxation
methods can recover an underlying sparse signal, provided bounds on sample
complexity, and investigated trade-offs among the design-matrix properties
that guarantee good performance.
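
As a concrete illustration of the kind of problem discussed above, here is a
minimal sketch, assuming only a plain NumPy environment, of the LASSO
objective [5] minimized by iterative soft-thresholding (ISTA); the solver
choice, variable names, and parameter values are illustrative only and are
not prescribed by this call:

import numpy as np

def soft_threshold(z, t):
    # Elementwise soft-thresholding: the proximal operator of t * ||.||_1.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(X, y, lam, n_iter=500):
    # Minimize (1/(2n)) ||y - X w||^2 + lam ||w||_1 by iterative
    # soft-thresholding (ISTA), i.e. proximal gradient descent.
    n, p = X.shape
    w = np.zeros(p)
    step = n / np.linalg.norm(X, 2) ** 2   # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n       # gradient of the smooth least-squares term
        w = soft_threshold(w - step * grad, step * lam)
    return w

# Toy usage: only 3 of 50 coefficients are truly nonzero; the L1 penalty
# drives most estimated coefficients exactly to zero.
rng = np.random.RandomState(0)
X = rng.randn(100, 50)
w_true = np.zeros(50)
w_true[:3] = [2.0, -1.5, 1.0]
y = X @ w_true + 0.1 * rng.randn(100)
w_hat = lasso_ista(X, y, lam=0.1)
print(np.flatnonzero(np.abs(w_hat) > 1e-3))   # indices of selected variables

The same soft-thresholding step reappears in proximal-gradient solvers for
other L1-regularized smooth losses, such as the L1-regularized GLMs of [7].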

We would like to invite researchers working on the methodology, theory and
applications of sparse models and selection methods to share their
experiences and insights into both the basic properties of the methods, and
the properties of the application domains that make particular methods more
(or less) suitable. We hope to further explore connections between variable
selection and related areas such as dimensionality reduction, optimization
and compressed sensing.

Suggested Topics:

We welcome submissions on all aspects of sparsity in machine learning,
from theoretical results to novel algorithms and interesting applications.
Questions of interest include, but are not limited to:
- Does variable selection provide a meaningful interpretation of interest
to domain experts?
- What type of method (e.g., which combination of regularizers) is best
suited for a particular application, and why?
- How robust is the method with respect to various types of noise in the
data?
- What are the theoretical guarantees on the reconstruction ability of the
method? Its consistency? Its sample complexity?
Comparisons of different variable selection and dimensionality reduction
methods with respect to their accuracy, robustness, and interpretability
are encouraged.

Paper Submission

Please submit an extended abstract (1 to 3 pages in two-column ICML format)
to the workshop email address sparse.ws at gmail.com. The abstract should
include author names, affiliations, and contact information. Papers will be
reviewed by at least 3 members of the program committee.

Format:

We are planning one tutorial, 4-5 invited talks (30-40 minutes each), and
shorter contributed talks (15-20 minutes) from researchers in industry and
academia, followed by a 10-minute discussion, as well as a panel discussion
at the end of the workshop. The workshop is intended to be accessible to the
broader ICML-COLT-UAI community and to encourage communication between
different fields.


Workshop Organizers/Program Committee:

Irina Rish† (primary contact), Guillermo Cecchi†, Rajarshi Das†, Tony
Jebara*, Gerald Tesauro†, Martin Wainwright‡


{rish,gcecchi,rajarshi,gtesauro}@us.ibm.com      † IBM Watson Research
jebara at cs.columbia.edu                          * Columbia U.
wainwrig at eecs.berkeley.edu                      ‡ UC Berkeley


Past related workshops:

NIPS 2006 workshop on Causality and Feature Selection
NIPS 2003 workshop on Feature Extraction and Feature Selection Challenge
NIPS 2001 workshop on Variable and Feature Selection

Related Work

[1] H. Zou and T. Hastie. Regularization and Variable Selection via the
Elastic Net. JRSSB (2005) 67(2): 301-320.
[2] G. Cecchi, I. Rish, R. Rao, R. Garg. Prediction of Brain Activity based
on Elastic Net Algorithm, in PBAIC workshop at the Human Brain Mapping 2007
conference. Extended version under submission.
[3] G. Chandalia and I. Rish. Blind Source Separation Approach to
Performance Diagnosis and Dependency Discovery, to appear in Internet
Measurement Conference (IMC-07).
[4] A. Beygelzimer, J. Kephart and I. Rish. Evaluation of Optimization
Methods for Network Bottleneck Diagnosis, in ICAC 2007.
[5] R. Tibshirani (1996). Regression shrinkage and selection via the lasso.
J. Royal. Statist. Soc B., Vol. 58, No. 1, pages 267-288.
[6] Zhao, P. & Yu, B. (2004), Boosted lasso, Technical report, University
of California, Berkeley, USA.
[7] M. Park and T. Hastie. An L1 Regularization-path Algorithm for
Generalized Linear Models. Stanford Technical Report;  to appear in JRSSB.
[8] J. Zhu, S. Rosset, T. Hastie and R. Tibshirani. 1-Norm Support Vector
Machines, NIPS 2003.
[9] A. Chan, N. Vasconcelos and G. Lanckriet. (2007). Direct Convex
Relaxations of Sparse SVM. ICML-07.
[10] Y. Li, A. Cichocki, S. Amari, S. Shishkin, J. Cao, and F. Gu. Sparse
representation and its applications in blind source separation. NIPS-03.
[11] H. Zou, T. Hastie, and R. Tibshirani. Sparse Principal Component
Analysis. JCGS 2006 15(2): 262-286.
[12] d'Aspremont, A., El Ghaoui, L., Jordan, M.I., Lanckriet, G.R.G.
(2004). A Direct Formulation for Sparse PCA using Semidefinite Programming.
NIPS-04.
[13] P. O. Hoyer. Non-negative Matrix Factorization with sparseness
constraints. JMLR 5:1457-1469, 2004.
[14] H. Kim and H. Park. Sparse non-negative matrix factorizations via
alternating non-negativity-constrained least squares for microarray data
analysis. Bioinformatics, 23(12), 1495-1502, 2007
[15] M. Wainwright, P. Ravikumar and J. Lafferty. High-Dimensional Graphical
Model Selection Using l1-Regularized Logistic Regression. NIPS-06.
[16] D. Donoho. Compressed sensing. IEEE Trans. on Information Theory, 52
(4), pp. 1289-1306, 2006.
[17] E. Candès. Compressive sampling. Proc. International Congress of
Mathematics, 3, pp. 1433-1452, Madrid, Spain, 2006.
[18] R. Baraniuk, A Lecture on Compressive Sensing. IEEE Signal Processing
Magazine, July 2007.
[19] S. Ji and L. Carin, Bayesian Compressive Sensing and Projection
Optimization. ICML 2007
[20] T. Jebara. "Multi-Task Feature and Kernel Selection for SVMs".
International Conference on Machine Learning, ICML, July 2004.
[21] T. Jebara and T. Jaakkola. "Feature Selection and Dualities in Maximum
Entropy Discrimination". In 16th Conference on Uncertainty in Artificial
Intelligence, UAI 2000. July 2000.

