Science of Machine Learning
Exhibition Program 2
Learning feature combinations from multiple tasks
MOFM: low-rank regression for learning common factors
Abstract
Multi-output Factorization Machines (MOFM) [2] are an extension of Convex Factorization Machines [1] that learn models for several tasks simultaneously. MOFM can find combinations of features that are predictive across tasks.
MOFM decompose the potentially very large weight matrix associated with each task using a small number of basis vectors shared across tasks. Hence, MOFM are able to scale to very high-dimensional data. In addition, we propose a convex formulation for learning this decomposition with optimality guarantees.
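To make the shared-basis decomposition concrete, below is a minimal NumPy sketch of how predictions can be computed under it; this is not the implementation from [2], the function name mofm_predict, the variable names, and the toy data are illustrative assumptions, and the convex fitting procedure is omitted. Each task t implicitly uses the weight matrix P diag(λ_t) Pᵀ, so pairwise feature combinations are scored without ever forming a d × d matrix per task.

```python
import numpy as np

def mofm_predict(X, P, Lam):
    """Illustrative sketch (not the authors' code) of multi-output FM prediction.

    X   : (n_samples, n_features) input matrix.
    P   : (n_features, k) basis vectors shared across all tasks.
    Lam : (k, n_tasks) per-task combination weights, so that task t's
          implicit weight matrix is P @ diag(Lam[:, t]) @ P.T.

    Returns an (n_samples, n_tasks) matrix of scores based on pairwise
    feature combinations, without materializing any d x d matrix.
    """
    XP = X @ P  # (n, k): projections <p_s, x> onto each shared basis vector
    # FM interaction score per basis vector p_s:
    #   0.5 * ((p_s^T x)^2 - sum_i p_{s,i}^2 x_i^2)
    # i.e. keep pairwise feature products, drop squared (diagonal) terms.
    interactions = 0.5 * (XP ** 2 - (X ** 2) @ (P ** 2))  # (n, k)
    return interactions @ Lam  # combine per-task: (n, n_tasks)

# Toy usage: 5 samples, 100 features, 4 shared basis vectors, 3 tasks.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 100))
P = rng.standard_normal((100, 4))
Lam = rng.standard_normal((4, 3))
print(mofm_predict(X, P, Lam).shape)  # (5, 3)
```

Because only the k shared basis vectors and the per-task combination weights are stored, memory and prediction cost grow with k·d rather than d² per task, which is what allows scaling to very high-dimensional data.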
MOFM find applications in numerous real-world problems, including medical diagnosis, recommender systems, and genomic selection of plants. In future work, we plan to further study the theoretical properties of MOFM.
References
[1] M. Blondel, A. Fujino, N. Ueda, “Convex Factorization Machines,” in Proc. European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2015.
[2] M. Blondel, V. Niculae, T. Otsuka, N. Ueda, “Multi-output Polynomial Networks and Factorization Machines,” in Proc. Neural Information Processing Systems, 2017.
Presenter
Mathieu Blondel (Ueda Research Laboratory)