Searched for: author:"Loog, M."
(1 - 20 of 27)


Kouw, W.M. (author), Loog, M. (author)
Consider a domain-adaptive supervised learning setting, where a classifier learns from labeled data in a source domain and unlabeled data in a target domain to predict the corresponding target labels. If the classifier’s assumption on the relationship between domains (e.g. covariate shift, common subspace, etc.) is valid, then it will usually...
journal article 2021
Mey, A. (author), Loog, M. (author)
We investigate to which extent one can recover class probabilities within the empirical risk minimization (ERM) paradigm. We extend existing results and emphasize the tight relations between empirical risk minimization and class probability estimation. Following previous literature on excess risk bounds and proper scoring rules, we derive a...
conference paper 2021
Yildiz, B. (author), Hung, H.S. (author), Krijthe, J.H. (author), Liem, C.C.S. (author), Loog, M. (author), Migut, M.A. (author), Oliehoek, F.A. (author), Panichella, A. (author), Pawełczak, Przemysław (author), Picek, S. (author), de Weerdt, M.M. (author), van Gemert, J.C. (author)
We present ReproducedPapers.org: an open online repository for teaching and structuring machine learning reproducibility. We evaluate doing a reproduction project among students and the added value of an online reproduction repository among AI researchers. We use anonymous self-assessment surveys and obtained 144 responses. Results suggest...
conference paper 2021
Schmahl, Katja Geertruida (author), Viering, T.J. (author), Makrodimitris, S. (author), Naseri Jahfari, A. (author), Tax, D.M.J. (author), Loog, M. (author)
Large text corpora used for creating word embeddings (vectors which represent word meanings) often contain stereotypical gender biases. As a result, such unwanted biases will typically also be present in word embeddings derived from such corpora and downstream applications in the field of natural language processing (NLP). To minimize the effect...
conference paper 2020
Viering, T.J. (author), Mey, A. (author), Loog, M. (author)
Learning performance can show non-monotonic behavior. That is, more data does not necessarily lead to better models, even on average. We propose three algorithms that take a supervised learning model and make it perform more monotone. We prove consistency and monotonicity with high probability, and evaluate the algorithms on scenarios where...
conference paper 2020
Mey, A. (author), Viering, T.J. (author), Loog, M. (author)
Manifold regularization is a commonly used technique in semi-supervised learning. It enforces the classification rule to be smooth with respect to the data-manifold. Here, we derive sample complexity bounds based on pseudo-dimension for models that add a convex data dependent regularization term to a supervised learning process, as is in...
conference paper 2020
von Kügelgen, Julius (author), Mey, Alexander (author), Loog, M. (author)
Current methods for covariate-shift adaptation use unlabelled data to compute importance weights or domain-invariant features, while the final model is trained on labelled data only. Here, we consider a particular case of covariate shift which allows us also to learn from unlabelled data, that is, combining adaptation with semi-supervised...
journal article 2020
Loog, M. (author), Viering, T.J. (author), Mey, Alexander (author), Krijthe, J.H. (author), Tax, D.M.J. (author)
journal article 2020
Viering, T.J. (author), Krijthe, J.H. (author), Loog, M. (author)
Active learning algorithms propose what data should be labeled given a pool of unlabeled data. Instead of randomly selecting which data to annotate, active learning strategies aim to select data so as to obtain a good predictive model with as few labeled samples as possible. Single-shot batch active learners select all samples to be labeled in a...
journal article 2019
Loog, M. (author), Viering, T.J. (author), Mey, A. (author)
Plotting a learner’s average performance against the number of training samples results in a learning curve. Studying such curves on one or more data sets is a way to get to a better understanding of the generalization properties of this learner. The behavior of learning curves is, however, not very well understood and can display (for most...
conference paper 2019
Mourragui, S.M.C. (author), Loog, M. (author), van der Wiel, Mark A. (author), Reinders, M.J.T. (author), Wessels, L.F.A. (author)
Motivation: Cell lines and patient-derived xenografts (PDXs) have been used extensively to understand the molecular underpinnings of cancer. While core biological processes are typically conserved, these models also show important differences compared to human tumors, hampering the translation of findings from pre-clinical models to the human...
journal article 2019
Yang, Y. (author), Loog, M. (author)
Logistic regression is by far the most widely used classifier in real-world applications. In this paper, we benchmark the state-of-the-art active learning methods for logistic regression and discuss and illustrate their underlying characteristics. Experiments are carried out on three synthetic datasets and 44 real-world datasets, providing...
journal article 2018
Yang, Y. (author), Loog, M. (author)
Active learning aims to train a classifier as fast as possible with as few labels as possible. The core element in virtually any active learning strategy is the criterion that measures the usefulness of the unlabeled data based on which new points to be labeled are picked. We propose a novel approach which we refer to as maximizing variance...
journal article 2018
Krijthe, J.H. (author), Loog, M. (author)
For semi-supervised techniques to be applied safely in practice we at least want methods to outperform their supervised counterparts. We study this question for classification using the well-known quadratic surrogate loss function. Unlike other approaches to semi-supervised learning, the procedure proposed in this work does not rely on...
journal article 2017
Loog, M. (author), Lauze, François (author)
We start by demonstrating that an elementary learning task—learning a linear filter from training data by means of regression—can be solved very efficiently for feature spaces of very high dimensionality. In a second step, firstly, acknowledging that such high-dimensional learning tasks typically benefit from some form of regularization and,...
conference paper 2017
Gudi, A.A. (author), van Rosmalen, N.C. (author), Loog, M. (author), van Gemert, J.C. (author)
In the face of scarcity in detailed training annotations, the ability to perform object localization tasks in real-time with weak-supervision is very valuable. However, the computational cost of generating and evaluating region proposals is heavy. We adapt the concept of Class Activation Maps (CAM) [28] into the very first weakly-supervised ...
conference paper 2017
Loog, M. (author)
Improvement guarantees for semi-supervised classifiers can currently only be given under restrictive conditions on the data. We propose a general way to perform semi-supervised parameter estimation for likelihood-based classifiers for which, on the full training set, the estimates are never worse than the supervised solution in terms of the log...
journal article 2016
Calana, P.Y. (author), Cheplygina, V. (author), Duin, R.P.W. (author), Garcia-Reyes, E. (author), Orozco-Alzate, M. (author), Tax, D.M.J. (author), Loog, M. (author)
Nearest-neighbor (NN) classification has been widely used in many research areas, as it is a very intuitive technique. As long as we can define a similarity or distance between two objects, we can apply NN, which makes it suitable even for non-vectorial data such as graphs. An alternative to NN is the dissimilarity space [2], where...
conference paper 2013
Loog, M. (author)
This BNAIC compressed contribution provides a summary of the work originally presented at the First IAPR Workshop on Partially Supervised Learning and published in [5]. It outlines the idea behind supervised and semi-supervised learning and highlights the major shortcoming of many current methods. Having identified the principal reason for their...
conference paper 2012