Active Self-Paced Learning for Cost-Effective and Progressive Face Identification
Liang Lin, Keze Wang, Deyu Meng, Wangmeng Zuo, and Lei Zhang
TPAMI 2017

Framework


By naturally combining two recently emerging techniques, active learning (AL) and self-paced learning (SPL), our framework is capable of automatically annotating new instances and incorporating them into training under weak expert re-certification. We first initialize the classifier using a few annotated samples for each individual, and extract image features using convolutional neural networks. Then, a number of candidates are selected from the unannotated samples for classifier updating, in which we apply the current classifiers to rank the samples by prediction confidence. In particular, our approach utilizes the high-confidence and low-confidence samples in a self-paced and an active user-query way, respectively. The neural networks are later fine-tuned based on the updated classifiers. This heuristic implementation is formulated as solving a concise active SPL optimization problem, which also advances the development of SPL by supplementing a rational dynamic curriculum constraint. The new model accords well with the “instructor-student-collaborative” learning mode in human education. The advantages of the proposed framework are twofold: i) the required number of annotated samples is significantly decreased while comparable performance is guaranteed, yielding a dramatic reduction of user effort over other state-of-the-art active learning techniques; ii) the mixture of SPL and AL effectively improves not only the classifier accuracy compared to existing AL/SPL methods but also the robustness against noisy data. We evaluate our framework on two challenging datasets and demonstrate very promising results.
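For context, the active SPL objective mentioned above builds on the standard self-paced learning formulation with a curriculum constraint. The following is a minimal sketch in our own notation, not the paper's exact model:

$$\min_{\mathbf{w},\, \mathbf{v} \in [0,1]^n} \; \sum_{i=1}^{n} v_i \, \ell\big(y_i, f(x_i; \mathbf{w})\big) \;-\; \lambda \sum_{i=1}^{n} v_i, \qquad \text{s.t. } \mathbf{v} \in \Psi,$$

where $\ell$ is the per-sample loss of classifier $f$ with parameters $\mathbf{w}$, $v_i$ weights the contribution of sample $i$, $\lambda$ is the pace parameter that is gradually increased so that harder samples enter training later, and $\Psi$ is the curriculum region restricting which samples may be selected. The dynamic curriculum constraint amounts to updating $\Psi$ as learning proceeds, e.g., in response to active user annotations.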


Figure: Illustration of our proposed cost-effective framework. The pipeline includes the stages of CNN and model initialization, classifier updating, high-confidence sample labeling by SPL, low-confidence sample annotation by AL, and CNN fine-tuning, where the arrows indicate the workflow. The images highlighted in blue in the left panel represent the initially selected samples.

We aim at designing a cost-effective and progressive learning framework that is capable of automatically annotating new instances and incorporating them into training under weak expert re-certification. In the following, we discuss the advantages of our ASPL framework in two aspects: “cost less” and “earn more”.

(I) Cost less: Our framework builds effective classifiers with fewer labeled training instances and less user effort than other state-of-the-art algorithms. This property is achieved by combining active learning and self-paced learning in the incremental learning process. In the feature space used for model training, samples of low classification confidence are scattered and lie close to the classifier decision boundary, while high-confidence samples are distributed compactly within the intra-class regions. Our approach takes both categories of samples into consideration for classifier updating, as the sketch below illustrates. The benefits of this strategy are twofold: i) High-confidence samples can be automatically labeled and consistently added into model training throughout the learning process in a self-paced fashion, particularly as the classifier becomes more and more reliable in later learning iterations. This significantly reduces the burden of user annotation and makes the method scalable to large-scale scenarios. ii) Low-confidence samples are selected for active user annotation, which lets our approach pick up informative samples more efficiently, adapt better to practical variations, and converge faster, especially in the early stage of training.
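To make the selection strategy concrete, below is a minimal sketch of one ASPL round over fixed CNN features, assuming a scikit-learn-style classifier. The fixed thresholds stand in for the self-paced pace parameter, and oracle_labels plays the role of the human annotator; all names here are ours, not the paper's:

```python
import numpy as np

def aspl_round(clf, X_lab, y_lab, X_unlab, oracle_labels,
               high_thresh=0.95, low_thresh=0.55):
    """One round: pseudo-label high-confidence samples (SPL),
    query the user on low-confidence ones (AL), then retrain."""
    conf = clf.predict_proba(X_unlab).max(axis=1)
    high = np.flatnonzero(conf >= high_thresh)   # compact, intra-class region
    low = np.flatnonzero(conf <= low_thresh)     # near the decision boundary

    # SPL step: trust the classifier's own predictions as labels.
    y_pseudo = (clf.predict(X_unlab[high]) if len(high)
                else np.empty(0, dtype=y_lab.dtype))
    # AL step: the "oracle" (human annotator) supplies true labels.
    y_queried = oracle_labels[low]

    X_new = np.vstack([X_lab, X_unlab[high], X_unlab[low]])
    y_new = np.concatenate([y_lab, y_pseudo, y_queried])
    keep = np.setdiff1d(np.arange(len(X_unlab)), np.concatenate([high, low]))

    clf.fit(X_new, y_new)                        # classifier updating
    return clf, X_new, y_new, X_unlab[keep], oracle_labels[keep]
```

In the full framework the thresholds would tighten and relax with the pace parameter rather than stay fixed, and the CNN would be fine-tuned on the enlarged labeled set between rounds.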

(II) Earn more: The mixture of self-paced learning and active learning effectively improves not only the classifier accuracy but also the classifier robustness against noisy samples. From the AL perspective, extra high-confidence samples are automatically incorporated into retraining at no cost of human labor in each iteration, so faster convergence can be gained. These introduced high-confidence samples also help suppress noisy samples during learning, owing to their compactness and consistency in the feature space. From the SPL perspective, allowing active user intervention yields reliable and diverse samples that keep the learning from being misled by outliers. In addition, utilizing the CNN facilitates higher classification performance by learning the convolutional filters instead of relying on hand-crafted feature engineering.

The entire procedure is summarized in Algorithm 1. It is easy to see that this solving strategy for the ASPL model accords well with the pipeline of our framework; a toy rendering follows below.
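As a rough illustration of how the iterative strategy plays out, here is a toy driver for the aspl_round sketch above on synthetic two-class data. The random features are hypothetical stand-ins for CNN features, and the fine-tuning step between rounds is elided since the features are fixed here:

```python
from sklearn.linear_model import LogisticRegression

# Synthetic, well-separated two-class "features" with a handful of
# labeled seed samples per class (mimicking the initialization stage).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (200, 16)), rng.normal(3, 1, (200, 16))])
y = np.repeat([0, 1], 200)
seed = np.concatenate([np.arange(5), 200 + np.arange(5)])
mask = np.zeros(len(X), dtype=bool)
mask[seed] = True

clf = LogisticRegression(max_iter=1000).fit(X[mask], y[mask])
X_lab, y_lab = X[mask], y[mask]
X_un, y_un = X[~mask], y[~mask]  # y_un acts as the annotation oracle

for _ in range(5):                # a few ASPL iterations
    if len(X_un) == 0:            # everything has been labeled
        break
    clf, X_lab, y_lab, X_un, y_un = aspl_round(clf, X_lab, y_lab, X_un, y_un)
```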
