Published December 2019 | Version public
Book Section - Chapter

Multi-View, Generative, Transfer Learning for Distributed Time Series Classification

  • 1. University of North Carolina at Charlotte
  • 2. University at Buffalo, State University of New York
  • 3. California Institute of Technology
  • 4. City of Scientific Research and Technological Applications

Abstract

In this paper, we propose an effective, multi-view, generative, transfer learning framework for multivariate time-series data. While generative models have proven effective for several machine learning tasks, their application to time-series classification problems remains underexplored. Further exploration is especially warranted when data are large, annotations are unbalanced or scarce, or data are distributed and fragmented. Recent advances in computer vision attempt to use synthesized samples with system-generated annotations to overcome the lack or imbalance of annotated data. However, in multi-view problem settings, view mismatches between the synthetic data and real data pose additional challenges to harnessing new annotated data collections. The proposed method offers important contributions to facilitate knowledge sharing while simultaneously ensuring an effective solution for domain-specific, fine-level categorizations. We propose a principled way to perform view adaptation in a cross-view learning environment, wherein pairwise view similarity is identified by a smaller subset of source samples that closely resemble the target data patterns. This approach integrates generative models within a deep classification framework to minimize the gap between source and target data. More precisely, we design category-specific, conditional generative models to update the source generator so that it transforms source features to appear as target features, while simultaneously tuning the associated discriminative model to distinguish these features. During each learning iteration, the source generator is conditioned on a source training set represented as target-like features. This transformation in appearance is performed via a target generator learned for per-category, target-specific customization.
Afterward, a smaller source training set, exhibiting close resemblance to target patterns in terms of the corresponding generative and discriminative loss, is used to fine-tune the source classification model parameters. Experiments show that, compared to existing approaches, our proposed multi-view, generative, transfer learning framework improves time-series classification performance by around 4% on the UCI multi-view activity recognition dataset, while also showing robust, generalized representation capacity in classifying several large-scale, multi-view light curve collections.
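The source-sample selection step described above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: it assumes the generative loss can be approximated by each transformed source feature's distance to its nearest target feature, and that the discriminator emits a per-sample "looks like source" score; the function name, the `alpha` weighting, and both loss proxies are illustrative assumptions.

```python
import numpy as np

def select_target_like_sources(src_feats, tgt_feats, disc_scores, k, alpha=0.5):
    """Rank source samples by a combined loss; keep the k most target-like.

    src_feats   : (n_src, d) generator-transformed source features
    tgt_feats   : (n_tgt, d) real target features
    disc_scores : (n_src,) discriminator probability that a sample is
                  source-like (lower = more target-like)
    Returns indices of the k samples with the lowest combined loss.
    """
    # generative-loss proxy: nearest-neighbour distance to the target set
    dists = np.linalg.norm(src_feats[:, None, :] - tgt_feats[None, :, :], axis=-1)
    gen_loss = dists.min(axis=1)
    # weighted combination of generative and discriminative terms
    combined = alpha * gen_loss + (1.0 - alpha) * disc_scores
    return np.argsort(combined)[:k]

# Toy usage: two of three source samples sit near the lone target feature.
src = np.array([[0.0, 0.0], [10.0, 10.0], [0.1, 0.0]])
tgt = np.array([[0.0, 0.0]])
disc = np.array([0.5, 0.9, 0.1])
keep = select_target_like_sources(src, tgt, disc, k=2)
```

The selected subset would then be the training set for the fine-tuning pass on the source classification model.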

Additional Information

© 2019 IEEE. Funding for this research was provided by the National Science Foundation (NSF) Data Infrastructure Building Blocks (DIBBs) Program under award #1640818.

Additional details

Identifiers

Eprint ID
101648
DOI
10.1109/bigdata47090.2019.9005452
Resolver ID
CaltechAUTHORS:20200302-105109375

Funding

NSF
OAC-1640818

Dates

Created
2020-03-02
Created from EPrint's datestamp field
Updated
2021-11-16
Created from EPrint's last_modified field

Caltech Custom Metadata

Caltech groups
Astronomy Department