Alvin Li

Scholar Title

MIT EECS Undergraduate Research and Innovation Scholar

Research Title

Adversarial Training in Continuous-Time Models and Irregularly Sampled Time-Series




Electrical Engineering and Computer Science

Research Areas
  • Artificial Intelligence and Machine Learning

Daniela L. Rus


Liquid time-constant networks (LTCs) have recently emerged as a promising method for processing complex time-series data. Most state-of-the-art recurrent neural networks (RNNs) operate in a discrete-time setting, where data points are sampled at a fixed, uniform interval. Discrete sampling often fails to capture the continuous nature of real-world data, such as traffic patterns, where the flow of vehicles evolves smoothly over time. LTCs address this limitation with nonlinear gates that enable more complex computations and transformations of the input data; these gates allow the network to dynamically adjust each neuron's time constant as data traverses the network. The resulting continuous-time architecture lets LTCs effectively process time-series data with irregular sampling intervals.
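The input-dependent time constant described above can be sketched with a single LTC cell update. This is a minimal illustration, not the authors' implementation: the gate form (a squared tanh, chosen only so the gate is nonnegative), the parameter shapes, and all variable names here are assumptions. The fused semi-implicit Euler step shows how the per-step time gap `dt` enters the update directly, which is what accommodates irregular sampling.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions (illustrative)
n_hidden, n_input = 8, 3

# Parameters: base time constants tau, bias A, and gate weights (assumed shapes)
tau = np.ones(n_hidden)                       # base time constant per neuron
A = rng.normal(size=n_hidden)                 # value toward which the gate pulls the state
W_in = rng.normal(size=(n_hidden, n_input)) * 0.5
W_rec = rng.normal(size=(n_hidden, n_hidden)) * 0.1
b = np.zeros(n_hidden)

def gate(x, u):
    """Nonlinear gate f(x, u); its (nonnegative) output scales each
    neuron's decay rate, so the effective time constant depends on the data."""
    return np.tanh(W_rec @ x + W_in @ u + b) ** 2

def ltc_step(x, u, dt):
    """One fused semi-implicit Euler step of the LTC-style ODE
        dx/dt = -(1/tau + f(x, u)) * x + f(x, u) * A.
    Because dt appears explicitly, consecutive observations may arrive
    at arbitrary, non-uniform intervals."""
    f = gate(x, u)
    return (x + dt * f * A) / (1.0 + dt * (1.0 / tau + f))

# Irregularly sampled sequence: each observation carries its own time gap dt
x = np.zeros(n_hidden)
for dt in [0.1, 0.05, 0.7, 0.2]:              # non-uniform intervals
    u = rng.normal(size=n_input)
    x = ltc_step(x, u, dt)
print(x.shape)  # (8,)
```

Note that the denominator is always positive (tau > 0 and the gate output is nonnegative), so the step is numerically stable even for large, irregular `dt`.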

To enhance the robustness of LTCs against adversarial attacks, we propose several novel adversarial training pipelines. These pipelines introduce disruptive elements during training: perturbations of the data itself, alterations of the model's weights, and manipulation of the temporal dimension. We evaluate the forecasting performance of an LSTM network, a standard LTC network, and several adversarially trained LTC models across uniformly sampled, non-uniformly sampled, adversarially perturbed, and fully unaltered time-series data. Our findings underscore the effectiveness of the proposed adversarial training pipelines in addressing LTC vulnerabilities, improving performance, and defending against potential adversarial attacks.
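Two of the disruptive elements above, data-space perturbation and temporal manipulation, can be sketched as follows. This is an illustrative toy, not the authors' pipeline: the linear one-step forecaster, the fast-gradient-sign attack on it, the timestamp-jitter scheme, and all names and parameters (`eps`, `scale`) are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear forecaster standing in for the model under attack (assumed setup):
# one-step prediction y_hat = W @ x_t with squared-error loss.
T, d = 20, 4                        # sequence length, feature dimension
W = rng.normal(size=(d, d)) * 0.3

def mse_grad_wrt_input(x_t, y_t):
    """Gradient of ||W x_t - y_t||^2 with respect to the input x_t."""
    return 2.0 * W.T @ (W @ x_t - y_t)

def fgsm_perturb(series, targets, eps=0.05):
    """Data-space attack: shift every sample by eps in the direction that
    increases the forecasting loss (fast gradient sign method)."""
    adv = series.copy()
    for t in range(len(series)):
        g = mse_grad_wrt_input(series[t], targets[t])
        adv[t] += eps * np.sign(g)
    return adv

def jitter_timestamps(times, scale=0.1):
    """Temporal attack: perturb the sampling times so the model sees shifted,
    irregular intervals; sorting keeps the timestamps monotone."""
    noisy = times + rng.normal(scale=scale, size=times.shape)
    return np.sort(noisy)

series = rng.normal(size=(T, d))
targets = rng.normal(size=(T, d))
times = np.linspace(0.0, 1.0, T)

adv_series = fgsm_perturb(series, targets)   # bounded data perturbation
adv_times = jitter_timestamps(times)         # perturbed sampling grid
```

In an adversarial training loop, examples like `(adv_series, adv_times)` would be mixed into the training batches; weight-space perturbations, the third element mentioned above, would instead add noise to `W` itself.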


I am an ML researcher at MIT CSAIL working under the guidance of Professor Daniela Rus, Alexander Amini, and Mathias Lechner. My work centers on advancing the performance of state-of-the-art neural networks, particularly in adversarial learning and complex time-series models. I aim to leverage cutting-edge AI technology either to enhance existing products within established sectors or to develop innovative solutions for startup ventures.
