Deep Learning for Dynamical Systems Group
Our research focuses on deep learning methods for anomaly detection and prediction in structured, imbalanced, and often sparsely labeled time series data, such as climate and environmental data, as well as structured data from Enterprise Resource Planning (ERP) systems.
Current Research
Different research disciplines in the natural sciences use different types of mechanistic models to create mathematical models from first principles: climate systems and electromagnetic fields are modeled using systems of partial differential equations, bee colonies are modeled using agent-based systems and process models, and air-quality pollutants are modeled using diffusion, transport, and dispersion models. However, when observing these different systems through sensor networks, the differences between them disappear from a machine learning and data science point of view. From this viewpoint, the datasets pose very similar challenges:
- The observed data is incomplete, e.g. not all variables have been or can be observed;
- The observations are the result of the superposition of processes that operate on different time scales;
- Observations are only available at discrete points in space and time;
- Data may not have been observed on a grid;
- Data sets are small compared to the complexity of the modeling task, e.g. in climate modeling, only one year of climate observations accrues per year, so large amounts of data cannot be collected in shorter amounts of time.
Together with my research group, I address these challenges by researching and adapting complementary modeling algorithms that learn representations of the data: NeuralPDEs, Transformers, and specialized deep learning models for learning maps from sparse observations. NeuralPDEs represent data as a partial differential equation (PDE), with or without additional knowledge about the system being modeled. They aim to create models that can be used in combination with, or as a replacement for, standard PDE systems in simulations. Transformers learn general data representations based on the attention mechanism. The resulting models can be used flexibly in different downstream tasks, for example for prediction or as feature extractors in larger settings. Deep learning models for learning maps from sparse observational data combine sparse observations with data from different domains and leverage prior knowledge. Additionally, we research hierarchical algorithms that combine observational data of varying quality.
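To illustrate the attention mechanism that Transformers build on, here is a minimal NumPy sketch of scaled dot-product attention applied to a toy time series. This is a generic textbook formulation, not the group's implementation; all variable names and the toy data are illustrative assumptions.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Minimal scaled dot-product attention: softmax(QK^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # pairwise similarities
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over key positions
    return weights @ V, weights                      # weighted sum of values

# Toy example: self-attention over a short sequence of embedded observations
rng = np.random.default_rng(0)
T, d = 5, 4                       # sequence length, embedding dimension
X = rng.normal(size=(T, d))       # stand-in for embedded time-series steps
out, w = scaled_dot_product_attention(X, X, X)
```

Each output row is a convex combination of the value vectors, with weights determined by how similar the corresponding query is to each key; stacking such layers is what lets Transformers learn flexible representations for downstream tasks.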
Projects
We are currently involved in the following projects:
We have successfully completed the following projects: