Dynamic Mode Decomposition and Koopman Operator

2021-02-20 我是小ww

This is a very interesting topic I learned recently: a data-driven approach to analyzing dynamic systems. Many thanks to my advisor Dr. Liu, who recommended a paper that really drew my attention and interest:

Avila, A. M., and I. Mezić. "Data-driven analysis and forecasting of highway traffic dynamics." Nature Communications 11.1 (2020): 1-16.

This paper uses Koopman mode decomposition to analyze highway traffic dynamics in a fully data-driven fashion that is, to some extent, interpretable.

This article introduces the techniques used in the paper, including dynamic mode decomposition (DMD) and Koopman analysis. I strongly recommend the following websites and YouTube channels, and I really appreciate how clearly Professor Steve Brunton and his collaborators explain their work through all kinds of media, including their amazing YouTube channels.

YouTube channel of Steve Brunton: https://www.youtube.com/channel/UCm5mt-A4w61lknZ9lCsZtBw

YouTube channel of Nathan Kutz: https://www.youtube.com/channel/UCoUOaSVYkTV6W4uLvxvgiFA

Website for DMD book: http://dmdbook.com/

Website for the online course (inferring structures of complex systems): https://faculty.washington.edu/kutz/am563/page1/page15/am563.html

These are amazing materials and very easy for engineering students to follow.

Dynamic Systems

Almost all real-world systems that evolve with time can be regarded as a dynamic system. Let

$$\frac{d\mathbf{x}}{dt} = \mathbf{f}(\mathbf{x}, t; \mu),$$

where $\mathbf{x}(t) \in \mathbb{R}^n$ is the state of the system, $\mathbf{f}$ describes the dynamics, and $\mu$ denotes the system parameters. For real-world systems, several difficulties arise: the dynamics $\mathbf{f}$ is often unknown or strongly nonlinear, the state dimension $n$ can be very high, and the measurements may contain noise or even missing data.

Therefore, a data-driven method can sometimes be preferable to the traditional approach of establishing a model, solving it, and analyzing it. Dynamic mode decomposition (DMD) is such a method: it requires no prior domain knowledge, only data, and it analyzes the dynamic system through a linear approximation.

Before we go into the details of DMD and Koopman analysis, let us recap the analysis of a linear system.

The discrete linear system can be written as:

$$\mathbf{x}_{k+1} = A \mathbf{x}_k.$$

With an initial state $\mathbf{x}_0$, the state at step $k$ is simply $\mathbf{x}_k = A^k \mathbf{x}_0$.

Apply the eigenvalue decomposition to the matrix $A$:

$$A = \Phi \Lambda \Phi^{-1}, \qquad \mathbf{x}_k = \Phi \Lambda^k \Phi^{-1} \mathbf{x}_0,$$

where the columns of $\Phi$ are the eigenvectors of $A$ and $\Lambda$ is the diagonal matrix of eigenvalues. With this decomposition we know almost everything about linear systems, in particular how to control them, including optimal control (LQR), etc.
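As a small numerical check (a sketch of my own, using an arbitrary example matrix rather than anything from the post), the eigendecomposition route and direct matrix powers give the same state:

```python
import numpy as np

# Propagate a discrete linear system x_{k+1} = A x_k via A = Phi Lambda Phi^{-1},
# so that x_k = Phi Lambda^k Phi^{-1} x_0.
A = np.array([[0.9, 0.2],
              [-0.1, 0.95]])          # example system matrix (assumed)
x0 = np.array([1.0, 0.0])             # initial state

lam, Phi = np.linalg.eig(A)           # eigenvalues and eigenvectors
b = np.linalg.solve(Phi, x0)          # coordinates of x0 in the eigenbasis

k = 50
x_k_eig = Phi @ (lam**k * b)          # state from the eigendecomposition
x_k_direct = np.linalg.matrix_power(A, k) @ x0

print(np.allclose(x_k_eig, x_k_direct))   # True: both give the same state
```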

Later, we will show that what DMD does is essentially to approximate the dynamic system with a linear system and then decompose the state into a series of modes, each associated with a spatial-temporal pattern and an eigenvalue indicating how that pattern evolves.

Dynamic Mode Decomposition

As stated before, DMD builds a linear approximation of a dynamic system (optimal in the least-squares sense) and then decomposes the state into a series of modes, each associated with a particular spatial-temporal pattern with a fixed frequency and decay or growth rate.

Let the snapshots of the state be collected into two matrices

$$X = [\mathbf{x}_1, \mathbf{x}_2, \dots, \mathbf{x}_{m-1}], \qquad X' = [\mathbf{x}_2, \mathbf{x}_3, \dots, \mathbf{x}_m].$$

Then the transition between consecutive snapshots is approximated by a linear operator $A$:

$$X' \approx A X, \qquad A = X' X^{\dagger},$$

where $X^{\dagger}$ is the pseudo-inverse of $X$, so that $A$ is the best-fit linear operator in the least-squares sense.

However, if the dynamic system has a very high dimension $n$, then the matrix $A$ is $n \times n$ and is too expensive to form, store, or eigendecompose directly. DMD instead computes the leading eigenvalues and eigenvectors of $A$ through a low-rank approximation.

Here is the algorithm:

1. Find the singular value decomposition of the matrix $X$, truncated at rank $r$: $X \approx U \Sigma V^{*}$.
2. Project $A$ onto the leading singular vectors to obtain the reduced $r \times r$ matrix $\tilde{A} = U^{*} X' V \Sigma^{-1}$.
3. Compute the eigendecomposition $\tilde{A} W = W \Lambda$.
4. Recover the DMD modes $\Phi = X' V \Sigma^{-1} W$.

Comments on the algorithm:

The first step conducts a truncated SVD of the matrix $X$, keeping only the leading $r$ singular values; this exploits the low-rank structure of the data and is where DMD connects to PCA/POD.

Then the small matrix $\tilde{A}$ is the projection of the full operator $A$ onto the leading POD modes; its eigenvalues $\Lambda$ approximate the eigenvalues of $A$, and the columns of $\Phi$ are the corresponding DMD modes. With these, the state can be reconstructed and forecast as

$$\mathbf{x}_k \approx \Phi \Lambda^{k-1} \mathbf{b} = \sum_{j=1}^{r} \boldsymbol{\phi}_j \lambda_j^{k-1} b_j,$$

where $\mathbf{b} = \Phi^{\dagger} \mathbf{x}_1$ contains the mode amplitudes determined from the first snapshot.

Looking at the decomposition results, DMD can be regarded as a combination of singular value decomposition or principal component analysis (they are very closely related) and Fourier analysis. The result is a set of modes, each associating a spatial pattern $\boldsymbol{\phi}_j$ with an eigenvalue $\lambda_j$ that encodes the frequency and growth or decay rate of that pattern.
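To make the steps concrete, here is a minimal NumPy sketch of the exact DMD algorithm above (my own implementation sketch; the truncation rank r is passed in as a parameter):

```python
import numpy as np

def dmd(X, Xp, r):
    """Minimal exact-DMD sketch.

    X, Xp : snapshot matrices, Xp is X shifted one step forward in time
    r     : truncation rank
    Returns DMD modes Phi, eigenvalues lam, and amplitudes b.
    """
    U, S, Vh = np.linalg.svd(X, full_matrices=False)
    U, S, V = U[:, :r], S[:r], Vh.conj().T[:, :r]        # rank-r truncation
    Atilde = U.conj().T @ Xp @ V / S                      # projected operator
    lam, W = np.linalg.eig(Atilde)                        # DMD eigenvalues
    Phi = Xp @ V / S @ W                                  # exact DMD modes
    b = np.linalg.lstsq(Phi, X[:, 0], rcond=None)[0]      # mode amplitudes
    return Phi, lam, b
```

With these quantities, the forecast at step k is simply `Phi @ (lam**(k - 1) * b)`.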

Examples of DMD

Here is an example of dynamic mode decomposition and a comparison with PCA (SVD). There is a Python package, PyDMD: https://github.com/mathLab/PyDMD, along with a brief introduction: https://dsweb.siam.org/Software/pydmd-python-dynamic-mode-decomposition.

This example is a problem tailored for DMD; it shows how DMD can find the hidden structure of a linear dynamic system.

As shown in this figure:

Input signal modes

We have two mixed signals: each has its own spatial profile and oscillates at its own fixed frequency, and the observed data is their sum.

They have different profiles, given by the first figure, and dynamics with different frequencies, given by the second figure. The last two figures are 3D plots of how the two modes evolve with time. The following figure is the spatial-temporal plot of the mixed signal:

Mixed signal

Using this mixed signal as the input, we try to recover the hidden structure of the dynamic system. Since the dynamics of both signals are trigonometric functions (sine, cosine), the data can be regarded as the state generated by a linear system, which is exactly the kind of problem DMD handles. Here are the results of the DMD, including the profiles of the two modes and the corresponding dynamics. We can see that DMD separates the two modes cleanly and recognizes their frequencies.

DMD results
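For reference, here is a minimal sketch of how such a fit might look with PyDMD (the two-mode signal constructed below is illustrative and may differ from the exact signal used in the figures above):

```python
import numpy as np
from pydmd import DMD

# Illustrative two-mode signal: two spatial profiles, each oscillating at its
# own frequency, summed together (assumed construction, not the post's data).
x = np.linspace(-5, 5, 128)
t = np.linspace(0, 4 * np.pi, 256)
xg, tg = np.meshgrid(x, t)
f1 = 1.0 / np.cosh(xg + 3) * np.exp(2.3j * tg)             # mode 1
f2 = 2.0 / np.cosh(xg) * np.tanh(xg) * np.exp(2.8j * tg)   # mode 2
X = (f1 + f2).T                                            # snapshots as columns

dmd = DMD(svd_rank=2)         # keep two modes
dmd.fit(X)

print(np.log(dmd.eigs) / (t[1] - t[0]))   # continuous-time frequencies (~2.3j, 2.8j)
modes = dmd.modes                          # spatial profiles of the recovered modes
dynamics = dmd.dynamics                    # their time evolution
reconstruction = dmd.reconstructed_data    # low-rank reconstruction of X
```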

Here we compare DMD with SVD/PCA. The following figures show the results of the singular value decomposition of the mixed signal. Looking first at the third figure, SVD successfully identifies the rank of the system. However, it fails to separate the two modes, as shown in the first two figures.

Singular value decomposition for the mixed signal

One interpretation of DMD that I really like is that DMD is like a baby of principal component analysis and Fourier analysis.

Koopman Theory

We have seen that DMD can find the hidden structure of a linear system. In this section, we will try to answer what happens if the dynamic system is not linear.

Dating back to 1931, Koopman introduced the Koopman operator, an infinite-dimensional linear operator that lifts the state of a dynamic system to an observable space in which the dynamics becomes linear. The field did not draw much attention for decades, until around 2005 Igor Mezić brought Koopman theory back into the mainstream by using the Koopman operator to analyze fluid dynamics.

Here is a brief introduction to the Koopman operator. For a general dynamic system:

$$\frac{d\mathbf{x}}{dt} = \mathbf{f}(\mathbf{x}),$$

or the discrete version:

$$\mathbf{x}_{k+1} = \mathbf{F}(\mathbf{x}_k).$$

The Koopman operator $\mathcal{K}$ acts on observables $g(\mathbf{x})$, i.e., functions of the state, and advances them along the flow of the dynamics.

For discrete time:

$$\mathcal{K} g(\mathbf{x}_k) = g(\mathbf{F}(\mathbf{x}_k)) = g(\mathbf{x}_{k+1}),$$

where $g$ is an observable of the state. What we want is that when we map the state into a set of observables, the dynamics of these observables becomes linear and is governed by the Koopman operator.

In practice, what we want is a finite approximation of the Koopman operator: a finite number of observables that span a finite Koopman-invariant subspace.

Sometimes it is very easy to find a finite-dimensional Koopman representation. For example, for certain systems whose nonlinearity is a polynomial of the state, if we augment the state variable with that polynomial term as an extra observable, the augmented system closes and becomes exactly linear (a standard worked example is sketched below).
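As a concrete illustration (this is the standard example used by Brunton and collaborators for Koopman-invariant subspaces; the original post may have used a different system):

$$\dot{x}_1 = \mu x_1, \qquad \dot{x}_2 = \lambda\left(x_2 - x_1^2\right).$$

If we take the observables $y_1 = x_1$, $y_2 = x_2$, $y_3 = x_1^2$, then $\dot{y}_3 = 2x_1\dot{x}_1 = 2\mu y_3$, and the augmented system is exactly linear and three-dimensional:

$$\frac{d}{dt}\begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix} = \begin{bmatrix} \mu & 0 & 0 \\ 0 & \lambda & -\lambda \\ 0 & 0 & 2\mu \end{bmatrix}\begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix}.$$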

However, in most cases it is extremely hard to find a finite-dimensional Koopman representation.

Even for a very simple-looking dynamic system, we may find that we cannot obtain the Koopman operator as easily as in the previous example. If we use an augmented representation that includes a nonlinear observable of the state, its time derivative involves yet another, higher-order observable,

which means that we would need to keep involving new observables forever: the augmented system never closes into a finite-dimensional linear one (a standard illustration is sketched below).
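As a concrete sketch (an illustrative system chosen here, not necessarily the one in the original post), take $\dot{x} = x^2$ and try to close the dynamics with monomial observables $y_j = x^j$:

$$\dot{y}_1 = x^2 = y_2, \qquad \dot{y}_2 = 2x\dot{x} = 2x^3 = 2y_3, \qquad \dot{y}_3 = 3x^2\dot{x} = 3x^4 = 3y_4, \qquad \dots$$

Each new observable brings in the next power of $x$, so no finite set of monomials ever closes into a linear system.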

For systems where a finite-dimensional Koopman representation cannot be found easily, the approach is to look for Koopman eigenfunctions, on which the Koopman operator acts in a finite-dimensional way. A Koopman eigenfunction $\varphi$ with eigenvalue $\lambda$ satisfies

$$\frac{d}{dt}\varphi(\mathbf{x}(t)) = \lambda \, \varphi(\mathbf{x}(t)),$$

which means that on this observable the dynamics is exactly linear: $\varphi$ simply grows, decays, or oscillates at the rate determined by $\lambda$.

The question is how to find bases for such eigenfunctions. The eigenfunctions actually satisfy a closed analytical form as the solution of a PDE. Koopman eigenfunctions span invariant subspaces, and by the chain rule the PDE is written as:

$$\nabla \varphi(\mathbf{x}) \cdot \mathbf{f}(\mathbf{x}) = \lambda \, \varphi(\mathbf{x}).$$

The magic of this formulation is that any set of eigenfunctions spans a Koopman-invariant subspace on which the dynamics is exactly linear and finite-dimensional.
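As a quick sanity check on a one-dimensional system (an illustrative example of my own, not taken from the post): for $\dot{x} = \mu x$, the observable $\varphi(x) = x^{\alpha}$ solves this PDE with eigenvalue $\alpha\mu$:

$$\nabla\varphi(x)\cdot f(x) = \alpha x^{\alpha - 1}\cdot \mu x = \alpha\mu\, x^{\alpha} = (\alpha\mu)\,\varphi(x),$$

so on the observable $x^{\alpha}$ the dynamics is exactly linear.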

Although we have a very simple analytical characterization of the Koopman eigenfunctions, solving this PDE directly is not easy; in most cases it is impossible. Different methods have been proposed to approximate the eigenfunctions. Here I will briefly talk about two of them: neural networks and sparse linear regression.

For the neural network method, it is very intuitive: we can use an autoencoder, i.e., an encoder and a decoder, to learn a coordinate transformation of the state into a latent space where the dynamics is (approximately) linear, and then map it back to the original state.

Neural Network to find the Koopman Eigenfunctions
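Below is a minimal PyTorch sketch of this idea (my own illustrative code, not necessarily the architecture in the figure; the layer sizes, latent dimension, and loss terms are assumptions). The encoder lifts the state to latent coordinates, a learned matrix K advances them one step linearly, and the decoder maps back:

```python
import torch
import torch.nn as nn

class KoopmanAutoencoder(nn.Module):
    """Sketch: encoder -> linear latent dynamics K -> decoder (assumed sizes)."""
    def __init__(self, state_dim=2, latent_dim=3, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, state_dim))
        self.K = nn.Linear(latent_dim, latent_dim, bias=False)  # linear latent dynamics

    def forward(self, x_k):
        z_k = self.encoder(x_k)            # lift the state to observables
        z_next = self.K(z_k)               # advance linearly in latent space
        return self.decoder(z_k), self.decoder(z_next)

def loss_fn(model, x_k, x_next):
    # Reconstruction loss + one-step linear-prediction loss
    recon, pred_next = model(x_k)
    return (nn.functional.mse_loss(recon, x_k)
            + nn.functional.mse_loss(pred_next, x_next))
```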

For the sparse linear regression method, we can utilize the PDE of the Koopman eigenfunctions:

$$\nabla \varphi(\mathbf{x}) \cdot \mathbf{f}(\mathbf{x}) = \lambda \, \varphi(\mathbf{x}).$$

We can choose any basis of functions, for example a Taylor (polynomial) series or a Fourier series, expand the candidate eigenfunction in that basis, and run a sparse linear regression to select a small subset of basis functions. This idea comes from Professor Steve Brunton and his collaborators, who also used a similar idea to identify the dynamics of nonlinear systems in their SINDy (sparse identification of nonlinear dynamics) paper. A small sketch of the underlying sparse-regression step is given below.
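Here is a sketch of that sparse-regression machinery in the SINDy setting mentioned above (my own toy example; the candidate library, data, and threshold are assumptions, and it identifies a dynamics rather than an eigenfunction):

```python
import numpy as np

def sequential_thresholded_lstsq(Theta, dXdt, threshold=0.1, n_iter=10):
    """SINDy-style sparse regression: find a sparse Xi with Theta @ Xi ~ dXdt."""
    Xi = np.linalg.lstsq(Theta, dXdt, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(Xi) < threshold           # prune small coefficients
        Xi[small] = 0.0
        for j in range(dXdt.shape[1]):           # refit the remaining terms per output
            big = ~small[:, j]
            if big.any():
                Xi[big, j] = np.linalg.lstsq(Theta[:, big], dXdt[:, j], rcond=None)[0]
    return Xi

# Toy data: a library of monomials evaluated on samples of x (assumed setup)
x = np.linspace(-2, 2, 200).reshape(-1, 1)
dxdt = -2.0 * x + 0.5 * x**3                         # "measured" derivative
Theta = np.hstack([np.ones_like(x), x, x**2, x**3])  # candidate basis functions
Xi = sequential_thresholded_lstsq(Theta, dxdt)
print(Xi.ravel())                                    # ~[0, -2, 0, 0.5]
```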

Finding the Koopman eigenfunctions is very difficult, especially for real-world complex systems. But once we obtain a good Koopman operator and its eigenfunctions, we benefit a lot, since we then have a linear representation of the original complex system.

Time-Delay Embedding for Nonlinear Systems

Besides finding Koopman eigenfunctions, another widely used and powerful method for nonlinear, and especially chaotic, systems is time-delay embedding.

Here we will use the Lorenz attractor as an example to show how time-delay embedding can be applied to analyze nonlinear systems, and how a "high-dimensional" time-delay embedding can naturally work as a finite approximation of the Koopman operator.

This figure shows a trajectory of the Lorenz attractor, a well-known chaotic system. We usually say a system is chaotic if two states diverge quickly over time even though they are very close to each other at the beginning. This is often called the butterfly effect.

Lorenz attractor
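As a small illustration of this sensitivity (a sketch of my own, assuming the standard parameter values σ = 10, ρ = 28, β = 8/3), we can integrate two trajectories that start almost at the same point:

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t_eval = np.arange(0, 25, 0.01)
sol1 = solve_ivp(lorenz, (0, 25), [1.0, 1.0, 1.0], t_eval=t_eval)
sol2 = solve_ivp(lorenz, (0, 25), [1.0, 1.0, 1.0 + 1e-8], t_eval=t_eval)  # tiny perturbation

# The two trajectories start almost identical but diverge to an O(1) separation,
# which is the hallmark of chaos (the "butterfly effect").
separation = np.linalg.norm(sol1.y - sol2.y, axis=0)
print(separation[0], separation[-1])
```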

Here we will use the Lorenz attractor as an example to show how the classical method can find the hidden structure of a chaotic system using time-delay embedding. Assume that we can only observe one of the dimensions of the Lorenz attractor, say $x(t)$:

x of the Lorenz attractor

It looks very messy, and we can see little connection between this signal and the original Lorenz attractor. The classical way to analyze a chaotic or nonlinear system from such a scalar measurement is to construct the Hankel matrix of time-delayed copies of the signal and perform an SVD on it. The Hankel matrix is defined as:

$$H = \begin{bmatrix}
x(t_1) & x(t_2) & \cdots & x(t_{m}) \\
x(t_2) & x(t_3) & \cdots & x(t_{m+1}) \\
\vdots & \vdots & \ddots & \vdots \\
x(t_r) & x(t_{r+1}) & \cdots & x(t_{m+r-1})
\end{bmatrix},$$

where $r$ is the embedding dimension (the number of delays) and each column is a window of $r$ consecutive measurements. A sketch of this construction is given below.
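Here is a minimal sketch of building the Hankel matrix and looking at its SVD (assuming the x-coordinate of the simulated Lorenz trajectory from the earlier sketch as the observed signal):

```python
import numpy as np

x_obs = sol1.y[0]                      # observe only the x-coordinate (from the sketch above)

def hankel_matrix(signal, r):
    """Stack r time-delayed copies of the signal as rows of a Hankel matrix."""
    m = len(signal) - r + 1
    return np.stack([signal[i:i + m] for i in range(r)])

H = hankel_matrix(x_obs, r=15)
U, S, Vt = np.linalg.svd(H, full_matrices=False)
print(S / S.sum())                     # only a few delay coordinates dominate
```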

If we choose the embedding dimension as $r = 15$ and perform the SVD of the Hankel matrix, we obtain the results below: the leading delay coordinates recover an attractor-like structure from the single observed signal, and only about three singular values are significantly greater than zero.

Time-delay embedding with r=15

All of these results come from the classical analysis of nonlinear or chaotic systems. What is the relationship with the Koopman operator?

If we increase the embedding dimension to $r = 300$, we obtain the following results:

Time-delay embedding with r=300

We can see that one of the differences is a more continuous singular value spectrum: more than three ranks are now significantly greater than zero. If we zoom in on the leading singular vectors and compare the two embeddings, we obtain the comparison below.

Comparison of low-rank delay embedding (left) and high-rank delay embedding (right)
