# Linear Dynamical Systems (LDS)
- Continuous-time ODE: $\dot{x} = A x$
- Discrete-time difference equation: $x_{k+1} = A_d x_k$
# Stability of Linear Systems
A continuous-time linear system $\dot{x} = A x$ is stable if $\mathrm{Re}(\lambda_i(A)) < 0$ for all eigenvalues $\lambda_i$.
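This condition is easy to check numerically. A minimal sketch, assuming numpy and a toy system matrix chosen for illustration:

```python
import numpy as np

# Continuous-time stability: x_dot = A x is stable iff every eigenvalue
# of A has negative real part. This A is an assumed example.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # eigenvalues: -1 and -2
eigvals = np.linalg.eigvals(A)
is_stable = bool(np.all(eigvals.real < 0))
print(is_stable)  # True
```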
# Integration and the Matrix Exponential
For a simple scalar linear system $\dot{x} = a x$:
- Solution: $x(t) = e^{at} x(0)$
# Matrix Exponential and Discrete Dynamics
For a linear continuous-time system $\dot{x} = A x$:
- Solution: $x(t) = e^{At} x_0$, where $e^{At}$ is the matrix exponential.
- Discrete Step (timestep $h$): $x_{k+1} = e^{Ah} x_k$
- $A_d = e^{Ah}$ is the State Transition Matrix.
- Stability Condition: The system is stable if $|\lambda_i(A_d)| < 1$ for all $i$ (all eigenvalues lie inside the unit circle in the complex plane).
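A sketch of the discrete step, assuming numpy; `expm_series` is a small hypothetical helper (a truncated power series standing in for a library matrix exponential), and the system matrix is an assumed example:

```python
import numpy as np

def expm_series(M, terms=30):
    """Matrix exponential via the truncated series I + M + M^2/2! + ..."""
    result = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        result = result + term
    return result

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])       # assumed example system
h = 0.1                            # timestep
Ad = expm_series(A * h)            # state transition matrix A_d = e^{Ah}
x = np.array([1.0, 0.0])
x_next = Ad @ x                    # one discrete step x_{k+1} = A_d x_k
# Discrete stability: all eigenvalues of A_d lie inside the unit circle.
print(np.all(np.abs(np.linalg.eigvals(Ad)) < 1))  # True
```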
# Linear Systems with Input (ZOH)
Given $\dot{x} = A x + B u$ and assuming a Zero-Order Hold (ZOH) where $u(t) = u_k$ is held constant over each timestep $h$:
We can augment the state:
$$\frac{d}{dt}\begin{bmatrix} x \\ u \end{bmatrix} = \begin{bmatrix} A & B \\ 0 & 0 \end{bmatrix}\begin{bmatrix} x \\ u \end{bmatrix}$$
The discrete-time matrices are found via:
$$\exp\left(\begin{bmatrix} A & B \\ 0 & 0 \end{bmatrix} h\right) = \begin{bmatrix} A_d & B_d \\ 0 & I \end{bmatrix}$$
- Linear Discrete Dynamics: $x_{k+1} = A_d x_k + B_d u_k$
- Matrix Exponential Series: $e^{Ah} = I + Ah + \frac{(Ah)^2}{2!} + \frac{(Ah)^3}{3!} + \cdots$
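The ZOH construction can be sketched as follows, assuming numpy; the double integrator is an assumed example, and `expm_series` is a hypothetical helper standing in for a library matrix exponential:

```python
import numpy as np

def expm_series(M, terms=30):
    """Matrix exponential via the truncated series I + M + M^2/2! + ..."""
    result, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        result = result + term
    return result

# Double integrator x_dot = A x + B u (assumed example).
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
h = 0.1
n, m = A.shape[0], B.shape[1]

# Augmented matrix [[A, B], [0, 0]]; its exponential contains A_d and B_d.
M = np.zeros((n + m, n + m))
M[:n, :n], M[:n, n:] = A, B
E = expm_series(M * h)
Ad, Bd = E[:n, :n], E[:n, n:]
print(Ad)  # equals [[1, h], [0, 1]] for the double integrator
print(Bd)  # equals [[h^2/2], [h]]
```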
# Derivative Formalisms
When dealing with multivariate functions $f: \mathbb{R}^n \to \mathbb{R}^m$:
- Jacobian Matrix: $\frac{\partial f}{\partial x} \in \mathbb{R}^{m \times n}$
- Gradient: For a scalar function $f: \mathbb{R}^n \to \mathbb{R}$ where $m = 1$:
  - $\frac{\partial f}{\partial x}$ is a row vector ($1 \times n$).
  - $\nabla f = \left(\frac{\partial f}{\partial x}\right)^T$ is a column vector ($n \times 1$).
- Hessian Matrix: $\nabla^2 f = \frac{\partial^2 f}{\partial x^2} \in \mathbb{R}^{n \times n}$ (symmetric).
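These shape conventions can be checked with a small finite-difference sketch, assuming numpy; `jacobian_fd` and the example function are illustrative choices, not from the notes:

```python
import numpy as np

def jacobian_fd(f, x, eps=1e-6):
    """Forward-difference Jacobian of f: R^n -> R^m, returned as (m, n)."""
    fx = np.atleast_1d(f(x))
    J = np.zeros((fx.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (np.atleast_1d(f(x + dx)) - fx) / eps
    return J

f = lambda x: np.array([x[0]**2, x[0] * x[1]])   # f: R^2 -> R^2 (example)
x = np.array([1.0, 2.0])
print(jacobian_fd(f, x))  # approximately [[2, 0], [2, 1]]
```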
# Taylor Series for Dynamics
To linearize nonlinear dynamics $\dot{x} = f(x, u)$ about a nominal point $(\bar{x}, \bar{u})$:
- Define Jacobians: $A = \frac{\partial f}{\partial x}\Big|_{\bar{x},\bar{u}}$, $B = \frac{\partial f}{\partial u}\Big|_{\bar{x},\bar{u}}$
- Approximation: $f(x, u) \approx f(\bar{x}, \bar{u}) + A(x - \bar{x}) + B(u - \bar{u})$
- Delta Coordinates: Let $\Delta x = x - \bar{x}$ and $\Delta u = u - \bar{u}$, so that $\Delta \dot{x} \approx A \Delta x + B \Delta u$.
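A sketch of this linearization, assuming numpy; the pendulum dynamics, the finite-difference helper `fd_jac`, and the nominal point are all assumptions chosen for illustration:

```python
import numpy as np

# Pendulum state x = [theta, theta_dot], torque input u (assumed example).
g, L = 9.81, 1.0
def f(x, u):
    return np.array([x[1], -(g / L) * np.sin(x[0]) + u[0]])

def fd_jac(fun, z, eps=1e-6):
    """Forward-difference Jacobian of fun at z, one column per input."""
    fz = fun(z)
    return np.column_stack([(fun(z + eps * e) - fz) / eps
                            for e in np.eye(z.size)])

xbar, ubar = np.array([0.0, 0.0]), np.array([0.0])   # nominal point
A = fd_jac(lambda x: f(x, ubar), xbar)               # df/dx at (xbar, ubar)
B = fd_jac(lambda u: f(xbar, u), ubar)               # df/du at (xbar, ubar)
# Linear model in delta coordinates: d/dt(dx) ~ A dx + B du
print(np.round(A, 3))  # [[0, 1], [-g/L, 0]] at the downward equilibrium
```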
# Root Finding and Fixed Points
Goal: Given $f(x)$, find $x^*$ such that $f(x^*) = 0$.
- Continuous-time: Root finding locates an equilibrium of $\dot{x} = f(x)$.
- Discrete-time: Closely related to finding a fixed point $x^*$ where $f(x^*) = x^*$.
# 1. Fixed Point Iteration
- Method: Iterate $x_{k+1} = f(x_k)$ until convergence.
- Constraint: Only works for stable fixed points and typically has slow (linear) convergence.
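A one-line sketch of the iteration; $f(x) = \cos(x)$ is an assumed example whose fixed point $x^* = \cos(x^*) \approx 0.739$ is attracting:

```python
import math

# Fixed-point iteration x_{k+1} = f(x_k) for f(x) = cos(x).
x = 1.0
for _ in range(100):
    x = math.cos(x)
print(round(x, 3))  # 0.739
```

Note how many iterations this takes: each step only shrinks the error by a constant factor, which is the slow (linear) convergence mentioned above.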
# 2. Newton's Method
- Intuition: Fit a linear approximation to $f(x)$ and solve for the root of the approximation.
- Linearization: $f(x + \Delta x) \approx f(x) + \frac{\partial f}{\partial x}\Delta x$
- Step: Set the approximation to zero and solve for $\Delta x$:
$$\Delta x = -\left(\frac{\partial f}{\partial x}\right)^{-1} f(x)$$
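A scalar sketch of this step; $f(x) = x^2 - 2$ (root $\sqrt{2}$) is an assumed example:

```python
# Newton's method for f(x) = 0 in 1D: dx = -f(x) / f'(x).
x = 1.0
for _ in range(6):
    fx, dfx = x**2 - 2.0, 2.0 * x
    x = x - fx / dfx          # apply correction x <- x + dx
print(round(x, 6))  # 1.414214
```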
# Optimization and Minimization
# Root Finding with Newton’s Method (Continued)
- Algorithm:
  - Solve $\frac{\partial f}{\partial x}\Delta x = -f(x)$ for the correction $\Delta x$.
  - Apply correction: $x \leftarrow x + \Delta x$
  - Repeat until convergence ($\|f(x)\|$ below tolerance).
- Example: Backward Euler, $x_{k+1} = x_k + h f(x_{k+1})$, is an implicit equation in $x_{k+1}$ that must be solved at every timestep.
- Newton's method provides very fast convergence for implicit integration.
- Take-Away Messages:
- Quadratic Convergence: Errors decrease quadratically near the solution.
- Can reach machine precision very quickly.
- Cost: The most expensive part is solving the linear system $\frac{\partial f}{\partial x}\Delta x = -f(x)$. This can be improved by taking advantage of problem structure (sparsity).
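The backward Euler example can be sketched in 1D, assuming numpy-free Python; the dynamics $\dot{x} = -x^3$ and the step count are assumptions:

```python
# Backward Euler on x_dot = f(x): solve r(y) = y - x_k - h*f(y) = 0
# for y = x_{k+1} using Newton's method. Example dynamics (assumed): -x^3.
def f(x):  return -x**3
def df(x): return -3.0 * x**2

def backward_euler_step(xk, h, iters=10):
    y = xk                             # initial guess: previous state
    for _ in range(iters):
        r = y - xk - h * f(y)          # implicit residual
        dr = 1.0 - h * df(y)           # dr/dy
        y -= r / dr                    # Newton correction
    return y

x, h = 1.0, 0.1
x_next = backward_euler_step(x, h)
# The implicit equation holds to machine precision at the solution:
print(abs(x_next - x - h * f(x_next)) < 1e-12)  # True
```

The handful of Newton iterations per step is what makes implicit integration practical: quadratic convergence reaches machine precision almost immediately from a good initial guess.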
# Minimization
Goal: $\min_x f(x)$, where $f: \mathbb{R}^n \to \mathbb{R}$
- First-Order Condition: If $f$ is smooth, $\nabla f(x) = 0$ at a local minimum.
- This transforms minimization into a root-finding problem for $\nabla f$.
- Newton's Step for Minimization:
  Linearize the gradient: $\nabla f(x + \Delta x) \approx \nabla f(x) + \nabla^2 f(x)\,\Delta x = 0 \;\Rightarrow\; \Delta x = -\left(\nabla^2 f(x)\right)^{-1} \nabla f(x)$
  - Intuition: Fit a quadratic approximation to $f(x)$ and exactly minimize that approximation.
- Sufficiency: $\nabla f(x) = 0$ is a necessary condition for a minimum, but not sufficient.
- In the scalar case, $-\nabla f$ plays the role of the "descent direction" and $(\nabla^2 f)^{-1}$ acts as the "learning rate/step size."
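A sketch of the Newton step for minimization, assuming numpy; the quadratic objective is an assumed example (its Hessian is constant, so one step lands on the minimizer exactly):

```python
import numpy as np

# Newton's method for min f(x): dx = -(hessian)^{-1} gradient.
# Example (assumed): f(x) = (x0 - 1)^2 + 10*(x1 + 2)^2, minimum at (1, -2).
def grad(x):
    return np.array([2.0 * (x[0] - 1.0), 20.0 * (x[1] + 2.0)])

H = np.array([[2.0, 0.0],
              [0.0, 20.0]])   # constant Hessian of this quadratic

x = np.zeros(2)
for _ in range(5):
    x = x - np.linalg.solve(H, grad(x))   # Newton step
print(x)  # converges to (1, -2); exact in one step for a quadratic
```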
# Regularization and Robustness
- Definiteness:
  - $\nabla^2 f > 0$ → descent (minimizing).
  - $\nabla^2 f < 0$ → ascent (maximizing).
  - In $\mathbb{R}^n$, we require $\nabla^2 f \succ 0$ (strictly positive definite Hessian).
- Strong Convexity: If $\nabla^2 f \succ 0$ everywhere, the problem is strongly convex and easily solved with Newton.
- Regularization: A practical solution when the Hessian $H = \nabla^2 f$ is not positive definite:
  - While $H$ is not pos. def.: $H \leftarrow H + \beta I$ (scalar $\beta > 0$).
  - This is called "Damped Newton." It guarantees a descent direction and shrinks the step size.
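The regularization loop can be sketched directly, assuming numpy; the indefinite example Hessian and $\beta = 1$ are assumptions:

```python
import numpy as np

# Damped-Newton regularization: add beta * I to the Hessian until it is
# positive definite, guaranteeing the Newton step is a descent direction.
def regularize(H, beta=1.0):
    Hr = H.copy()
    while np.any(np.linalg.eigvalsh(Hr) <= 0):   # not positive definite
        Hr = Hr + beta * np.eye(H.shape[0])
    return Hr

H = np.array([[1.0, 0.0],
              [0.0, -2.5]])    # indefinite Hessian (assumed example)
Hr = regularize(H)
print(np.all(np.linalg.eigvalsh(Hr) > 0))  # True
```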
# Newton Optimization Variants
- Least Squares: $\min_x \|Ax - b\|^2$
  - Solution via pseudo-inverse: $x = (A^T A)^{-1} A^T b$.
- Gauss-Newton Step: Often used in robotics/nonlinear least squares, where the full Hessian is approximated by $J^T J$ (with $J$ the residual Jacobian) to save computation.
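A Gauss-Newton sketch, assuming numpy; the exponential-fit residual, the synthetic data, and the iteration count are all assumptions for illustration:

```python
import numpy as np

# Gauss-Newton for min 0.5*||r(x)||^2: approximate the Hessian by J^T J.
# Example residual (assumed): fit y = exp(a*t), unknown parameter x = [a].
t = np.array([0.0, 1.0, 2.0])
y = np.exp(0.5 * t)                  # synthetic data generated with a = 0.5

def residual(x):
    return np.exp(x[0] * t) - y

def jac(x):
    return (t * np.exp(x[0] * t)).reshape(-1, 1)   # dr/da, shape (3, 1)

x = np.array([0.0])
for _ in range(10):
    r, J = residual(x), jac(x)
    x = x - np.linalg.solve(J.T @ J, J.T @ r)      # Gauss-Newton step
print(round(x[0], 6))  # 0.5
```

The $J^T J$ approximation drops the second-derivative term of the true Hessian, which is cheap and works well when the residuals at the solution are small.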
# Line Search (Globalization)
Often, a full Newton step overshoots the minimum. To fix this, we use a Line Search strategy:
- Strategy: Check $f(x + \alpha \Delta x)$ and "backtrack" by reducing $\alpha$ until a "good" reduction is found.
- Armijo Rule:
  - $\alpha = 1$ (initial step length).
  - While $f(x + \alpha \Delta x) > f(x) + b\,\alpha\,\nabla f(x)^T \Delta x$:
    - $\alpha \leftarrow c\,\alpha$ (e.g., $c = 0.5$).
  - Intuition: Ensure the step agrees with the linearization within some tolerance $b$.
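The Armijo loop translates almost line-for-line into code. A sketch assuming numpy, with an assumed quadratic objective and typical constants $b = 10^{-4}$, $c = 0.5$:

```python
import numpy as np

# Armijo backtracking: shrink alpha until the actual decrease matches a
# fraction b of the decrease predicted by the linearization.
def armijo(f, grad_f, x, dx, b=1e-4, c=0.5):
    alpha = 1.0
    while f(x + alpha * dx) > f(x) + b * alpha * grad_f(x) @ dx:
        alpha *= c
    return alpha

f = lambda x: x @ x                 # example objective (assumed)
grad_f = lambda x: 2.0 * x
x = np.array([10.0, 0.0])
dx = -grad_f(x)                     # descent direction
alpha = armijo(f, grad_f, x, dx)
print(alpha)  # 0.5
```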
Takeaway: Newton’s method with simple modifications (regularization and line search) is extremely effective at finding local optima.
# Equality Constraints
Goal: $\min_x f(x)$ s.t. $c(x) = 0$
First-Order Necessary Conditions (KKT):
- Need $\nabla f = 0$ in "free" directions (directions tangent to the constraint).
- Need $c(x) = 0$.
- Geometric Intuition: At the optimum, any non-zero component of the gradient must be normal to the constraint surface $c(x) = 0$. This means $\nabla f$ must be a linear combination of the constraint gradients: $\nabla f + \left(\frac{\partial c}{\partial x}\right)^T \lambda = 0$.
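For a quadratic objective with a linear constraint, the KKT conditions form a linear system that can be solved directly. A sketch assuming numpy; the problem (closest point on a line to the origin) is an assumed example:

```python
import numpy as np

# KKT conditions for min 0.5*||x||^2  s.t.  A x - b = 0:
#   x + A^T lambda = 0  (stationarity),   A x = b  (feasibility).
n = 2
Q = np.eye(n)                 # Hessian of f(x) = 0.5*||x||^2
A = np.array([[1.0, 1.0]])    # constraint Jacobian dc/dx
b = np.array([2.0])           # constraint: x0 + x1 = 2

# Assemble the KKT matrix [[Q, A^T], [A, 0]] and solve in one shot.
K = np.block([[Q, A.T],
              [A, np.zeros((1, 1))]])
rhs = np.concatenate([np.zeros(n), b])
sol = np.linalg.solve(K, rhs)
x, lam = sol[:n], sol[n:]
print(x)  # (1, 1): the closest point on the constraint to the origin
```

Note that the gradient at the solution, $x^* = (1, 1)$, points along the constraint normal $(1, 1)$, matching the geometric intuition above.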
