Topics in Nonlinear Systems

1. Why Nonlinear Systems Matter

Linear models are powerful because they are analyzable and often accurate near an operating point. But physical systems are nonlinear by default.

Nonlinearity enters through:

  • actuator limits such as saturation, rate limits, and dead zones,
  • friction, backlash, and hysteresis in mechanical transmissions,
  • kinematic and dynamic coupling, as in rigid-body rotation,
  • constitutive laws such as aerodynamic drag growing with the square of speed.

That is why nonlinear control is not a niche afterthought. It is the broader setting in which linear control is a local approximation.

2. Nonlinear State Models, Equilibria, and Linearization

A general nonlinear control system is written as

\[\dot x=f(x,u,t), \qquad y=h(x,u,t).\]

For a constant input $u_e$, an equilibrium $x_e$ satisfies

\[f(x_e,u_e,t)=0 \quad \text{for all } t,\]

which for time-invariant dynamics reduces to $f(x_e,u_e)=0$.

Different equilibria can have different stability properties, so the operating point matters.

2.1 Jacobian linearization

Define perturbations

\[\delta x=x-x_e, \qquad \delta u=u-u_e, \qquad \delta y=y-y_e.\]

Then near the equilibrium,

\[\delta \dot x \approx A \delta x + B \delta u, \qquad \delta y \approx C \delta x + D \delta u,\]

with Jacobians

\[A=\left.\frac{\partial f}{\partial x}\right|_{(x_e,u_e)}, \quad B=\left.\frac{\partial f}{\partial u}\right|_{(x_e,u_e)},\] \[C=\left.\frac{\partial h}{\partial x}\right|_{(x_e,u_e)}, \quad D=\left.\frac{\partial h}{\partial u}\right|_{(x_e,u_e)}.\]

This is the bridge from nonlinear physics to linear design. It is powerful, but only local.
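As a quick sanity check, the Jacobians can also be computed numerically. The sketch below (an assumed damped-pendulum model with illustrative parameters) approximates them by central differences and compares the eigenvalues at the hanging and upright equilibria:

```python
import numpy as np

g_over_l, c = 9.81, 0.5  # illustrative pendulum parameters (assumed)

def f(x, u):
    """Damped pendulum: x = (angle, rate), scalar torque input u."""
    return np.array([x[1], -g_over_l * np.sin(x[0]) - c * x[1] + u])

def jacobians(f, xe, ue, eps=1e-6):
    """Central-difference Jacobians A = df/dx, B = df/du at (xe, ue)."""
    n = len(xe)
    A = np.zeros((n, n))
    for j in range(n):
        dx = np.zeros(n); dx[j] = eps
        A[:, j] = (f(xe + dx, ue) - f(xe - dx, ue)) / (2 * eps)
    B = (f(xe, ue + eps) - f(xe, ue - eps)) / (2 * eps)
    return A, B.reshape(n, 1)

A_down, _ = jacobians(f, np.array([0.0, 0.0]), 0.0)
A_up, _ = jacobians(f, np.array([np.pi, 0.0]), 0.0)
# Hanging equilibrium: all eigenvalues in the open left half-plane (locally stable).
# Upright equilibrium: one eigenvalue in the right half-plane (unstable).
print(np.linalg.eigvals(A_down).real.max(), np.linalg.eigvals(A_up).real.max())
```

Two equilibria of the same nonlinear model give linearizations with opposite stability verdicts, which is exactly why the operating point matters.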

2.2 What linearization can and cannot say

Linearization can reveal:

  • local asymptotic stability, when every eigenvalue of $A$ has strictly negative real part,
  • local instability, when some eigenvalue has strictly positive real part.

It cannot guarantee:

  • anything in the critical case, when eigenvalues lie on the imaginary axis,
  • global stability, or the size of the region of attraction,
  • behavior far from the operating point, such as limit cycles or other equilibria.

3. Phase Plane and Geometric Intuition

For second-order systems, phase-plane plots turn dynamics into geometry:

  • the state $(x_1, x_2)$ is a point in the plane,
  • trajectories are curves that follow the vector field $f(x)$,
  • equilibria are points where the field vanishes.

This viewpoint is helpful because it makes nonlinear motion visual:

  • multiple isolated equilibria appear as distinct rest points,
  • limit cycles appear as isolated closed orbits,
  • separatrices divide the plane into basins of attraction.

Phase-plane intuition is one of the cleanest ways to feel the difference between local linear reasoning and global nonlinear behavior.
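A minimal numeric sketch of this intuition uses the Van der Pol oscillator, a standard phase-plane example (the RK4 step size and horizon below are assumed):

```python
import numpy as np

def vdp(x, mu=1.0):
    """Van der Pol oscillator: a classic phase-plane system with a limit cycle."""
    return np.array([x[1], mu * (1 - x[0]**2) * x[1] - x[0]])

def simulate(x0, dt=1e-3, T=60.0):
    """Fixed-step RK4 integration, returning the trajectory."""
    x, traj = np.array(x0, float), []
    for _ in range(int(T / dt)):
        k1 = vdp(x); k2 = vdp(x + 0.5*dt*k1)
        k3 = vdp(x + 0.5*dt*k2); k4 = vdp(x + dt*k3)
        x = x + dt * (k1 + 2*k2 + 2*k3 + k4) / 6
        traj.append(x.copy())
    return np.array(traj)

# Two very different starting points converge to the same closed orbit,
# whose amplitude in x1 is roughly 2 -- a fact no single linearization predicts.
amp_small = np.abs(simulate([0.1, 0.0])[-10000:, 0]).max()
amp_large = np.abs(simulate([4.0, 0.0])[-10000:, 0]).max()
print(amp_small, amp_large)
```

Starting inside and outside the closed orbit, both trajectories settle onto the same limit cycle, a global feature invisible to local linear reasoning.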

4. Lyapunov Stability: Direct Method, Invariance, and Barbalat

For nonlinear systems, poles are not enough. Stability must be defined directly in state space.

4.1 Core definitions

Common notions are:

  • stability in the sense of Lyapunov: trajectories that start near the equilibrium stay near it,
  • asymptotic stability: stability plus convergence to the equilibrium,
  • exponential stability: convergence at a guaranteed exponential rate,
  • global versions of each, when the property holds from every initial condition.

4.2 Lyapunov direct method

Take a scalar function $V(x)$ that behaves like an energy:

\[V(x)>0 \text{ for } x \neq 0, \qquad V(0)=0.\]

If

\[\dot V(x)=\nabla V^\top f(x) \le 0,\]

then trajectories cannot move uphill in $V$.

4.3 A standard lemma

Lemma. Suppose $V \in C^1$ satisfies

\[\alpha_1(\|x\|) \le V(x) \le \alpha_2(\|x\|)\]

for class-$\mathcal{K}_\infty$ functions $\alpha_1,\alpha_2$, and

\[\dot V(x) \le -\alpha_3(\|x\|)\]

for another class-$\mathcal{K}_\infty$ function $\alpha_3$. Then the origin is globally asymptotically stable.

Proof sketch. Positive definiteness and properness imply bounded level sets, so trajectories remain bounded. Since $V$ decreases strictly away from the origin, the state must move toward the largest invariant set inside $\{\dot V=0\}$, which here is only the origin. That gives global attractivity and stability.

This is the main pattern behind many nonlinear proofs: invent an energy-like quantity and show it decreases.
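A small simulation illustrates the pattern. For an unforced damped pendulum (illustrative parameters), the mechanical energy serves as $V$, and along trajectories $\dot V = -c\,x_2^2 \le 0$:

```python
import numpy as np

g_over_l, c = 9.81, 0.5  # assumed pendulum parameters

def f(x):
    """Unforced damped pendulum: x = (angle, rate)."""
    return np.array([x[1], -g_over_l * np.sin(x[0]) - c * x[1]])

def V(x):
    """Mechanical energy per unit inertia: potential + kinetic."""
    return g_over_l * (1 - np.cos(x[0])) + 0.5 * x[1]**2

x, dt = np.array([2.0, 0.0]), 1e-3
energies = []
for _ in range(20000):  # 20 s of forward-Euler integration with a small step
    x = x + dt * f(x)
    energies.append(V(x))

# The energy decays (up to discretization error) and the state settles at the origin.
print(energies[0], energies[-1], np.abs(x).max())
```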

4.4 LaSalle and Barbalat

Two recurring tools are:

  • the LaSalle invariance principle: bounded trajectories converge to the largest invariant set contained in $\{\dot V = 0\}$,
  • Barbalat's lemma: if a function has a finite limit and its derivative is uniformly continuous, the derivative tends to zero.

LaSalle helps when $\dot V \le 0$ but not strictly negative. Barbalat helps when we know a signal is bounded, integrable, and sufficiently regular, so it must converge to zero.

These are the workhorses behind many nonlinear and adaptive stability arguments.

5. Gain Scheduling and Hybrid Linear-Nonlinear Design

Many real plants operate over wide envelopes where one linear controller is not enough. Gain scheduling uses a family of local linear controllers indexed by operating condition:

  1. linearize the plant at a grid of operating points,
  2. design a linear controller at each point,
  3. interpolate the controller gains using a measured scheduling variable,
  4. validate the scheduled loop across the envelope, including transitions.

This is not a fully nonlinear synthesis method, but it is often the most practical compromise between fidelity and complexity.

The design risks are:

  • the local designs certify nothing about behavior between or across operating points,
  • fast variation of the scheduling variable can destabilize a loop whose frozen-point designs are all stable,
  • interpolation can produce gain combinations that were never analyzed.

This is why gain scheduling is often paired with a hybrid linear-nonlinear analysis mindset rather than treated as a pure plug-and-play recipe.
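The interpolation step can be sketched directly. The trim points and gain values below are hypothetical, made up purely for illustration:

```python
import numpy as np

# Hypothetical schedule: state-feedback gains designed at a few trim airspeeds.
speeds = np.array([50.0, 100.0, 150.0, 200.0])   # scheduling variable (m/s)
K_table = np.array([[2.0, 0.8],                  # [kp, kd] at each trim point
                    [1.4, 0.6],
                    [1.0, 0.5],
                    [0.8, 0.4]])

def scheduled_gain(v):
    """Linearly interpolate each gain entry in the scheduling variable."""
    v = np.clip(v, speeds[0], speeds[-1])        # hold the end gains outside the envelope
    return np.array([np.interp(v, speeds, K_table[:, j]) for j in range(K_table.shape[1])])

print(scheduled_gain(75.0))  # halfway between the first two trim points
```

Holding the end gains outside the validated envelope is a common conservative choice; extrapolating gains is one of the interpolation artifacts the risks above warn about.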

6. Feedback Linearization and Dynamic Inversion

If the nonlinear structure is known accurately enough, we can try to cancel it explicitly.

6.1 Basic idea

Suppose

\[\dot x = f(x)+g(x)u.\]

If the input enters in a suitable way, choose

\[u=\alpha(x)+\beta(x)v\]

so the transformed dynamics become approximately linear in the new input $v$.

This is the core of feedback linearization and dynamic inversion.
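A sketch on the pendulum (an assumed unit-length, undamped model): the gravity term is cancelled exactly, and the new input $v$ is designed by linear pole placement:

```python
import numpy as np

g_over_l = 9.81  # assumed pendulum parameter

def pendulum(x, u):
    """Pendulum with torque input: x = (angle, rate)."""
    return np.array([x[1], -g_over_l * np.sin(x[0]) + u])

def inversion_control(x, k1=4.0, k2=4.0):
    """Cancel the gravity term, then do linear design in the new input v."""
    v = -k1 * x[0] - k2 * x[1]            # pole placement for the transformed system
    return g_over_l * np.sin(x[0]) + v    # u = alpha(x) + v with alpha = (g/l) sin(x1)

x, dt = np.array([2.5, 0.0]), 1e-3
for _ in range(10000):  # 10 s of forward-Euler integration
    x = x + dt * pendulum(x, inversion_control(x))

# With exact cancellation the closed loop is x1'' + 4 x1' + 4 x1 = 0 (poles at -2, -2).
print(x)
```

The fragility discussed below shows up immediately if `g_over_l` in the controller differs from the plant's true value: the cancellation then leaves a residual nonlinearity in the loop.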

6.2 Why it is powerful

If it works well:

  • the transformed loop is linear, so the entire linear toolbox applies,
  • performance can be made nearly uniform across the operating envelope,
  • the nonlinear design problem reduces to choosing the linear input $v$.

6.3 Why it is fragile

If the model is wrong:

  • the cancellation is imperfect and residual nonlinearities remain in the loop,
  • the inversion can demand large, noisy control signals,
  • high-gain cancellation can excite unmodeled dynamics and delays.

So dynamic inversion is powerful when the model is reliable and the bandwidth is realistic. It is dangerous when it pretends uncertainty is negligible.

7. Backstepping

Backstepping is one of the most systematic nonlinear-design methods for strict-feedback systems.

Consider the two-state template

\[\dot x_1 = f_1(x_1) + g_1(x_1)x_2,\] \[\dot x_2 = f_2(x) + g_2(x)u.\]

The idea is recursive:

  1. treat $x_2$ as a virtual control for the $x_1$-subsystem,
  2. design a stabilizing virtual control $\alpha_1(x_1)$,
  3. define the error
\[z_2 = x_2 - \alpha_1(x_1),\]
  4. design the real input $u$ so both $z_1 = x_1$ and $z_2$ decay.

7.1 A representative theorem

Theorem. For a strict-feedback system with smooth known nonlinearities and nonvanishing input gains, a recursive backstepping design can produce a control law and Lyapunov function such that the origin is asymptotically stable.

Proof sketch. Start with a Lyapunov candidate $V_1(z_1)$ for the first subsystem. Introduce a virtual control to make $\dot V_1$ negative up to a residual involving $z_2$. Then augment the Lyapunov function to

\[V_2 = V_1 + \frac{1}{2} z_2^2\]

and choose $u$ so the cross terms cancel and $\dot V_2$ becomes negative definite. The same pattern extends recursively.

Backstepping is attractive because it builds the controller and the proof together.
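The two-state template can be sketched numerically. The example below assumes $f_1(x_1)=x_1^2$, $g_1=g_2=1$, $f_2=0$, and illustrative gains $k_1=k_2=2$:

```python
import numpy as np

k1, k2 = 2.0, 2.0  # design gains (assumed)

def alpha1(x1):
    """Virtual control that would stabilize x1' = x1^2 + x2 if x2 tracked it exactly."""
    return -x1**2 - k1 * x1

def control(x):
    """Backstepping law for the illustrative system x1' = x1^2 + x2, x2' = u."""
    x1, x2 = x
    z1 = x1
    z2 = x2 - alpha1(x1)
    dalpha1 = (-2 * x1 - k1) * (x1**2 + x2)   # d/dt of alpha1 along trajectories
    return dalpha1 - z1 - k2 * z2             # gives z1' = -k1 z1 + z2, z2' = -z1 - k2 z2

x, dt = np.array([1.0, 0.0]), 1e-3
for _ in range(10000):  # 10 s of forward-Euler integration
    x1, x2 = x
    x = x + dt * np.array([x1**2 + x2, control(x)])

print(x)  # both states settle near the origin
```

In the $(z_1, z_2)$ coordinates the closed loop is linear with $\dot V_2 = -k_1 z_1^2 - k_2 z_2^2$, so both errors decay, exactly as the proof sketch promises.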

7.2 Engineering reading of backstepping

Backstepping is especially useful when:

This is why it reappears in adaptive control, underactuated systems, and nonlinear robotics.

8. Sliding Mode Control

Sliding mode control is a variable-structure method that drives the system onto a chosen sliding surface and then keeps it there.

Choose a surface such as

\[s(x)=0.\]

A typical reaching law aims for

\[\dot V = \frac{d}{dt}\left(\frac{1}{2}s^2\right)=s \dot s \le -\eta |s|\]

for some $\eta>0$.

This yields two phases:

  • a reaching phase, in which trajectories are driven to $s=0$ in finite time,
  • a sliding phase, in which motion stays on the surface and obeys the reduced-order dynamics defined by $s(x)=0$.

Strengths:

  • strong robustness to matched uncertainty, i.e., uncertainty entering through the same channel as $u$,
  • finite-time convergence to the surface,
  • reduced-order sliding dynamics chosen by the designer.

Weaknesses:

  • chattering from the discontinuous switching, which wears actuators and excites unmodeled dynamics,
  • sensitivity to unmatched uncertainty, measurement noise, and delays.

Practical designs often smooth the sign function or use higher-order sliding modes to reduce chattering.
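A boundary-layer sketch on a double integrator with a matched disturbance the controller never measures (the surface slope, switching gain, and layer width are illustrative):

```python
import numpy as np

lam, K, phi = 1.0, 2.0, 0.05  # surface slope, switching gain, boundary-layer width (assumed)

def sat(z):
    """Smoothed sign: linear inside the boundary layer, saturated outside."""
    return np.clip(z, -1.0, 1.0)

def control(x):
    """Drive s = v + lam*x into the boundary layer despite a bounded disturbance."""
    pos, vel = x
    s = vel + lam * pos
    return -lam * vel - K * sat(s / phi)   # K exceeds the disturbance bound |d| <= 1

x, dt = np.array([1.0, 0.0]), 1e-3
for k in range(20000):  # 20 s of forward-Euler integration
    d = np.sin(3 * k * dt)                 # matched disturbance, unknown to the controller
    pos, vel = x
    x = x + dt * np.array([vel, d + control(x)])

s_final = x[1] + lam * x[0]
print(x, s_final)  # s stays inside the boundary layer; x decays along s ~ 0
```

Replacing `sat` with a hard sign would give ideal sliding in continuous time but visible chattering at any finite sample rate, which is the trade the boundary layer buys out of.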

9. Bang-Bang Control

Bang-bang control uses extreme actuator values:

\[u(t) \in \{u_{\min}, u_{\max}\}.\]

This appears naturally when:

  • the actuator is inherently on-off, as with relays, thrusters, and thermostats,
  • a minimum-time objective with a bounded input makes intermediate values suboptimal.

Bang-bang laws sit at the boundary between nonlinear control and optimal control: heuristic switching rules live on the nonlinear-design side, while Pontryagin's minimum principle derives the optimal switching times.

For the optimal-control version, see Adaptive, Optimal, Robust, and Learning Control.
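For the double integrator $\ddot x = u$ with $|u| \le u_{\max}$, the time-optimal law is the classical switching-curve rule. A discrete-time sketch (step size and tolerances are assumed):

```python
import numpy as np

def bang_bang(pos, vel, u_max=1.0):
    """Minimum-time switching law for the double integrator x'' = u, |u| <= u_max."""
    sigma = pos + vel * abs(vel) / (2 * u_max)   # classical switching function
    if abs(sigma) > 1e-9:
        return -u_max * np.sign(sigma)
    return -u_max * np.sign(vel)                 # already on the switching curve

x, v, dt = 1.0, 0.0, 1e-4
for _ in range(40000):  # 4 s; the minimum time from (1, 0) with u_max = 1 is 2 s
    u = bang_bang(x, v)
    x, v = x + dt * v, v + dt * u

print(x, v)  # the state then chatters in a small neighborhood of the origin
```

The residual chattering near the origin is a discretization artifact: in continuous time the optimal trajectory reaches the origin exactly at the switching-curve arc.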

10. Fuzzy Control

Fuzzy control is attractive when the system is hard to model precisely but expert rules are available.

The logic is:

  1. fuzzify the measured inputs through membership functions,
  2. evaluate a rule base of if-then statements over the fuzzy sets,
  3. aggregate the rule outputs,
  4. defuzzify to obtain a crisp control value.

Strengths:

  • it encodes expert knowledge without an explicit plant model,
  • rule interpolation gives smooth control surfaces,
  • it is easy to explain and adjust in the field.

Weaknesses:

  • stability and performance guarantees are rare without additional structure,
  • tuning membership functions and rules is largely ad hoc,
  • the rule base grows combinatorially with the number of inputs.

Fuzzy control can work very well in industry, but it is not a substitute for structural stability analysis when guarantees matter.
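A minimal zero-order Sugeno sketch with a hypothetical three-rule base shows the mechanics of rule weighting and defuzzification:

```python
import numpy as np

def mu_neg(e):  return float(np.clip(-e, 0.0, 1.0))          # "error is negative"
def mu_zero(e): return float(np.clip(1 - abs(e), 0.0, 1.0))  # "error is near zero"
def mu_pos(e):  return float(np.clip(e, 0.0, 1.0))           # "error is positive"

def fuzzy_u(error):
    """Three-rule zero-order Sugeno controller (illustrative rule base):
    if error is negative push up (+1); if near zero do nothing; if positive push down (-1)."""
    weights = [mu_neg(error), mu_zero(error), mu_pos(error)]
    consequents = [1.0, 0.0, -1.0]
    # weighted-average defuzzification
    return sum(w * c for w, c in zip(weights, consequents)) / sum(weights)

print(fuzzy_u(-2.0), fuzzy_u(-0.5), fuzzy_u(0.0), fuzzy_u(0.5), fuzzy_u(2.0))
# prints: 1.0 0.5 0.0 -0.5 -1.0
```

The output interpolates smoothly between the rules, which is the practical appeal; note that nothing in the construction itself certifies closed-loop stability.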

11. Active Disturbance Rejection Control

ADRC treats unknown plant mismatch and disturbances as an aggregated disturbance to be estimated and cancelled online.

The usual ingredients are:

  • a nominal low-order model with an estimated input gain $b_0$,
  • an extended state observer (ESO) that estimates the states plus the aggregated disturbance,
  • a control law that cancels the disturbance estimate and closes a simple linear loop.

11.1 Why ADRC is appealing

It requires little model knowledge, tolerates a broad class of disturbances and plant mismatch, and reduces tuning to a small number of bandwidth parameters for the observer and the outer loop.

11.2 What to be careful about

Observer bandwidth trades disturbance rejection against noise amplification; sampling rate and actuator limits cap the usable bandwidth; and a poor estimate of the input gain $b_0$ degrades the cancellation.

Conceptually, ADRC sits between model-based feedback linearization and robust disturbance-observer design.
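A sketch of a standard second-order ADRC loop, with assumed bandwidths and an unknown constant disturbance the observer must absorb:

```python
import numpy as np

b0, w0, wc = 1.0, 20.0, 4.0              # input-gain estimate, observer and controller bandwidths (assumed)
b1, b2, b3 = 3*w0, 3*w0**2, w0**3        # ESO gains: all observer poles at -w0

z = np.zeros(3)                          # z1 ~ y, z2 ~ y', z3 ~ total disturbance
y, yd, r, dt = 0.0, 0.0, 1.0, 1e-3
for _ in range(5000):                    # 5 s of forward-Euler simulation
    # control: PD on the estimates plus cancellation of the estimated disturbance
    u0 = wc**2 * (r - z[0]) - 2*wc * z[1]
    u = (u0 - z[2]) / b0
    # true plant: double integrator with an unknown constant input disturbance
    d = -3.0
    y, yd = y + dt*yd, yd + dt*(b0*u + d)
    # extended state observer update
    e = y - z[0]
    z = z + dt * np.array([z[1] + b1*e, z[2] + b2*e + b0*u, b3*e])

print(y, z[2])  # output near the setpoint; z3 near the true disturbance
```

The observer's third state converges to the unknown disturbance, so the cancellation $u_0 - z_3$ leaves a nearly linear loop for the simple PD design to handle.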

12. Nonlinear Controllability and Observability

The linear rank tests do not fully settle nonlinear controllability or observability.

At the nonlinear level, useful ideas include:

  • accessibility and small-time local controllability, tested through Lie brackets of the vector fields,
  • observability analyzed through Lie derivatives of the output map,
  • the distinction between local and global properties, which linear theory does not need.

For many engineering purposes, linearization still gives the first useful answer. But nonlinear systems force more precise language: “controllable near which point, in what sense, and under what input constraints?”
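A standard example makes the gap concrete. The kinematic unicycle

\[\dot x = u_1 \cos\theta, \qquad \dot y = u_1 \sin\theta, \qquad \dot\theta = u_2\]

has input vector fields $g_1=(\cos\theta,\sin\theta,0)^\top$ and $g_2=(0,0,1)^\top$. Any Jacobian linearization at rest fails the linear rank test, yet the Lie bracket

\[[g_1,g_2]=\frac{\partial g_2}{\partial x}g_1-\frac{\partial g_1}{\partial x}g_2=(\sin\theta,\,-\cos\theta,\,0)^\top\]

together with $g_1$ and $g_2$ spans $\mathbb{R}^3$, so the system is controllable: sideways motion is generated by alternating drive and steer inputs, exactly as in parallel parking.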

13. Reading Map Across the Control Series

This note is the nonlinear-design bridge in the control series: it builds on the linear state-space material and hands the optimal and adaptive formulations off to the companion note on Adaptive, Optimal, Robust, and Learning Control.

14. Compact Recall Map

The shortest useful nonlinear-control workflow is:

  1. write the nonlinear model and identify the operating point,
  2. separate local questions from global ones,
  3. use Jacobian linearization when a local controller is enough,
  4. use Lyapunov geometry when guarantees matter,
  5. choose a nonlinear design method that matches the structure:
    • gain scheduling for wide but mostly linear regimes,
    • feedback linearization for accurately known nonlinearities,
    • backstepping for cascaded strict-feedback systems,
    • sliding mode for matched uncertainty,
    • ADRC when disturbance estimation is central.

Nonlinear systems are not exceptions to control theory. They are the full problem. Linear theory is the local window that makes pieces of the full problem manageable.