White Paper

Beyond PID: Exploring Alternative Control Strategies for Field-Oriented Controllers

Introduction

The first practical use of an electric motor was recorded in 1834, when Thomas Davenport used one to power a railway car on a short section of track. Today, motors are the prime movers in electrified transportation, industrial automation, and commercial and consumer products. A study from the International Energy Agency (IEA) estimates that 40–45% of the world’s generated electricity is consumed by motor-driven systems.

In recent decades, brushless motors have become increasingly popular thanks to their higher efficiency, power density, and reliability. This popularity has driven the development of control techniques that provide precise control of these motors and further improve their efficiency.

Field-oriented control (FOC) is one such control technique that provides precise control over the full range of torque and speed for brushless motors.
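The coordinate transforms at the core of FOC are compact enough to show directly. The Python sketch below applies the Clarke and Park transforms to balanced three-phase currents, mapping them to constant d-axis and q-axis values that simple controllers can regulate. The amplitude-invariant scaling is an assumption here; toolbox implementations may use a different convention.

```python
import math

def clarke(ia, ib, ic):
    """abc -> stationary alpha-beta frame (amplitude-invariant scaling)."""
    i_alpha = (2.0 / 3.0) * (ia - 0.5 * ib - 0.5 * ic)
    i_beta = (1.0 / math.sqrt(3.0)) * (ib - ic)
    return i_alpha, i_beta

def park(i_alpha, i_beta, theta):
    """alpha-beta -> rotating d-q frame at rotor electrical angle theta."""
    i_d = i_alpha * math.cos(theta) + i_beta * math.sin(theta)
    i_q = -i_alpha * math.sin(theta) + i_beta * math.cos(theta)
    return i_d, i_q

# Balanced sinusoidal phase currents, viewed in the rotor frame,
# become constants that PI (or other) controllers can regulate.
theta = 1.2                      # arbitrary rotor electrical angle (rad)
amp = 10.0                       # phase current amplitude (A)
ia = amp * math.cos(theta)
ib = amp * math.cos(theta - 2.0 * math.pi / 3.0)
ic = amp * math.cos(theta + 2.0 * math.pi / 3.0)
i_d, i_q = park(*clarke(ia, ib, ic), theta)
print(f"id = {i_d:.2f} A, iq = {i_q:.2f} A")
```

For balanced currents aligned with the rotor angle, all of the current appears on one axis and the other axis reads zero, which is what lets FOC regulate torque and flux independently.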


Field-oriented controller architecture of a PMSM with PI controllers for current and speed loops.

As the previous schematic shows, field-oriented control relies on PI controllers for the speed, Iq, and Id control loops. PI controllers are simple and easy to implement but can be challenging to tune when uncertainties and external disturbances are present. Some examples are:

  • Uncertainties in motor parameters and system dynamics
  • Changes in motor parameters (resistance, inductance, back EMF, etc.) with wear, aging, and operating temperature
  • Load torque and input voltage fluctuations
  • Changes in operating region and hysteresis in motor behavior

Apart from accounting for these factors, one must also consider the need to retune controllers whenever motors are resized for an application, a process that entails significant effort. To address these challenges, advanced control algorithms can be used to design field-oriented controllers that account for these factors while improving motor control accuracy, response time, and efficiency, even in challenging environments.
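Before looking at the alternatives, it helps to see how small the PI baseline really is. The Python sketch below implements a discrete-time PI controller with output clamping and anti-windup, of the kind used in the speed and current loops above, and closes the loop around a crude first-order plant. All gains, limits, and plant coefficients are illustrative placeholders, not values from this paper.

```python
# Minimal discrete-time PI controller with output clamping and
# integrator anti-windup. Gains, limits, and the plant model are
# illustrative placeholders only.

class PIController:
    def __init__(self, kp, ki, ts, out_min, out_max):
        self.kp, self.ki, self.ts = kp, ki, ts
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0

    def update(self, reference, measurement):
        error = reference - measurement
        u = self.kp * error + self.integral   # tentative output
        if u > self.out_max:
            u = self.out_max
        elif u < self.out_min:
            u = self.out_min
        else:
            # Integrate only when unsaturated (clamping anti-windup)
            self.integral += self.ki * self.ts * error
        return u

# Drive a first-order plant (a crude stand-in for a current loop)
# toward a 1.0 A reference.
pi = PIController(kp=2.0, ki=50.0, ts=1e-4, out_min=-24.0, out_max=24.0)
x = 0.0                                  # plant state (e.g., current in A)
for _ in range(10000):                   # simulate 1 second
    u = pi.update(1.0, x)
    x += 1e-4 * (-5.0 * x + 10.0 * u)    # dx/dt = -5x + 10u, Euler step
print(f"final current: {x:.3f} A")
```

The alternatives discussed next replace or augment this simple error-driven loop with disturbance observers, online optimization, or learned policies.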

After reading this white paper, you will have an understanding of how to design field-oriented controllers with advanced control techniques. The paper discusses the appropriate tools in MATLAB® and Simulink® to use when working with the following control techniques:

  • Active disturbance rejection control (ADRC)
  • Model predictive control (MPC)
  • Reinforcement learning (RL)

The following table provides an overview of how these advanced control methods compare with each other and PID.

How does it work?

  • ADRC: Uses an extended state observer (ESO) to estimate and compensate for uncertainties and disturbances in real time
  • MPC: Uses model predictions to optimize control actions over a prediction horizon
  • RL: Learns optimal control policies directly from data through trial and error
  • PID: Computes the control signal based on proportional, integral, and derivative actions on the error signal

How does it perform in handling system nonlinearities, uncertainties, and external disturbances? (Rated graphically in the original on a scale from well to poorly.)

How easy is it to get started and get good results? (Rated graphically in the original on a scale from easy to difficult.)

Can performance be verified against standard linear metrics such as gain and phase margins?

  • ADRC: Yes
  • MPC: No
  • RL: No
  • PID: Yes

This technique can be the better alternative over PID when:

  • ADRC: Robust disturbance rejection is desired in the presence of uncertain dynamics, unknown disturbances, and varying motor parameters, without requiring a detailed system model.
  • MPC: Dealing with constraints/operating limits of motors and/or prediction-based control is needed.
  • RL: It is difficult to characterize motor dynamics and operating conditions, and learning control policies directly from data is more practical.

Comparison of advanced control methods relative to PID control.


Active Disturbance Rejection Control

Active disturbance rejection control extends PID control and offers the significant advantage of handling a wider range of uncertainties, including unknown dynamics and disturbances, while maintaining controller performance.

The algorithm uses a model approximation of known system dynamics and lumps unknown dynamics and disturbances as an extended state of the plant. An extended state observer is used to estimate this state and implement disturbance rejection control. This is achieved by reducing the effect of the estimated disturbance on the system and driving the system toward the desired behavior.
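The idea can be condensed into a few lines. The minimal first-order ADRC sketch below, written in Python for illustration, treats everything except a nominal input gain b0 as the "total disturbance," estimates it with a linear extended state observer, and cancels it in the control law. The bandwidths, plant terms, and disturbance are illustrative assumptions, not parameters from the Simulink block.

```python
import math

# First-order ADRC sketch: plant dy/dt = b0*u + f(t), where f lumps
# unknown dynamics and external disturbances. A linear ESO estimates
# y (as z1) and the total disturbance f (as z2); the control law
# cancels z2 and drives y to the reference.

ts, b0 = 1e-4, 10.0          # sample time, nominal input gain
wc, wo = 50.0, 200.0         # controller and observer bandwidths (rad/s)
kp = wc                      # proportional gain on the estimated state
l1, l2 = 2 * wo, wo**2       # ESO gains (observer poles placed at -wo)

z1 = z2 = 0.0                # ESO states: estimates of y and of f
y, r = 0.0, 1.0              # plant output and reference
for k in range(20000):       # simulate 2 seconds
    t = k * ts
    u = (kp * (r - z1) - z2) / b0          # disturbance rejection + tracking
    f = -5.0 * y + 3.0 * math.sin(20 * t)  # "unknown" dynamics + disturbance
    y += ts * (b0 * u + f)                 # plant step (Euler)
    e = y - z1                             # ESO update
    z1 += ts * (z2 + b0 * u + l1 * e)
    z2 += ts * (l2 * e)
print(f"output: {y:.3f}, estimated disturbance: {z2:.3f}")
```

Note that the controller never sees the plant term -5y or the sinusoidal disturbance explicitly; both are absorbed into the estimated extended state and canceled, which is the essence of active disturbance rejection.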


Schematic for active disturbance rejection control (ADRC).

In high-speed applications of industrial robotic arms, precise control of brushless motors that drive the robot’s joints and links is crucial to achieving accurate motion and positioning. However, the structural members of many robots exhibit small amounts of flex, introducing additional dynamics that cause undesirable oscillations or vibrations.

PID controllers may struggle to deal with these flexible dynamics and may require complex modeling and tuning to maintain stability and performance. Alternatively, ADRC is an effective solution for handling the dynamics of flexible joints and links. This is achieved by estimating and compensating for the disturbance caused by the additional dynamics in real time, without relying on an explicit model of the system.


Field-oriented control architecture for PMSM using active disturbance rejection controllers (orange).

Simulink Control Design™ provides the Active Disturbance Rejection Control block, which enables users to design the controller and test it in a system-level simulation that includes an inverter, motor, and other electrical and mechanical dynamics. Once the controller is verified in simulation, C/C++ code can be generated from this prebuilt block using Embedded Coder®. Because its memory and throughput requirements are similar to those of a PID controller, the generated ADRC code can be deployed to existing motor controller hardware. This provides a straightforward way to implement ADRC, especially for those who are new to the technique.


Active disturbance rejection controller (ADRC) architecture in Simulink for d-axis and q-axis current.

The following graph compares the speed reference tracking performance of an ADRC (blue) and a PID controller (orange). The PID gains were tuned conventionally using estimated motor parameters. The ADRC exhibits smoother transients and less overshoot than the PID controller. It also shows better disturbance rejection at 2 seconds, when the motor load steps from 5% to 50% of rated torque. Note that the simulation model used does not include d- and q-axis cross-coupling.


Comparison of speed reference tracking performance of ADRC (blue) and PID controllers (orange).

Controller type Execution time
PI controller as the current controller 13.1 μsec
ADRC controller as the current controller 14.65 μsec

Profiling results on Texas Instruments™ C2000™.


Model Predictive Control

Model predictive control is an optimization-based control technique that was first developed for use in process industries such as chemical plants and refineries during the 1980s. Since then, the advancements in microcontroller technology, digital signal processing, and optimization algorithms have made it possible to apply MPC in power electronics. As a result, the adoption of MPC is expected to increase in the coming years.


Schematic of model predictive control.

The fundamental principle of MPC involves using a mathematical prediction model to forecast the future states of the controlled system over a prediction horizon. By solving an optimization problem in real time, the controller calculates a sequence of control actions that tracks the desired reference trajectory while satisfying constraints. Only the first control action is applied to the system; the rest are discarded, and the process repeats at the next time step.
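The receding-horizon loop described above can be sketched numerically. The Python example below controls a scalar plant x[k+1] = a·x[k] + b·u[k]: at every step it solves a least-squares tracking problem over an N-step horizon, applies only the first input, and repeats. Clipping the applied input is a crude stand-in for the constrained QP that a production MPC solves, and all numbers are illustrative assumptions.

```python
import numpy as np

# Receding-horizon MPC sketch for a scalar plant x[k+1] = a*x[k] + b*u[k].
# Each step: predict N steps ahead, solve for the input sequence that
# tracks the reference, apply the first move only, repeat.

a, b = 0.95, 0.1             # plant model
N, rho = 10, 0.001           # prediction horizon, input weight
u_max = 2.0                  # actuator limit (crudely enforced by clipping)

# Prediction matrices: X = F*x0 + G*U over the horizon
F = np.array([a**(k + 1) for k in range(N)])
G = np.zeros((N, N))
for k in range(N):
    for j in range(k + 1):
        G[k, j] = a**(k - j) * b

x, r = 0.0, 1.0
for _ in range(200):
    # Minimize ||F*x + G*U - r||^2 + rho*||U||^2 in closed form
    H = G.T @ G + rho * np.eye(N)
    U = np.linalg.solve(H, G.T @ (r - F * x))
    u = float(np.clip(U[0], -u_max, u_max))   # apply the first move only
    x = a * x + b * u                         # plant advances one step
print(f"tracked output: {x:.3f}")
```

Even this toy version shows the defining behaviors: the full input sequence is recomputed every step from fresh measurements, and only its first element ever reaches the plant.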


Working principle of model predictive control.

MPC offers a significant advantage over PID for field-oriented control by explicitly handling operational limits and constraints of the motor while accounting for cross-coupling between loops. This means that the controller can consider physical limits, such as torque saturation, current and voltage limits, and rate-of-change limits. By incorporating these constraints into the optimization problem, MPC can prevent violations of these constraints while minimizing a cost function that represents the control objectives. In applications such as electric vehicle traction motor control, constraints such as motor torque limits, battery current limits, and thermal limits are crucial to ensure safe operation and prevent damage to components. PID controllers lack an explicit way to handle constraints, which may lead to undesirable tracking behavior, such as overshoot, speed or torque saturation, or instability in some cases.


Model predictive controller for inner current loop.

MPC has a preview capability, allowing it to optimize control actions based on knowledge of the future reference signal, resulting in improved responsiveness to tracking references. In contrast, PI controllers are limited to responding to current system state errors. Additionally, the integral control component in PI controllers can introduce a delay that slows the dynamic response of the control loop.

Model Predictive Control Toolbox™ simplifies the process of setting up a model predictive controller for FOC applications by providing built-in algorithms and Simulink blocks. By using the built-in MPC blocks, you can set up the inner loop of an FOC architecture. This inner-loop controller calculates the d-axis and q-axis stator voltages needed to drive the motor at the desired speed while minimizing a cost function that represents the tradeoff between control objectives.


MPC controller block (blue) in Simulink functioning as a current controller for PMSM.

You can evaluate the MPC controller performance by simulating it in a closed loop with the motor plant in MATLAB or Simulink. After the initial evaluation, you can refine the controller design by adjusting the parameters and testing different simulation scenarios.

Once the controller has been tested in a simulation, you can use Simulink Coder™ to generate C and C++ code from the MPC block to deploy on your embedded controller hardware.

Controller type Execution time
PI controller as the current controller 13.1 μsec
MPC controller as the current controller (running at 5 kHz) 134 μsec

Profiling results from Speedgoat® hardware.

Although MPC offers several advantages for field-oriented control, there are some drawbacks to consider. One of the primary challenges is the computational complexity and real-time implementation of the algorithm. MPC can be memory or computationally intensive, making it challenging to run on hardware with limited resources. Moreover, the accuracy of the prediction model is crucial for its performance, and the model may need to be updated or reidentified if there are changes to the motor or load dynamics. These factors should be taken into account when designing an MPC-based motor control system.


Reinforcement Learning

Reinforcement learning is a machine learning technique that enables a computer agent to learn how to make decisions by interacting with an environment and receiving rewards or penalties based on its actions. The objective for the agent is to learn a policy that maximizes cumulative rewards over time. This is accomplished through trial and error, with the policy being updated based on the feedback received. The learning occurs without human intervention and relies solely on the agent’s observations of the environment.
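The trial-and-error loop is easiest to see in tabular form. The toy Python example below uses Q-learning to discover, from rewards alone, that moving right along a five-state line reaches the goal. It is only a conceptual illustration; the agents used for motor control operate on continuous states and actions with neural-network policies.

```python
import random

# Tabular Q-learning sketch: the agent learns which action moves it
# toward a goal state purely from rewards, with no model of the
# environment. Toy problem: reach state 4 on a 5-state line.

random.seed(0)
n_states, n_actions = 5, 2          # actions: 0 = left, 1 = right
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for episode in range(200):
    s = 0
    while s != n_states - 1:
        # Epsilon-greedy exploration: occasionally try a random action
        if random.random() < epsilon:
            a = random.randrange(n_actions)
        else:
            a = 0 if Q[s][0] >= Q[s][1] else 1
        s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
        reward = 1.0 if s2 == n_states - 1 else -0.1   # step penalty
        # Q-learning update: move the estimate toward
        # reward + discounted best future value
        Q[s][a] += alpha * (reward + gamma * max(Q[s2]) - Q[s][a])
        s = s2

greedy = [0 if Q[s][0] >= Q[s][1] else 1 for s in range(n_states - 1)]
print(greedy)   # greedy action per state (1 = move right)
```

The update rule contains the whole method: the agent never sees the transition model, yet the reward signal propagates backward through the Q-table until the greedy policy reaches the goal from every state.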


Schematic of reinforcement learning.

Reinforcement learning provides an alternative to linear control when complex nonlinear dynamics and uncertain operating environments make it challenging to achieve satisfactory tracking performance. This is particularly useful when it is difficult to characterize the motors and their operating conditions to tune controllers.

For instance, agricultural machinery incorporating PMSMs operates in diverse and challenging environments, encountering uneven terrain, variable soil types, fluctuating moisture levels, and differing compaction. These environmental variations are difficult to characterize, which presents challenges in tuning PI-based field-oriented controllers to provide satisfactory torque tracking performance. A suitably trained reinforcement learning policy can adapt to these variations and deliver the necessary tracking performance for such applications.

Reinforcement learning offers several advantages. For instance, a single controller can be used to regulate motor speed and currents, rather than having to tune separate PID controllers for each of these loops at various operating points. Furthermore, reinforcement learning can handle multiple inputs and outputs from various sensors and actuators.

Using MATLAB and Reinforcement Learning Toolbox™, you can configure a reinforcement learning controller for field-oriented control. The toolbox provides functions and a reinforcement learning agent Simulink block for implementing reinforcement learning control, as well as built-in and custom algorithms to train the controller.


Reinforcement learning controller for the inner current loop of the field-oriented controller. The reinforcement learning controller block regulates the d-axis and q-axis currents and generates the corresponding stator voltages required to drive the motor at a specified speed.


Reinforcement learning–based PMSM current controller architecture in Simulink showing the RL agent (blue).

Once the agent is trained, you can use Embedded Coder to generate C++ code to deploy the optimal policy on embedded platforms.

Controller type Execution time
PI controller as the current controller 13.1 μsec
Reinforcement learning controller as the current controller (running at 5 kHz) 85 μsec

Profiling results with TD3 agent on Speedgoat hardware.

It should be noted that although reinforcement learning can be a powerful alternative to traditional controllers like PID controllers, it is computationally expensive and requires time and data to train the controller. It is essential to consider these tradeoffs when selecting reinforcement learning, and the decision should depend on the specific needs of the application, considering factors such as available resources, time, and data. In certain cases, combining reinforcement learning with PI controllers can be advantageous. By integrating the two approaches, a reinforcement learning agent can generate correction signals that complement the control signals from the PI controllers. This combination allows the system to handle complex, nonlinear, or unforeseen conditions that fall outside the nominal range of the PI controllers.


Conclusion

In summary, this white paper discussed alternative control strategies for field-oriented controllers in electric motors, focusing on active disturbance rejection control, model predictive control, and reinforcement learning. These advanced control techniques offer improved motor control accuracy, response time, and efficiency, even in challenging environments.

MATLAB, Simulink, and associated toolboxes provide an accessible platform to design and implement these advanced control techniques for motor control applications. However, it is essential to consider the tradeoffs of computational complexity, real-time implementation, and data requirements when selecting an appropriate control strategy for a specific application.