Hello, your help is greatly appreciated. I am trying to solve the following PDE numerically with ode45 (if possible). There are three equations in total, and I keep getting an error. Known parameters: v, D(j), kL(j), cs(j). The IC and BC are provided in the m-file. The PDE is given after the code below.
% code
% Initial, final values of independent variable
tspan = [0 7];
P = 1; F2 = 2; F3 = 3;
c_initial_1 = zeros(N+1,1);
c_initial_2 = zeros(N+1,1);
c_initial_3 = zeros(N+1,1);
if t == 0
    c_initial_2(1,2) = cs(2);
    c_initial_3(1,3) = cs(3);
    c_initial_1(1,1) = cinj;
end
[t, c, kF2, kF3, kP, n2, m2, n3, m3, q, r] = ode45(@ode, tspan, c_initial_1, c_initial_2, c_initial_3);

function [dcdt] = ode(t, c, k2, k3, kP, n2, m2, n3, m3, q, r)
global N dx dxs
dcdt = zeros(N,N);
for i = 1:N
    for j = P:F3   % j = 1:3: 1 = persulfate, 2 = Fraction 2, 3 = Fraction 3
        if j == 1
            epsilon = 1;
        else
            epsilon = 0;
        end
        dcdt(i,j) = (-v/(2*dx))*(c(i+1,j) - c(i-1,j)) + (D(j)/dxs)*(c(i+1,j) ...
            - 2*c(i,j) - c(i-1,j)) + (1-epsilon)*kL(j)*(cs(j) - c(i,j)) - ...
            (1-epsilon)*k(j)*(c(i,j)^n(j))*(c(i,1)^m(j)) - ...
            epsilon*kP*(c(i,j+epsilon) + c(i,j+2*epsilon))^q * c(i,1)^r;
        dcdt(N+2,j) = dcdt(N-1,j);
        dcdt(0,j) = dcdt(2,j);
        if t == 0
            dcdt(1,P) = cinj;
        end
    end
end
end
PDE: dc(i,j)/dt = -v*(dc(i,j)/dx) + D(j)*d/dx(dc(i,j)/dx) + (1-epsilon)*kL(j)*(cs(j) - c(i,j)) + (1-epsilon)*k(j)*c(i,j)^n(j)*c(i,1)^m(j) + epsilon*kP*[c(i,j+epsilon) + c(i,j+2*epsilon)]^r * c(i,1)^q
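For comparison, here is a minimal method-of-lines sketch of how the right-hand side could be packaged so that ode45 accepts it: ode45 returns only [t, y], extra parameters are passed through an anonymous function rather than returned, the three concentration profiles are packed into a single state vector, and the species-coupling term is written out explicitly with c(:,2)+c(:,3). This is not the original model: all numeric values are placeholders, and the boundary treatment is only one possible choice.

% Minimal sketch (not the original model): a method-of-lines setup that ode45
% will accept. All numeric parameter values below are placeholders.
N   = 50;                    % number of grid intervals
L   = 1;  dx = L/N;  dxs = dx^2;
v   = 0.1;                   % advection velocity
D   = [1e-3 1e-3 1e-3];      % dispersion coefficients D(j)
kL  = [0 0.1 0.1];           % mass-transfer coefficients kL(j)
cs  = [0 1 1];               % saturation concentrations cs(j)
k   = [0 0.05 0.05];  n = [1 1 1];  m = [1 1 1];   % reaction parameters
kP  = 0.02;  q = 1;  r = 1;
cinj = 1;                    % injected persulfate concentration

c0      = zeros(N+1, 3);     % one column per species (1 = persulfate, 2-3 = fractions)
c0(1,1) = cinj;  c0(1,2) = cs(2);  c0(1,3) = cs(3);   % inlet initial conditions

% ode45 returns only [t, y]; parameters go in through an anonymous function,
% and the three profiles are packed into a single state vector.
odefun = @(t, y) rhs(t, y, N, dx, dxs, v, D, kL, cs, k, n, m, kP, q, r);
[t, y] = ode45(odefun, [0 7], c0(:));
cFinal = reshape(y(end,:), N+1, 3);      % concentration profiles at the final time

function dydt = rhs(~, y, N, dx, dxs, v, D, kL, cs, k, n, m, kP, q, r)
c    = reshape(y, N+1, 3);               % unpack the state vector into (N+1)-by-3
dcdt = zeros(N+1, 3);
for j = 1:3
    eps_j = (j == 1);                    % 1 for persulfate, 0 for the fractions
    for i = 2:N                          % interior grid points only (valid indices)
        adv = -v/(2*dx) * (c(i+1,j) - c(i-1,j));
        dif = D(j)/dxs  * (c(i+1,j) - 2*c(i,j) + c(i-1,j));
        src = (1-eps_j) * kL(j) * (cs(j) - c(i,j));
        rxn = (1-eps_j) * k(j) * c(i,j)^n(j) * c(i,1)^m(j);
        snk = eps_j * kP * (c(i,2) + c(i,3))^q * c(i,1)^r;
        dcdt(i,j) = adv + dif + src - rxn - snk;
    end
    dcdt(1,j)   = 0;                     % hold the inlet value (Dirichlet-type BC)
    dcdt(N+1,j) = dcdt(N,j);             % crude outflow condition; adjust to your BCs
end
dydt = dcdt(:);                          % repack as a column vector for ode45
end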
Good day everyone,
I'm trying to simulate a single-phase transformer using Simulink. I've got the following values: R1 = 3 Ohm, R2 = 0.03 Ohm, X1 = 6.5 Ohm, X2 = 0.07 Ohm, Rc = 100 kOhm, Xm = 15 kOhm, f = 60 Hz, Uprim = 2400 V, Usec = 240 V, S = 29 kVA, cos Phi = 0.8.
And I've made the following calculations: L1 = X1/(2*pi*f) = 17.24 mH, L2 = X2/(2*pi*f) = 185.68 uH, Lm = Xm/(2*pi*f) = 39.79 H, S = Urms*Irms => Irms = 120.83 A, P = Urms*Irms*cos Phi = 23200 W, Q = Urms*Irms*sin Phi = 17400 VAr. Q > 0, so the load is ohmic-inductive => QL = 17400 VAr, QC = 0.
I've built the circuit and simulated it, but somehow my secondary output voltage is only 225 Vrms. Can someone explain why that is the case? Did I do something wrong in my calculations or in my simulation?
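For reference, the hand calculations above can be reproduced in MATLAB; these are just the numbers from the post, using L = X/(2*pi*f), Irms = S/Urms, P = S*cos(phi), and Q = S*sin(phi) on the secondary side:

% Reproduce the hand calculations from the post
f  = 60;                          % Hz
X1 = 6.5;  X2 = 0.07;  Xm = 15e3; % Ohm
S  = 29e3; Usec = 240; cosPhi = 0.8;

L1   = X1/(2*pi*f)                % 17.24 mH
L2   = X2/(2*pi*f)                % 185.68 uH
Lm   = Xm/(2*pi*f)                % 39.79 H
Irms = S/Usec                     % 120.83 A
P    = S*cosPhi                   % 23200 W
Q    = S*sin(acos(cosPhi))        % 17400 VAr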
I am confused about how to run this as a closed loop. How do I do closed-loop control of the phase shift using MATLAB? I am using a C2000 embedded target with MATLAB.
Hi, I am intrigued by the idea of using the SimBiology stochastic solvers for a project that I have so far coded in the idnlgrey framework.
Some of the ODE right-hand sides (ionic fluxes) in my model are given via fit objects.
My question is: can I use a fit object in SimBiology? Or should I figure out an analytical form and use it as a custom reaction rate?
Thanks, Francesco
On using SimBiology, I am realizing how wonderful it is! It is equivalent to, and in fact better than, much of the commercial software available on the market for hefty prices (not to name anyone on purpose). Moreover, the product is backed by world leaders in software engineering, the makers of MATLAB (which gives more confidence in the product). It would be very helpful if someone could share a list of, or some of, the PubMed-indexed publications on population pharmacokinetics in which SimBiology is utilized for modeling and computation.
Identification of the model (one-compartment, two-compartment, or three-compartment) that a drug follows is an important step before population pharmacokinetic modeling. I am aware that the graph of concentration vs. time gives an idea of the number of compartments a drug follows.
But is there a standard way to explore and determine the number of compartments a drug follows in a more objective manner? This would also be helpful for determining the model to which the data should be fit. In addition, a note on determining the order of the reaction is also welcome and would make the discussion complete.
Fulden and I will have a booth at PAGE next week. Come and chat with us to learn the latest with MATLAB and SimBiology for PK/PD, PBPK, and QSP modeling.
I use SimBiology for population PK-PD model development. During model fitting, I understand that the model diagnostics play a major decisive role in selecting the suitable model. Hence I would like to clarify the interpretation of the model diagnostics.
For example, suppose I have two models. First model: DFE = 411, LogLikelihood = -807.6, AIC = 1633.2, BIC = 1647.2, RMSE = 1.92. Second model: DFE = 410, LogLikelihood = -888.8, AIC = 1797.6, BIC = 1813.2, RMSE = 0.34. Which of the two models is better, and why? What are the individual interpretations of DFE, LogLikelihood, AIC, BIC, and RMSE?
In PK-PD research papers, the objective function value is generally taken as the decisive model diagnostic. What is the objective function value in SimBiology? I did some literature searching and found that the objective function value is -2 times the LogLikelihood value. So should I multiply the LogLikelihood value given by SimBiology by -2 to obtain the objective function value? Moreover, if the LogLikelihood value is multiplied by -2, then the entire interpretation changes (as the minus sign reverses the direction). Please guide me in this regard and give your valuable inputs.
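For reference, using the commonly quoted definitions (OFV = -2*LogLikelihood, AIC = -2*LogLikelihood + 2*k, BIC = -2*LogLikelihood + k*ln(n), with k the number of estimated parameters and n the number of observations), the reported values can be related in a couple of lines. Whether a given tool reports exactly these quantities should be checked against its documentation; the AIC/BIC lines are left as comments because k and n depend on the specific fit:

% Relating the reported log-likelihoods to an objective function value
logL = [-807.6 -888.8];       % values reported for the two models
OFV  = -2*logL                % gives 1615.2 and 1777.6 (smaller is better)
% AIC = -2*logL + 2*k;        % k = number of estimated parameters
% BIC = -2*logL + k*log(n);   % n = number of observations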
Hello everyone,
I am writing to ask for your help, if possible. I am currently working on a project that involves image processing.
I use a thermal camera together with a piece of software. With this software, I record a 4-second film; in other words, I have a video made up of 100 images.
Each image comprises 90 pixels, so the pixel values are arranged as a matrix (10x9). That means that for the 100 images, I have 100 matrices.
The idea is to obtain an average image, i.e. the average of the 100 matrices.
The problem is that I currently do the processing with this software, while for the rest of the work I use MATLAB.
So I need to write a MATLAB program that reads the file produced by this image-processing software and automatically returns the average of these 100 matrices.
I have attached an example of this file.
So I need some pointers on how to program this.
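A minimal sketch of the averaging step, assuming the software exports the 100 frames as a plain-text/CSV file with the 10x9 matrices stacked vertically (1000 rows by 9 columns); the read step and the file name 'thermal_export.csv' are assumptions and will need to be adapted to the actual export format of the attached file:

% Minimal sketch: the read step depends on the actual export format.
% Here we assume a text/CSV export with the 100 frames of size 10x9
% stacked vertically (so the file has 100*10 rows and 9 columns).
data = readmatrix('thermal_export.csv');          % hypothetical file name

nRows = 10;  nCols = 9;  nFrames = 100;
frames = reshape(data.', nCols, nRows, nFrames);  % split into individual frames
frames = permute(frames, [2 1 3]);                % each page is one 10x9 image

meanImage = mean(frames, 3);                      % average over the 100 frames

imagesc(meanImage); colorbar                      % quick visual check
title('Average of the 100 thermal frames')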
Thank you in advance.
Best regards,
Hi, I would like to improve my skills in PK/PD modelling using SimBiology. Are you going to organize any courses in the near future? Many thanks, Anas
Hi, in my circuit I have an element for which I don't have an equivalent circuit. What I do have is a lookup table of its frequency response (magnitude and phase), which is complicated and cannot be represented by a lumped-parameter equivalent circuit; some of its elements are also frequency-dependent. Even if I somehow obtain its transfer function in Laplace form (using, e.g., the System Identification Toolbox), I still want to add it to my circuit simulation (Simscape Electrical) as a block.
Does anyone have an idea how to do it? Thank you!
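One possible route, assuming the System Identification Toolbox is available: build a frequency-response data object from the lookup table and fit a rational transfer function to it, which could then be implemented with a Laplace-domain transfer function block in the simulation. The variable names (freqHz, magdB, phaseDeg) and the model orders below are assumptions, not from the original post:

% Minimal sketch (assuming the lookup table gives magnitude in dB and phase in
% degrees at frequencies freqHz): fit a rational transfer function to the data.
mag  = 10.^(magdB/20);                    % convert dB to linear gain
resp = mag .* exp(1j*deg2rad(phaseDeg));  % complex frequency response
data = idfrd(resp, 2*pi*freqHz, 0);       % frequency-response data, continuous time

np = 4;  nz = 3;                          % model orders: tune to the data
sys = tfest(data, np, nz);                % System Identification Toolbox fit

compare(data, sys)                        % check the quality of the fit
tf(sys)                                   % coefficients for a transfer function block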
We've been hearing from more and more customers who are interested in using Xilinx's new Zynq UltraScale+ devices in power electronics control applications. The attraction appears to be the dual-core ARM Cortex-R5 processors, which are well suited to hard real-time applications. Are you looking at Zynq UltraScale+ MPSoCs as a platform for power electronics control? If so, we'd love to hear from you as we look at support for these devices.
In the meantime, MathWorks offers a reference design example for FOC motor control on Zynq-7000 devices that many customers have used as a basis for developing Simulink models for C and HDL code generation.
Hi guys, could you please help me explain what is happening in these 3 graphs? I am an electrical engineering student and, to be honest, I am not a smart student, but I'm willing to learn. Please do help me. In my simulation I managed to generate a perfect sine wave, but I just couldn't explain the graphs I simulated.
With the need for higher sampling frequencies, power electronics control engineers are moving some of their controller implementations to FPGAs or FPGA-based SoCs. Besides the use of wide-bandgap semiconductors (GaN and SiC), what other reasons are driving the need for higher controller sampling frequencies? Let us know your thoughts.
If you have not seen this yet, in Release 2018b we added several examples to Simulink Control Design that show how to use this product to tune the gains of field-oriented controllers.
The first two examples make use of the Closed-Loop PID Autotuner block. We show how to use this block to tune multiple loops in the motor control system, one loop at a time.
One of the examples shows tuning the controller gains for a PMSM:
Tune Field-Oriented Controllers Using Closed-Loop PID Autotuner Block
The other example shows how to tune four loops for an asynchronous machine (induction motor):
Tune Field-Oriented Controllers for an Asynchronous Machine Using Closed-Loop PID Autotuner Block
This approach works well when you have initial gains that provide a stable response and you want to fine-tune the controller to improve performance.
What do you do when you start with a new design and need to design your controller from scratch? That is what the third example shows. Here we design all 3 loops (id, iq, speed) for a PMSM by running an AC sweep to compute a frequency response, then identifying a state-space model using System Identification Toolbox, and finally tuning all 3 loops simultaneously to achieve the desired performance.
Check it out here:
Tune Field-Oriented Controllers Using SYSTUNE
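For anyone who wants to experiment with that third workflow outside the shipped example, here is a rough single-loop sketch under stated assumptions: the AC sweep has already produced a complex frequency response resp at frequencies w (in rad/s), the model order and response-time target are illustrative, and the actual example tunes all three loops rather than one:

% Rough sketch of the identify-then-tune workflow (illustrative values only)
Gdata = idfrd(resp, w, 0);            % frequency-response data from the AC sweep
Gid   = ssest(Gdata, 4);              % identify a 4th-order state-space model
G     = ss(Gid);                      % convert to a numeric LTI model

C0 = tunablePID('C', 'pi');           % tunable PI controller
T0 = feedback(G*C0, 1);               % closed loop with unit feedback
T0.InputName  = 'r';
T0.OutputName = 'y';

Req = TuningGoal.Tracking('r', 'y', 0.01);   % track the reference in ~10 ms
T   = systune(T0, Req);               % tune the PI gains with systune
showTunable(T)                        % display the tuned gains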
What do you think about these examples?
Share your opinion.
Arkadiy
On Wednesday, April 17, 12-1 PM EDT, Dr. Ing. Markus Rehberg, QSP Scientist at Sanofi in Frankfurt (Germany) will show how Sanofi and Rosa & Co created a QSP model for Rheumatoid Arthritis, using SimBiology, that transformed the way Sanofi uses and implements data in drug research and early development.
I invite you to register for the webinar, and afterward let me know what you think: https://register.gotowebinar.com/register/325575685872200717
What is it?
SimFunction allows you to perform multiple simulations in a single line of code by providing an interface to execute SimBiology® models like a regular MATLAB function.
Consider the following analogy: if you want to calculate the value of the sine function at multiple times defined in the variable t, you use the following syntax:
>> y = sin(t)
If mymodel represents a SimFunction, you can simulate your model with multiple parameter sets using the following syntax:
>> simulationData = mymodel(parameterValues, stopTime, dose)
What is it good for?
Multiple simulations
Because it allows you to perform multiple simulations in a single line of code by providing a matrix of parameter values, a set of variants, or a cell array of dosing tables, it is particularly suited for:
- parameter and dose scans
- Monte Carlo simulations
- customized analyses that require multiple model simulations such as a customized optimization
Performance
SimFunctions are optimized for performance: they are automatically accelerated at the first function execution, which converts the model into compiled C code. If Parallel Computing Toolbox™ is available, the simulations can be distributed to multiple cores or to a cluster and run in parallel, either through the built-in parallelization or within a parfor loop.
Simulation deployment
Since SimFunction objects cannot be changed once created, they can be shared with others without the risk of altering the model inadvertently.
Also, you can use SimFunctions to integrate a SimBiology model into a customized MATLAB App and compile it as a standalone application to share with anyone without the need for a MATLAB license.
How does it work?
Create a SimFunction object using the createSimFunction method by choosing:
- which parameters it should take as inputs
- which targets will be dosed
- which model quantities it should return
- which sensitivities it should return if any
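As a minimal sketch (the project file, parameter names, and species names below are hypothetical, so adapt them to your own model):

% Minimal sketch: names of the project, parameters, and species are hypothetical
sbioloadproject('mymodel.sbproj', 'm1');                    % load a SimBiology model
f = createSimFunction(m1, ...
        {'Central.Cl', 'Central.V'}, ...                    % parameters to vary (inputs)
        {'Central.Drug'}, ...                               % observables to return
        'Central.Drug');                                    % dosed species

phi  = [1.0 2.0; 1.5 2.5; 2.0 3.0];                         % 3 parameter sets, one row each
dose = table(0, 100, 'VariableNames', {'Time','Amount'});   % single bolus at t = 0
sd   = f(phi, 24, repmat({dose}, 3, 1));                    % same dose for all 3 simulations
sbioplot(sd)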

Have a look at the following example from the SimBiology documentation for an executable script to help you get started: Perform a Parameter Scan.
We are often asked to help with parameter estimation problems. This discussion aims to provide guidance for parameter estimation, as well as troubleshooting some of the more common failure modes. We welcome your thoughts and advice in the comments below.
Guidance and Best Practices:
1. Make sure your data is formatted correctly. Your data should have:
- a time column (defined as independent variable) that is monotonically increasing within every grouping variable,
- one or more concentration columns (dependent variable),
- one or more dose columns (with associated rate, if applicable) if you want your model to be perturbed by doses,
- optionally, a column with a grouping variable.
Note: the dose column should only have entries at time points where a dose is administered. At time points where the dose is not administered, there should be no entry. When importing your data, MATLAB/SimBiology will replace empty cells with NaNs. Similarly, the concentration column should only have entries where measurements have been acquired and should be left empty otherwise.
If you import your data first in MATLAB, you can manipulate your data into the right format using the table datatype and its methods such as sortrows, join, innerjoin, outerjoin, stack, and unstack. You can then add the data to SimBiology by using the 'Import data from MATLAB workspace' functionality.
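For illustration, a small dataset in the format described above could be built like this (the numbers are made up):

% Illustrative example of a correctly formatted dataset (made-up numbers)
Time  = [0; 1; 2; 4; 0; 1; 2; 4];                   % monotonically increasing per group
Conc  = [NaN; 1.8; 1.2; 0.5; NaN; 2.1; 1.4; 0.6];   % NaN (empty) where not measured
Dose  = [100; NaN; NaN; NaN; 100; NaN; NaN; NaN];   % entry only when a dose is given
Group = [1; 1; 1; 1; 2; 2; 2; 2];
data  = table(Group, Time, Conc, Dose);
data  = sortrows(data, {'Group', 'Time'});          % sort time within each group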
2. Visually inspect data and model response. Create a simulation task in the SimBiology desktop where you plot your data (plot external data) together with your model response. You can create sliders for the parameters you are trying to estimate (or use group simulation). You can then see whether, by varying these parameter values, you can bring the model response in line with your data, which at the same time gives you good initial estimates for those parameters. This plot can also indicate whether units might cause a discrepancy between your simulations and data, and/or whether doses administered to the model are configured correctly and result in a model response.
3. Determine the sensitivity of your model response to model parameters. The previous step can be considered a manual sensitivity analysis. There is also a more systematic way of performing such an analysis: a global or local sensitivity analysis can be used to determine how sensitive your responses are to the parameters you are trying to estimate. If a model is not sensitive to a parameter, the parameter's value may change significantly without leading to a significant change in the model response. As a result, the value of the objective function is not sensitive to changes in that parameter value, which makes it hard to estimate the parameter's value effectively.
4. Choose an optimization algorithm. SimBiology supports a range of optimization algorithms, depending on the toolboxes you have installed. As a default, we would recommend using lsqnonlin if you have access to the Optimization Toolbox. See the troubleshooting section below for more considerations when choosing an appropriate optimization algorithm.
5. Map your data to your model components: Make sure the columns for your dependent variable(s) and dose(s) are mapped to the corresponding component(s) in your model.
6. Start small: bring the estimation task down to the smallest meaningful objective. If you want to estimate 10 parameters, try to start with estimating one or two instead. This will make troubleshooting easier. Once your estimation is set up properly with a few parameters, you can increase the number of parameters.
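Putting the steps above together, a minimal programmatic sketch might look like the following; the model object m1 and the quantity names are hypothetical, and the table "data" is assumed to be formatted as in step 1:

% Minimal programmatic fit (model and quantity names are hypothetical)
gd = groupedData(data);
gd.Properties.IndependentVariableName = 'Time';
gd.Properties.GroupVariableName       = 'Group';

responseMap = {'Central.Drug = Conc'};                    % map model quantity to data column
estimated   = estimatedInfo({'log(Central.Cl)', 'log(Central.V)'});  % start with 1-2 parameters
doses       = createDoses(gd, 'Dose');                    % build dose objects from the Dose column

fitResults  = sbiofit(m1, gd, responseMap, estimated, doses, 'lsqnonlin');
sbioplot(fitted(fitResults))                              % quick check of fit vs. data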
Troubleshooting
1. Are you trying to estimate a parameter that is governed by a rule? You can’t estimate parameters that are the subject of a rule (initial/repeated assignment, algebraic rule, rate rule), as the rule would supersede the value of the parameter you are trying to estimate. See this topic.
2. Is the optimization using the correct initial conditions and parameter values? Check whether, for the fit task, the parameter values and initial conditions that are used for the model make sense. You can do this by passing the relevant dose(s) and variant(s) to the getequations function. In the SimBiology App, you can look at your equation view (when you have your model open, in the Model tab, click Open -> Equations). Subsequently, in the Model tab, click "Show Tasks", select your fit task, and inspect the initial conditions for your parameters and species. A typical example is dosing a species while ka (the absorption rate) is set to zero. In that case, your dose will not transfer into the model and you will not see a model response.
3. Are your units consistent between your data and your model? You can use unit conversion to automatically achieve this.
4. Have you checked your solver tolerances? The absolute and relative tolerance of your solver determine how accurate your model simulation is. If a state in your model is on the order of 1e-9 but your tolerances only allow you to calculate this state with an accuracy down to 1e-8, your state will practically represent a random error around 1e-8. This is especially relevant if your data is on an order that is lower than your solver tolerances. In that case, your objective function will only pick up the solver error, rather than the true model response and will not be able to effectively estimate parameters. When you plot your data and model response together and by using a log-scale on the y-axis (right-click on your Live plot, select Properties, select Axes Properties, select Log scale under “Y-axis”) you can also see whether your ODE solver tolerances are sufficiently small to accurately compute model responses at the order of magnitude of your data. A give-away that this is not the case is when your model response appears to randomly vary as it bottoms-out around absolute solver tolerance.

Figure: tolerances too loose to simulate at the order of magnitude of the data (AbsoluteTolerance = 0.001, RelativeTolerance = 0.01).

Figure: sufficiently tight tolerances (AbsoluteTolerance = 1e-8, RelativeTolerance = 1e-5).
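The same tolerances can also be tightened from the command line, for example (the values shown are simply those from the figure caption above):

% Tighten the solver tolerances programmatically
cs = getconfigset(m1);                 % m1 = your SimBiology model object
cs.SolverOptions.AbsoluteTolerance = 1e-8;
cs.SolverOptions.RelativeTolerance = 1e-5;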
5. Have you checked the tolerances and stopping criteria of your optimization algorithm? The goal for your optimization should be that it terminates because it meets the imposed tolerances rather than because it exceeds the maximum number of iterations. Optimization algorithms terminate the estimation based on tolerances and stopping criteria. An example of a tolerance here is that you specify the precision with which you want to estimate a certain parameter, e.g. Cl with a precision down to 0.1 ml/hour. If these tolerances and stopping criteria are not set properly, your optimization could terminate early (leading to loss of precision in the estimation) or late (leading to unnecessarily long optimization compute times).
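If you are fitting programmatically with lsqnonlin, the stopping criteria can be adjusted through an options object and passed to sbiofit; the tolerance values below are illustrative, and m1, gd, responseMap, estimated, and doses are the hypothetical objects from the earlier sketch:

% Adjust lsqnonlin stopping criteria and pass them to sbiofit (illustrative values)
opts = optimoptions('lsqnonlin', ...
    'FunctionTolerance', 1e-8, ...
    'StepTolerance',     1e-8, ...
    'MaxIterations',     1000);
fitResults = sbiofit(m1, gd, responseMap, estimated, doses, 'lsqnonlin', opts);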
6. Have you considered structural and practical identifiability of your parameters? In your model, there might exist values for two (or more) parameters that result in a very similar model response. When estimating these parameters, the objective function will be very similar for these two parameters, resulting in the optimization algorithm not being able to find a unique set of parameter estimates. This effect is sometimes called aliasing and is a structural identifiability problem. An example would be if you have parallel enzymatic (Km, Vm) and linear clearance (Cl) routes. Practical identifiability occurs when there is not enough data available to sufficiently constrain the parameters you are estimating. An example is estimating the intercompartmental clearance (Q12), when you only have data on the central compartment of a two-compartment model. Another example would be that your data does not capture the process you are trying to estimate, e.g. you don’t have data on the absorption phase but are trying to estimate the absorption constant (Ka).
7. Have you considered trying another optimization algorithm? SimBiology supports a range of optimization algorithms, depending on the toolboxes you have installed. There is no single answer as to which algorithm you should use but some general guidelines can help in selecting the best algorithm.
- Non-linear regression: If your aim is to estimate parameters for each group in your dataset (unpooled) or for all groups together (pooled), you can use non-linear regression estimation methods. The optimization algorithms can be broken down into local and global optimization algorithms. You can use a local optimization algorithm when you have good initial estimates for the parameters you are trying to estimate. Each of the local optimization functions has a different default optimization algorithm: fminsearch (Nelder-Mead/downhill simplex search method), fmincon (interior-point), fminunc (quasi-Newton), nlinfit (Levenberg-Marquardt), lsqcurvefit and lsqnonlin (both trust-region-reflective). As a default, we would recommend using lsqnonlin if you have access to the Optimization Toolbox. Note that all but fminsearch are gradient-based. If a gradient-based algorithm fails to find suitable estimates, you can try fminsearch and see whether that improves the optimization. All local optimization algorithms can get "stuck" in a local minimum of the objective function and might therefore fail to reach the true minimum. Global optimization algorithms are designed to find the absolute minimum of the objective function. You can use global optimization algorithms when your fitting task results in different parameter estimates when repeated with different initial values (in other words, your optimization is getting stuck in local minima). You are more likely to encounter this as you increase the number of parameters you are estimating, as you increase the parameter space you are exploring (in other words, the bounds you are imposing on your estimates), and when you have poor initial estimates (in other words, your initial estimates are potentially very far from the estimates that correspond to the minimum of the objective function). A disadvantage of global optimization algorithms is that they are much more computationally expensive: they often take significantly more time to converge than the local optimization methods do. When using global optimization methods, we recommend using SimBiology's built-in scattersearch algorithm, combined with lsqnonlin as a local solver. If you have access to the Global Optimization Toolbox, you can try the functions ga (genetic algorithm), patternsearch, and particleswarm. Note that some of the global optimization algorithms, including scattersearch, lend themselves well to acceleration using parallel or distributed computing.
- Estimate category-specific parameters: If you want to estimate category-specific parameters for multiple subjects, e.g. you have 10 male and 10 female subjects in your dataset and you want to estimate a separate clearance value for each gender while all other parameters will be gender-independent, you can also use non-linear regression. Please refer to this example in the documentation.
- Non-linear mixed effects: If your data represents a population of individuals where you think there could be significant inter-individual variability you can use mixed effects modeling to estimate the fixed and random effects present in your population, while also understanding covariance between different parameters you are trying to estimate. When performing mixed effects estimation, it is advisable to perform fixed effects estimation in order to obtain reasonable initial estimates for the mixed effects estimation. SimBiology supports two estimation functions: nlmefit (LME, RELME, FO or FOCE algorithms), and nlmefitsa (Stochastic Approximation Expectation-Maximization). Sometimes, these solvers might seem to struggle to converge. In that case, it is worthwhile determining whether your (objective) function tolerance is set too low and increasing the tolerance somewhat.
8. Does your optimization get stuck? Sometimes the optimization algorithm can get stuck at a certain iteration. For a particular iteration, the parameter values that the model is simulated with as part of the optimization process can cause the model to be in a state where the ODE solver needs to take very small time steps to achieve the tolerances (e.g. very rapid changes of the model responses). Solutions can include: changing your initial estimates, imposing lower and upper bounds on the parameters you are trying to estimate, switching to a different solver, or easing solver tolerances (only where possible; see also "Visually inspect data and model response").
9. Are you using the proportional error model? The objective function for the proportional error model contains a term where your response data is part of the denominator. As response variables get close to zero or are exactly zero, this effectively means the objective function contains one or more terms that divide by zero, causing errors or at least very slow iterations of your optimization algorithm. You can try to change the error model to constant or combined to circumvent this problem. Alternatively, you can define separate error models for each response: proportional for those responses that don’t have measurements that contain values close to zero and a constant error model for those responses that do.
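Programmatically, the error model can be switched with the ErrorModel name-value pair of sbiofit, reusing the hypothetical objects from the earlier sketch; a combined error model is shown here, and since the text above mentions response-specific error models, a cell array with one entry per response can be used for that case:

% Switch the error model used by sbiofit ('combined' shown; use a cell array
% such as {'proportional','constant'} for response-specific error models)
fitResults = sbiofit(m1, gd, responseMap, estimated, doses, 'lsqnonlin', ...
                     'ErrorModel', 'combined');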