Model-Driven Design in Engineering and Simulation
This is a short popular-science article introducing one of my fields of interest: the transformation of high-level model descriptions into program code. The focus here is on classic manual transformation, but automatic transformation performed by special code generators is an important field as well, and I will give a short outlook at the end of the article. This article should give you an idea of what I mean by a high-level abstraction of modelling, its benefits, and, up to now, its drawbacks. This field is not of pure academic interest: it is the basis of modern simulation tools whose goal is to give their users a good mixture of convenience, rapid development, and high performance.
The Modelling Process
"Simulation is the process of designing a model of a real system and conducting experiments with this model for the purpose either of understanding the behavior of the system and its underlying causes or of evaluating various designs of an artificial system or strategies for the operation of the system."
by Robert E. Shannon; Systems Simulation: The Art and Science (1975)
As Shannon's quote indicates, it is in general a real system we would like to simulate, but only a model of this system that we are actually able to simulate. So when we enhance our knowledge by simulation, we mainly enhance our knowledge about the behavior of the model. Which of the achieved conclusions can be transferred to the real system itself is a separate step. First of all we deal with the model.
It is a long way from the need to simulate a real or mental system to the source code that finally forms the basis of the executed software. The different levels of models are illustrated in the thumbnail sketch on the right. I have to apologize for the simplification: many aspects, e.g. friction, are not really modelled on the mathematical level and below, because the goal of this picture is only to illustrate the different levels. The levels are:
- Real or Mental System
- Physical Model
- Mathematical Model
- Numerical Model
- Implementation or Code Generation
On every level it is possible to make decisions, which of course influence the results.
The Physical Model
On the top there is a system that ought to be simulated. Next there is the physical model. Here I use the phrase "physical model" for the part of the modelling process where the engineer mainly uses pen and paper. In this first step a lot of decisions are made. Often this is the time for the main simplifications and assumptions that will make the difference between the real system and the simulated model. If we think of a simple pendulum, the modeler can decide that a mathematical pendulum is sufficient for his purpose. This means that the whole mass is concentrated in a weight hanging at the end of a massless cord suspended from a pivot. Beyond this, no friction is modelled in this system. That is a very simple model. Let us keep this simple example for the rest of the text.
The Mathematical Model
When the modeler has fixed his ideas, he will start looking for a mathematical expression of his physical model. The mathematical expression is, in contrast to a quite common presumption, not unique. For our pendulum one can describe the equation of motion using, e.g., a Lagrange- or a Newton-based approach. Let us assume that he uses the well-known Newton-based approach, which leads him to the equation

    phi'' + (g/l)*sin(phi) = 0

where phi is the deflection angle, g the gravitational acceleration and l the length of the cord.
So we first make a decision about the theoretical approach we are going to use. Both will transform our physical model into a mathematical model without further simplifications and assumptions, but this decision will have implications when it comes to the numerical model. Nevertheless, we can make further, often quite useful, simplifications and assumptions on the mathematical level as well. For example, we can assume that the maximum deflection of our pendulum will be very small. In this case it is justified to replace sin with its low-order Taylor approximation:

    sin(phi) ≈ phi,    which yields    phi'' + (g/l)*phi = 0.
Now we have got a mathematical model. If the modeler has chosen the linear version, he will be able to solve this equation analytically, e.g. with the ansatz

    phi(t) = phi_0 * cos(omega*t),    omega = sqrt(g/l)

for a pendulum released from rest at the angle phi_0.
The Numerical Model and the Code Generation
But the chance to solve a real-life problem analytically is very rare. In most practical cases, like for example the non-linear version of the pendulum, the analytical approach is useless. So in these cases a numerical model is needed. If we want to transform the mathematical model into a numerical model, we again have to make some decisions. We can choose between implicit and explicit solvers, Runge-Kutta based approaches or multistep methods and so on. A very simple solver is the explicit Euler method. In this case we just replace the derivatives with difference quotients, which yields the update rule

    phi_(n+1) = phi_n + deltat * v_n
    v_(n+1)   = v_n - deltat * (g/l) * sin(phi_n).
#include <stdio.h>
#include <math.h>

int main(void)
{
    double g = 9.81, l = 1, t = 0;
    double v_old = 0, phi_old = 2;
    double v_new = 0, phi_new = 2;
    double t_end = 10, deltat = 0.001;
    int i, steps = (int)(t_end / deltat);
    for (i = 0; i < steps; i++) {
        phi_new = phi_old + deltat * v_old;               /* Euler step for the angle */
        v_new = v_old + deltat * (-g / l * sin(phi_old)); /* Euler step for the angular velocity */
        v_old = v_new;
        phi_old = phi_new;
        t += deltat;
        if (i % 10 == 0)
            printf("phi_new(%e) : %e\n", t, phi_new);
    }
    return 0;
}
This numerical algorithm is now the starting point for the software development. The code can be written by a human programmer, or it can be generated by a program from a graphical description (e.g. Simulink) or a textual description (e.g. Modelica). Independent of the programming language, in almost all cases we will have to face effects arising from the use of floating-point instead of real numbers. A very simple example of an implementation is displayed as C source code on the right.

If you execute this code, you will realize that the results computed by this program are not the same as the ones you would measure with a real pendulum. For example, you can read the following output on the command line: phi_new(8) = 2.033... Because 2.03 is bigger than the initial deflection of 2, which violates the conservation of energy, you know that something went terribly wrong.

Now it would be important to know where this error has its cause, if it has a single cause. We can take for granted that it is not the physical model, because the conservation of energy is valid in this model. We did not make any simplifications in the mathematical model, so we can skip that as well. What is left are the numerical model and the implementation. If we decrease the variable deltat to 0.0001, we see that phi_new(8) = 2.006... now, which is much better. So the error has its origin in the numerical model. If we need a higher accuracy, we have to decrease deltat or switch from the very simple Euler algorithm to a more advanced one.
If a modeler has deep knowledge on all levels of modelling, debugging is comparatively easy. So general knowledge of all levels is always a good idea. Nevertheless, especially the last step, the coding, is error-prone and generates high costs because of the time spent developing the software.
Now there are very different approaches to reducing these costs. A very straightforward idea is to use sophisticated, well-tested libraries like LAPACK, FFTW or the GNU Scientific Library instead of coding these algorithms from scratch. That lowers development time and usually improves both reliability and performance.
Model Based Design and Simulation
Alternatively, the modeler or engineer can use a tool like Simulink to generate code based on the mathematical model, e.g. using the Real-Time Workshop. This allows him to model on the mathematical level of abstraction and generate C code. If the code is generated from an abstract model, this is called "Model-Based Design" or sometimes "model-driven engineering". Working with the mathematical model makes it easier to switch between different hardware platforms, so we can generate code for PC hardware as well as for special embedded hardware. In this case, assuming that the tool that generates the code is bug-free, errors are limited to the mathematical and the physical model. Most of the numerical aspects are handled by Simulink, and the code generation is performed by the Real-Time Workshop. But it is an illusion to hope that no knowledge of numerical mathematics is required any more. For example, for choosing the right integrator or interpreting the computed results you still need skills in numerical simulation; maybe different ones, more a general overview than very special details.
There's no such thing as a free lunch
The next important question concerns performance. Can generated code be as efficient as, or even more efficient than, code written by a human programmer? Of course the answer depends on the skill of the human programmer and the quality of the generating software, but beyond this it very much depends on the level of abstraction. The higher the abstraction, the harder the task for the generating software. This is natural, because a higher level of abstraction means more degrees of freedom for the software that generates the code. Generally we can expect that a higher level of abstraction leads, more or less, to a loss of performance. This is a drawback, but of course there are also a lot of benefits in a higher level of abstraction. If you think of programming languages, e.g. assembly language compared to C++, this becomes clear.
To sum up, we have a trade-off between the level of abstraction and the achievable performance. A general code generator will usually not reach the performance of an expert working at the lowest abstraction level; the expert will achieve much better performance, but generally it will also take a lot more working hours.
Beyond tools like Simulink, which work almost solely on the mathematical level, there are approaches to model more on the physical level. The goal of physical modelling is to establish a level above the mathematical abstraction and without a limited domain, in contrast to e.g. tools purely focused on multi-body simulation. It should be possible for the engineer to use the work done in the phase of physical modelling directly as input for the simulation engine, without caring too much about the lower levels. This could speed up the modelling process itself. Another problem of working on a low abstraction level arises when it comes to quality control and new requirements: on a low abstraction level it is harder to react quickly to new requirements. This is especially a problem when remembering that simulation is often used in the context of prototyping, where changes are not the exception but the standard. Another point are errors in the program code, as mentioned above.

Nevertheless, I do not know any tool or software that really operates on the physical level as I understand physical modelling. Approaches like SimScape or Modelica try to focus on the physical level, but in my humble opinion this is not pure physical modelling, because the physical description is firmly connected with the mathematical description. As mentioned above, this relationship is not unique, and so these approaches mix up the two modelling levels. So I think the next step in "physical modelling" is still open.
- Modelica Webpage - Modelica is an object-oriented, declarative, multi-domain modelling language. I think it is somewhere between what I call the physical and the mathematical model
- SimScape - A modelling language from The MathWorks
- MapleSim - A high-level modelling tool from MapleSoft
- Center for Model-based Product Development in Linköping