Frequently Asked Questions


How to cite us?

If you write a paper using results obtained with the help of MoFEM, please cite our publication in The Journal of Open Source Software (follow the link on the badge).

The BibTeX entry for the MoFEM paper [35] is

@article{mofem,
title = {{MoFEM:} An open source, parallel finite element library},
author = {
Kaczmarczyk, {\L}ukasz and Ullah, Zahur and Lewandowski, Karol and Meng,
Xuan and Zhou, Xiao-Yi and Athanasiadis, Ignatios and Nguyen,
Hoang and Chalons-Mouriesse, Christophe-Alexandre and Richardson,
Euan and Miur, Euan and Shvarts, Andrei and Wakeni, Mebratu and Pearce, Chris},
journal = {The Journal of Open Source Software}
}
Each user module can have its own DOI and reference. Module references supplement the MoFEM reference. For example, the DOI of the homogenisation module is shown on its DOI badge.

How is the MoFEM version changed?

The current MoFEM version is identified by a Semantic Version and a Git commit id. The Git commit id is unique and points to a particular code commit. The Semantic Version is not unique: more than one commit in the git repository can carry the same version.

The first two lines printed by every executable built with the MoFEM library look like this:

git commit id 3d06924d27973df16e3260c6e962929330cf9348

This allows you to identify both the git commit id and the human-readable MoFEM version.
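For example, the commit hash can be recovered from that header line with plain shell parameter expansion (using the sample line above):

```shell
# Strip the "git commit id " prefix to recover the bare hash.
line="git commit id 3d06924d27973df16e3260c6e962929330cf9348"
commit="${line#git commit id }"
echo "$commit"   # prints: 3d06924d27973df16e3260c6e962929330cf9348
```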

MoFEM is developed continuously (see How is MoFEM developed?) and any commit introducing changes that directly affect the implementation of user modules results in an incremented build version, for example when new functionality is added, an interface function is renamed, or a function is deprecated. Note that each user module declares the minimal MoFEM version it is compatible with; see How to add a user module? for details.

On the other hand, changes or improvements internal to the core library, such as renaming a local variable or adding local documentation, result in a new commit but NOT in a new build version, since the implementation of user modules is not affected by them.

The build version ranges from 0 to 100; at the end of the range the minor version is incremented. The minor version ranges from 0 to 10; at the end of the range the major version is incremented. The minor or major version can also be changed at any time when a major or minor feature is initiated.

In addition to the above rules, the following general principles apply; in short:

  • 3.7.11 (bug fix), incremental change in build version
  • 3.7.12 (partial new feature, hidden behind a feature toggle), incremental change in build version
  • 3.7.13 (performance improvement), incremental change in build version
  • 3.8.0 (completed feature initiated in 3.7.12), direct change in minor version
  • 4.0.0 (breaking changes in public API), direct change in major version
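For illustration only (this is not part of MoFEM itself), the rollover rules above can be sketched as a small shell function that bumps the build version and carries overflow into the minor and major versions:

```shell
# Illustrative sketch of the rollover rules: a build version past 100
# bumps the minor version, and a minor version past 10 bumps the major.
bump_build() {
  major=$1; minor=$2; build=$3
  build=$((build + 1))
  if [ "$build" -gt 100 ]; then
    build=0
    minor=$((minor + 1))
  fi
  if [ "$minor" -gt 10 ]; then
    minor=0
    major=$((major + 1))
  fi
  echo "$major.$minor.$build"
}

bump_build 3 7 11   # prints: 3.7.12
```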

How is MoFEM developed?

MoFEM is developed continuously, i.e. any merged pull request to the CDashTesting branch triggers automatic testing on the development server. The code is verified when a pull request is accepted and merged, and validated when the tests on the development server pass. If the tests pass, the CDashTesting branch is merged into the master branch.

How to convert output h5m files to the vtk format?

In order to convert a single h5m file to vtk, you can use the mbconvert tool:

mbconvert out.h5m out.vtk

Moreover, to convert a set of files with one command you can use a multiprocessing script convert.py:

$USER_MODULES/tools/convert.py -np 2 out*

The above command converts all h5m files in the current directory whose names start with "out" to vtk files with the same base name, using a pool of two processes. To see all parameters of the script and their descriptions, run:

$USER_MODULES/tools/convert.py -h

Note that the script is compatible with both Python 2.x and 3.x.
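If you prefer plain shell over the Python script, a serial equivalent can be written as a loop over mbconvert (assuming mbconvert is on your PATH):

```shell
# Convert every h5m file whose name starts with "out" to vtk, one at a time.
# ${f%.h5m} strips the .h5m suffix, so out_step1.h5m becomes out_step1.vtk.
for f in out*.h5m; do
  [ -e "$f" ] || continue   # nothing to do if no files match
  mbconvert "$f" "${f%.h5m}.vtk"
done
```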

How to partition mesh?

If the problem is large, the mesh can be partitioned to save memory and improve efficiency. This can be done using a native MoFEM tool:

$USER_MODULES/tools/mofem_part -my_file $USER_MODULES/basic_finite_elements/nonlinear_elasticity/examples/cylinder.cub -my_nparts 16

The partitioned mesh is saved to the file out.h5m in the current working directory.

For large meshes, partitioning can be done in parallel, for example:

mpirun -np 2 $USER_MODULES/tools/mofem_part -my_file $USER_MODULES/basic_finite_elements/nonlinear_elasticity/examples/cylinder.cub -my_nparts 16

How to run multi-grid solver via approximation orders?

The code is run with a direct solver, i.e. MUMPS, on the coarse level. Note that the loaded mesh is partitioned and each processor reads only its part of the mesh, i.e. -my_is_partitioned.

mpirun -np 4 ./elasticity \
-my_file dam_4parts.h5m -my_is_partitioned \
-ksp_type gmres -ksp_max_it 1000 -ksp_atol 1e-13 -ksp_rtol 0 -ksp_monitor_lg_residualnorm -ksp_final_residual -ksp_monitor -ksp_converged_reason \
-my_order 1 -my_block_config block_config.in \
-mofem_mg_verbose 1 -mofem_mg_coarse_order 1 -mofem_mg_levels 4 \
-pc_type mg \
-mg_coarse_ksp_type preonly -mg_coarse_pc_type lu -mg_coarse_pc_factor_mat_solver_type mumps \
-pc_mg_smoothup 20 -pc_mg_smoothdown 20 -pc_mg_type multiplicative
  • Option -my_is_partitioned is set if the mesh has been partitioned using mbpart.
  • Option -mofem_mg_coarse_order 1 sets the coarse level to linear approximation, i.e. order 1.
  • Option -mofem_mg_levels 4 sets the number of multi-grid levels. In this case the maximal approximation order for some part of the mesh is 4, hence 4 multi-grid levels.
  • Options -pc_mg_smoothup 20 -pc_mg_smoothdown 20 set the number of smoothing iterations; for more details consult the PETSc manual.
  • In line
    -mg_coarse_ksp_type preonly -mg_coarse_pc_type lu -mg_coarse_pc_factor_mat_solver_type mumps
    a direct solver for coarse mesh is set.

How to make 2nd order geometry in cubit and transfer it to MoFEM?

In Cubit you need to generate 10-node tetrahedra. Simply create a block and set its element type to TETRA10, as follows:

set duplicate block elements on
block 2 volume 1
block 2 element type TETRA10

In the code, you need to create a field to store the geometry:

CHKERR m_field.add_field("MESH_NODE_POSITIONS",H1,3,MF_ZERO);
CHKERR m_field.add_ents_to_field_by_TETs(0,"MESH_NODE_POSITIONS",2);

Next, set the approximation order of the field. For 10-node tetrahedra you need at least 2nd order polynomials to approximate the geometry:

CHKERR m_field.set_field_order(0,MBTET,"MESH_NODE_POSITIONS",2);
CHKERR m_field.set_field_order(0,MBTRI,"MESH_NODE_POSITIONS",2);
CHKERR m_field.set_field_order(0,MBEDGE,"MESH_NODE_POSITIONS",2);
CHKERR m_field.set_field_order(0,MBVERTEX,"MESH_NODE_POSITIONS",1);

The last step is to project information from the 10-node tetrahedra onto the hierarchical approximation field, as follows:

Projection10NodeCoordsOnField ent_method_material(m_field,"MESH_NODE_POSITIONS");
CHKERR m_field.loop_dofs("MESH_NODE_POSITIONS",ent_method_material);

See examples of use in the user modules, for example elasticity.cpp.

How to generate this documentation on your local machine?

In your library directory execute:

make doc

This creates the directory html. Open the file html/index.html to see the results in your browser.

Use a webhook to send a message to Microsoft Teams

A similar process can be used to send a message to MS Teams through its webhook API:

TITLE="Title of your message"
MESSAGE="Body of your message"
JSON="{\"title\": \"${TITLE}\", \"text\": \"${MESSAGE}\" }"
curl -H "Content-Type: application/json" -d "${JSON}" "${WEBHOOK_URL}"

where WEBHOOK_URL is the webhook URL of the Teams channel to which the message will be sent. To get the webhook URL, click the three-dot sign (next to a channel name) -> Connectors -> Configure (Incoming Webhook) -> give a name and click Create -> copy the URL.
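To check the payload before posting it, you can build the same JSON and print it instead of calling curl (the title and message below are placeholder values):

```shell
# Dry run: construct the payload exactly as above, but echo it instead of POSTing.
TITLE="Build finished"
MESSAGE="All tests passed"
JSON="{\"title\": \"${TITLE}\", \"text\": \"${MESSAGE}\" }"
echo "${JSON}"   # prints: {"title": "Build finished", "text": "All tests passed" }
```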

Git and repository

Why do users_modules changes affect the mofem-cephas/mofem git repository?

When making changes in users_modules, which is a git submodule, the main git repository will register a version change. For example:

cd mofem-cephas/mofem
git diff
diff --git a/mofem/users_modules b/mofem/users_modules
index d18f43f88..42bcece0e 160000
--- a/mofem/users_modules
+++ b/mofem/users_modules
@@ -1 +1 @@
-Subproject commit d18f43f884bfbfbce50f0f70155d911a2d361164
+Subproject commit 42bcece0e7e6577b83bb2a5c64d2709a2bc15b47

The commit ID of the submodule is used to track which version to pull when pulling the main repository. Updating the submodule reference to the latest commit will then pull your latest changes.


How to check for memory leaks and other bugs with Valgrind?

Look at the Valgrind documentation. However, a quick answer is:

valgrind \
--dsymutil=yes \
--track-origins=yes \
--trace-children=yes \
--leak-check=full \
--show-leak-kinds=all \
--verbose \
--log-file=valgrind-out.txt \
./command args

How to debug a multiprocessing program?

In order to debug a multiprocessing program, one can still use a serial debugger (such as gdb) as follows:

mpirun -np 4 xterm -e gdb ./program

The command above will open 4 xterm windows, each running one process of the program in gdb. If the program requires arguments, they can be passed manually to each process by typing:

run [arg1] [arg2] [arg3]

in each window. Alternatively, the following command can be used to automatically pass arguments to all processes:

mpirun -np 4 xterm -e gdb -ex run --args ./program [arg1] [arg2] [arg3]

See OpenMPI documentation for more details.

If you are using lldb, which is usually the case on macOS, you run the debugger as follows:

mpirun -np 4 xterm -e lldb -- ./program [arg1] [arg2] [arg3]

How to profile memory usage?

Valgrind comes with a tool for measuring memory usage: massif. Example usage:

valgrind --trace-children=yes --tool=massif --pages-as-heap=yes ./command args

This generates massif.out.<pid> files that can be visualised as a graph using ms_print:

ms_print massif.out.<pid>

How to profile code using Xcode tools?

To profile code in a macOS environment, run instruments from the command line, for example:

instruments -v -t "Time Profiler" \
./elasticity -my_file LShape.h5m -ksp_type fgmres -ksp_monitor -my_order 5 \
-pc_type mg -mofem_mg_verbose 1 -mofem_mg_coarse_order 1 \
-mofem_mg_levels 5 -mg_coarse_ksp_type preonly \
-mg_coarse_pc_type lu -mg_coarse_pc_factor_mat_solver_type mumps \
-pc_mg_smoothup 10 -pc_mg_smoothdown 10 -pc_mg_type multiplicative -log_view

This generates the directory instrumentscli0.trace (instrumentscli1.trace for the next run, and so on for subsequent runs). You can inspect the execution of the code with:

open instrumentscli0.trace

If you use Linux you can alternatively use Valgrind, see How to profile code with Valgrind?.

How to profile code using PETSc tools?

See PETSc documentation http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Profiling/PetscLogStageRegister.html and examples http://www.mcs.anl.gov/petsc/petsc-current/src/ksp/ksp/examples/tutorials/ex9.c.html

How to time and log events with PETSc?

PETSc can time events and display their collective output. Creating a new event requires registering it in the code (e.g. with PetscLogEventRegister() and wrapping the timed section in PetscLogEventBegin()/PetscLogEventEnd()) and adding -log_view to the run time options.


Example output:

Event Count Time (sec) Flop --- Global --- --- Stage --- Total
Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s
--- Event Stage 0: Main Stage
BuildTwoSidedF 17 1.0 1.2918e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
LoadMesh 1 1.0 1.6751e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 15 0 0 0 0 15 0 0 0 0 0
buildFields 1 1.0 4.5471e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 4 0 0 0 0 4 0 0 0 0 0
buildFiniteElements 1 1.0 7.8488e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 7 0 0 0 0 7 0 0 0 0 0
SimpleSetUp 1 1.0 3.4389e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 3 0 0 0 0 3 0 0 0 0 0
ProblemsManager 1 1.0 2.9552e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0

How to profile code with Valgrind?

You have to install Valgrind (http://valgrind.org) and the graphical user interface KCachegrind (http://kcachegrind.sourceforge.net/html/Home.html). If you are using Linux, for example Ubuntu, you can do that by executing the following command:

sudo apt-get install valgrind kcachegrind

If you are using macOS, you can use Homebrew (http://brew.sh) for the installation:

brew install valgrind
brew install qcachegrind --with-graphviz

Once the packages are installed, follow the instructions from http://kcachegrind.sourceforge.net/html/Documentation.html


How to add a user module?

See How to add a new module and program, or read the following.

MoFEM is a core library providing functionality for the implementation of user modules, where applications for particular finite elements or problems are implemented. A user module is an independent repository, private or public, independently managed by its owner.

A user module is added to the project by cloning its repository into the directory $HOME/mofem-cephas/mofem/users_modules. For example, the module for computational homogenisation has a repository on Bitbucket and can be added by:

cd $HOME/mofem-cephas/mofem/users_modules
git clone https://bitbucket.org/likask/mofem_um_homogenisation.git homogenisation

Sometimes user modules depend on other modules. The homogenisation module, for instance, uses some old obsolete classes (which should not be used in new developments), so in this particular case you also have to clone the obsolete module:

git clone https://bitbucket.org/likask/mofem_um_obsolete.git obsolete

Once the module is added, you have to go to the main build directory where the user modules are located and rebuild the code:

cd $HOME/mofem_build/um
touch CMakeCache.txt
make -j 4

Note that the first command triggers reconfiguration of the user modules with the new module.

Note that each user module contains a file InstalledAddModule.cmake, beginning with the lines:

# Check minimum version of MoFEM with this module works
check_mofem_version(0 5 63)

This file specifies the minimal version of the core library the module works with (e.g. v0.5.63). If your version of the core library is too old, the module won't be added and cmake will generate an error. In that case, you need to update the core library by pulling the most recent version from the Bitbucket repository and reinstalling it.

A user module can be an independent repository, private or public, managed and owned independently from the MoFEM library. If a user module is part of the MoFEM repository, it can be added simply by listing it in ModulesLists.cmake in the users modules directory. However, it is recommended that a user module be a project/repository of its own, cloned into the users modules directory, e.g. mofem-cephas/mofem/users_modules:

git clone https://bitbucket.org/likask/mofem_um_homogenisation.git homogenisation

The user module directory should contain the file InstalledAddModule.cmake, with content:

# Check minimum version of MoFEM with this module works
check_mofem_version(0 3 6)

OpenMPI error in docker

If in docker you see the following error:

[2761d885210d:00053] Read -1, expected 8192, errno = 1
[2761d885210d:00053] Read -1, expected 87888, errno = 1
[2761d885210d:00054] Read -1, expected 95784, errno = 1

setting the following in the command line:

export OMPI_MCA_btl_vader_single_copy_mechanism=none

will fix the problem.
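Note that the export only affects the current shell, so it must be set in every session (or in the container's shell profile). A quick way to confirm it is in effect:

```shell
# Disable the vader single-copy mechanism and confirm the variable is set.
export OMPI_MCA_btl_vader_single_copy_mechanism=none
echo "$OMPI_MCA_btl_vader_single_copy_mechanism"   # prints: none
```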

How to debug in docker?

You can use gdb inside docker, remembering to configure cmake with the debugging build type. See Developers, and set build_type as explained in Setting build type and compiler flags. Then you can run the code in a docker container and use GDB or Valgrind to debug it.

How to view a matrix?

PETSc has to be compiled with the X11 library if you would like to draw a matrix.

In the command line put:

-mat_view draw -draw_pause -1

Alternatively, you can do it in the code by calling MatView() with a draw viewer, e.g. MatView(A, PETSC_VIEWER_DRAW_WORLD).

How to test and print jacobian matrix with PETSc?

When debugging a nonlinear problem, it may be useful to test the Jacobian matrix. The command below prints the 'Explicit preconditioning Jacobian', the 'Finite difference Jacobian' and, most usefully, the 'User-provided matrix minus finite difference Jacobian'.


To view those matrices in a graphical form, simply add:


This creates a folder named Draw_0x7f86644**** with 3 image files: the user-provided Jacobian (0), the finite difference Jacobian (1) and the difference (2). The draw function works only on one processor.

If you would like to see the difference of the Jacobians, add the option:


Visual Studio

How to configure MoFEM in Visual Studio Code?

Basic support for the C/C++ languages is provided by the Microsoft C/C++ extension. In the CMake files we set the option CMAKE_EXPORT_COMPILE_COMMANDS=ON, which results in generation of a compile_commands.json file that has to be referenced from the .vscode/c_cpp_properties.json file located in the working project. An example c_cpp_properties.json configuration on a Mac looks as follows:

{
  "configurations": [
    {
      "name": "Mac",
      "includePath": [],
      "defines": [],
      "macFrameworkPath": [],
      "compilerPath": "/usr/bin/clang",
      "cStandard": "gnu17",
      "cppStandard": "gnu++14",
      "intelliSenseMode": "macos-clang-x64",
      "compileCommands": "${workspaceFolder}/um-build-Debug-l3ew3vn/compile_commands.json",
      "browse": {
        "databaseFilename": "${default}",
        "limitSymbolsToIncludedHeaders": true
      }
    }
  ],
  "version": 4
}

Note the line

"compileCommands": "${workspaceFolder}/um-build-Debug-l3ew3vn/compile_commands.json"

which points to the particular build directory, i.e. um-build-Debug-l3ew3vn, associated with the spack package hash l3ew3vn. In this particular case, we can see:

~ % spack find -lv mofem-users-modules
==> 2 installed packages
-- darwin-macos-skylake / apple-clang@12.0.0 --------------------
l3ew3vn mofem-users-modules@develop+copy_user_modules~docker~ipo build_type=Debug dev_path=/Users/likask/mofem_install/mofem-cephas/mofem/users_modules install_id=0

where the first package has hash l3ew3vn.

In the settings (file .vscode/settings.json), opened by pressing CMD-, (on Mac) or CTRL-, on other systems, you can choose the intelliSenseEngine. The "Default" setting for intelliSenseEngine does not work, or works slowly; we recommend switching to "Tag Parser":

"C_Cpp.intelliSenseEngine": "Tag Parser"

However, the "Default" IntelliSense engine is the new engine that provides semantic-aware IntelliSense features and will eventually replace the Tag Parser. So in the future, you can check whether the "Default" IntelliSense engine works well for you; using the Tag Parser is a temporary solution.

How to setup LLDB debugger in Visual Studio Code?

For debugging on a Mac, the most commonly used extension is the CodeLLDB plugin. The configuration is straightforward: choose the lldb type, set the path to the executable (compiled with the debug flag!) as program and put the command-line arguments in args. Optionally, the current working directory can be set (cwd). An example configuration for the fracture module is presented below.

{
  "configurations": [
    {
      "type": "lldb",
      "request": "launch",
      "name": "Debug_my_crack1",
      "program": "/Users/User1/moFEM/users_modules_debug/fracture_mechanics/crack_propagation",
      "args": [],
      "cwd": "/Users/User1/mofem_install/users_modules_debug/fracture_mechanics"
    }
  ]
}