If you write a paper using results obtained with the help of MoFEM, please cite our publication in The Journal of Open Source Software (follow the link on the badge).
The BibTeX entry for the MoFEM paper [50] is
The current MoFEM version can be identified by its Semantic Version and Git commit ID. The Git commit ID is unique and points to a particular code commit. The Semantic Version is not unique; more than one commit in the git repository can have the same version.
The first two lines printed by every executable built with the MoFEM library look like this:
This allows you to identify the git commit ID and the human-readable MoFEM version.
MoFEM is developed continuously (see How MoFEM is developed?), and any commit introducing changes that directly affect the implementation of users modules should result in an incremented build version. For example, adding new functionality, renaming an interface function, or deprecating a function will result in an incremented build version. Note that each users module sets the minimal MoFEM version with which it is compatible; see How to add user module? for details.
On the other hand, changes or improvements to the core library, such as renaming a local variable or adding local documentation, will result in a new commit but will NOT result in a new build version, since the implementation of users modules is not influenced by such changes to the main library.
The build version ranges from 0 to 100; at the end of the range, the minor version should be automatically incremented. The minor version ranges from 0 to 10; at the end of the range, the major version should be automatically incremented. The minor or major version can also be changed at any time when a major or minor feature is initiated.
In addition to the above rules, the following general principles apply; in short,
MoFEM is developed continuously, i.e. any pull request merged into the CDashTesting branch triggers automatic testing on the development server. The code is verified when a pull request is accepted and merged, and validated when the tests on the development server pass. If the tests pass, the CDashTesting branch is merged into the master branch.
In order to convert a single h5m file to vtk, you can use the mbconvert tool:
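A typical invocation (file names are illustrative):

```bash
# Convert a MOAB h5m file to legacy VTK format
mbconvert out.h5m out.vtk
```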
Moreover, to convert a set of files with one command you can use a multiprocessing script convert.py:
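A plausible invocation matching the description below (the flag names are an assumption; check the script's help output):

```bash
# Convert all files starting with "out" using a pool of two processes
./convert.py -np 2 out*
```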
The above command will convert all h5m files in the current directory whose names start with "out" to vtk files with the same names, using a pool of two processes. To see all parameters of the script and their descriptions you can run:
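For example (assuming the standard help flag):

```bash
./convert.py -h
```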
Note that the script is compatible with both Python 2.x and 3.x.
If the problem is large, the mesh can be partitioned to save memory and improve efficiency. This can be done using the native MoFEM tool, for example along these lines (the tool name mofem_part and its flags should be verified with -help):
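```bash
# Partition the mesh into 4 parts; the output is written to out.h5m
./mofem_part -my_file mesh.h5m -my_nparts 4
```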
The partitioned mesh is saved to the file out.h5m in the current working directory.
For large meshes, partitioning can be done in parallel, for example (a sketch; flag names as above):
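```bash
mpirun -np 4 ./mofem_part -my_file mesh.h5m -my_nparts 16
```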
The code is run using a direct solver, i.e. MUMPS, on the coarse level. Note that the loaded mesh is partitioned and each processor reads only its own part of the mesh, i.e. -my_is_partitioned.
In Cubit you need to generate 10-node tetrahedra. You simply create a block and set the element type to TETRA10, as follows
In the code, you need to create a field to store the geometry
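A minimal sketch; the field name MESH_NODE_POSITIONS is the usual convention, not a requirement:

```cpp
// Add an H1 field with 3 coefficients (x, y, z) for nodal positions
CHKERR m_field.add_field("MESH_NODE_POSITIONS", H1, AINSWORTH_LEGENDRE_BASE, 3);
// Attach the field to all tetrahedra in the mesh
CHKERR m_field.add_ents_to_field_by_type(0, MBTET, "MESH_NODE_POSITIONS");
```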
Next, set the approximation order of the field. If you have 10-node tetrahedra, you need at least 2nd-order polynomials to approximate the geometry;
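for instance (continuing the sketch above):

```cpp
// Vertices always carry linear data; 2nd order on edges captures mid-edge nodes
CHKERR m_field.set_field_order(0, MBVERTEX, "MESH_NODE_POSITIONS", 1);
CHKERR m_field.set_field_order(0, MBEDGE, "MESH_NODE_POSITIONS", 2);
```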
The last step is to project information from the 10-node tetrahedra onto the hierarchical approximation field, as follows
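A sketch using the Projection10NodeCoordsOnField class (field name as above):

```cpp
// Project mid-edge node coordinates onto the hierarchical field
Projection10NodeCoordsOnField ent_method(m_field, "MESH_NODE_POSITIONS");
CHKERR m_field.loop_dofs("MESH_NODE_POSITIONS", ent_method);
```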
Look at examples of use in the user modules, for example elasticity.cpp.
In your library directory execute
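a command along these lines (assuming the build system exposes a Doxygen target named doc; the target name is an assumption, check your CMake targets):

```bash
make doc
```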
This creates the directory html. Open the file html/index.html to see the results in your browser.
A similar process can be used to send a message to MS Teams using their APIs,
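for example posting a JSON payload to an incoming webhook with curl (the message text is illustrative):

```bash
curl -H "Content-Type: application/json" \
     -d '{"text": "Tests passed on the development server"}' \
     WEBHOOK_URL
```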
where WEBHOOK_URL is the webhook URL of the Teams channel to which the message will be sent. To get the webhook URL, click on the three-dot sign (next to a channel name) -> Connectors -> Configure (Incoming Webhook) -> Give a name and click Create -> Copy the URL.
If higher approximation orders are used, it is sometimes necessary to refine the mesh used for postprocessing in order to visualise higher-order polynomial fields. Classes derived from PostProcGenerateRefMeshBase enable adaptive refinement, such that only elements with high orders are refined. To set refinement, use the command line option.
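For example (the executable name is a placeholder, and the option name should be verified against the PostProcGenerateRefMeshBase documentation):

```bash
./elasticity -max_post_proc_ref_level 2
```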
The above results in two refinement levels,
When making changes in users_modules, which is a git submodule, the main git repository will register a version change. For example:
The commit ID of the submodule is used to track which version to pull when pulling the main repository. Updating to the latest commit will then pull your latest changes.
It is good practice to squash commits before the merge. That simplifies reviews and makes the git commit tree easier to browse and understand. Details on how to squash commits can be found here: How to: Squashing for an external PR.
Look at the Valgrind documentation. However, a quick answer is
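a command along these lines (the executable and its options are placeholders):

```bash
valgrind --leak-check=full --track-origins=yes ./elasticity -my_file mesh.h5m
```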
In order to debug a multiprocessing program, one can still use a serial debugger (such as gdb) as follows:
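```bash
# Open one xterm window per MPI process, each running the program under gdb
# (the executable name is a placeholder)
mpirun -np 4 xterm -e gdb ./elasticity
```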
The command above will open 4 xterm windows, each running one process of the program in gdb. If the program requires arguments, they can be passed manually to each process by typing:
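```
# At the gdb prompt; the arguments shown are placeholders
run -my_file mesh.h5m
```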
in each window. Alternatively, the following command can be used to automatically pass arguments to all processes:
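```bash
# --args passes the program arguments to every gdb instance
# (executable and arguments are placeholders)
mpirun -np 4 xterm -e gdb --args ./elasticity -my_file mesh.h5m
```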
See OpenMPI documentation for more details.
If you are using lldb, which is usually the case on macOS, you run the debugging as follows:
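```bash
# lldb equivalent of the gdb command above; names are placeholders
mpirun -np 4 xterm -e lldb -- ./elasticity -my_file mesh.h5m
```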
If you do not have a graphical terminal, you can also run screen sessions,
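for example along these lines (a sketch, assuming one detached screen session per rank; all names are placeholders):

```bash
mpirun -np 4 screen -d -m gdb ./elasticity
screen -ls   # list the sessions, then attach with: screen -r <session>
```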
Valgrind comes with a tool for measuring memory usage: massif. Example usage:
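```bash
# Executable and options are placeholders
valgrind --tool=massif ./elasticity -my_file mesh.h5m
```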
This generates massif.out files that can be visualised with a graph using ms_print.
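For example (the numeric suffix is the process ID and will differ from run to run):

```bash
ms_print massif.out.12345
```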
To profile code in a macOS environment, you execute instruments from the command line, for example
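```bash
# Time Profiler template; executable and options are placeholders
instruments -t "Time Profiler" ./elasticity -my_file mesh.h5m
```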
This generates the directory instrumentscli0.trace, the next run generates instrumentscli1.trace, and similarly for subsequent runs. You can see changes in the execution of the code by opening the trace files:
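```bash
# Opens the trace in the Instruments GUI
open instrumentscli0.trace
```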
If you use Linux you can alternatively use Valgrind, see How to profile code with Valgrind?.
See the PETSc documentation http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Profiling/PetscLogStageRegister.html and the examples http://www.mcs.anl.gov/petsc/petsc-current/src/ksp/ksp/examples/tutorials/ex9.c.html
PETSc is capable of timing events and displaying their collective output. To create a new event, you register the event and add the corresponding option to the run-time commands:
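A minimal sketch of event registration (the event name is illustrative); run the program with -log_view to print the timing summary:

```cpp
PetscLogEvent my_event;
// Register a custom event; class id 0 groups it with generic events
CHKERR PetscLogEventRegister("MyAssembly", 0, &my_event);
CHKERR PetscLogEventBegin(my_event, 0, 0, 0, 0);
// ... code to be timed ...
CHKERR PetscLogEventEnd(my_event, 0, 0, 0, 0);
```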
Example output:
You have to install Valgrind http://valgrind.org and the graphical user interface KCachegrind http://kcachegrind.sourceforge.net/html/Home.html. If you are using Linux, for example Ubuntu, you can do that by executing the following commands,
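```bash
sudo apt-get update
sudo apt-get install valgrind kcachegrind
```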
If you are using macOS, you can use Homebrew http://brew.sh for the installation,
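for example (note that in Homebrew the KCachegrind GUI is packaged as qcachegrind):

```bash
brew install valgrind
brew install qcachegrind
```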
Once you have the packages installed, follow the instructions from http://kcachegrind.sourceforge.net/html/Documentation.html
When you compile code, programming errors produce long error messages that are difficult to comprehend. If you would like the compiler to stop at the first fatal error, you can do that by adding to the CMake file
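a flag along these lines; -Wfatal-errors works for both GCC and Clang (Clang alternatively accepts -ferror-limit=1):

```cmake
# Stop compilation at the first error instead of printing a cascade of messages
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wfatal-errors")
```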
You can see How to add a new module and program, or read the following.
MoFEM is a core library providing functionality for the implementation of user modules, where applications for particular finite elements or problems are implemented. A user module is an independent repository, private or public, independently managed by its owner.
A user module is added to the project by cloning its repository into the directory $HOME/mofem-cephas/mofem/users_modules. For example, the module for computational homogenisation has a repository on Bitbucket and can be added by
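a clone along these lines (the repository URL is illustrative; use the actual URL of the module):

```bash
cd $HOME/mofem-cephas/mofem/users_modules
git clone https://bitbucket.org/likask/mofem_um_homogenisation.git homogenisation
```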
Sometimes user modules depend on other modules. In this case, the homogenisation module uses some old obsolete classes (which should not be used in new developments), so for this particular module you also have to clone the obsolete module
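again with an illustrative URL:

```bash
git clone https://bitbucket.org/likask/mofem_um_obsolete.git obsolete
```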
Once the module is added, you have to go to the main build directory where the users modules are located and rebuild the code. So you have to do
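something like the following, run from the users modules build directory (paths are placeholders):

```bash
touch CMakeCache.txt   # trigger reconfiguration so the new module is picked up
make -j4
```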
Note that the first command is used to trigger the reconfiguration of the users modules with the new module.
Note that each user module contains a file InstalledAddModule.cmake, with beginning lines like,
In that file the minimal version of the core library is given (e.g. v0.5.63). Thus, if your version of the core library is too old, the module won't be added and cmake will generate an error. In that case, you need to update the core library by pulling the most recent version from the Bitbucket repository and installing it.
A user module can be an independent repository, private or public, managed and owned independently from the MoFEM library. If a user module is part of the MoFEM repository, it can simply be added to ModulesLists.cmake in the users modules directory. However, it is recommended that a user module is a project/repository of its own; such a repository should be cloned into the users modules directory, e.g. mofem-cephas/mofem/users_modules
In the user module directory there should be a file, InstalledAddModule.cmake, with the content,
If in Docker you encounter the following error
Setting the following on the command line
will fix the problem.
You can use gdb inside Docker, remembering to configure cmake with the debugging build type. See Developers, and set the build_type
as explained in Setting build type and compiler flags. Now you can run the code in a Docker container and use GDB or Valgrind to debug it.
On the command line put
Or you can do that in the code
If you are looking for resources about Python, follow these links,
When debugging a nonlinear problem, it might be useful to test the Jacobian matrix. The command below will print the 'Explicit preconditioning Jacobian', the 'Finite difference Jacobian' and, most usefully, the 'User-provided matrix minus finite difference Jacobian'.
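A sketch; the exact option names depend on the PETSc version (older releases use -snes_type test, newer ones -snes_test_jacobian), so verify against your PETSc documentation:

```bash
./elasticity -snes_type test -snes_test_display
```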
To view those matrices in a graphical form, simply add:
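one plausible combination, assuming PETSc's draw viewer with image saving (an assumption to verify against your PETSc version):

```bash
-mat_view draw -draw_save
```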
This will create a folder named Draw_0x7f86644**** with 3 image files: the user-provided Jacobian (0), the finite difference Jacobian (1) and the difference (2). The draw function works only with one processor.
If you would like to see the difference of the Jacobians, add the option:
Basic support for C/C++ languages is provided by the Microsoft C/C++ extension. In the CMake files we set the option CMAKE_EXPORT_COMPILE_COMMANDS=ON, which results in the generation of the file compile_commands.json, which has to be added to the .vscode/c_cpp_properties.json file located in the working project. An example c_cpp_properties configuration on Mac should look as follows:
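A sketch of such a configuration (paths and the build-directory name must match your setup):

```json
{
  "configurations": [
    {
      "name": "Mac",
      "compileCommands": "${workspaceFolder}/um-build-Debug-l3ew3vn/compile_commands.json"
    }
  ],
  "version": 4
}
```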
Note the line with compileCommands, which points to the particular build directory, i.e. um-build-Debug-l3ew3vn, associated with the spack package hash, i.e. l3ew3vn.
In this particular case, listing the installed spack packages, we can see that the first package has the hash l3ew3vn.
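A plausible way to produce such a listing (the package name is an assumption based on the MoFEM spack packages):

```bash
spack find -lv mofem-users-modules
```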
In the settings (file .vscode/settings.json), opened by pressing CMD-, (on Mac) or CTRL-, on other systems, you can choose the intelliSenseEngine. The "Default" setting for intelliSenseEngine is not working or works slowly; we recommend switching to "Tag Parser".
However, "Default" IntelliSense engine is the new engine that provides semantic-aware IntelliSense features and will be the eventual replacement for the Tag Parser. So in the future, you can check if the "Default" IntelliSense engine works well for you. Using Tag Parser is a temporary solution.
For debugging on Mac the most commonly used extension is the CodeLLDB plugin. The configuration is straightforward: choose lldb as the type, set the path to the executable (compiled with debug flags!) as program, put the command line arguments in args, and optionally set the path to the current working directory (cwd). An example configuration for the fracture module is presented below.
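A sketch of such a launch configuration (the program path and arguments are placeholders to adapt):

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Debug fracture module",
      "type": "lldb",
      "request": "launch",
      "program": "${workspaceFolder}/build/crack_propagation",
      "args": ["-my_file", "mesh.h5m"],
      "cwd": "${workspaceFolder}"
    }
  ]
}
```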