COR-9: Reaction-diffusion equation

Fisher's reaction-diffusion equation

Fisher's equation [26] describes population dynamics, i.e. the transformation of a species into a "mutant" species in space and time. Its predictive capabilities include modelling of fire propagation, propagation of an advantageous ("mutant") gene, evolution of a neutron population in a nuclear reactor, spreading of Alzheimer's disease in the human brain, and many other phenomena.

In this tutorial, we consider the equation in its original form presented by Fisher in 1937 [26]

\[ \frac{\partial u}{\partial t} - D \nabla^2 u = ru\left(1-\frac{u}{k}\right) \]

where \(D\) is the diffusivity, \(r\) is the rate factor and \(k\) is the carrying capacity. The equation can be interpreted as follows: an advancing wave, travelling with a speed not greater than \(2\sqrt{rD}\), transforms the original species into its "mutant" form, switching the state variable \(u\) from the equilibrium state \(u=0\) to the other equilibrium state \(u=k\).

To understand the equation better, we can focus our attention on the spreading of fire on a plane of dry grass which is isolated from the rest of the domain by a closed trench of water. Fire will spread over the whole area, and over time all grass will be consumed by fire and transformed into ash, i.e. the "mutant" form, where virgin (unburned) grass is indicated by \(u=0\), and grass consumed by fire, i.e. ash, by \(u=k\). The front of the fire moves with a velocity not greater than \(2\sqrt{rD}\). If the grass were mixed with a plant having some resistance to fire, that would be controlled by the parameter \(k\), i.e. the carrying capacity. Also, if there is no wind and no surface inclination, the diffusivity parameter \(D\) is a scalar. On the contrary, if the ground has some inclination or wind is present, the fire will have a tendency to move up the hill or follow the direction of the wind; that would be accounted for by a tensorial representation of the diffusivity. If the grass is of uniform height, \(D\) is equal everywhere, i.e. homogeneous. If, for example, the height of the grass varies from one place to another, the diffusivity and rate factor are heterogeneous, i.e. \(D=D(\mathbf{x})\) and \(r=r(\mathbf{x})\).

In this example, to keep the problem as simple as possible, we implement the homogeneous case with isotropic diffusivity. Moreover, we assume that the fire is initiated at two places, as shown in Figure 1 below.

Figure 1: Two yellow spots are placed where the fire is initiated. On the boundary, the homogeneous Dirichlet boundary condition u = 0 is applied.
Note
The estimated speed of the advancing wave can be used further to choose the duration of analysis and length of the time step.
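For instance, with the material parameters used in the plain program below (\(D = 2\cdot 10^{-3}\), \(r = 1\), \(k = 1\)), the classical Fisher-Kolmogorov front speed estimate gives

\[ c \approx 2\sqrt{rD} = 2\sqrt{1 \cdot 2\cdot 10^{-3}} \approx 0.09, \]

so a front needs roughly \(L/0.09\) time units to cross a domain of size \(L\); the options -ts_max_time and -ts_dt can be chosen accordingly.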

Running code

The code can run in parallel, so before starting the analysis we have to partition the mesh, such that, from the very beginning, each processor stores in memory only its part of the mesh. In order to partition the mesh, execute the following command in the build directory, in basic_finite_elements/reaction_diffusion_equation

NBPROC=6 && ../../tools/mofem_part \
-my_file mesh.cub -output_file mesh.h5m -my_nparts $NBPROC -dim 2 -adj_dim 1

where the variable NBPROC represents the number of processors. The mesh file mesh.cub is the initial mesh, which can be generated by a preprocessor, in this case Cubit. The partitioned mesh is saved to the file mesh.h5m. Having that mesh at hand, we can solve the problem by running the command line below

time mpirun -np $NBPROC ./reaction_diffusion_equation 2>&1 | tee log

with the necessary parameters to run the calculations found in the file param_file.petsc, which is stored in the working directory.

The key parameters are

  • -ts_max_time: time duration
  • -ts_dt: time step size
  • -ts_arkimex_type: ARKIMEX time integration type
  • -ts_adapt_type: time step adaptation (if the stiff part is not updated, it has to be set to none)
  • -snes_lag_jacobian: determines how often the Jacobian (stiff part) is updated. If set to -2, the Jacobian is calculated only once at the beginning of the analysis (an illustrative options file is sketched below)
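For orientation, such an options file could look as follows; the values are illustrative, not the ones shipped with the tutorial:

-ts_max_time 20
-ts_dt 0.01
-ts_arkimex_type a2
-ts_adapt_type none
-snes_lag_jacobian -2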

Figure 2: Solution of the problem.

Discretisation

Semi-discrete form of the equation

To solve the equation, we apply the standard Galerkin method used in finite element approximation, i.e. we multiply both sides by a test function and integrate by parts to reduce the demand for regularity of the tested and testing functions, and we get

\[ \int_\Omega v \frac{\partial u}{\partial t} \textrm{d}\Omega + \int_\Omega \nabla v \cdot D \nabla u \textrm{d}\Omega = \int_\Omega v r u \left(1-\frac{u}{k}\right) \textrm{d}\Omega \]

where \(\Omega\) is the solution domain. To have a unique solution, we have to a priori enforce essential boundary conditions, such that the test function \(v\) and tested function \(u\) vanish on the boundary \(\partial\Omega\). Also, the approximation and tested functions are in the Hilbert space \(H^1_0(\Omega)\), such that the integrals of the function values and of the first derivatives over the domain are bounded. That makes the solution of our weak equation stable and equal to the solution of the strong equation, provided that sufficiently smooth initial and boundary conditions are given. Moreover, the solution of the problem can be approximated by a finite-dimensional and complete set of piecewise, mesh-conforming polynomials, which are dense in \(H^1(\Omega)\); thus we will have convergence of the approximate solution to the exact solution with mesh refinement or with increasing polynomial approximation order. The approximation of the test and tested functions is given as follows

\[ v^h = \pmb\Phi \overline{\mathbf{v}}^\textrm{T} \quad\textrm{and}\quad u^h = \pmb\Phi \overline{\mathbf{u}}^\textrm{T} \]

where \(\pmb\Phi\) is the vector of hierarchical base functions, and \(\overline{\mathbf{v}}\), \(\overline{\mathbf{u}}\) are vectors of coefficients at degrees of freedom. Utilising the finite element approximation functions, we finally get the semi-discrete form of Fisher's equation

\[ \mathbf{M} \frac{\partial \overline{\mathbf{u}}}{\partial t} + \mathbf{K} \overline{\mathbf{u}} = \mathbf{G} \]

where

\[ \mathbf{M} := \int_\Omega \pmb\Phi^\textrm{T} \pmb\Phi\, \textrm{d}\Omega,\quad \mathbf{K} := \int_\Omega \nabla \pmb\Phi^\textrm{T} \nabla \pmb\Phi \, \textrm{d}\Omega\quad\textrm{and}\quad \mathbf{G} := \int_\Omega \pmb\Phi^\textrm{T} \left\{ r u^h \left(1-\frac{u^h}{k}\right) \right\} \, \textrm{d}\Omega \]

where \(\mathbf{M}\) is the so-called mass matrix, \(\mathbf{K}\) is the stiffness matrix and \(\mathbf{G}\) is the source vector.

Time discretisation

In this section, we fully rely on the time stepping algorithms briefly described in section 6 of the PETSc documentation. Moreover, we focus our attention on the IMEX method, i.e. implicit-explicit time integration. The IMEX method is particularly useful when two separated time scales are present in the solution, i.e. a fast scale, associated with the stiff part of the differential equation, and a slow scale, usually associated with the strongly nonlinear part of the equation. In our particular case, the stiff part of the equation is associated with the diffusion process, and the slow part is associated with the reaction part of the equation. Focusing attention on our example of fire propagation in the plane of dry grass, the stiff part controls the speed of the advancing fire, while the slow part is related to the duration of the burning process itself, which takes somewhat longer. Using the formalism presented in the PETSc documentation, we have

\[ \mathbf{F} \left( t,\overline{\mathbf{u}^n}, \dot{\overline{\mathbf{u}^n}} \right) = \mathbf{G} \left( t,\overline{\mathbf{u}^n} \right) \]

where

\[ \left. \frac{\textrm{d}\mathbf{F}}{\textrm{d}\overline{\mathbf{u}}} \right|_{\overline{\mathbf{u}^n}} = \sigma \mathbf{F}_{\dot{\overline{\mathbf{u}^n}}} \left( t,\overline{\mathbf{u}^n}, \dot{\overline{\mathbf{u}^n}} \right) + \mathbf{F}_{\overline{\mathbf{u}^n}} \left( t,\overline{\mathbf{u}^n}, \dot{\overline{\mathbf{u}^n}} \right) = \sigma \mathbf{M} + \mathbf{K} \]

where

\[ \sigma = \left. \frac{\partial \dot{u}}{\partial u} \right|_{u^n} \]

is a parameter provided by the time integration algorithm implemented in PETSc. The key advantage of the presented approach is that the implementation of the equation is independent of the time integration algorithm, and the user can freely and quickly change the time integration method without changing a single line of code. The change of time integration scheme is accounted for by \(\sigma\).
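For example, for the backward Euler method \(\dot{u}^{n+1} \approx (u^{n+1}-u^n)/\Delta t\), so the integrator supplies

\[ \sigma = \frac{1}{\Delta t} \quad\Rightarrow\quad \left. \frac{\textrm{d}\mathbf{F}}{\textrm{d}\overline{\mathbf{u}}} \right|_{\overline{\mathbf{u}^{n+1}}} = \frac{1}{\Delta t}\mathbf{M} + \mathbf{K}, \]

while switching to another implicit scheme merely changes the value of \(\sigma\).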

Implementation

Total control of the solution process, including calling functions to calculate matrices and vectors in the right order, is handled by the PETSc time solver (TS). TS calls MoFEM methods iterating over finite element entities, which are provided by the Discrete Manager (DM). MoFEM is responsible for managing degrees of freedom, finite elements, and iterating over entities on the mesh. MOAB is the mesh database, where all data is stored in mesh tags at any point of the analysis.

The MoFEM and MOAB databases provide low-level functions; however, in this case the flexibility of the MoFEM::Simple interface is adequate. The MoFEM::Simple interface creates approximation fields and finite elements on the whole mesh, and it creates the problem data structures accessed through DM. DM is a bridge between PETSc and MoFEM data structures and functions, in this particular case TS.

The programmer's responsibility is to initialise data structures and methods, and to provide user data operators, called by finite elements, to evaluate matrices and vectors when requested by the time solver (TS). The relation between TS functions, DM in MoFEM and finite elements is shown in Figure 3.

Figure 3: Finite elements and operators. Yellow colour indicates functions related to TS. Red colour indicates functions managed by DM. Blue colour indicates finite element instances. Green colour indicates user data operators, where dark green represents standard user data operators, and light green represents user data operators implemented for this tutorial.

The implementation of the problem is for the PDE in 2D; however, with minimal effort, by changing the type of element, it can be extended to 3D. Moreover, it is independent of the time integration method, exploiting how the PETSc time solver is implemented.

Setup problem

MoFEM always has to start with initialisation, and the main function has the following format

int main(int argc, char *argv[]) {
  // initialize petsc
  const char param_file[] = "param_file.petsc";
  MoFEM::Core::Initialize(&argc, &argv, param_file, help);
  try {
    // Implementation
  }
  CATCH_ERRORS;
  // finish work cleaning memory, getting statistics, etc.
  MoFEM::Core::Finalize();
  return 0;
}

It is worth noting that almost all MoFEM, PETSc and MOAB calls return an error code, which should be checked with CHKERR; this is essential for safe code development and error detection.
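As a minimal sketch of this pattern (reusing only calls that appear later in this tutorial; set_up_problem is a hypothetical helper introduced for illustration), a user function typically looks as follows:

MoFEMErrorCode set_up_problem(Simple *simple_interface) {
  MoFEMFunctionBegin;                    // starts PETSc-style error handling
  CHKERR simple_interface->getOptions(); // any failure is caught and reported
  CHKERR simple_interface->loadFile();
  MoFEMFunctionReturn(0);                // last executable line of the function
}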

In this tutorial, the initialisation of the database consists of the following steps

  1. Main code starts with creating MOAB and MoFEM database instances and interfaces to each of them
    moab::Core mb_instance;
    moab::Interface &moab = mb_instance;
    MoFEM::Core core(moab);
    MoFEM::Interface &m_field = core;
  2. Registration of the MoFEM Discrete Manager in PETSc
    DMType dm_name = "DMMOFEM";
    CHKERR DMRegister_MoFEM(dm_name);
  3. Loading mesh, adding fields and setting-up DM manager
    // Simple interface
    Simple *simple_interface;
    CHKERR m_field.getInterface(simple_interface);
    CHKERR simple_interface->getOptions();
    CHKERR simple_interface->loadFile();
    // add fields
    CHKERR simple_interface->addDomainField("u", H1, AINSWORTH_LEGENDRE_BASE,
    1);
    // set fields order
    CHKERR simple_interface->setFieldOrder("u", order);
    // setup problem
    CHKERR simple_interface->setUp();
    Note that here we add only one field u, in the \(H^1(\Omega)\) space; the approximation base is constructed following the AINSWORTH_LEGENDRE_BASE recipe, and the field is scalar, i.e. the number of coefficients per base function is 1. The Simple interface presumes that the field spans the whole domain, and integration in the domain is performed over the highest-dimension entities on the mesh, in this particular example all triangles. The finite element name is accessed through
    std::string fe_name = simple_interface->getDomainFEName();
    On that element, the finite element instances described below will be executed, and on each element the user pushes a set of operators to execute. The finite element instances, in turn, will be called through the DM manager by the TS solver.
  4. Getting access to the database from this point can be done exclusively through DM
    MoFEM::SmartPetscObj<DM> dm = simple_interface->getDM();
    Note that MoFEM::SmartPetscObj wraps a PETSc object so that it can be used as a regular PETSc object, while the user is relieved from the need of calling its destructor. Having access to DM, one can push finite element instances which can be used by TS to calculate matrices and vectors at subsequent time steps.
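For instance (a sketch reusing calls from this tutorial), a wrapped solver object can be created and used without explicit cleanup:

auto ksp = createKSP(m_field.get_comm()); // SmartPetscObj<KSP>
CHKERR KSPSetFromOptions(ksp);            // used like a plain KSP object
// no KSPDestroy() is needed: the wrapper releases the object
// when the last reference goes out of scope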

Telling TS which elements should be used

In Figure 3, functions on a yellow background are PETSc functions, and those on a red background are part of DM, interfacing TS with MoFEM. In this tutorial, TS is set to use the IMEX (implicit/explicit) method; see details in section 6 of the PETSc documentation. The methods calculating the vectors and matrices needed by the TS solver to proceed, set by DM functions, are as follows

  1. MoFEM::DMMoFEMTSSetRHSFunction to calculate \(\mathbf{G}\)
  2. MoFEM::DMMoFEMTSSetIFunction to calculate \(\mathbf{F}\)
  3. MoFEM::DMMoFEMTSSetIJacobian to calculate \(\left. \frac{\textrm{d}\mathbf{F}}{\textrm{d}\overline{\mathbf{u}}} \right|_{\overline{\mathbf{u}^n}}\)
  4. MoFEM::DMMoFEMTSSetMonitor to set a monitor run at the end of each time step

Those functions provide sequences of finite elements (in this particular case just one element for each term) which calculate the entries of the vectors and matrices at specific time steps.

The finite elements marked by blue boxes in Figure 3 are created by the following code

// Create finite element instances to integrate the right-hand side of slow
// and stiff vector, and the tangent left-hand side for stiff part.
boost::shared_ptr<Ele> vol_ele_slow_rhs(new Ele(m_field));
boost::shared_ptr<Ele> vol_ele_stiff_rhs(new Ele(m_field));
boost::shared_ptr<Ele> vol_ele_stiff_lhs(new Ele(m_field));

where the integration rule, controlling the number of integration points on each element, is set by

auto vol_rule = [](int, int, int p) -> int { return 2 * p; };
vol_ele_slow_rhs->getRuleHook = vol_rule;
vol_ele_stiff_rhs->getRuleHook = vol_rule;
vol_ele_stiff_lhs->getRuleHook = vol_rule;

The integration rule depends on the type of operator evaluated on the element; in our case we evaluate mass-matrix-type terms, thus we need to integrate polynomials of order \(2p\) exactly.
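The hook receives three integers; reading them as the approximation orders on rows, columns and data (an assumption consistent with the lambda above), the same rule can be written more explicitly as follows:

// Illustrative sketch: the mass matrix integrand is a product of two bases,
// hence a polynomial of order p_row + p_col that must be integrated exactly.
auto mass_rule = [](int p_row, int p_col, int p_data) -> int {
  return p_row + p_col;
};
vol_ele_stiff_lhs->getRuleHook = mass_rule;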

Note that the implementation of the finite element instances is generic. Elements do problem-specific calculations through the user data operators provided to them, which are described in the following sections. Three elements, vol_ele_slow_rhs, vol_ele_stiff_rhs and vol_ele_stiff_lhs, are provided to the Discrete Manager (DM) to calculate the slow and stiff vectors and the Jacobian matrix. Those three elements are instances of the class MoFEM::FaceElementForcesAndSourcesCore. Once the elements are created, we can add them to the TS manager through the DM interface

// Add element to calculate lhs of stiff part
CHKERR DMMoFEMTSSetIJacobian(dm, simple_interface->getDomainFEName(),
vol_ele_stiff_lhs, null, null);
// Add element to calculate rhs of stiff part
CHKERR DMMoFEMTSSetIFunction(dm, simple_interface->getDomainFEName(),
vol_ele_stiff_rhs, null, null);
// Add element to calculate rhs of slow (nonlinear) part
CHKERR DMMoFEMTSSetRHSFunction(dm, simple_interface->getDomainFEName(),
vol_ele_slow_rhs, null, null);
Note
In the case of a 3D problem, the user has to switch to MoFEM::VolumeElementForcesAndSourcesCore to integrate over volume entities. Note that all user data operators implemented in this example are templates, where the template parameter is the dimension of the element. That makes the implementation dimension independent, as sketched below.
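A minimal sketch of that switch (assuming a tetrahedral mesh is loaded) changes only the element type aliases and the template parameter:

// 3D variant (sketch, not part of the tutorial code)
using Ele = MoFEM::VolumeElementForcesAndSourcesCore;
using OpEle = Ele::UserDataOperator;
// ... and the operators are instantiated with DIM = 3, e.g.
vol_ele_stiff_rhs->getOpPtrVector().push_back(new OpAssembleStiffRhs<3>(data));
vol_ele_stiff_lhs->getOpPtrVector().push_back(new OpAssembleStiffLhs<3>(data));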

Telling finite elements which operators should be used

The problem-specific matrices and vectors are implemented in the namespace ReactionDiffusionEquation, by user data operators derived from ReactionDiffusionEquation::OpEle. In this tutorial, the following operators are implemented

  1. ReactionDiffusionEquation::OpAssembleMass used to calculate mass matrix \(\mathbf{M}\)
  2. ReactionDiffusionEquation::OpAssembleSlowRhs used to calculate the slow right-hand side vector \(\mathbf{G}\)
  3. ReactionDiffusionEquation::OpAssembleStiffRhs used to calculate the stiff vector \(\mathbf{F}\)
  4. ReactionDiffusionEquation::OpAssembleStiffLhs used to calculate the stiff matrix \(\frac{\textrm{d}\mathbf{F}}{\textrm{d} \overline{\mathbf{u}^n}}\)
  5. ReactionDiffusionEquation::Monitor used to post-process results at the end of each time step

Implementation of each operator follows a similar pattern, condensed in the sketch after this list:

  1. The class is inherited from ReactionDiffusionEquation::OpEle
  2. Two types of operators are implemented: ReactionDiffusionEquation::OpEle::OPROW to calculate vectors, and ReactionDiffusionEquation::OpEle::OPROWCOL to calculate matrices.
  3. The implementation of a user operator overloads the doWork member function.
  4. In each doWork method, the user iterates over integration points, and then over base functions; in the case of a matrix, over base functions on rows and columns.
  5. Once the local element vector or matrix is assembled, the doWork function is finalised by assembling it into the global vector or matrix.
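Condensed into code, a vector-type (OPROW) operator has roughly the following shape; this is a sketch distilled from the plain program below (vecF is a VectorDouble member of the operator class), not additional tutorial code:

MoFEMErrorCode doWork(int side, EntityType type, EntData &data) {
  MoFEMFunctionBegin;
  const int nb_dofs = data.getIndices().size();
  if (nb_dofs) {
    vecF.resize(nb_dofs, false);
    vecF.clear();
    const int nb_integration_pts = getGaussPts().size2();
    auto t_base = data.getFTensor0N();         // base functions
    auto t_w = getFTensor0IntegrationWeight(); // integration weights
    for (int gg = 0; gg != nb_integration_pts; ++gg) { // integration points
      for (int rr = 0; rr != nb_dofs; ++rr) {          // base functions
        vecF[rr] += getMeasure() * t_w * t_base; // times the integrand at gg
        ++t_base;
      }
      ++t_w;
    }
    // assemble the local vector into the global (TS) vector
    CHKERR VecSetValues(getFEMethod()->ts_F, data, &*vecF.begin(), ADD_VALUES);
  }
  MoFEMFunctionReturn(0);
}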
Note
A more detailed description of how to implement user data operators can be found in other tutorials, for example COR-3: Implementing operators for the Poisson equation.

Common data

The data are passed between the operators and finite elements through ReactionDiffusionEquation::CommonData

struct CommonData {
MatrixDouble grad; ///< Gradients of field "u" at integration points
VectorDouble val; ///< Values of field "u" at integration points
VectorDouble dot_val; ///< Rate of values of field "u" at integration points
SmartPetscObj<Mat> M; ///< Mass matrix
SmartPetscObj<KSP> ksp; ///< Linear solver
};

which is dynamically allocated and kept by a shared pointer

boost::shared_ptr<CommonData> data(new CommonData());

In addition, some operators need direct access to individual members of this data structure; this can be done safely by so-called aliased shared pointers, which share ownership with data while pointing at its members

auto val_ptr = boost::shared_ptr<VectorDouble>(data, &data->val);
auto dot_val_ptr = boost::shared_ptr<VectorDouble>(data, &data->dot_val);
auto grad_ptr = boost::shared_ptr<MatrixDouble>(data, &data->grad);

Pushing operators to finite element

Each of those user data operators is added to a finite element; for example, to vol_ele_stiff_rhs we add operators as follows

auto det_ptr = boost::make_shared<VectorDouble>();
auto jac_ptr = boost::make_shared<MatrixDouble>();
auto inv_jac_ptr = boost::make_shared<MatrixDouble>();
vol_ele_stiff_rhs->getOpPtrVector().push_back(new OpCalculateHOJac<2>(jac_ptr));
vol_ele_stiff_rhs->getOpPtrVector().push_back(
    new OpInvertMatrix<2>(jac_ptr, det_ptr, inv_jac_ptr));
vol_ele_stiff_rhs->getOpPtrVector().push_back(
    new OpSetHOInvJacToScalarBases<2>(H1, inv_jac_ptr));
vol_ele_stiff_rhs->getOpPtrVector().push_back(
    new OpCalculateScalarFieldValuesDot("u", dot_val_ptr));
vol_ele_stiff_rhs->getOpPtrVector().push_back(
    new OpCalculateScalarFieldGradient<2>("u", grad_ptr));
vol_ele_stiff_rhs->getOpPtrVector().push_back(new OpAssembleStiffRhs<2>(data));

where MoFEM::OpCalculateHOJac<2>, MoFEM::OpInvertMatrix<2> and MoFEM::OpSetHOInvJacToScalarBases<2> are standard operators which calculate the element Jacobian, invert it, and pull derivatives of the base functions back to the element reference configuration

\[ \frac{\partial \pmb\Phi}{\partial \mathbf{x}} = \frac{\partial \pmb\Phi}{\partial \pmb\xi}\mathbf{J}^{-\textrm{T}} \quad\textrm{where}\quad \mathbf{J} = \frac{\partial \mathbf{x}}{\partial \pmb\xi},\; \mathbf{x} = \pmb\Phi(\pmb \xi) \overline{\mathbf{x}} \]

where \(\pmb \xi\) are coordinates in the local element configuration, and \(\overline{\mathbf{x}}\) are nodal positions, supplemented by higher order coefficients on edges and faces if necessary to describe higher order geometry. The operators MoFEM::OpCalculateScalarFieldValuesDot and MoFEM::OpCalculateScalarFieldGradient are standard operators used to calculate rates of field values and gradients of field values at integration points, respectively. All operators marked in Figure 3 by dark green colour are standard operators, implemented in the basic user modules.

Problem with the IMEX method in TS

The implementation of the IMEX method in the TS solver requires the user to provide

\[ \mathbf{g}(t,\overline{\mathbf{u}}) = \mathbf{M}^{-1}\mathbf{G}(t,\overline{\mathbf{u}}) \]

whereas the user data operators provide the vector \(\mathbf{G}(t,\overline{\mathbf{u}})\). This issue can be resolved by exploiting finite element functionality. Each finite element class is derived from MoFEM::BasicMethod

struct BasicMethod : public KspMethod, SnesMethod, TSMethod {
  virtual MoFEMErrorCode preProcess() {
    if (preProcessHook)
      CHKERR preProcessHook();
  }
  virtual MoFEMErrorCode operator()();
  virtual MoFEMErrorCode postProcess() {
    if (postProcessHook)
      CHKERR postProcessHook();
  }
};

Every element is run in three stages

  1. preProcess() is executed before the code iterates over the entities of the given finite element on the mesh
  2. operator() is executed for each entity of the given finite element on the mesh
  3. postProcess() is executed after the code iterates over the entities of the given finite element on the mesh

If the user provides a hook at the postProcess stage, then after the vector \(\mathbf{G}(t,\overline{\mathbf{u}})\) is calculated, one can solve in place for \(\mathbf{g}(t,\overline{\mathbf{u}})\). This is done by the following lambda function

auto solve_for_g = [&]() {
if (*(vol_ele_slow_rhs->vecAssembleSwitch)) {
CHKERR VecGhostUpdateBegin(vol_ele_slow_rhs->ts_F, ADD_VALUES, SCATTER_REVERSE);
CHKERR VecGhostUpdateEnd(vol_ele_slow_rhs->ts_F, ADD_VALUES, SCATTER_REVERSE);
CHKERR VecAssemblyBegin(vol_ele_slow_rhs->ts_F);
CHKERR VecAssemblyEnd(vol_ele_slow_rhs->ts_F);
*vol_ele_slow_rhs->vecAssembleSwitch = false;
}
CHKERR KSPSolve(data->ksp, vol_ele_slow_rhs->ts_F, vol_ele_slow_rhs->ts_F);
};

The lambda function is set as a hook to the slow finite element, i.e. vol_ele_slow_rhs, as follows

vol_ele_slow_rhs->postProcessHook = solve_for_g;

Note that this is also indicated in Figure 3. In solve_for_g, the KSP (linear) solver is used to solve the linear equation

\[ \mathbf{M}\mathbf{g}(t,\overline{\mathbf{u}}) = \mathbf{G}(t,\overline{\mathbf{u}}) \]

where mass matrix \(\mathbf{M}\) is calculated as follows

CHKERR MatZeroEntries(data->M);
boost::shared_ptr<Ele> vol_mass_ele(new Ele(m_field));
vol_mass_ele->getOpPtrVector().push_back(new OpAssembleMass(data));
CHKERR DMoFEMLoopFiniteElements(dm, simple_interface->getDomainFEName(),
    vol_mass_ele);
CHKERR MatAssemblyBegin(data->M, MAT_FINAL_ASSEMBLY);
CHKERR MatAssemblyEnd(data->M, MAT_FINAL_ASSEMBLY);

using the finite element vol_mass_ele, created on the fly, with a single pushed operator, ReactionDiffusionEquation::OpAssembleMass. The linear KSP solver is created and set up as follows

data->ksp = createKSP(m_field.get_comm());
CHKERR KSPSetOperators(data->ksp, data->M, data->M);
CHKERR KSPSetFromOptions(data->ksp);
CHKERR KSPSetUp(data->ksp);
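Since the mass matrix \(\mathbf{M}\) is symmetric positive definite, an illustrative choice of options for this solver (not the tutorial defaults) would be the conjugate gradient method with a simple preconditioner:

-ksp_type cg
-pc_type jacobi

Note, however, that this KSP is created without an options prefix, so such flags would also affect the Krylov solver used inside TS; in practice the defaults from param_file.petsc are left in place.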

Initial and boundary conditions

Initial conditions

To set the initial conditions, we use MoFEM::MeshsetsManager to get entities from a blockset, which is set by the mesh generator, for example Cubit, Salome or Gmsh. We assume that the entities on which we set the initial condition form a subset of entities listed in BLOCKSET 1, as in the following code

if (m_field.getInterface<MeshsetsManager>()->checkMeshset(1, BLOCKSET)) {
  Range inner_surface;
  CHKERR m_field.getInterface<MeshsetsManager>()->getEntitiesByDimension(
      1, BLOCKSET, 2, inner_surface, true);
  if (!inner_surface.empty()) {
    Range inner_surface_verts;
    CHKERR moab.get_connectivity(inner_surface, inner_surface_verts, false);
    CHKERR m_field.getInterface<FieldBlas>()->setField(
        u0, MBVERTEX, inner_surface_verts, "u");
  }
}
  1. We check if BLOCKSET 1 exists. Since the problem is solved in parallel and each processor keeps only its part of the mesh, BLOCKSET 1 exists only on some processors, with the corresponding subset of entities.
  2. If the blockset exists, we ask MoFEM::MeshsetsManager for the entities of dimension 2 in the blockset, i.e. faces, in this case triangles.
  3. We get all the vertices on those faces, and using the MoFEM::FieldBlas interface we set values at the vertices. The value set is the constant \(u_0\).

Dirichlet boundary conditions

To keep the initial assumption, made in the opening paragraphs about fire propagation, that around the plane of dry grass we have a closed trench filled with water, we apply homogeneous Dirichlet boundary conditions on the boundary of the domain. To do that we follow the code

Range surface;
CHKERR moab.get_entities_by_type(0, MBTRI, surface, false);
Skinner skin(&m_field.get_moab());
Range edges;
CHKERR skin.find_skin(0, surface, false, edges);
Range edges_part;
ParallelComm *pcomm = ParallelComm::get_pcomm(&moab, MYPCOMM_INDEX);
CHKERR pcomm->filter_pstatus(edges, PSTATUS_SHARED | PSTATUS_MULTISHARED,
PSTATUS_NOT, -1, &edges_part);
Range edges_verts;
CHKERR moab.get_connectivity(edges_part, edges_verts, false);
CHKERR m_field.getInterface<ProblemsManager>()->removeDofsOnEntities(
simple_interface->getProblemName(), "u",
unite(edges_verts, edges_part));
  1. We take all triangles on the mesh.
  2. Using the Skinner implemented in MOAB, we take the skin of the mesh; consequently, we obtain all edges bounding the given part of the mesh.
  3. Since we work on a parallel, partitioned mesh, not all of those edges lie on the true domain boundary. In order to filter out the true boundary, we use moab::ParallelComm::filter_pstatus, provided with the MOAB interface, to keep only the entities that are not shared with any other processor. As a result, we obtain the entities on the physical domain boundary.
  4. In order to enforce homogeneous essential boundary conditions in the finite element method, the simplest way is to remove the corresponding degrees of freedom (DOFs) from the problem. That is done through the MoFEM::ProblemsManager interface, using the MoFEM::ProblemsManager::removeDofsOnEntities function. MoFEM::ProblemsManager is a MoFEM interface to manage DOFs and finite elements in the problem.

The plain program.

The plain program is located in basic_finite_elements/reaction_diffusion_equation/reaction_diffusion.cpp

/**
* \file reaction_diffusion.cpp
* \example reaction_diffusion.cpp
*
**/
#include <MoFEM.hpp>
using namespace MoFEM;
static char help[] = "...\n\n";
namespace ReactionDiffusionEquation {
using Ele = FaceElementForcesAndSourcesCore;
using OpEle = FaceElementForcesAndSourcesCore::UserDataOperator;
using EntData = EntitiesFieldData::EntData;
using PostProcEle = PostProcBrokenMeshInMoab<FaceElementForcesAndSourcesCore>;
using OpPPMap = OpPostProcMapInMoab<2, 2>;
const double D = 2e-3; ///< diffusivity
const double r = 1; ///< rate factor
const double k = 1; ///< carrying capacity
const double u0 = 0.1; ///< initial value on blocksets
const int save_every_nth_step = 1;
/**
* @brief Common data
*
* Common data are used to keep and pass data between elements
*
*/
struct CommonData {
MatrixDouble grad; ///< Gradients of field "u" at integration points
VectorDouble val; ///< Values of field "u" at integration points
VectorDouble dot_val; ///< Rate of values of field "u" at integration points
SmartPetscObj<Mat> M; ///< Mass matrix
SmartPetscObj<KSP> ksp; ///< Linear solver
};
/**
* @brief Assemble mass matrix
*/
struct OpAssembleMass : OpEle {
OpAssembleMass(boost::shared_ptr<CommonData> &data)
: OpEle("u", "u", OpEle::OPROWCOL), commonData(data) {
sYmm = true;
}
MoFEMErrorCode doWork(int row_side, int col_side, EntityType row_type,
EntityType col_type, EntData &row_data,
EntData &col_data) {
MoFEMFunctionBegin;
const int nb_row_dofs = row_data.getIndices().size();
const int nb_col_dofs = col_data.getIndices().size();
if (nb_row_dofs && nb_col_dofs) {
const int nb_integration_pts = getGaussPts().size2();
mat.resize(nb_row_dofs, nb_col_dofs, false);
mat.clear();
auto t_row_base = row_data.getFTensor0N();
auto t_w = getFTensor0IntegrationWeight();
const double vol = getMeasure();
for (int gg = 0; gg != nb_integration_pts; ++gg) {
const double a = t_w * vol;
for (int rr = 0; rr != nb_row_dofs; ++rr) {
auto t_col_base = col_data.getFTensor0N(gg, 0);
for (int cc = 0; cc != nb_col_dofs; ++cc) {
mat(rr, cc) += a * t_row_base * t_col_base;
++t_col_base;
}
++t_row_base;
}
++t_w;
}
CHKERR MatSetValues(commonData->M, row_data, col_data, &mat(0, 0),
ADD_VALUES);
if (row_side != col_side || row_type != col_type) {
transMat.resize(nb_col_dofs, nb_row_dofs, false);
noalias(transMat) = trans(mat);
CHKERR MatSetValues(commonData->M, col_data, row_data, &transMat(0, 0),
ADD_VALUES);
}
}
MoFEMFunctionReturn(0);
}
private:
MatrixDouble mat, transMat;
boost::shared_ptr<CommonData> commonData;
};
/**
* @brief Assemble slow part
*
* Solve problem \f$ F(t,u,\dot{u}) = G(t,u) \f$ where here the right hand side
* \f$ G(t,u) \f$ is implemented.
*
*/
struct OpAssembleSlowRhs : OpEle {
OpAssembleSlowRhs(boost::shared_ptr<CommonData> &data)
: OpEle("u", OpEle::OPROW), commonData(data) {}
MoFEMErrorCode doWork(int side, EntityType type, EntData &data) {
MoFEMFunctionBegin;
const int nb_dofs = data.getIndices().size();
if (nb_dofs) {
vecF.resize(nb_dofs, false);
vecF.clear();
const int nb_integration_pts = getGaussPts().size2();
auto t_val = getFTensor0FromVec(commonData->val);
auto t_base = data.getFTensor0N();
auto t_w = getFTensor0IntegrationWeight();
const double vol = getMeasure();
for (int gg = 0; gg != nb_integration_pts; ++gg) {
const double a = vol * t_w;
const double f = a * r * t_val * (1 - t_val / k);
for (int rr = 0; rr != nb_dofs; ++rr) {
const double b = f * t_base;
vecF[rr] += b;
++t_base;
}
++t_val;
++t_w;
}
CHKERR VecSetOption(getFEMethod()->ts_F, VEC_IGNORE_NEGATIVE_INDICES,
PETSC_TRUE);
CHKERR VecSetValues(getFEMethod()->ts_F, data, &*vecF.begin(),
ADD_VALUES);
}
MoFEMFunctionReturn(0);
}
private:
VectorDouble vecF;
boost::shared_ptr<CommonData> commonData;
};
/**
* @brief Assemble stiff part
*
* Solve problem \f$ F(t,u,\dot{u}) = G(t,u) \f$ where here the right hand side
* \f$ F(t,u,\dot{u}) \f$ is implemented.
*
*/
template <int DIM> struct OpAssembleStiffRhs : OpEle {
OpAssembleStiffRhs(boost::shared_ptr<CommonData> &data)
: OpEle("u", OpEle::OPROW), commonData(data) {}
MoFEMErrorCode doWork(int side, EntityType type, EntData &data) {
MoFEMFunctionBegin;
const int nb_dofs = data.getIndices().size();
if (nb_dofs) {
vecF.resize(nb_dofs, false);
vecF.clear();
const int nb_integration_pts = getGaussPts().size2();
auto t_dot_val = getFTensor0FromVec(commonData->dot_val);
auto t_grad = getFTensor1FromMat<DIM>(commonData->grad);
auto t_base = data.getFTensor0N();
auto t_diff_base = data.getFTensor1DiffN<DIM>();
auto t_w = getFTensor0IntegrationWeight();
const double vol = getMeasure();
for (int gg = 0; gg != nb_integration_pts; ++gg) {
const double a = vol * t_w;
for (int rr = 0; rr != nb_dofs; ++rr) {
vecF[rr] += a * (t_base * t_dot_val + D * t_diff_base(i) * t_grad(i));
++t_diff_base;
++t_base;
}
++t_dot_val;
++t_grad;
++t_w;
}
CHKERR VecSetOption(getFEMethod()->ts_F, VEC_IGNORE_NEGATIVE_INDICES,
PETSC_TRUE);
CHKERR VecSetValues(getFEMethod()->ts_F, data, &*vecF.begin(),
ADD_VALUES);
}
MoFEMFunctionReturn(0);
}
private:
VectorDouble vecF;
FTensor::Index<'i', DIM> i;
boost::shared_ptr<CommonData> commonData;
};
/**
* @brief Assemble stiff part tangent
*
* Solve problem \f$ F(t,u,\dot{u}) = G(t,u) \f$ where here the right hand side
* \f$ \frac{\textrm{d} F}{\textrm{d} u^n} = a F_{\dot{u}}(t,u,\textrm{u}) +
* F_{u}(t,u,\textrm{u}) \f$ is implemented.
*
*/
template <int DIM> struct OpAssembleStiffLhs : OpEle {
OpAssembleStiffLhs(boost::shared_ptr<CommonData> &data)
: OpEle("u", "u", OpEle::OPROWCOL), commonData(data) {
sYmm = true;
}
MoFEMErrorCode doWork(int row_side, int col_side, EntityType row_type,
EntityType col_type, EntData &row_data,
EntData &col_data) {
MoFEMFunctionBegin;
const int nb_row_dofs = row_data.getIndices().size();
const int nb_col_dofs = col_data.getIndices().size();
if (nb_row_dofs && nb_col_dofs) {
mat.resize(nb_row_dofs, nb_col_dofs, false);
mat.clear();
const int nb_integration_pts = getGaussPts().size2();
auto t_row_base = row_data.getFTensor0N();
auto t_row_diff_base = row_data.getFTensor1DiffN<DIM>();
auto t_w = getFTensor0IntegrationWeight();
const double ts_a = getFEMethod()->ts_a;
const double vol = getMeasure();
for (int gg = 0; gg != nb_integration_pts; ++gg) {
const double a = vol * t_w;
for (int rr = 0; rr != nb_row_dofs; ++rr) {
auto t_col_base = col_data.getFTensor0N(gg, 0);
auto t_col_diff_base = col_data.getFTensor1DiffN<DIM>(gg, 0);
for (int cc = 0; cc != nb_col_dofs; ++cc) {
mat(rr, cc) += a * (t_row_base * t_col_base * ts_a +
D * t_row_diff_base(i) * t_col_diff_base(i));
++t_col_base;
++t_col_diff_base;
}
++t_row_base;
++t_row_diff_base;
}
++t_w;
}
CHKERR MatSetValues(getFEMethod()->ts_B, row_data, col_data, &mat(0, 0),
ADD_VALUES);
if (row_side != col_side || row_type != col_type) {
transMat.resize(nb_col_dofs, nb_row_dofs, false);
noalias(transMat) = trans(mat);
CHKERR MatSetValues(getFEMethod()->ts_B, col_data, row_data,
&transMat(0, 0), ADD_VALUES);
}
}
MoFEMFunctionReturn(0);
}
private:
boost::shared_ptr<CommonData> commonData;
MatrixDouble mat, transMat;
FTensor::Index<'i', DIM> i;
};
/**
* @brief Monitor solution
*
* This functions is called by TS solver at the end of each step. It is used
* to output results to the hard drive.
*/
struct Monitor : public FEMethod {
Monitor(SmartPetscObj<DM> &dm, boost::shared_ptr<PostProcEle> &post_proc)
: dM(dm), postProc(post_proc){};
MoFEMErrorCode preProcess() { return 0; }
MoFEMErrorCode operator()() { return 0; }
MoFEMErrorCode postProcess() {
MoFEMFunctionBegin;
if (ts_step % save_every_nth_step == 0) {
CHKERR DMoFEMLoopFiniteElements(dM, "dFE", postProc);
CHKERR postProc->writeFile(
"out_level_" + boost::lexical_cast<std::string>(ts_step) + ".h5m");
}
MoFEMFunctionReturn(0);
}
private:
SmartPetscObj<DM> dM;
boost::shared_ptr<PostProcEle> postProc;
};
}; // namespace ReactionDiffusionEquation
using namespace ReactionDiffusionEquation;
int main(int argc, char *argv[]) {
// initialize petsc
const char param_file[] = "param_file.petsc";
MoFEM::Core::Initialize(&argc, &argv, param_file, help);
try {
// Create moab and mofem instances
moab::Core mb_instance;
moab::Interface &moab = mb_instance;
MoFEM::Core core(moab);
MoFEM::Interface &m_field = core;
// Register DM Manager
DMType dm_name = "DMMOFEM";
CHKERR DMRegister_MoFEM(dm_name);
// Simple interface
Simple *simple_interface;
CHKERR m_field.getInterface(simple_interface);
CHKERR simple_interface->getOptions();
CHKERR simple_interface->loadFile();
int order = 4; ///< approximation order
CHKERR PetscOptionsGetInt(PETSC_NULL, "", "-order", &order, PETSC_NULL);
// add fields
CHKERR simple_interface->addDomainField("u", H1, AINSWORTH_LEGENDRE_BASE,
1);
// set fields order
CHKERR simple_interface->setFieldOrder("u", order);
// setup problem
CHKERR simple_interface->setUp();
// Create common data structure
boost::shared_ptr<CommonData> data(new CommonData());
/// Alias pointers to data in common data structure
auto val_ptr = boost::shared_ptr<VectorDouble>(data, &data->val);
auto dot_val_ptr = boost::shared_ptr<VectorDouble>(data, &data->dot_val);
auto grad_ptr = boost::shared_ptr<MatrixDouble>(data, &data->grad);
// Create finite element instances to integrate the right-hand side of slow
// and stiff vector, and the tangent left-hand side for stiff part.
boost::shared_ptr<Ele> vol_ele_slow_rhs(new Ele(m_field));
boost::shared_ptr<Ele> vol_ele_stiff_rhs(new Ele(m_field));
boost::shared_ptr<Ele> vol_ele_stiff_lhs(new Ele(m_field));
// Push operators to integrate the slow right-hand side vector
vol_ele_slow_rhs->getOpPtrVector().push_back(
new OpCalculateScalarFieldValues("u", val_ptr));
vol_ele_slow_rhs->getOpPtrVector().push_back(new OpAssembleSlowRhs(data));
// PETSc IMEX and explicit solvers demand that g = M^-1 G is provided. So
// when the slow right-hand side vector (G) is assembled, we solve in place
// for the g vector.
auto solve_for_g = [&]() {
if (*(vol_ele_slow_rhs->vecAssembleSwitch)) {
CHKERR VecGhostUpdateBegin(vol_ele_slow_rhs->ts_F, ADD_VALUES,
SCATTER_REVERSE);
CHKERR VecGhostUpdateEnd(vol_ele_slow_rhs->ts_F, ADD_VALUES,
SCATTER_REVERSE);
CHKERR VecAssemblyBegin(vol_ele_slow_rhs->ts_F);
CHKERR VecAssemblyEnd(vol_ele_slow_rhs->ts_F);
*vol_ele_slow_rhs->vecAssembleSwitch = false;
}
CHKERR KSPSolve(data->ksp, vol_ele_slow_rhs->ts_F,
vol_ele_slow_rhs->ts_F);
};
// Add hook to the element to calculate g.
vol_ele_slow_rhs->postProcessHook = solve_for_g;
auto det_ptr = boost::make_shared<VectorDouble>();
auto jac_ptr = boost::make_shared<MatrixDouble>();
auto inv_jac_ptr = boost::make_shared<MatrixDouble>();
// Add operators to calculate the stiff right-hand side
vol_ele_stiff_rhs->getOpPtrVector().push_back(
new OpCalculateHOJac<2>(jac_ptr));
vol_ele_stiff_rhs->getOpPtrVector().push_back(
new OpInvertMatrix<2>(jac_ptr, det_ptr, inv_jac_ptr));
vol_ele_stiff_rhs->getOpPtrVector().push_back(
new OpSetHOInvJacToScalarBases<2>(H1, inv_jac_ptr));
vol_ele_stiff_rhs->getOpPtrVector().push_back(
new OpCalculateScalarFieldValuesDot("u", dot_val_ptr));
vol_ele_stiff_rhs->getOpPtrVector().push_back(
new OpCalculateScalarFieldGradient<2>("u", grad_ptr));
vol_ele_stiff_rhs->getOpPtrVector().push_back(
new OpAssembleStiffRhs<2>(data));
// Add operators to calculate the stiff left-hand side
vol_ele_stiff_lhs->getOpPtrVector().push_back(
new OpCalculateHOJac<2>(jac_ptr));
vol_ele_stiff_lhs->getOpPtrVector().push_back(
new OpInvertMatrix<2>(jac_ptr, det_ptr, inv_jac_ptr));
vol_ele_stiff_lhs->getOpPtrVector().push_back(
new OpSetHOInvJacToScalarBases<2>(H1, inv_jac_ptr));
vol_ele_stiff_lhs->getOpPtrVector().push_back(
new OpAssembleStiffLhs<2>(data));
// Set integration rules
auto vol_rule = [](int, int, int p) -> int { return 2 * p; };
vol_ele_slow_rhs->getRuleHook = vol_rule;
vol_ele_stiff_rhs->getRuleHook = vol_rule;
vol_ele_stiff_lhs->getRuleHook = vol_rule;
// Create element for post-processing
auto post_proc = boost::make_shared<PostProcEle>(m_field);
boost::shared_ptr<ForcesAndSourcesCore> null;
auto u_ptr = boost::make_shared<VectorDouble>();
post_proc->getOpPtrVector().push_back(
new OpCalculateScalarFieldValues("u", u_ptr));
post_proc->getOpPtrVector().push_back(
new OpPPMap(
post_proc->getPostProcMesh(), post_proc->getMapGaussPts(),
{{"u", u_ptr}},
{},
{},
{}
)
);
// Get PETSc discrete manager
auto dm = simple_interface->getDM();
// Get surface entities from the blockset and set initial values on those
// entities. To keep it simple, it is assumed that initial values are set
// on blockset 1.
if (m_field.getInterface<MeshsetsManager>()->checkMeshset(1, BLOCKSET)) {
Range inner_surface;
CHKERR m_field.getInterface<MeshsetsManager>()->getEntitiesByDimension(
1, BLOCKSET, 2, inner_surface, true);
if (!inner_surface.empty()) {
Range inner_surface_verts;
CHKERR moab.get_connectivity(inner_surface, inner_surface_verts, false);
CHKERR m_field.getInterface<FieldBlas>()->setField(
u0, MBVERTEX, inner_surface_verts, "u");
}
}
// Get skin on the body, i.e. body boundary, and apply homogenous Dirichlet
// conditions on that boundary.
Range surface;
CHKERR moab.get_entities_by_dimension(0, 2, surface, false);
Skinner skin(&m_field.get_moab());
Range edges;
CHKERR skin.find_skin(0, surface, false, edges);
Range edges_part;
ParallelComm *pcomm = ParallelComm::get_pcomm(&moab, MYPCOMM_INDEX);
CHKERR pcomm->filter_pstatus(edges, PSTATUS_SHARED | PSTATUS_MULTISHARED,
PSTATUS_NOT, -1, &edges_part);
Range edges_verts;
CHKERR moab.get_connectivity(edges_part, edges_verts, false);
// Since Dirichlet b.c. are essential boundary conditions, remove DOFs from
// the problem.
CHKERR m_field.getInterface<ProblemsManager>()->removeDofsOnEntities(
simple_interface->getProblemName(), "u",
unite(edges_verts, edges_part));
// Create mass matrix, calculate and assemble
CHKERR DMCreateMatrix_MoFEM(dm, data->M);
CHKERR MatZeroEntries(data->M);
boost::shared_ptr<Ele> vol_mass_ele(new Ele(m_field));
vol_mass_ele->getOpPtrVector().push_back(new OpAssembleMass(data));
CHKERR DMoFEMLoopFiniteElements(dm, simple_interface->getDomainFEName(),
vol_mass_ele);
CHKERR MatAssemblyBegin(data->M, MAT_FINAL_ASSEMBLY);
CHKERR MatAssemblyEnd(data->M, MAT_FINAL_ASSEMBLY);
// Create and set up the KSP (linear solver); we need it to calculate
// g(t,u) = M^-1 G(t,u)
data->ksp = createKSP(m_field.get_comm());
CHKERR KSPSetOperators(data->ksp, data->M, data->M);
CHKERR KSPSetFromOptions(data->ksp);
CHKERR KSPSetUp(data->ksp);
// Create and setup TS solver
auto ts = createTS(m_field.get_comm());
// Use IMEX solver, i.e. implicit/explicit solver
CHKERR TSSetType(ts, TSARKIMEX);
CHKERR TSARKIMEXSetType(ts, TSARKIMEXA2);
// Add element to calculate lhs of stiff part
CHKERR DMMoFEMTSSetIJacobian(dm, simple_interface->getDomainFEName(),
vol_ele_stiff_lhs, null, null);
// Add element to calculate rhs of stiff part
CHKERR DMMoFEMTSSetIFunction(dm, simple_interface->getDomainFEName(),
vol_ele_stiff_rhs, null, null);
// Add element to calculate rhs of slow (nonlinear) part
CHKERR DMMoFEMTSSetRHSFunction(dm, simple_interface->getDomainFEName(),
vol_ele_slow_rhs, null, null);
// Add monitor to time solver
boost::shared_ptr<Monitor> monitor_ptr(new Monitor(dm, post_proc));
CHKERR DMMoFEMTSSetMonitor(dm, ts, simple_interface->getDomainFEName(),
monitor_ptr, null, null);
// Create solution vector
SmartPetscObj<Vec> X;
CHKERR DMCreateGlobalVector_MoFEM(dm, X);
CHKERR DMoFEMMeshToLocalVector(dm, X, INSERT_VALUES, SCATTER_FORWARD);
// Solve problem
double ftime = 1;
CHKERR TSSetDM(ts, dm);
CHKERR TSSetDuration(ts, PETSC_DEFAULT, ftime);
CHKERR TSSetSolution(ts, X);
CHKERR TSSetFromOptions(ts);
CHKERR TSSolve(ts, X);
}
CATCH_ERRORS;
// finish work cleaning memory, getting statistics, etc.
MoFEM::Core::Finalize();
return 0;
}