Using the fieldsplit solver and DM sub-problems.

In this tutorial, we show how to create sub-problems by using the discrete manager (DM) and sub-problem DMs to create nested matrices and use them with the field-split preconditioner. This example shows a simplified use of the field-split preconditioner; for a more advanced example using sub discrete managers and the field-split preconditioner see cell_forces.cpp

The field-split preconditioner is a block solver in which Jacobi or Gauss–Seidel iterations are applied to the blocks of the matrix. Alternatively, if the matrix has a 2x2 block structure, a Schur complement can be approximated by the preconditioner. We do not intend to explain the mathematical details of block solvers, but merely show how to use the PETSc preconditioner PCFIELDSPLIT (see http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCFIELDSPLIT.html).

The block matrix is represented using a nested matrix; for details see MATNEST (http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatCreateNest.html).

We change only a small part of the code described in the tutorial Solving the Poisson equation.

Block matrix

We solve the same problem as the one shown in Solving the Poisson equation. However, to remove some issues with the field-split solver, we make the upper diagonal block of the matrix invertible by adding a stabilisation matrix, which in this case has the interpretation of a penalty:

\[ \left[ \begin{array}{cc} \mathbf{K}+\mathbf{S} & \mathbf{C}^\textrm{T}\\ \mathbf{C} & \mathbf{0} \end{array} \right] \left\{ \begin{array}{c} \mathbf{U}\\ \mathbf{L} \end{array} \right\} = \left[ \begin{array}{c} \mathbf{F}+\mathbf{S}\overline{U} \\ \mathbf{g} \end{array} \right],\\ \mathbf{K}= \int_\Omega (\nabla \boldsymbol\phi)^\textrm{T} \nabla \boldsymbol\phi \textrm{d}\Omega,\quad \mathbf{C} = \int_{\partial\Omega} \boldsymbol\psi^\textrm{T} \boldsymbol\phi \textrm{d}\partial\Omega,\\ \mathbf{F} = \int_\Omega \boldsymbol\phi^\textrm{T} f \textrm{d}\Omega,\quad \mathbf{g} = \int_{\partial\Omega} \boldsymbol\psi^\textrm{T} \overline{u} \textrm{d}\partial\Omega,\\ \mathbf{S} = \int_{\partial\Omega} \boldsymbol\phi^\textrm{T} \boldsymbol\phi \textrm{d}\partial\Omega. \]
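
When the field-split pre-conditioner is run in its Schur variant, as in the options used later in this tutorial, it works with the Schur complement of the upper-left block. For the system above this is the standard identity (stated here only for orientation)

\[ \mathbf{S}_\textrm{Schur} = -\mathbf{C}\left(\mathbf{K}+\mathbf{S}\right)^{-1}\mathbf{C}^\textrm{T}, \]

which exists only if the upper-left block is invertible; guaranteeing that is precisely the role of the penalty matrix.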

Code dissection

/**
* \file analytical_poisson_field_split.cpp
* \ingroup mofem_simple_interface
* \example analytical_poisson_field_split.cpp
*
* For more information and a detailed explanation of this
* example see \ref poisson_tut3
*
*
*/
/* This file is part of MoFEM.
* MoFEM is free software: you can redistribute it and/or modify it under
* the terms of the GNU Lesser General Public License as published by the
* Free Software Foundation, either version 3 of the License, or (at your
* option) any later version.
*
* MoFEM is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public
* License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with MoFEM. If not, see <http://www.gnu.org/licenses/>. */
static char help[] = "...\n\n";
static const bool debug = false;
/**
* \brief Function
*
* This is the prescribed exact function. If this function is a polynomial
* of order "p" and the order of approximation is "p" or higher, the
* finite element solution is exact (to machine precision).
*
* \f[
* u = 1+x^2+y^2+z^3
* \f]
*
*/
struct ExactFunction {
double operator()(const double x, const double y, const double z) const {
return 1 + x * x + y * y + z * z * z;
}
};
/**
* \brief Exact gradient
*/
struct ExactFunctionGrad {
FTensor::Tensor1<double, 3> operator()(const double x, const double y,
const double z) const {
FTensor::Tensor1<double, 3> grad;
grad(0) = 2 * x;
grad(1) = 2 * y;
grad(2) = 3 * z * z;
return grad;
}
};
/**
* \brief Laplacian of function.
*
* This is the Laplacian of \f$u\f$, calculated using the formula
* \f[
* \nabla^2 u(x,y,z) = \nabla \cdot \nabla u =
* \frac{\partial^2 u}{\partial x^2}+
* \frac{\partial^2 u}{\partial y^2}+
* \frac{\partial^2 u}{\partial z^2}
* \f]
*
*/
struct ExactLaplacianFunction {
double operator()(const double x, const double y, const double z) const {
return 4 + 6 * z;
}
};
/**
* \brief Operator assembling the penalty (stabilisation) matrix S
*/
struct OpS : public FaceElementForcesAndSourcesCore::UserDataOperator {
OpS(const bool beta = 1)
: FaceElementForcesAndSourcesCore::UserDataOperator("U", "U", OPROWCOL,
true),
bEta(beta) {}
/**
* \brief Do calculations for the given operator
* @param row_side row side number (local number) of entity on element
* @param col_side column side number (local number) of entity on element
* @param row_type type of row entity MBVERTEX, MBEDGE, MBTRI or MBTET
* @param col_type type of column entity MBVERTEX, MBEDGE, MBTRI or MBTET
* @param row_data data for row
* @param col_data data for column
* @return error code
*/
MoFEMErrorCode doWork(int row_side, int col_side, EntityType row_type,
EntityType col_type,
DataForcesAndSourcesCore::EntData &row_data,
DataForcesAndSourcesCore::EntData &col_data) {
MoFEMFunctionBegin;
// get number of dofs on row
nbRows = row_data.getIndices().size();
// if no dofs on row, exit that work, nothing to do here
if (!nbRows)
MoFEMFunctionReturnHot(0);
// get number of dofs on column
nbCols = col_data.getIndices().size();
// if no dofs on column, exit, nothing to do here
if (!nbCols)
MoFEMFunctionReturnHot(0);
// get number of integration points
nbIntegrationPts = getGaussPts().size2();
// check if entity block is on matrix diagonal
if (row_side == col_side && row_type == col_type) {
isDiag = true; // yes, it is
} else {
isDiag = false;
}
// integrate local matrix for entity block
CHKERR iNtegrate(row_data, col_data);
// assemble local matrix
CHKERR aSsemble(row_data, col_data);
MoFEMFunctionReturn(0);
}
private:
const double bEta; ///< penalty parameter
int nbRows; ///< number of dofs on rows
int nbCols; ///< number of dofs on columns
int nbIntegrationPts; ///< number of integration points
bool isDiag; ///< true if this block is on diagonal
FTensor::Index<'i', 3> i; ///< summation index
MatrixDouble locMat; ///< local entity block matrix
/**
* \brief Integrate the penalty (mass-like) operator
* @param row_data row data (contains base functions on row entity)
* @param col_data column data (contains base functions on column entity)
* @return error code
*/
MoFEMErrorCode iNtegrate(DataForcesAndSourcesCore::EntData &row_data,
DataForcesAndSourcesCore::EntData &col_data) {
MoFEMFunctionBegin;
// set size of local entity block
locMat.resize(nbRows, nbCols, false);
// clear matrix
locMat.clear();
// get element area scaled by penalty parameter
double area = getArea() * bEta;
// get integration weights
auto t_w = getFTensor0IntegrationWeight();
// get base functions on rows
auto t_row_base = row_data.getFTensor0N();
// loop over integration points
for (int gg = 0; gg != nbIntegrationPts; gg++) {
// take into account the Jacobian
const double alpha = t_w * area;
// take the first element of the local matrix
FTensor::Tensor0<double *> a(&*locMat.data().begin());
// loop over row base functions
for (int rr = 0; rr != nbRows; rr++) {
// get column base functions at gauss point gg
auto t_col_base = col_data.getFTensor0N(gg, 0);
// loop over columns
for (int cc = 0; cc != nbCols; cc++) {
// calculate element of local matrix
a += alpha * t_row_base * t_col_base;
++t_col_base; // move to next base function on column
++a; // move to next element of local matrix in row
}
++t_row_base; // move to next base function on row
}
++t_w; // move to next integration weight
}
MoFEMFunctionReturn(0);
}
/**
* \brief Assemble local entity block matrix
* @param row_data row data (contains global indices on row entity)
* @param col_data column data (contains global indices on column entity)
* @return error code
*/
MoFEMErrorCode aSsemble(DataForcesAndSourcesCore::EntData &row_data,
DataForcesAndSourcesCore::EntData &col_data) {
MoFEMFunctionBegin;
// get pointer to first global index on row
const int *row_indices = &*row_data.getIndices().data().begin();
// get pointer to first global index on column
const int *col_indices = &*col_data.getIndices().data().begin();
// take matrix from KSP or SNES context, whichever is set
Mat B = getFEMethod()->ksp_B != PETSC_NULL ? getFEMethod()->ksp_B
: getFEMethod()->snes_B;
// assemble local matrix
CHKERR MatSetValues(B, nbRows, row_indices, nbCols, col_indices,
&*locMat.data().begin(), ADD_VALUES);
if (!isDiag) {
// if not a diagonal term, and since the global matrix is symmetric,
// assemble the transposed term
locMat = trans(locMat);
CHKERR MatSetValues(B, nbCols, col_indices, nbRows, row_indices,
&*locMat.data().begin(), ADD_VALUES);
}
MoFEMFunctionReturn(0);
}
};
int main(int argc, char *argv[]) {
// Initialize PETSc
MoFEM::Core::Initialize(&argc, &argv, (char *)0, help);
// Create MoAB database
moab::Core moab_core; // create database
moab::Interface &moab = moab_core; // create interface to database
try {
// Get command line options
int order = 3; // default approximation order
PetscBool flg_test = PETSC_FALSE; // true check if error is numerical error
CHKERR PetscOptionsBegin(PETSC_COMM_WORLD, "", "Poisson's problem options",
"none");
// Set approximation order
CHKERR PetscOptionsInt("-order", "approximation order", "", order, &order,
PETSC_NULL);
// Set testing (used by CTest)
CHKERR PetscOptionsBool("-test", "if true is ctest", "", flg_test,
&flg_test, PETSC_NULL);
ierr = PetscOptionsEnd();
// Create MoFEM database and link it to MoAB
MoFEM::Core mofem_core(moab); // create database
MoFEM::Interface &m_field = mofem_core; // create interface to database
// Register DM Manager
CHKERR DMRegister_MoFEM("DMMOFEM"); // register MoFEM DM in PETSc
// Create vector to store approximation global error
Vec global_error;
// First we create elements. Implementation of elements is problem
// independent; we do not know yet what fields are present in the problem,
// nor have we decided yet what approximation base or space we are going
// to use. Implementation of the element is free from those constraints
// and can be used in different contexts.
// Elements used by KSP & DM to assemble system of equations
boost::shared_ptr<ForcesAndSourcesCore>
domain_lhs_fe; ///< Volume element for the matrix
boost::shared_ptr<ForcesAndSourcesCore>
boundary_lhs_fe; ///< Boundary element for the matrix
boost::shared_ptr<ForcesAndSourcesCore>
domain_rhs_fe; ///< Volume element to assemble vector
boost::shared_ptr<ForcesAndSourcesCore>
boundary_rhs_fe; ///< Boundary element to assemble vector
boost::shared_ptr<ForcesAndSourcesCore>
domain_error; ///< Volume element to evaluate error
boost::shared_ptr<ForcesAndSourcesCore>
post_proc_volume; ///< Volume element to Post-process results
boost::shared_ptr<ForcesAndSourcesCore> null; ///< Null element do nothing
boost::shared_ptr<ForcesAndSourcesCore> boundary_penalty_lhs_fe;
{
// Add problem-specific operators to the generic finite elements to
// calculate matrices and vectors.
boundary_lhs_fe, domain_rhs_fe, boundary_rhs_fe, false);
// Add problem-specific operators to the generic finite elements to
// calculate the error on elements and the global error in the H1 norm
global_error, domain_error);
// Post-process results
.creatFEToPostProcessResults(post_proc_volume);
const double beta = 1;
boundary_penalty_lhs_fe = boost::shared_ptr<ForcesAndSourcesCore>(
new FaceElementForcesAndSourcesCore(m_field));
boundary_penalty_lhs_fe->getRuleHook = PoissonExample::FaceRule();
boundary_penalty_lhs_fe->getOpPtrVector().push_back(new OpS(beta));
boundary_rhs_fe->getOpPtrVector().push_back(
new PoissonExample::Op_g(ExactFunction(), "U", beta));
}
// The Simple interface is a simplified interface enabling quick and
// easy construction of the problem.
Simple *simple_interface;
// Query interface and get pointer to Simple interface
CHKERR m_field.getInterface(simple_interface);
// Build problem with simple interface
{
// Get options for simple interface from command line
CHKERR simple_interface->getOptions();
// Load mesh file to database
CHKERR simple_interface->loadFile();
// Add field on domain and boundary. Field is declared by space, base
// and rank. Space can be H1, Hcurl, Hdiv or L2; base can be
// AINSWORTH_LEGENDRE_BASE, DEMKOWICZ_JACOBI_BASE and more; rank is the
// number of coefficients for a dof.
//
// Simple interface assumes that there are four types of fields: 1) field
// defined on the body domain, 2) field defined on the body boundary, 3)
// skeleton field defined on the finite element skeleton, and 4) data
// field defined on the body domain. The data field is not solved for but
// is used for post-processing or to keep material parameters, etc. Here
// we use it to calculate the approximation error on elements.
// Add domain field "U" in space H1 with Legendre base; the Ainsworth
// recipe is used to construct base functions.
CHKERR simple_interface->addDomainField("U", H1, AINSWORTH_LEGENDRE_BASE,
1);
// Add Lagrange multiplier field on body boundary
CHKERR simple_interface->addBoundaryField("L", H1, AINSWORTH_LEGENDRE_BASE,
1);
// Add error (data) field, we need only L2 norm. Later order is set to 0,
// so this is piecewise discontinuous constant approx., i.e. 1 DOF for
// element. You can use more DOFs and collate moments of error to drive
// anisotropic h/p-adaptivity, however this is beyond this example.
CHKERR simple_interface->addDataField("ERROR", L2, AINSWORTH_LEGENDRE_BASE,
1);
// Set fields order domain and boundary fields.
CHKERR simple_interface->setFieldOrder("U",
order); // to approximate function
CHKERR simple_interface->setFieldOrder("L",
order); // to Lagrange multipliers
CHKERR simple_interface->setFieldOrder(
"ERROR", 0); // approximation order for error
// Setup problem. At that point database is constructed, i.e. fields,
// finite elements and problem data structures with local and global
// indexing.
CHKERR simple_interface->setUp();
}
// Get access to the PETSc-MoFEM DM manager.
// For more details see
// <http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DM/index.html>
// From that point internal MoFEM data structures are linked with the PETSc
// interface. If a DM function name contains the string "MoFEM" it is a
// MoFEM-specific DM interface function; otherwise the DM function is part
// of the standard PETSc interface.
DM dm;
// Get dm
CHKERR simple_interface->getDM(&dm);
// Solve problem, using only the PETSc interface here.
{
// Create the right hand side vector and vector of unknowns
Vec F, D;
CHKERR DMCreateGlobalVector(dm, &F);
// Create the unknown vector by duplicating the F vector; only the
// structure is duplicated, not the values.
CHKERR VecDuplicate(F, &D);
DM dm_sub_KK, dm_sub_LU;
ublas::matrix<Mat> nested_matrices(2, 2);
ublas::vector<IS> nested_is(2);
CHKERR DMCreate(PETSC_COMM_WORLD, &dm_sub_KK);
CHKERR DMSetType(dm_sub_KK, "DMMOFEM");
CHKERR DMMoFEMCreateSubDM(dm_sub_KK, dm, "SUB_KK");
CHKERR DMSetFromOptions(dm_sub_KK);
CHKERR DMMoFEMSetSquareProblem(dm_sub_KK, PETSC_TRUE);
CHKERR DMMoFEMAddSubFieldRow(dm_sub_KK, "U");
CHKERR DMMoFEMAddSubFieldCol(dm_sub_KK, "U");
CHKERR DMMoFEMAddElement(dm_sub_KK,
simple_interface->getDomainFEName().c_str());
CHKERR DMMoFEMAddElement(dm_sub_KK,
simple_interface->getBoundaryFEName().c_str());
CHKERR DMSetUp(dm_sub_KK);
CHKERR DMMoFEMGetSubRowIS(dm_sub_KK, &nested_is[0]);
CHKERR DMCreateMatrix(dm_sub_KK, &nested_matrices(0, 0));
domain_lhs_fe->ksp_B = nested_matrices(0, 0);
CHKERR DMoFEMLoopFiniteElements(
dm_sub_KK, simple_interface->getDomainFEName(), domain_lhs_fe);
boundary_penalty_lhs_fe->ksp_B = nested_matrices(0, 0);
CHKERR DMoFEMLoopFiniteElements(dm_sub_KK,
simple_interface->getBoundaryFEName(),
boundary_penalty_lhs_fe);
CHKERR MatAssemblyBegin(nested_matrices(0, 0), MAT_FINAL_ASSEMBLY);
CHKERR MatAssemblyEnd(nested_matrices(0, 0), MAT_FINAL_ASSEMBLY);
CHKERR DMDestroy(&dm_sub_KK);
//
CHKERR DMCreate(PETSC_COMM_WORLD, &dm_sub_LU);
CHKERR DMSetType(dm_sub_LU, "DMMOFEM");
CHKERR DMMoFEMCreateSubDM(dm_sub_LU, dm, "SUB_LU");
CHKERR DMSetFromOptions(dm_sub_LU);
CHKERR DMMoFEMSetSquareProblem(dm_sub_LU, PETSC_FALSE);
CHKERR DMMoFEMAddSubFieldRow(dm_sub_LU, "L");
CHKERR DMMoFEMAddSubFieldCol(dm_sub_LU, "U");
CHKERR DMMoFEMAddElement(dm_sub_LU,
simple_interface->getBoundaryFEName().c_str());
CHKERR DMSetUp(dm_sub_LU);
CHKERR DMMoFEMGetSubRowIS(dm_sub_LU, &nested_is[1]);
CHKERR DMCreateMatrix(dm_sub_LU, &nested_matrices(1, 0));
boundary_lhs_fe->ksp_B = nested_matrices(1, 0);
CHKERR DMoFEMLoopFiniteElements(
dm_sub_LU, simple_interface->getBoundaryFEName(), boundary_lhs_fe);
CHKERR MatAssemblyBegin(nested_matrices(1, 0), MAT_FINAL_ASSEMBLY);
CHKERR MatAssemblyEnd(nested_matrices(1, 0), MAT_FINAL_ASSEMBLY);
// CHKERR MatCreateTranspose(nested_matrices(1,0),&nested_matrices(0,1));
CHKERR MatTranspose(nested_matrices(1, 0), MAT_INITIAL_MATRIX,
&nested_matrices(0, 1));
CHKERR DMDestroy(&dm_sub_LU);
domain_rhs_fe->ksp_f = F;
CHKERR DMoFEMLoopFiniteElements(dm, simple_interface->getDomainFEName(),
domain_rhs_fe);
boundary_rhs_fe->ksp_f = F;
CHKERR DMoFEMLoopFiniteElements(dm, simple_interface->getBoundaryFEName(),
boundary_rhs_fe);
CHKERR VecAssemblyBegin(F);
CHKERR VecAssemblyEnd(F);
Mat A;
nested_matrices(1, 1) = PETSC_NULL;
if (debug) {
MatType type;
MatGetType(nested_matrices(0, 0), &type);
cerr << "K " << type << endl;
MatGetType(nested_matrices(1, 0), &type);
cerr << "C " << type << endl;
MatGetType(nested_matrices(0, 1), &type);
cerr << "CT " << type << endl;
std::string wait;
cerr << "UU" << endl;
MatView(nested_matrices(0, 0), PETSC_VIEWER_DRAW_WORLD);
std::cin >> wait;
cerr << "LU" << endl;
MatView(nested_matrices(1, 0), PETSC_VIEWER_DRAW_WORLD);
std::cin >> wait;
cerr << "UL" << endl;
MatView(nested_matrices(0, 1), PETSC_VIEWER_DRAW_WORLD);
std::cin >> wait;
}
CHKERR MatCreateNest(PETSC_COMM_WORLD, 2, &nested_is[0], 2, &nested_is[0],
&nested_matrices(0, 0), &A);
// Create solver and link it to DM
KSP solver;
CHKERR KSPCreate(PETSC_COMM_WORLD, &solver);
CHKERR KSPSetFromOptions(solver);
// Set operators
CHKERR KSPSetOperators(solver, A, A);
PC pc;
CHKERR KSPGetPC(solver, &pc);
PetscBool is_pcfs = PETSC_FALSE;
PetscObjectTypeCompare((PetscObject)pc, PCFIELDSPLIT, &is_pcfs);
if (is_pcfs) {
CHKERR PCFieldSplitSetIS(pc, NULL, nested_is[0]);
CHKERR PCFieldSplitSetIS(pc, NULL, nested_is[1]);
} else {
SETERRQ(PETSC_COMM_WORLD, MOFEM_DATA_INCONSISTENCY,
"Only works with pre-conditioner PCFIELDSPLIT");
}
// Set-up solver, is type of solver and pre-conditioners
CHKERR KSPSetUp(solver);
// During the solution process the KSP solver, using the DM, creates
// matrices, calculates values of the left hand side and the right hand
// side vector, then solves the system of equations. Results are stored
// in vector D.
CHKERR KSPSolve(solver, F, D);
// Scatter solution on the mesh. Stores values of the unknown vector in
// the field on the mesh.
CHKERR DMoFEMMeshToGlobalVector(dm, D, INSERT_VALUES, SCATTER_REVERSE);
// Clean data. Solver and vector are not needed any more.
CHKERR KSPDestroy(&solver);
for (int i = 0; i != 2; i++) {
CHKERR ISDestroy(&nested_is[i]);
for (int j = 0; j != 2; j++) {
CHKERR MatDestroy(&nested_matrices(i, j));
}
}
CHKERR MatDestroy(&A);
CHKERR VecDestroy(&D);
CHKERR VecDestroy(&F);
}
// Calculate error
{
// Loop over all elements in mesh, and run users operators on each
// element.
CHKERR DMoFEMLoopFiniteElements(dm, simple_interface->getDomainFEName(),
domain_error);
global_error);
if (flg_test == PETSC_TRUE) {
}
}
{
// Loop over all elements in the mesh and for each execute
// post_proc_volume element and operators on it.
CHKERR DMoFEMLoopFiniteElements(dm, simple_interface->getDomainFEName(),
post_proc_volume);
// Write results
CHKERR boost::static_pointer_cast<PostProcVolumeOnRefinedMesh>(
post_proc_volume)
->writeFile("out_vol.h5m");
}
// Destroy DM, no longer needed.
CHKERR DMDestroy(&dm);
// Destroy ghost vector
CHKERR VecDestroy(&global_error);
}
CATCH_ERRORS;
// finish work cleaning memory, getting statistics, etc.
MoFEM::Core::Finalize();
return 0;
}

Creating penalty finite element instance

This code is largely the same as in analytical_poisson.cpp, with only two places where we introduce changes. We will focus only on those parts. First we make a pointer to a new penalty finite element instance,

boost::shared_ptr<ForcesAndSourcesCore> boundary_penalty_lhs_fe;

and create the finite element class instance itself

boundary_penalty_lhs_fe = boost::shared_ptr<ForcesAndSourcesCore>(new FaceElementForcesAndSourcesCore(m_field));

With that at hand we can add the appropriate user data operator

boundary_penalty_lhs_fe->getOpPtrVector().push_back(new PoissonExample::OpS(beta));

The implementation of the penalty operator OpS::iNtegrate is in the file analytical_poisson_field_split.cpp and does not differ from what was shown in Implementing operators for the Poisson equation; its core is reproduced below.
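
For reference, the integration kernel of OpS (repeated from the listing above) loops over integration points and accumulates the product of row and column base functions, scaled by the integration weight, element area and penalty parameter, into the local matrix

// get element area scaled by penalty parameter
double area = getArea() * bEta;
// get integration weights
auto t_w = getFTensor0IntegrationWeight();
// get base functions on rows
auto t_row_base = row_data.getFTensor0N();
// loop over integration points
for (int gg = 0; gg != nbIntegrationPts; gg++) {
  // take into account the Jacobian
  const double alpha = t_w * area;
  // take the first element of the local matrix
  FTensor::Tensor0<double *> a(&*locMat.data().begin());
  for (int rr = 0; rr != nbRows; rr++) {
    auto t_col_base = col_data.getFTensor0N(gg, 0);
    for (int cc = 0; cc != nbCols; cc++) {
      a += alpha * t_row_base * t_col_base;
      ++t_col_base;
      ++a;
    }
    ++t_row_base;
  }
  ++t_w;
}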

Creating sub DMs and the nested matrix

We start with the declaration of the data structures necessary for creation of the nested matrix. A nested matrix is a PETSc structure used to store block matrices; one can perform operations on it as on other matrix types.

DM dm_sub_KK,dm_sub_LU;
ublas::matrix<Mat> nested_matrices(2,2);
ublas::vector<IS> nested_is(2);

A vector of IS (index sets) is used to store the global indices of each block matrix.

Assembly of K and S matrices
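
The upper-left block, K + S, is assembled using its own sub DM, SUB_KK, which has the field "U" on both rows and columns. The volume element (assembling K) and the boundary penalty element (assembling S) write into the same block of the nested matrix (repeated from the listing above)

CHKERR DMCreate(PETSC_COMM_WORLD,&dm_sub_KK);
CHKERR DMSetType(dm_sub_KK,"DMMOFEM");
CHKERR DMMoFEMCreateSubDM(dm_sub_KK,dm,"SUB_KK");
CHKERR DMMoFEMSetSquareProblem(dm_sub_KK,PETSC_TRUE);
CHKERR DMMoFEMAddSubFieldRow(dm_sub_KK,"U");
CHKERR DMMoFEMAddSubFieldCol(dm_sub_KK,"U");
CHKERR DMMoFEMAddElement(dm_sub_KK,simple_interface->getDomainFEName().c_str());
CHKERR DMMoFEMAddElement(dm_sub_KK,simple_interface->getBoundaryFEName().c_str());
CHKERR DMSetUp(dm_sub_KK);
CHKERR DMMoFEMGetSubRowIS(dm_sub_KK,&nested_is[0]);
CHKERR DMCreateMatrix(dm_sub_KK,&nested_matrices(0,0));
domain_lhs_fe->ksp_B = nested_matrices(0,0);
CHKERR DMoFEMLoopFiniteElements(dm_sub_KK,simple_interface->getDomainFEName(),domain_lhs_fe);
boundary_penalty_lhs_fe->ksp_B = nested_matrices(0,0);
CHKERR DMoFEMLoopFiniteElements(dm_sub_KK,simple_interface->getBoundaryFEName(),boundary_penalty_lhs_fe);
CHKERR MatAssemblyBegin(nested_matrices(0,0),MAT_FINAL_ASSEMBLY);
CHKERR MatAssemblyEnd(nested_matrices(0,0),MAT_FINAL_ASSEMBLY);
CHKERR DMDestroy(&dm_sub_KK);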

Assembly of C matrix

Assembly of the off-diagonal block is similar to the diagonal term. Note that the off-diagonal block is not a square matrix and that integration is only over finite element entities on the boundary

CHKERR DMCreate(PETSC_COMM_WORLD,&dm_sub_LU);
CHKERR DMSetType(dm_sub_LU,"DMMOFEM");
CHKERR DMMoFEMCreateSubDM(dm_sub_LU,dm,"SUB_LU");
CHKERR DMMoFEMSetSquareProblem(dm_sub_LU,PETSC_FALSE);
CHKERR DMMoFEMAddSubFieldRow(dm_sub_LU,"L");
CHKERR DMMoFEMAddSubFieldCol(dm_sub_LU,"U");
CHKERR DMMoFEMAddElement(dm_sub_LU,simple_interface->getBoundaryFEName().c_str());
CHKERR DMSetUp(dm_sub_LU);
CHKERR DMMoFEMGetSubRowIS(dm_sub_LU,&nested_is[1]);
CHKERR DMCreateMatrix(dm_sub_LU,&nested_matrices(1,0));
boundary_lhs_fe->ksp_B = nested_matrices(1,0);
CHKERR DMoFEMLoopFiniteElements(dm_sub_LU,simple_interface->getBoundaryFEName(),boundary_lhs_fe);
CHKERR MatAssemblyBegin(nested_matrices(1,0),MAT_FINAL_ASSEMBLY);
CHKERR MatAssemblyEnd(nested_matrices(1,0),MAT_FINAL_ASSEMBLY);
CHKERR MatTranspose(nested_matrices(1,0),MAT_INITIAL_MATRIX,&nested_matrices(0,1));
CHKERR DMDestroy(&dm_sub_LU);

Assembly of the right hand side vector

Now we assemble the global right-hand-side vector in the usual way, using the global DM

domain_rhs_fe->ksp_f = F;
CHKERR DMoFEMLoopFiniteElements(dm,simple_interface->getDomainFEName(),domain_rhs_fe);
boundary_rhs_fe->ksp_f = F;
CHKERR DMoFEMLoopFiniteElements(dm,simple_interface->getBoundaryFEName(),boundary_rhs_fe);
CHKERR VecAssemblyBegin(F);
CHKERR VecAssemblyEnd(F);

Creation of global nested matrix

For details on how to create a nested matrix see the PETSc manual (http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatCreateNest.html)

CHKERR MatCreateNest(
PETSC_COMM_WORLD,
2,&nested_is[0],2,&nested_is[0],&nested_matrices(0,0),&A
);

Solving problem

  • Create solver instance, and set-up from command line
    // Create solver and link it to DM
    KSP solver;
    CHKERR KSPCreate(PETSC_COMM_WORLD,&solver);
    CHKERR KSPSetFromOptions(solver);
    // Set operators
    CHKERR KSPSetOperators(solver,A,A);
  • Get access to solver pre-conditioner and set-up PCFIELDSPLIT (see http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCFIELDSPLIT.html)
    PC pc;
    CHKERR KSPGetPC(solver,&pc);
    PetscBool is_pcfs = PETSC_FALSE;
    PetscObjectTypeCompare((PetscObject)pc,PCFIELDSPLIT,&is_pcfs);
    if(is_pcfs) {
    CHKERR PCFieldSplitSetIS(pc,NULL,nested_is[0]);
    CHKERR PCFieldSplitSetIS(pc,NULL,nested_is[1]);
    } else {
    SETERRQ(
    PETSC_COMM_WORLD,
    MOFEM_DATA_INCONSISTENCY,
    "Only works with pre-conditioner PCFIELDSPLIT"
    );
    }
    Note that we set the nested matrix as the operator for the pre-conditioner, and using PCFieldSplitSetIS we assign the index sets of the fields to the PCFIELDSPLIT pre-conditioner.
  • Finally set-up solver and solve system of equations
    // Set-up solver, is type of solver and pre-conditioners
    CHKERR KSPSetUp(solver);
    // During the solution process the KSP solver, using the DM, creates
    // matrices, calculates values of the left hand side and the right hand
    // side vector, then solves the system of equations. Results are stored
    // in the vector D.
    CHKERR KSPSolve(solver,F,D);

Running the program

In order to run the program, one should first go to the directory where the problem is located, compile the code and then run the executable file. Typically, this can be done as follows

cd mofem_install/um/build/basic_finite_elements/poisson
make -j2
mpirun -np 2 ./analytical_poisson_field_split -file_name cube_2part.h5m -order 3 -pc_type fieldsplit

where the options in the .petscrc file are

-ksp_type fgmres
-ksp_atol 1e-12
-ksp_rtol 0
-ksp_monitor ascii
-pc_fieldsplit_type schur
-pc_fieldsplit_schur_precondition self
-fieldsplit_0_ksp_type gmres
#-fieldsplit_0_ksp_monitor
-fieldsplit_0_ksp_max_it 100
-fieldsplit_0_ksp_atol 1e-6
-fieldsplit_0_ksp_rtol 1e-6
-fieldsplit_0_pc_type lu
-fieldsplit_0_pc_factor_mat_solver_package mumps
-fieldsplit_0_pc_type gamg
-fieldsplit_0_pc_mg_smoothup 5
-fieldsplit_0_pc_mg_smoothdown 5
-fieldsplit_0_mg_coarse_ksp_type preonly
-fieldsplit_0_mg_coarse_pc_type hypre
-fieldsplit_0_mg_coarse_pc_hypre_type parasails
-fieldsplit_0_mg_coarse_pc_hypre_parasails_sym SPD
-fieldsplit_0_mg_coarse_pc_hypre_parasails_thresh 0.2
-fieldsplit_0_mg_coarse_pc_hypre_parasails_logging 1
-fieldsplit_1_ksp_type minres
-fieldsplit_1_ksp_monitor
-fieldsplit_1_ksp_max_it 100
-fieldsplit_1_ksp_atol 1e-2
-fieldsplit_1_ksp_rtol 1e-2
-fieldsplit_1_pc_type lsc
-fieldsplit_1_lsc_ksp_type cg
-fieldsplit_1_lsc_ksp_atol 1e-5
-fieldsplit_1_lsc_ksp_rtol 1e-5
-fieldsplit_1_lsc_ksp_max_it 100
-fieldsplit_1_lsc_pc_type hypre
-fieldsplit_1_lsc_pc_hypre_type parasails
-fieldsplit_1_lsc_pc_hypre_parasails_sym SPD
-fieldsplit_1_lsc_pc_hypre_parasails_nlevels 1
-fieldsplit_1_lsc_pc_hypre_parasails_thresh 0.2
-fieldsplit_1_lsc_pc_hypre_parasails_logging 1

As a result of this we get

 0 KSP Residual norm 8.033746248668e-01
** ParaSails Setup Pattern Statistics ***********
symmetric             : 1
thresh                : 0.200000
num_levels            : 1
Max cost (average)    : 3.9e+01 (2.0e+01)
Nnz (ratio)           : 39 ( 0.07)
Max setup pattern time:      0.0
*************************************************
** ParaSails Setup Values Statistics ************
filter                : 0.100000
loadbal               : 0.000000
Final Nnz (ratio)     : 39 ( 0.07)
Max setup values time :      0.0
*************************************************
Setup (pattern and values) times:
 0:      0.0
 1:      0.0
ave:      0.0
*************************************************
** ParaSails Setup Pattern Statistics ***********
symmetric             : 1
thresh                : 0.200000
num_levels            : 1
Max cost (average)    : 6.0e+08 (4.9e+08)
Nnz (ratio)           : 103886 ( 1.48)
Max setup pattern time:      0.0
*************************************************
** ParaSails Setup Values Statistics ************
filter                : 0.100000
loadbal               : 0.000000
Final Nnz (ratio)     : 91505 ( 1.31)
Max setup values time :      0.4
*************************************************
Setup (pattern and values) times:
 0:      0.3
 1:      0.4
ave:      0.3
*************************************************
   Residual norms for fieldsplit_1_ solve.
   0 KSP Residual norm 8.569397744104e+03
   1 KSP Residual norm 2.381604391507e+03
   2 KSP Residual norm 1.122966950794e+03
   3 KSP Residual norm 6.059895318115e+02
   4 KSP Residual norm 2.785669509756e+02
   5 KSP Residual norm 1.439683032222e+02
   6 KSP Residual norm 6.002357292386e+01
 1 KSP Residual norm 7.690377439031e-04
   Residual norms for fieldsplit_1_ solve.
   0 KSP Residual norm 2.956833059930e+05
   1 KSP Residual norm 1.787320287092e+05
   2 KSP Residual norm 9.550287792776e+04
   3 KSP Residual norm 4.057604924174e+04
   4 KSP Residual norm 2.237002256464e+04
   5 KSP Residual norm 1.082742229824e+04
   6 KSP Residual norm 3.839937493403e+03
   7 KSP Residual norm 1.837737978611e+03
 2 KSP Residual norm 4.494035527480e-06
   Residual norms for fieldsplit_1_ solve.
   0 KSP Residual norm 2.705597685965e+05
   1 KSP Residual norm 1.587873121132e+05
   2 KSP Residual norm 7.567285121798e+04
   3 KSP Residual norm 4.027211131979e+04
   4 KSP Residual norm 2.030643277176e+04
   5 KSP Residual norm 9.996336070718e+03
   6 KSP Residual norm 4.499209738782e+03
   7 KSP Residual norm 2.298423470140e+03
 3 KSP Residual norm 2.350518497486e-08
   Residual norms for fieldsplit_1_ solve.
   0 KSP Residual norm 3.304403201367e+05
   1 KSP Residual norm 1.993083865911e+05
   2 KSP Residual norm 9.397960969595e+04
   3 KSP Residual norm 4.928650742565e+04
   4 KSP Residual norm 2.451251367195e+04
   5 KSP Residual norm 1.237756537851e+04
   6 KSP Residual norm 5.480296653554e+03
   7 KSP Residual norm 3.101516648503e+03
 4 KSP Residual norm 9.568748302483e-11
   Residual norms for fieldsplit_1_ solve.
   0 KSP Residual norm 4.448880690454e+05
   1 KSP Residual norm 2.628153049738e+05
   2 KSP Residual norm 1.281287424988e+05
   3 KSP Residual norm 7.292699051221e+04
   4 KSP Residual norm 3.180368227288e+04
   5 KSP Residual norm 1.546252803266e+04
   6 KSP Residual norm 6.807281733103e+03
   7 KSP Residual norm 3.144299396548e+03
 5 KSP Residual norm 5.494536394249e-13
Approximation error 3.391e-11

Note that the KSP solver takes 5 iterations to converge since, in this case, the Schur complement is only approximated by the pre-conditioner. If, on the other hand, we use the full Schur complement and a direct solver

mpirun -np 1 ./analytical_poisson_field_split \
-file_name cube_1part.h5m -order 3 \
-pc_type fieldsplit \
-pc_fieldsplit_schur_precondition full \
-fieldsplit_1_pc_type lu -fieldsplit_1_pc_factor_mat_solver_package mumps \
-dm_mat_type aij

we will get

0 KSP Residual norm 8.033746248668e-01
1 KSP Residual norm 6.498236020454e-01
2 KSP Residual norm 9.691076621610e-12

Note that in that case we converge in only two iterations.

Exercises

  • Use a direct solver as the pre-conditioner for the two blocks and check how it changes the convergence (a possible set of options is sketched below)
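
A possible starting point is sketched below. This is an untested set of options, assuming MUMPS is available; it combines a direct solve on the first split with the full Schur complement and a direct solve on the second split, as in the command shown above

-fieldsplit_0_ksp_type preonly
-fieldsplit_0_pc_type lu
-fieldsplit_0_pc_factor_mat_solver_package mumps
-pc_fieldsplit_schur_precondition full
-fieldsplit_1_ksp_type preonly
-fieldsplit_1_pc_type lu
-fieldsplit_1_pc_factor_mat_solver_package mumps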