PISM, A Parallel Ice Sheet Model
pism::fem::ElementIterator Class Reference

Manages iterating over element indices.

#include <ElementIterator.hh>

Public Member Functions

 ElementIterator (const Grid &g)
 
int element_count ()
 The total number of elements to be iterated over. Useful for creating per-element storage.
 
int flatten (int i, int j)
 Convert an element index (i,j) into a flattened (1-d) array index, with the first element iterated over mapping to flattened index 0.
 

Public Attributes

int xs
 x-coordinate of the first element to loop over.
 
int xm
 total number of elements to loop over in the x-direction.
 
int ys
 y-coordinate of the first element to loop over.
 
int ym
 total number of elements to loop over in the y-direction.
 
int lxs
 x-index of the first local element.
 
int lxm
 total number of local elements in the x-direction.
 
int lys
 y-index of the first local element.
 
int lym
 total number of local elements in the y-direction.
 

Detailed Description

Manages iterating over element indices.

When computing residuals and Jacobians, there is a loop over all the elements in the Grid, and computations are done on each element. The Grid has an underlying PETSc DM, and a process typically does not own all of the nodes in the grid, so each process performs computations on only a subset of the elements. In general, an element will have both ghost (o) and real (*) vertices:

o---*---*---*---o
|   |   |   |   |
o---*---*---*---o
|   |   |   |   |
o---o---o---o---o

The strategy is for each process to perform computations on every element that has at least one vertex owned by that process, but to update entries in the global residual and Jacobian matrices only for rows whose vertex it owns. In the worst case, where each vertex of an element is owned by a different process, the computations for that element are repeated four times, once on each process.
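
Below is a minimal sketch of the resulting loop structure, assuming a pism::Grid named "grid" is already available; the element computation itself is only indicated by comments:

    #include "ElementIterator.hh"

    pism::fem::ElementIterator E(grid);

    // Visit every element this process is responsible for.
    for (int j = E.ys; j < E.ys + E.ym; j++) {
      for (int i = E.xs; i < E.xs + E.xm; i++) {
        // Assemble the contribution of element (i,j) here, then add
        // entries to the global residual/Jacobian only for rows whose
        // vertex is owned by this process.
      }
    }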

This same strategy also correctly deals with periodic boundary conditions. The way PETSc handles periodic boundaries can be thought of as using a kind of ghost, so the rule still works: compute on all elements containing a real vertex, but only update rows corresponding to real vertices.

The calculation of which elements to iterate over needs to account for ghosts and the presence or absence of periodic boundary conditions in the Grid. The ElementIterator performs that computation for you (see ElementIterator::xs and friends).
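
As an illustration of element_count() and flatten(), here is a sketch of allocating and filling per-element storage; the stored value is a placeholder, and the only ordering assumed is the documented one (the first element iterated over maps to flattened index 0):

    #include <vector>
    #include "ElementIterator.hh"

    pism::fem::ElementIterator E(grid);

    // One entry per element, addressed by the flattened index.
    std::vector<double> per_element(E.element_count());

    for (int j = E.ys; j < E.ys + E.ym; j++) {
      for (int i = E.xs; i < E.xs + E.xm; i++) {
        per_element[E.flatten(i, j)] = 0.0; // placeholder per-element value
      }
    }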

Definition at line 59 of file ElementIterator.hh.

