
Commit 21b8f9a

added anatomy
1 parent 81543c2 commit 21b8f9a

File tree

4 files changed, +133 -0 lines changed


pyclaw-anatomy/build.sh

Lines changed: 1 addition & 0 deletions
@@ -0,0 +1 @@
pandoc --slide-level 2 -t slidy --standalone pyclaw-anatomy.md -o pyclaw-anatomy.html

pyclaw-anatomy/pyclaw-anatomy.md

Lines changed: 132 additions & 0 deletions
@@ -0,0 +1,132 @@
# Anatomy of PyClaw

## Motivation

* Complete example of modernizing a serial Fortran code for large-scale computing
* Start with Clawpack, a serial grid-based Fortran code
* Wrap legacy Fortran code using f2py
* Prototype, visualize, and test from Python
* Scale serial grid code using petsc4py

## Prototyping Example

(Live demo of PyClaw notebook)

## Clawpack Overview

* **C**onservation **L**aws **Pack**age:
    + General hyperbolic PDEs in 1/2/3 dimensions
* Developed by Randy LeVeque, Marsha Berger, and various others over the past 15 years in Fortran 77
* Dozens of “Riemann solvers” by numerous additional contributors
* Includes adaptive mesh refinement (AMRClaw)
* Textbook and many examples available
* Models tidal waves, storm surges, acoustic waves, and pyroclastic flows/surges
* Available at: www.clawpack.org

## Wrapping Clawpack with f2py

* We wrap at the hyperbolic solver level: the Fortran code that advances the solution on a grid by one time step
* The solver is generic over different physics: it accepts a pointer to a Fortran subroutine that computes the Riemann kernel at each interface
* We use f2py to wrap both the `step` subroutine and the `rp` Riemann kernel. We don't call the Riemann kernel from Python; it is simply passed as an argument to the f2py-wrapped `step` function (one way to build the wrapped modules is sketched below)
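
One way to produce these wrapped extension modules is to run f2py over the time-stepping routine and over a Riemann kernel. The file and module names below are illustrative, and this is not necessarily how PyClaw's own build is organized.

```
import subprocess

# Build one extension module for the stepping routine and one for a Riemann
# kernel; "-c" compiles the Fortran source, "-m" names the Python module.
subprocess.check_call(["f2py", "-c", "step1.f90", "-m", "classic1"])
subprocess.check_call(["f2py", "-c", "rp1_acoustics.f90", "-m", "riemann_acoustics"])
```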

## Wrapping 1-D Wave Propagation Kernels with f2py

```
subroutine step1(num_eqn,num_waves,num_ghost,num_aux,mx,q,aux,dx, &
                 dt,method,mthlim,cfl,f,wave,s,amdq,apdq,dtdx,use_fwave,rp1)
```

We need to give f2py a little information about how we intend to use the data to avoid making unnecessary copies. We do this by adding f2py directives after the function declaration.

```
!f2py intent(in,out) q
!f2py intent(out) cfl
!f2py intent(in) num_eqn
!f2py intent(in) num_ghost
!f2py intent(in) mx
!f2py optional f, amdq, apdq, dtdx, s, wave
```

The variables `num_eqn`, `num_waves`, and `num_aux` are automatically inferred from the dimensions of the input arrays.
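
The effect of these directives is easy to see from the Python side. Assuming the wrapped module is named `classic1` (an illustrative name), f2py generates a docstring for each routine showing which arguments remain required, which are optional, and which are returned.

```
import classic1

# f2py-built docstring: q and cfl appear as return values because of the
# intent(in,out)/intent(out) directives, and the arrays marked "optional"
# can be omitted from the call entirely.
print(classic1.step1.__doc__)
```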

## Wrapping 2-D Wave Propagation Kernels with f2py

The 2-D picture is only slightly more complicated:

```
subroutine step2(maxm,num_eqn,num_waves,num_aux,num_ghost,mx,my, &
                 qold,qnew,aux,dx,dy,dt,method,mthlim,cfl, &
                 qadd,fadd,gadd,q1d,dtdx1d,dtdy1d, &
                 aux1,aux2,aux3,work,mwork,use_fwave,rpn2,rpt2)
```

Note that we're being slightly less verbose here, explicitly specifying only the output variable `cfl` and the modified array `qnew`.

```
!f2py intent(out) cfl
!f2py intent(in,out) qnew
!f2py optional q1d, qadd, fadd, gadd, dtdx1d, dtdy1d
```

## Wrapping Riemann Fortran Kernels as Function Pointers with f2py

```
subroutine rp1(maxm,meqn,mwaves,mbc,mx,ql,qr,auxl,auxr, &
               wave,s,amdq,apdq,num_aux)
```

The function pointer is wrapped as-is!
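
After wrapping, the kernel behaves like any other f2py routine, and f2py also exposes the address of the underlying Fortran routine as `_cpointer`; that attribute is what gets handed to the wrapped `step` routines. The module name below is illustrative.

```
import riemann_acoustics

rp1 = riemann_acoustics.rp1                 # callable from Python like any f2py routine
rp1_ptr = riemann_acoustics.rp1._cpointer   # Fortran address passed on to step1/step2
```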

## Calling f2py-Wrapped Wave Propagation Kernels from Python

Here's the original Fortran interface:

```
subroutine step1(num_eqn,num_waves,num_ghost,num_aux,mx,q,aux,dx, &
                 dt,method,mthlim,cfl,f,wave,s,amdq,apdq,dtdx,use_fwave,rp1)
```

Here's the corresponding call from Python:

```
rp1 = self.rp.rp1._cpointer
self.qbc, cfl = self.fmod.step1(num_ghost, mx, self.qbc, self.auxbc, dx,
                                dt, self._method, self._mthlim, self.fwave, rp1)
```

Because of the directives, `qbc` comes back as a return value as well as an argument (`intent(in,out)`), `cfl` is returned (`intent(out)`), and the optional work arrays are not passed at all.

## Enabling Grid-Based Parallelism with PETSc DMDA

* Grid-based serial solver operates on a grid augmented by "ghost cells"
* Exact same concept used by PETSc DMDA
    + each process is responsible for one grid and exchanges boundary information with neighbors
* Changes to PyClaw (less than 100 LOC), sketched below:
    + Store grid data in DMDA instead of NumPy array
    + Calculate global CFL condition by reduction
    + Update neighbor information after successful time steps
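
A minimal petsc4py sketch of those three changes follows; the grid sizes, `dof`, and CFL value are illustrative stand-ins, not PyClaw's actual code.

```
from petsc4py import PETSc
from mpi4py import MPI

num_eqn, num_ghost, mx, my = 3, 2, 100, 100

# Distributed grid: each rank owns one patch plus a ghost-cell halo.
da = PETSc.DMDA().create(dim=2, sizes=(mx, my), dof=num_eqn,
                         stencil_width=num_ghost,
                         stencil_type=PETSc.DMDA.StencilType.BOX)

q_global = da.createGlobalVec()   # owned cells only (replaces the plain NumPy array)
q_local = da.createLocalVec()     # owned cells plus ghost cells, fed to the Fortran kernels

# After a successful time step: refresh ghost cells from neighboring ranks.
da.globalToLocal(q_global, q_local)

# The global CFL number is the maximum of the per-rank values.
cfl_local = 0.9   # whatever the local wave-propagation step reported
cfl_global = MPI.COMM_WORLD.allreduce(cfl_local, op=MPI.MAX)
```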

## PyClaw Architecture

<img src="./pyclaw_architecture.png" style="width: 90%"/>

## Scaling

<img src="./euler_weak_scaling.png" style="height: 90%"/>

## Verification and Validation

* Code is prototyped and verified from Python scripts
* Validated against Clawpack
    + which in turn has been validated against other codes and models
* Verified by developers before commits
* Also verified continuously by Travis CI on GitHub