class: center, middle # Julia Tutorial @ Home Office ## Assignments Michael Kraus NMPP Seminar 04.06.2020, 25.06.2020, 02.07.2020 --- .left-column[ ## Project ] .right-column[ ## Create a new Project In order to organise your code, create a new Julia project. Open the Julia interpreter ``` $ julia ``` change to the `Pkg` REPL so that you see a prompt like the following: ``` (@v1.4) pkg> ``` Create the project using the `generate` command: ``` (@v1.4) pkg> generate Particles Generating project Particles: Particles/Project.toml Particles/src/Particles.jl ``` This creates two files: `Project.toml` stores meta-data and dependencies of the package, and `src/Particles.jl` contains the main module. The newly generated environment is activated by the `activate` command: ``` (@v1.4) pkg> activate Particles Activating new environment at `~/Particles/Project.toml` ``` Now you can add dependencies. You will probably want to plot simulation results, which can be done with either Plots.jl or Makie.jl. ] --- .left-column[ ## Project ### Code Organisation ] .right-column[ * I typically use the following directory hierarchy: * `./src` : module source code * `./test` : unit tests * `./docs` : documentation * `./prototyping` : prototyping scripts * `./scripts` : run scripts * `./examples` : example scripts * `./notebooks` : accompanying notebooks ] --- .left-column[ ## Pendulum ] .right-column[ ## Assignment 2: Pendulum Implement a simulation for `n` independent pendula: $$ \dot{x} = v , \quad \dot{v} = - \sin(x) $$ ] --- .left-column[ ## Pendulum ### Script ] .right-column[ ### Quick and dirty script The quickest way of solving this problem is to write a short script: ```julia # parameters Δt = 0.1 # time step size nt = 100 # number of time steps ns = 5 # number of samples # solution array x = zeros(2,ns,nt+1) # random initial conditions x[1,:,1] .= rand(ns) .* 2π x[2,:,1] .= rand(ns) .* 2 .- 1 # run simulation with explicit Euler for n in 1:nt for i in 1:ns x[1,i,n+1] = x[1,i,n] + Δt * x[2,i,n] x[2,i,n+1] = x[2,i,n] - Δt * sin(x[1,i,n]) end end ``` The script is stored in `prototyping/pendulum.jl`. ] --- .left-column[ ## Pendulum ### Script ] .right-column[ The results can be visualised with *Plots.jl* ```julia # load Plots package using Plots # set plot ranges xlim = (-π, +π) ylim = (-3, +3) # plot energy contour lines plt = contour( LinRange(xlim..., 100), LinRange(ylim..., 100), (x,v) -> v^2 / 2 + 1 - cos(x), xlim = xlim, ylim = ylim, title = "Pendulum", legend = false, size = (800, 600) ) # plot solutions for i in 1:ns scatter!(plt, (x[1,i,:] .+ π) .% 2π .- π, x[2,i,:], marker=5) end # save figure to file savefig("pendulum.png") ``` ] --- .left-column[ ## Pendulum ### Script ] .right-column[  ] --- .left-column[ ## Pendulum ### Script ] .right-column[ ### Limitations * The use of global variables prevents Julia from generating efficient, optimised code. * The integration scheme and the vector field of the equation are hardcoded and cannot be easily changed. * Generally speaking, none of the code we have written is reusable in any sane way. * Minor annoyance: The indexing of our solution array starts at `1` although here `0` would be much more natural, with the index-0 entry storing the initial condition. ### Solutions * The optimisation issue could be remedied by putting `const` in front of the parameter names. However, it is better practice to put all parameters and simulation-relevant data structures into a dedicated `Simulation` type.
* The integrator and the vector field can be provided by individual functions, which can be stored in the `Simulation` type so they can be replaced easily. * Even better, we can implement individual types for different integrators (which also store temporary arrays, etc.) and dispatch an `integrate_step!` function on these. * The indexing issue will be resolved by using an `OffsetArray` instead of a vanilla array. * These measures will lead to greatly improved code structure and reusability. ] --- .left-column[ ## Pendulum ### Script ### Simulation ] .right-column[ * write a `Simulation` type that stores all the necessary information (solution, time step size, number of time steps, etc.) * create a new file `src/simulation.jl` and include that file in `src/Particles.jl`: ```julia module Particles include("simulation.jl") end ``` * in `simulation.jl` implement a new `struct` with the aforementioned fields: ```julia struct Simulation{DT <: Number} Δt::DT nt::Int ns::Int x::Array{DT,3} function Simulation(x₀::AbstractArray{DT,2}, Δt::DT, nt::Int) where {DT} x = zeros(DT, size(x₀)..., nt+1) x[:,:,1] .= x₀ new{DT}(Δt, nt, size(x₀,2), x) end end ``` * the `Simulation` type has one constructor that takes the initial conditions `x₀`, time step `Δt` and the number of time steps to compute `nt` * we expect the initial conditions to be an abstract 2d array: the first dimension holds the degrees of freedom; the second dimension holds different samples * based on the size of the initial condition array, we create a 3d solution array that has the same first two dimensions as the initial conditions and `nt+1` entries in the third dimension, so that the initial condition can be stored alongside the `nt` time steps ] --- .left-column[ ## Pendulum ### Script ### Simulation ] .right-column[ * at first, we support only one specific time-stepping scheme and one specific equation, in this case the explicit Euler method and the pendulum example, implemented in the function `run!`: ```julia function run!(sim::Simulation) for n in 1:sim.nt for i in 1:sim.ns sim.x[1,i,n+1] = sim.x[1,i,n] + sim.Δt * sim.x[2,i,n] sim.x[2,i,n+1] = sim.x[2,i,n] - sim.Δt * sin(sim.x[1,i,n]) end end return sim.x end ``` * `run!` takes an instance `sim` of `Simulation` as argument, applies the time integrator for `sim.nt` time steps and stores the solution in `sim.x` * for convenience, the `run!` function returns the solution array * note that the number of samples is not passed to the constructor explicitly, but inferred from the size of the initial condition array ] --- .left-column[ ## Pendulum ### Script ### Simulation ] .right-column[ * in order to conveniently load the Particles package, export `Simulation` and `run!` in the module in `src/Particles.jl`: ```julia module Particles export Simulation, run! include("simulation.jl") end ``` * note that the order in which the export statement and the actual definition of e.g.
the `Simulation` type appear does not matter ] --- .left-column[ ## Pendulum ### Script ### Simulation ### Run Script ] .right-column[ * in order to run the simulation, we create a new file `scripts/pendulum.jl`: ```julia # import Particles package using Particles # parameters Δt = 0.1 # time step size nt = 100 # number of time steps ns = 5 # number of samples # random initial conditions x₀ = hcat(rand(ns) .* 2π, rand(ns) .* 2 .- 1)' # create Simulation instance sim = Simulation(x₀, Δt, nt) # run simulation x = run!(sim) ``` * in the first three paragraphs, we import the `Particles` package, set the same parameters as before, and create a `2 × ns` array holding random initial conditions * then, we create an instance of `Simulation`, where we pass the initial conditions, time step size and number of time steps * note that in the following, the global variables `Δt`, `nt` and `ns` are not used anymore, but only the `sim` value is passed around * in particular, we pass `sim` to `run!`, which consecutively applies the explicit Euler method `nt` times to each sample and returns the results * after that we can apply the very same plotting code as before ] --- .left-column[ ## Pendulum ### Script ### Simulation ### Run Script ] .right-column[ * from the `Particles` directory, this script is called by ```shell julia --project=. scripts/pendulum.jl ``` * alternatively, from the `Particles/scripts` directory it is called by ```shell julia --project=.. pendulum.jl ``` ] --- .left-column[ ## Pendulum ### Script ### Simulation ### Run Script ### OffsetArrays ] .right-column[ ### The Simulation Type Revised In the following we want to refine our Simulation type a bit. ### OffsetArrays * we want to use an OffsetArray to store the solution instead of vanilla arrays * we have to add the OffsetArrays package to our project * start Julia in the `Particles` directory via ``` $ julia --project=. ``` * change to the `Pkg` REPL so that you see a prompt like the following: ``` (@v1.4) pkg> ``` * add the OffsetArrays package: ``` (@v1.4) pkg> add OffsetArrays Updating registry at `~/.julia/registries/General` Updating git-repo `https://github.com/JuliaRegistries/General` Resolving package versions... Updating `~/Particles/Project.toml` [6fe1bfb0] + OffsetArrays v1.0.4 Updating `~/Particles/Manifest.toml` [6fe1bfb0] + OffsetArrays v1.0.4 ``` ] --- .left-column[ ## Pendulum ### Script ### Simulation ### Run Script ### OffsetArrays ] .right-column[ * import `OffsetArrays` into the `Particles` module: ```julia module Particles using OffsetArrays export Simulation, run! 
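# imports and exports are collected at the top of the module, before the included files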
include("simulation.jl") end ``` * we could import `OffsetArrays` in `src/simulation.jl`, but `using OffsetArrays` brings all names that are exported by `OffsetArrays` into the `Particles` namespace, so it is better practice to import it here ] --- .left-column[ ## Pendulum ### Script ### Simulation ### Run Script ### OffsetArrays ] .right-column[ * modify the Simulation type: ```julia struct Simulation{DT <: Number, AT <: OffsetArray{DT,3}} Δt::DT nt::Int ns::Int x::AT function Simulation(x₀::AbstractArray{DT,2}, Δt::DT, nt::Int) where {DT} x = OffsetArray(zeros(DT, size(x₀)..., nt+1), axes(x₀)..., 0:nt) x[:,:,0] .= x₀ new{DT, typeof(x)}(Δt, nt, x) end end ``` * the solution field `x` was modified to be of type `AT`, which is a type parameter, and a subtype of `OffsetArray{DT,3}` * in the constructor, we take the array we create by the `zero` function and pass it to the `OffsetArray` constructor, together with the ranges for all three dimensions * note that by allowing `x₀` to be an `AbstractArray` and copying the index ranges using `axes`, we support very general indexing for the state vectors and the samples, e.g., the first dimensions could be indexed by `-m, ..., 0, ... +m`, which might be useful when computing the time evolution of Fourier coefficients * moreover, we can store the initial conditions in the temporal index `0` instead of `1` * a final change with respect to the original implementation is that we have to pass the type of the solution array to the `new` function ] --- .left-column[ ## Pendulum ### Script ### Simulation ### Run Script ### OffsetArrays ] .right-column[ * in the `run!` function, we need to adapt the temporal indices of the solution array: ```julia function run!(sim::Simulation) for n in 1:sim.nt for i in 1:sim.ns sim.x[1,i,n] = sim.x[1,i,n-1] + sim.Δt * sim.x[2,i,n-1] sim.x[2,i,n] = sim.x[2,i,n-1] - sim.Δt * sin(sim.x[1,i,n-1]) end end return sim.x end ``` ] --- .left-column[ ## Pendulum ### Script ### Simulation ### Run Script ### OffsetArrays ] .right-column[ * adding the type parameter `AT` was mostly a matter convenience * we could also make this explicit and replace `AT` with `OffsetArray{DT,3,Array{DT,3}}`, which is the actual type of our OffsetArray: ```julia struct Simulation{DT <: Number} Δt::DT nt::Int ns::Int x::OffsetArray{DT,3,Array{DT,3}} function Simulation(x₀::AbstractArray{DT,2}, Δt::DT, nt::Int) where {DT} x = OffsetArray(zeros(DT, size(x₀)..., nt+1), axes(x₀)..., 0:nt) x[:,:,0] .= x₀ new{DT}(Δt, nt, x) end end ``` * with this, we do not need to pass the type of the solution array in the call to `new` * explicitly specifying the OffsetArray type removes one of the type parameters, which is generally considered preferable ] --- .left-column[ ## Pendulum ### Script ### Simulation ### Run Script ### OffsetArrays ### Convenience ] .right-column[ ### Convenience Functions * storing the number of samples and the number of time steps in the simulation type is redundant * these numbers are also encoded in the size of the solution array * we can eliminate the `nt` and `ns` fields without cluttering our code by adding functions that return the numbers of samples and time steps: ```julia nsamples(sim::Simulation) = lastindex(sim.x,2) ntimesteps(sim::Simulation) = lastindex(sim.x,3) ``` * with this, we can remove the `nt` and `ns` fields from `Simulation`: ```julia struct Simulation{DT <: Number} Δt::DT x::OffsetArray{DT,3,Array{DT,3}} function Simulation(x₀::AbstractArray{DT,2}, Δt::DT, nt::Int) where {DT} x = OffsetArray(zeros(DT, size(x₀)..., 
nt+1), axes(x₀)..., 0:nt) x[:,:,0] .= x₀ new{DT}(Δt, x) end end ``` we still need to pass `nt` to the constructor, though, as it is needed to create the solution array ] --- .left-column[ ## Pendulum ### Script ### Simulation ### Run Script ### OffsetArrays ### Convenience ] .right-column[ * while we are at it, we can do the same for the number of dimensions, i.e., the size of the state vector (this will become handy later on): ```julia ndims(sim::Simulation) = lastindex(sim.x,1) ``` * our implementation of `ndims` and `nsamples` is limitting: it assumes indices start at `1` * for `ntimesteps` this is appropriate, but for `ndims` and `nsamples` it destroys most of the flexibility we gained in using OffsetArrays * the way we initialise and store the solution in the `Simulation` is very general, and in particular allows for very general indexing of the state vector (and samples) ```julia struct Simulation{DT <: Number} Δt::DT x::OffsetArray{DT,3,Array{DT,3}} function Simulation(x₀::AbstractArray{DT,2}, Δt::DT, nt::Int) where {DT} x = OffsetArray(zeros(DT, size(x₀)..., nt+1), axes(x₀)..., 0:nt) x[:,:,0] .= x₀ new{DT}(Δt, x) end end ``` * `ndims` and `nsamples` can be implemented in a way, that supports this generality by ```julia ndims(sim::Simulation) = length(axes(sim.x,1)) nsamples(sim::Simulation) = length(axes(sim.x,2)) ``` ] --- .left-column[ ## Pendulum ### Script ### Simulation ### Run Script ### OffsetArrays ### Convenience ] .right-column[ * with the new convenience functions the loops in `run!` can be written as: ```julia function run!(sim::Simulation) for n in 1:ntimesteps(sim) for i in 1:nsamples(sim) sim.x[1,i,n] = sim.x[1,i,n-1] + sim.Δt * sim.x[2,i,n-1] sim.x[2,i,n] = sim.x[2,i,n-1] - sim.Δt * sin(sim.x[1,i,n-1]) end end return sim.x end ``` * this way of constructing the iterators, however, also limits the generality of the code and thus is not the prefered way of writing iterators in Julia (anymore) * instead `axes` should be used to obtain an iterator: ```julia function run!(sim::Simulation) for n in axes(sim.x, 3)[1:end] for i in axes(sim.x, 2) sim.x[1,i,n] = sim.x[1,i,n-1] + sim.Δt * sim.x[2,i,n-1] sim.x[2,i,n] = sim.x[2,i,n-1] - sim.Δt * sin(sim.x[1,i,n-1]) end end return sim.x end ``` * for the time steps we need to be careful not to loop over the initial condition * this code allows general indexing but it is less expressive as our original code ] --- .left-column[ ## Pendulum ### Script ### Simulation ### Run Script ### OffsetArrays ### Convenience ] .right-column[ * to remedy this, we can add three more convenience functions in analogy to Julia's `eachindex`: ```julia eachdim(sim::Simulation) = axes(sim.x,1) eachsample(sim::Simulation) = axes(sim.x,2) eachtimestep(sim::Simulation) = axes(sim.x,3)[1:end] ``` * with this the `run!` function can be written as ```julia function run!(sim::Simulation) for n in eachtimestep(sim) for i in eachsample(sim) sim.x[1,i,n] = sim.x[1,i,n-1] + sim.Δt * sim.x[2,i,n-1] sim.x[2,i,n] = sim.x[2,i,n-1] - sim.Δt * sin(sim.x[1,i,n-1]) end end return sim.x end ``` which is both expressive and general ] --- .left-column[ ## Equations ] .right-column[ ## Assignment 3: Equations Implement the possibility to change the equation, i.e. write an `Equation` type that stores the functions providing the vector fields and initial conditions. Implement a simulation for `n` independent charged particles in some prescribed electromagnetic field: $$ \dot{x} = v , \quad \dot{v} = E (x) + v \times B(x) . 
$$ Specifically, we use a simple Theta-pinch magnetic field with $$ B = (0,0,1) , $$ and an oscillating electric field with $$ E = (0, 0, \cos (2\pi z)) . $$ ] --- .left-column[ ## Equations ### Vector Fields ] .right-column[ * to separate the vector field from the integrator in the `run!` function, we extend the `Simulation` by a field `f` that stores the vector field function: ```julia struct Simulation{DT <: Number, FT <: Function} Δt::DT f::FT x::OffsetArray{DT,3,Array{DT,3}} function Simulation(f::FT, x₀::AbstractArray{DT,2}, Δt::DT, nt::Int) where {DT, FT} x = OffsetArray(zeros(DT, size(x₀)..., nt+1), axes(x₀)..., 0:nt) x[:,:,0] .= x₀ new{DT,FT}(Δt, f, x) end end ``` * here, it is important to also store the type of the function `f` as a type parameter `FT` * note that the type of `f` is not `Function`, but a subtype thereof ] --- .left-column[ ## Equations ### Vector Fields ] .right-column[ * implement a `pendulum!` function in `scripts/pendulum.jl` that takes two arguments `(ẋ, x)` and computes the vector field: ```julia function pendulum!(ẋ, x) ẋ[1] = x[2] ẋ[2] = - sin(x[1]) end ``` * the first argument, `ẋ`, holds the output vector, i.e., the vector field * the second vector, `x`, holds the current state vector of the system * we can enforce that `(ẋ, x)` are vectors of the right size by adding type restrictions and asserts: ```julia function pendulum!(ẋ::AbstractVector, x::AbstractVector) @assert length(ẋ) == length(x) == 2 ẋ[1] = x[2] ẋ[2] = - sin(x[1]) end ``` * this function is then passed to the `Simulation` constructor: ```julia sim = Simulation(pendulum!, x₀, Δt, nt) ``` ] --- .left-column[ ## Equations ### Vector Fields ] .right-column[ * before we adapt the `run!` function to work on generic vector fields, we add another convenience function that returns the length of the state vector: ```julia ndims(sim::Simulation) = lastindex(sim.x,1) ``` * we need this to create a temporary array in `run!`, that holds the vector field: ```julia function run!(sim::Simulation{DT}) where {DT} ẋ = zeros(DT, ndims(sim)) for n in eachtimestep(sim) for i in eachsample(sim) sim.f(ẋ, sim.x[:,i,n-1]) sim.x[:,i,n] .= sim.x[:,i,n-1] .+ sim.Δt .* ẋ end end return sim.x end ``` * as we need the solution data type, i.e., the `DT` parameter of the `Simulation` type, to instantiate the temporary vector field vector, we add it as a function parameter * we do not need to declare the vector field function type `FT` as function parameter in order to obtain optimised code; `FT` being specified as type parameter is sufficient * in the loop, we evaluate the vector field function, that is stored in the field `f` of the `Simulation` type * this allows us to prescribe arbitrary vector fields and are not restricted to the equation of the pendulum ] --- .left-column[ ## Equations ### Vector Fields ] .right-column[ * in our implementation of `run!` the slicing of `sim.x` creates a new array (twice!) 
```julia sim.f(ẋ, sim.x[:,i,n-1]) sim.x[:,i,n] .= sim.x[:,i,n-1] .+ sim.Δt .* ẋ ``` * we can eliminate this by creating a view beforehand: ```julia function run!(sim::Simulation{DT}) where {DT} ẋ = zeros(DT, ndims(sim)) for n in eachtimestep(sim) for i in eachsample(sim) x = @view sim.x[:,i,n-1] sim.f(ẋ, x) sim.x[:,i,n] .= x .+ sim.Δt .* ẋ end end return sim.x end ``` or by creating another temporary array (which might be slightly more performant): ```julia function run!(sim::Simulation{DT}) where {DT} x = zeros(DT, ndims(sim)) ẋ = zeros(DT, ndims(sim)) for n in eachtimestep(sim) for i in eachsample(sim) x .= sim.x[:,i,n-1] sim.f(ẋ, x) sim.x[:,i,n] .= x .+ sim.Δt .* ẋ end end return sim.x end ``` ] --- .left-column[ ## Equations ### Vector Fields ### Equation Type ] .right-column[ * as an initial value problem is characterised not only by its vector field but also by its initial condition, it seems worthwhile to abstract this concept into an `Equation` type * in order to add the `Equation` type, create a new file `src/equation.jl`, include it in the `Particles` module and export `Equation`: ```julia module Particles using OffsetArrays export Equation, Simulation, run! include("equation.jl") include("simulation.jl") end ``` * the `Equation` type is just a container for the vector field `f` and the initial condition `x₀`: ```julia struct Equation{DT <: Number, FT <: Function} f::FT x₀::Array{DT,2} function Equation(f::FT, x₀::AbstractArray{DT,2}) where {DT, FT} new{DT,FT}(f, convert(Array{DT,2}, x₀)) end end ``` * in the constructor we allow `x₀` to be an abstract array; however, we store it as a standard array and, if necessary, convert it when calling `new` * this is again unnecessarily limiting the generality and applicability of our code ] --- .left-column[ ## Equations ### Vector Fields ### Equation Type ] .right-column[ * we can add a type parameter that stores the actual array type ```julia struct Equation{DT <: Number, FT <: Function, AT <: AbstractArray{DT,2}} f::FT x₀::AT function Equation(f::FT, x₀::AT) where {DT, FT, AT <: AbstractArray{DT,2}} new{DT,FT,AT}(f, x₀) end end ``` * now the initial conditions can be any abstract array, like an `Adjoint` or an `OffsetArray`, and advanced features like custom indexing will properly propagate through our code * note that the type parameter `DT` is not explicitly specified by any of the arguments anymore, but implicitly through one of the other type parameters * it is arguable whether specifying the array type as a type parameter is truly necessary: the initial conditions are rarely used in any performance-critical code, thus the following implementation should usually be sufficient ```julia struct Equation{DT <: Number, FT <: Function} f::FT x₀::AbstractArray{DT,2} function Equation(f::FT, x₀::AbstractArray{DT,2}) where {DT, FT} new{DT,FT}(f, x₀) end end ``` * in principle, the type declaration on `x₀` can also be omitted entirely ] --- .left-column[ ## Equations ### Vector Fields ### Equation Type ] .right-column[ * in the Simulation, we now store the equation instead of the vector field: ```julia struct Simulation{DT <: Number, FT <: Function} Δt::DT equ::Equation{DT,FT} x::OffsetArray{DT,3,Array{DT,3}} function Simulation(equ::Equation{DT,FT}, Δt::DT, nt::Int) where {DT, FT} x = OffsetArray(zeros(DT, size(equ.x₀)..., nt+1), axes(equ.x₀)..., 0:nt) x[:,:,0] .= equ.x₀ new{DT,FT}(Δt, equ, x) end end ``` * in general, we still have to store the vector field function type as a type parameter in order to obtain optimised code * alternatively, we
can just store the equation type as type parameter ```julia struct Simulation{DT <: Number, ET <: Equation{DT}} Δt::DT equ::ET x::OffsetArray{DT,3,Array{DT,3}} function Simulation(equ::ET, Δt::DT, nt::Int) where {DT, ET <: Equation{DT}} x = OffsetArray(zeros(DT, size(equ.x₀)..., nt+1), axes(equ.x₀)..., 0:nt) x[:,:,0] .= equ.x₀ new{DT,ET}(Δt, equ, x) end end ``` * which version is preferable depends on the situation ] --- .left-column[ ## Equations ### Vector Fields ### Equation Type ] .right-column[ * specifying the function type as parameter is more explicit and somewhat stricter, however, propagation of the types of some member field's inner data structures can pollute the type parameters of outer types, especially in composition-rich codes * we will see later on, that storing these types is not always strictly necessary * the `run!` method requires only a small change, namely replacing `sim.f` with `sim.equ.f`: ```julia function run!(sim::Simulation{DT}) where {DT} ẋ = zeros(DT, ndims(sim)) for n in eachtimestep(sim) for i in eachsample(sim) sim.equ.f(ẋ, sim.x[:,i,n-1]) sim.x[:,i,n] .= sim.x[:,i,n-1] .+ sim.Δt .* ẋ end end return sim.x end ``` ] --- .left-column[ ## Equations ### Vector Fields ### Equation Type ### Refactoring ] .right-column[ * before we proceed, we perform a little bit of refactoring, that will become useful later on * we implemented some convenience functions for the `Simulation` type, that return the number of dimensions, samples, etc. * some of these functions should rather be implemented for the `Equation` type, specifically, ```julia ndims(equ::Equation) = length(axes(equ.x₀,1)) nsamples(equ::Equation) = length(axes(equ.x₀,2)) ``` * as these functions make also sense when applied to a `Simulation` instance, we do not remove the corresponding methods, but change them to ```julia ndims(sim::Simulation) = ndims(sim.equ) nsamples(sim::Simulation) = nsamples(sim.equ) ``` * this is a typical example of how composition is used in Julia ] --- .left-column[ ## Equations ### Vector Fields ### Equation Type ### Refactoring ### Charged Particles ] .right-column[ * now we have all the functionality we need in order to add another example * we create a new file `scripts/charged_particles.jl` where all the problem-specific functionality is implemented * we need to implement a function that computes the Lorentz force, and to that end we need functions that provide electric and magnetic fields * other than changing the vector field and adapting the initial conditions, the script is the same as for the pendulum example: ```julia # import Particles package using Particles # parameters Δt = 0.1 # time step size nt = 100 # number of time steps ns = 5 # number of samples # random initial conditions x₀ = hcat(rand(ns) .* 2 .- 1, rand(ns) .* 2 .- 1, zeros(ns), rand(ns) .- 0.5, rand(ns) .- 0.5, ones(ns))' # electric field E(x::Vector{DT}) where {DT} = DT[0, 0, cos(2π*x[3])] # magnetic field B(x::Vector{DT}) where {DT} = DT[0, 0, 1] ``` ] --- .left-column[ ## Equations ### Vector Fields ### Equation Type ### Refactoring ### Charged Particles ] .right-column[ * `scripts/charged_particles.jl` continued: ```julia # vector field function lorentz_force!(ż, z) x = z[1:3] v = z[4:6] e = E(x) b = B(x) ż[1] = v[1] ż[2] = v[2] ż[3] = v[3] ż[4] = e[1] + v[2] * b[3] - v[3] * b[2] ż[5] = e[2] + v[3] * b[1] - v[1] * b[3] ż[6] = e[3] + v[1] * b[2] - v[2] * b[1] end # create an Equation instance equ = Equation(lorentz_force!, x₀) # create Simulation instance sim = Simulation(equ, Δt, nt) # run simulation 
x = run!(sim) ``` ] --- .left-column[ ## Equations ### Vector Fields ### Equation Type ### Refactoring ### Charged Particles ] .right-column[ * the plotting part is adapted to visualise the 3d particle trajectories: ```julia # load Plots package using Plots # set plot ranges xlim = (-2, +2) ylim = (-2, +2) zlim = ( 0, 20) # create empty 3d plot with ns empty series plt = plot3d( ns, xlim = xlim, ylim = ylim, zlim = zlim, title = "Charged Particles", legend = false, marker = 5, size = (800, 600) ) # plot solutions for j in axes(x,3) for i in axes(x,2) push!(plt, i, x[1,i,j], x[2,i,j], x[3,i,j]) end end # save figure to file savefig("charged_particles.png") ``` ] --- .left-column[ ## Equations ### Vector Fields ### Equation Type ### Refactoring ### Charged Particles ] .right-column[  ] --- .left-column[ ## Time Stepping ] .right-column[ ## Assignment 4: Time-stepping schemes Implement the possibility to change the time-stepping scheme. Write an abstract integrator type with different concrete sub-types. Implement a function that integrates one time step and has different methods for different integrators. Add at least one more time-stepping method, e.g., Störmer-Verlet. ] --- .left-column[ ## Time Stepping ### Abstract Integrator ] .right-column[ * we add a simple abstract integrator interface * add a new file `src/integrators.jl` and include it in `src/Particles.jl`: ```julia module Particles using OffsetArrays export Equation, Simulation, run! export ExplicitEuler include("equation.jl") include("integrators.jl") include("simulation.jl") end ``` * anticipating that we want to add an `ExplicitEuler` integrator, we export it already ] --- .left-column[ ## Time Stepping ### Abstract Integrator ] .right-column[ * in `src/integrators.jl` we add the abstract type `Integrator{DT}` with one type parameter and the function `integrate_step!` without any method: ```julia abstract type Integrator{DT} end function integrate_step!
end ``` * this `integrate_step!` declaration should be used to add generic documentation for this function as there will be many methods, namely one for each integrator * every integrator should implement the `integrate_step!` method with the interface ```julia function integrate_step!(int::Integrator, equ::Equation, x₀::AbstractVector, x₁::AbstractVector) end ``` where `x₀` is the previous solution and `x₁` is the new solution * to make this more explicit, we could add a method ```julia integrate_step!(int::Integrator, equ::Equation, x₀::AbstractVector, x₁::AbstractVector) = error("integrate_step!() not implemented for integrator ", typeof(int)) ``` * when `integrate_step!` is called for any integrator that does not implement the corresponding method, an error is thrown * the solution arguments should be `AbstractVector`s instead of `Vector`s as more often than not these will be views on an array rather than an actual array ] --- .left-column[ ## Time Stepping ### Abstract Integrator ### Simulation ] .right-column[ * in the `Simulation` type we now store an integrator object instead of the time step with the constructor being modified accordingly: ```julia struct Simulation{DT <: Number, ET <: Equation{DT}, IT <: Integrator{DT}} equ::ET int::IT x::OffsetArray{DT,3,Array{DT,3}} function Simulation(equ::ET, int::IT, nt::Int) where {DT, ET <: Equation{DT}, IT <: Integrator{DT}} x = OffsetArray(zeros(DT, size(equ.x₀)..., nt+1), axes(equ.x₀)..., 0:nt) x[:,:,0] .= equ.x₀ new{DT,ET,IT}(equ, int, x) end end ``` * for the moment, we also store the type of the integrator as type parameter `IT` * we will discuss this issue in some more detail later on ] --- .left-column[ ## Time Stepping ### Abstract Integrator ### Simulation ] .right-column[ * in the `run!` function of the `Simulation` we call the `integrate_step!` function instead implementing a specific integration method: ```julia function run!(sim::Simulation{DT}) where {DT} for n in eachtimestep(sim) for i in eachsample(sim) @views integrate_step!(sim.int, sim.equ, sim.x[:,i,n-1], sim.x[:,i,n]) end end return sim.x end ``` * it is important that we pass views of the solution array to the `integrate_step!` function, so that the solution of the integration step is copied to the right place * the naive call, leaving out the `@views` macro creates a copy of the slice of the solution array, so that changes to `x₁` would be lost * a slightly longer but possibly more explicit alternative to the above implementation is the following: ```julia x₀ = @view sim.x[:,i,n-1] x₁ = @view sim.x[:,i,n] integrate_step!(sim.int, sim.equ, x₀, x₁) ``` or ```julia integrate_step!(sim.int, sim.equ, view(sim.x, :, i ,n-1), view(sim.x, :, i, n)) ``` * with this functionality we can easily choose different integrators in our `Simulation` ] --- .left-column[ ## Time Stepping ### Abstract Integrator ### Simulation ### Explicit Euler ] .right-column[ * we can add the `ExplicitEuler` integrator as follows: ```julia struct ExplicitEuler{DT} <: Integrator{DT} Δt::DT ẋ::Vector{DT} function ExplicitEuler(equ::Equation{DT}, Δt::DT) where {DT} new{DT}(Δt, zeros(DT, ndims(equ))) end end ``` * this type should store everything needed to compute one integration step, that is in particular the time step `Δt` (which can now be removed from the `Simulation` type), and a temporary array `ẋ` to store the vector field * the constructor takes an `Equation`, from which it extracts all relevant information, and the time step * the `integrate_step!` method for the `ExplicitEuler` integrator is 
very simple: ```julia function integrate_step!(int::ExplicitEuler, equ::Equation, x₀::AbstractVector, x₁::AbstractVector) equ.f(int.ẋ, x₀) x₁ .= x₀ .+ int.Δt .* int.ẋ end ``` * it has the aforementioned interface, computes the vector field `ẋ` on the old solution `x₀` and adds it, scaled by the time step, to obtain the new solution `x₁` * additional integrators are added in full analogy ] --- .left-column[ ## Time Stepping ### Abstract Integrator ### Simulation ### Explicit Euler ### Discussion ] .right-column[ ### Discussion * as the `run!` function does not explicitly use the integrator and equation fields, `int` and `equ`, of the `Simulation`, except for passing them to the `integrate_step!` function, there is no need to store their type in `Simulation` to obtain optimised code * `integrate_step!` does not operate on a `Simulation` instance but on instances of `Integrator` and `Equation`; when `integrate_step!` is called, Julia can automatically determine the corresponding types and generate optimised code for those types * if this is also true elsewhere, the `Simulation` type can be simplified as follows: ```julia struct Simulation{DT <: Number} equ int x::OffsetArray{DT,3,Array{DT,3}} function Simulation(equ::Equation{DT}, int::Integrator{DT}, nt::Int) where {DT} x = OffsetArray(zeros(DT, size(equ.x₀)..., nt+1), axes(equ.x₀)..., 0:nt) x[:,:,0] .= equ.x₀ new{DT}(equ, int, x) end end ``` * the situation is different for the solution array `x`: as the `run!` function directly operates on this field of `Simulation`, its type needs to be fully specified * note that the above version of `Simulation` still enforces that the solution array, the initial conditions in the equation and the time step in the integrator all use the same data type (as it is enforced in the constructor and our struct is immutable) ] --- .left-column[ ## Time Stepping ### Abstract Integrator ### Simulation ### Explicit Euler ### Discussion ### Refactoring ] .right-column[ ### Refactoring In the design of the interface we made some choices for which there could be alternatives. * all temporary arrays needed to compute a time step either need to be allocated in the `integrate_step!` method (which often induces overhead, especially for larger solution vectors, as this function is called repeatedly) or stored as a field in the corresponding integrator type (as we did above) * the latter choice implies that an integrator instance is tied to a specific `Equation` as the size of e.g. the temporary vector that holds the vector field depends on the equation * one could argue that the `Equation` instance should be a field in the `Integrator` rather than the `Simulation` * going one step further, we might not want to store the `Equation` in `Integrator` but only the vector field function `f` (this is what we do in GeometricIntegrators.jl) ] --- .left-column[ ## Time Stepping ### Abstract Integrator ### Simulation ### Explicit Euler ### Discussion ### Refactoring ] .right-column[ * we move the equation field from `Simulation` to the `Integrator` ```julia struct Simulation{DT <: Number, IT <: Integrator{DT}} int::IT x::OffsetArray{DT,3,Array{DT,3}} function Simulation(x₀::AbstractArray{DT}, int::IT, nt::Int) where {DT, IT <: Integrator{DT}} x = OffsetArray(zeros(DT, size(x₀)..., nt+1), axes(x₀)..., 0:nt) x[:,:,0] .= x₀ new{DT,IT}(int, x) end end function Simulation(equ::Equation, integrator::Type{<:Integrator}, Δt::Real, nt::Int; kwargs...) int = integrator(equ, Δt; kwargs...)
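# the integrator is built first; the simulation then only needs the initial conditions and the integrator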
Simulation(equ.x₀, int, nt) end ``` ```julia struct ExplicitEuler{DT, ET <: Equation{DT}} <: Integrator{DT} equ::ET Δt::DT ẋ::Vector{DT} function ExplicitEuler(equ::ET, Δt::DT) where {DT, ET <: Equation{DT}} new{DT,ET}(equ, Δt, zeros(DT, ndims(equ))) end end ``` * we added a convenience constructor for `Simulation` that takes an `Integrator` type and creates both the integrator and the simulation; the `kwargs...` argument allows us to pass integrator-specific parameters to the corresponding constructor ] --- .left-column[ ## Time Stepping ### Abstract Integrator ### Simulation ### Explicit Euler ### Discussion ### Refactoring ] .right-column[ * in the example scripts, e.g. `scripts/pendulum.jl`, the construction of the integrator and simulation is changed from ```julia # create an Equation instance equ = Equation(pendulum!, x₀) # create an Integrator instance int = ExplicitEuler(equ, Δt) # create Simulation instance sim = Simulation(equ, int, nt) ``` to ```julia # create an Equation instance equ = Equation(pendulum!, x₀) # create Simulation instance sim = Simulation(equ, ExplicitEuler, Δt, nt) ``` ] --- .left-column[ ## Time Stepping ### Abstract Integrator ### Simulation ### Explicit Euler ### Discussion ### Refactoring ### Functors ] .right-column[ * to streamline the interface a bit more, we could also add functors for both the `Simulation` and the `Integrator`s * the `run!` function could be replaced with ```julia function (sim::Simulation{DT})() where {DT} for n in eachtimestep(sim) for i in eachsample(sim) @views integrate_step!(sim.int, sim.int.equ, sim.x[:,i,n-1], sim.x[:,i,n]) end end return sim.x end ``` * this allows us to run a simulation via ```julia sim() ``` instead of ```julia run!(sim) ``` * similarly, we could have the following functor instead of `integrate_step!`: ```julia function (int::Integrator)(x₀::AbstractVector, x₁::AbstractVector) ``` which would be called by ```julia int(x₀, x₁) ``` * this is particularly elegant since we store the `Equation` in the `Integrator` ] --- .left-column[ ## Traits and Closures ] .right-column[ ## Assignment 5: Traits and Closures Implement an FFT-based 1D Poisson solver for the electrostatic potential. Use this potential to compute the electric field in the charged particle equations. Note that the potential might have to be updated during one time step, e.g., for each internal stage of a Runge-Kutta method. This can be achieved using callbacks, e.g., functions that are called at the beginning or end of a stage or sub-step of an integrator. Extend the equation and integrator types and methods accordingly. ] --- .left-column[ ## Traits and Closures ### Poisson Solver ] .right-column[ * the Poisson solver solves the following equation $$ \Delta \phi = - \rho $$ * create a new file `src/poisson.jl` and include it in `Particles.jl` * we will use the `FFTW.jl` and `StatsBase.jl` packages in the Poisson solver and `StatsPlots.jl` for testing and debugging * go to the package manager and add all these dependencies * import `FFTW` and `StatsBase` in `Particles.jl`: ```julia module Particles using FFTW using OffsetArrays using StatsBase export Equation, Simulation, run!
export ExplicitEuler export PoissonSolver, solve!, eval_field include("equation.jl") include("integrators.jl") include("simulation.jl") include("poisson.jl") end ``` * we already added the `export` statement for the `PoissonSolver` and its functions `solve!` and `eval_field` (which still need to be implemented) ] --- .left-column[ ## Traits and Closures ### Poisson Solver ] .right-column[ * the Poisson solver type stores everything that is needed to solve the Poisson equation: * the number of grid points `nx`, * the grid step size `Δx`, * the grid `xgrid`, * and two vectors for the right-hand side `ρ` and the solution `ϕ`, respectively ```julia struct PoissonSolver{DT <: Real} nx::Int Δx::DT xgrid::Vector{DT} ρ::Vector{DT} ϕ::Vector{DT} function PoissonSolver{DT}(nx::Int) where {DT} Δx = 1/nx xgrid = collect(0:Δx:1) new(nx, Δx, xgrid, zeros(DT, nx), zeros(DT, nx)) end end ``` * the constructor takes the element type `DT` as a type parameter and the number of grid points `nx` as an argument * all other fields are instantiated by the constructor ] --- .left-column[ ## Traits and Closures ### Poisson Solver ] .right-column[ * we need to implement two functions: * one that solves the Poisson equation, * and one that evaluates the electric field * the Poisson equation is solved using FFT: ```julia function solve!(p::PoissonSolver{DT}, x::AbstractVector{DT}) where {DT} h = fit(Histogram, mod.(x, 1), p.xgrid) p.ρ .= h.weights ./ length(x) ρ̂ = rfft(p.ρ) k² = [(i-1)^2 for i in eachindex(ρ̂)] ϕ̂ = - ρ̂ ./ k² ϕ̂[1] = 0 p.ϕ .= irfft(ϕ̂, length(p.ρ)) return p end ``` * first, we create a histogram of the particles using the `fit` function from `StatsBase` * the result is scaled by the number of particles to obtain the charge density `ρ` * we compute the real FFT of the charge density and invert the Laplace operator in Fourier space * the inverse real FFT provides the electrostatic potential `ϕ`, which is stored in the `PoissonSolver` struct * remark: for a more efficient implementation, all temporary arrays should be stored persistently in `PoissonSolver` and we should create a `plan_rfft` and `plan_irfft` (see `AbstractFFTs.jl` for details) * remark: in general, one would not like to restrict the domain to $(0,1)$ ] --- .left-column[ ## Traits and Closures ### Poisson Solver ] .right-column[ * the electric field is evaluated via a rather rough finite difference approach: ```julia function eval_field(p::PoissonSolver{DT}, x::DT) where {DT} y = mod(x, one(x)) i1 = floor(Int, y / p.Δx) + 1 i2 = mod( ceil(Int, y / p.Δx), p.nx) + 1 i1 == i2 && (i1 = i1-1) i1 == 0 && (i1 = lastindex(p.ϕ)) return - (p.ϕ[i2] - p.ϕ[i1]) / p.Δx end ``` * we need to locate the position `x` in the grid and return the finite difference of the neighbouring potential values ] --- .left-column[ ## Traits and Closures ### Poisson Solver ] .right-column[ * in `prototyping/poisson.jl` we write a small test script to verify that our Poisson solver works as expected: ```julia using Particles using StatsBase using StatsPlots # set grid resolution nx = 16 # set number of particles and initialise random particles n = 10000 x = randn(n) v = randn(n) # shift and scale particle positions to the interval [0,1] xmax = ceil(maximum(abs.(x))) x .+= xmax x ./= 2*xmax # solve Poisson equation p = PoissonSolver{eltype(x)}(nx) solve!(p, x) ``` The script is invoked from the `Particles` directory via ``` julia --project=.
prototyping/poisson.jl ``` ] --- .left-column[ ## Traits and Closures ### Poisson Solver ] .right-column[ #### Particle Histogram ```julia xgrid = collect(0:p.Δx:1) hx = fit(Histogram, x, xgrid) plot(hx; legend=nothing, xlabel="x", ylabel="n") savefig("poisson_histogram_x.png") ```  ] --- .left-column[ ## Traits and Closures ### Poisson Solver ] .right-column[ #### Charge Density ```julia plot(p.xgrid[1:end-1], p.ρ; legend=nothing, xlabel="x", ylabel="ρ(x)") savefig("poisson_ρ.png") ```  ] --- .left-column[ ## Traits and Closures ### Poisson Solver ] .right-column[ #### Electrostatic Potential ```julia plot(p.xgrid[1:end-1], p.ϕ; legend=nothing, xlabel="x", ylabel="ϕ(x)") savefig("poisson_ϕ.png") ```  ] --- .left-column[ ## Traits and Closures ### Poisson Solver ] .right-column[ #### Electric Field ```julia plot(p.xgrid, x -> eval_field(p, x); legend=nothing, xlabel="x", ylabel="E(x)") savefig("poisson_E.png") ```  ] --- .left-column[ ## Traits and Closures ### Poisson Solver ### Vlasov-Poisson ] .right-column[ * in the next step, we need to modify the charged particle simulation in `scripts/charged_particles.jl` to use the Poisson solver and the self-consistent electric field * copy the script to `scripts/vlasov_poisson.jl`, add a parameter for the grid resolution, and strip down to 1d using the same initial conditions as in the Poisson solver test script ```julia # import Particles package using Particles # parameters Δt = 0.1 # time step size nt = 200 # number of time steps np = 10000 # number of particles nx = 16 # number of grid points # random initial conditions x₀ = randn(np) v₀ = randn(np) # shift x₀ to the interval [0,1] xmax = ceil(maximum(abs.(x₀))) x₀ .+= xmax x₀ ./= 2*xmax # concatenate x₀ and v₀ z₀ = hcat(x₀, v₀)' ``` ] --- .left-column[ ## Traits and Closures ### Poisson Solver ### Vlasov-Poisson ] .right-column[ * modify the Lorentz force, create a Poisson solver and add a closure for the vector field in the equation: ```julia # vector field function lorentz_force!(ż, z, p::PoissonSolver) ż[1] = z[2] ż[2] = eval_field(p, z[1]) end # create Poisson solver p = PoissonSolver{eltype(z₀)}(nx) solve!(p, x₀) # create an Equation instance equ = Equation((ż, z) -> lorentz_force!(ż, z, p), z₀) # create Simulation instance sim = Simulation(equ, ExplicitEuler, Δt, nt) # run simulation z = sim() ``` * note that the integrator expects a vector field that takes two arguments `(ż, z)`, but the `lorentz_force!` function takes three arguments as it needs a `PoissonSolver` instance to compute the electric field * in such situations one just creates a closure, that is an anonymous function, which has the desired interface and forwards the additional arguments: ```julia (ż, z) -> lorentz_force!(ż, z, p) ``` ] --- .left-column[ ## Traits and Closures ### Poisson Solver ### Vlasov-Poisson ] .right-column[ ```julia # load Plots package using Plots # compute plot ranges vmax = ceil(maximum(abs.(v₀))) xlim = (0, 1) vlim = (-vmax, +vmax) # plot initial condition scatter(mod.(z[1,:,0], 1), z[2,:,0], marker = 3, xlim = xlim, ylim = vlim, title = "Vlasov-Poisson", legend = false, size = (800, 600) ) # save figure to file savefig("vlasov_poisson_z₀.png") # create animation anim = @animate for i in 0:nt scatter(mod.(z[1,:,i], 1), z[2,:,i], marker = 3, xlim = xlim, ylim = vlim, title = "Vlasov-Poisson", legend = false, size = (800, 600) ) end # save animation to file gif(anim, "vlasov_poisson_anim.gif", fps=10) ``` ] --- .left-column[ ## Traits and Closures ### Poisson Solver ### Vlasov-Poisson ] 
.right-column[  ] --- .left-column[ ## Traits and Closures ### Poisson Solver ### Vlasov-Poisson ] .right-column[  ] --- .left-column[ ## Traits and Closures ### Poisson Solver ### Vlasov-Poisson ### Update Potential ] .right-column[ * in order to provide flexible means for automatically updating the potential, we implement support for pre- and post-processing functions that are run before and after each timestep * we extend the equation type by two fields, `f_pre` and `f_post`, that each hold either a function or `nothing`, the latter being the default: ```julia struct Equation{DT <: Number, FT <: Function, FPRE <: Union{Function,Nothing}, FPOST <: Union{Function,Nothing}} f::FT x₀::Array{DT,2} f_pre::FPRE f_post::FPOST function Equation(f::FT, x₀::AbstractArray{DT,2}; f_preproc::FPRE=nothing, f_postproc::FPOST=nothing) where {DT, FT, FPRE, FPOST} new{DT, FT, FPRE, FPOST}(f, Array(x₀), f_preproc, f_postproc) end end ``` * in addition, we add two functions, each with two methods: ```julia preprocessing(::Equation{<:Number, <:Function, Nothing}, ::AbstractArray) = nothing postprocessing(::Equation{<:Number, <:Function, <:Union{Function,Nothing}, Nothing}, ::AbstractArray) = nothing preprocessing(equ::Equation{<:Number, <:Function, <:Function}, x₀::AbstractArray) = equ.f_pre(x₀) postprocessing(equ::Equation{<:Number, <:Function, <:Union{Function,Nothing}, <:Function}, x₁::AbstractArray) = equ.f_post(x₁) ``` * we use multiple dispatch to distinguish between equations that have pre- and post-processing functions and those which don't: if a function is present it is called, otherwise `nothing` happens. ] --- .left-column[ ## Traits and Closures ### Poisson Solver ### Vlasov-Poisson ### Update Potential ] .right-column[ * in the `run!` method of `Simulation` we need to call these functions in order for the pre- and post-processing to occur: ```julia function run!(sim::Simulation{DT}) where {DT} for n in eachtimestep(sim) @views preprocessing(sim.equ, sim.x[:,:,n-1]) for i in eachsample(sim) @views integrate_step!(sim.int, sim.equ, sim.x[:,i,n-1], sim.x[:,i,n]) end @views postprocessing(sim.equ, sim.x[:,:,n]) end return sim.x end ``` * now we only need to add a pre-processing function to our `Equation` instance for the Vlasov-Poisson problem: ```julia # create an Equation with callbacks equ = Equation((ż, z) -> lorentz_force!(ż, z, p), z₀; f_preproc = z -> solve!(p, z[1,:])) ``` ...and we can run the script again... 
] --- .left-column[ ## Traits and Closures ### Poisson Solver ### Vlasov-Poisson ### Update Potential ] .right-column[  ] --- .left-column[ ## Traits and Closures ### Poisson Solver ### Vlasov-Poisson ### Update Potential ### Traits ] .right-column[ * an alternative implementation of the pre- and post-processing can be constructed via traits, based on the following functions: ```julia haspreprocessing(::Equation{<:Number, <:Function, Nothing}) = false haspreprocessing(::Equation{<:Number, <:Function, <:Function}) = true haspostprocessing(::Equation{<:Number, <:Function, <:Union{Function,Nothing}, Nothing}) = false haspostprocessing(::Equation{<:Number, <:Function, <:Union{Function,Nothing}, <:Function}) = true ``` * with those we can rewrite the `preprocessing` and `postprocessing` functions: ```julia function preprocessing(equ::Equation, x₀::AbstractArray) if haspreprocessing(equ) equ.f_pre(x₀) else # nothing to do end end function postprocessing(equ::Equation, x₁::AbstractArray) if haspostprocessing(equ) equ.f_post(x₁) end end ``` * in this example there is little benefit in using traits, but in other contexts code can become much shorter and sometimes easier to read by using traits instead of dispatch, especially when only a small fraction of a method depends on a given property of an object * traits cause no additional overhead: as they are dispatched on type parameters, the compiler will optimise `preprocessing` and `postprocessing` for the equation they are called with and remove all of the code in the branches that are not executed ] --- .left-column[ ## Traits and Closures ### Poisson Solver ### Vlasov-Poisson ### Update Potential ### Traits ### Discussion ] .right-column[ ### Discussion * with the current structure of the code, certain callback functionality cannot be implemented via the pre-/post-processing infrastructure * in particular, if we consider Runge-Kutta or splitting methods, we might want to update the electrostatic potential at the beginning or end of each stage or substep * while it is easy to call the pre- and post-processing functions in the integrator instead of the simulation, our current solution structure and the way the simulation works prevent us from updating the potential in the integrator * only the state of one single particle is ever passed to the integrator, but we need all particles to compute the electrostatic potential; therefore, we need to modify our data structure, pass the data for all particles and integrate all particles * this implies that either the integrator has to take care of looping over particles or the vector field function has to provide the vector field for all particles at once * which of the two solutions is preferred depends on the primary purpose of the code: * the first case might make sense if we write a dedicated particle-in-cell engine * the second case is preferred if we write a general ODE solver that is also used for PIC simulations, as it is more general * remark: in the first case there is also the drawback that one has to implement the looping over particles, etc., for each integrator, over and over again, and it is less flexible w.r.t.
generalisations, e.g., towards advancing both particles and fields ] --- .left-column[ ## Traits and Closures ### Poisson Solver ### Vlasov-Poisson ### Update Potential ### Traits ### Discussion ### Refactoring ] .right-column[ ### Refactoring * we want to restructure and refactor our code in the following ways: * in the simulation, store only the current state of the system instead of its whole history * store the solution as vector of vectors, where elements of the enclosing vector correspond to particles and elements of the inner vector represent a particle's phasespace position * compute the vector-field for all particles at once * add pre- and post-processing callbacks also to the simulation in order to allow for plotting and storing the solution to disk ] --- .left-column[ ## Traits and Closures ### Poisson Solver ### Vlasov-Poisson ### Update Potential ### Traits ### Discussion ### Refactoring ] .right-column[ * in the simulation, store only the current state of the system as vector of vectors ```julia struct Simulation{DT <: Number, IT <: Integrator{DT}} int::IT nt::Int ### explicitly store number of time steps ### x::Vector{Vector{DT}} function Simulation(x₀::AbstractArray{DT}, int::IT, nt::Int) where {DT, IT <: Integrator{DT}} new{DT,IT}(int, nt, copy(x₀)) end end function Simulation(equ::Equation, integrator::Type{<:Integrator}, Δt::Real, nt::Int; kwargs...) int = integrator(equ, Δt; kwargs...) Simulation(equ.x₀, int, nt) end ndims(sim::Simulation) = ndims(sim.int.equ) nsamples(sim::Simulation) = nsamples(sim.int.equ) ntimesteps(sim::Simulation) = sim.nt eachsample(sim::Simulation) = eachindex(sim.x) eachtimestep(sim::Simulation) = 1:ntimesteps(sim) function (sim::Simulation{DT})() where {DT} for n in eachtimestep(sim) preprocessing(sim.int.equ, sim.x) for i in eachsample(sim) sim.int(sim.x[i]) ### no views necessary ### end postprocessing(sim.int.equ, sim.x) end return sim.x end ``` ] --- .left-column[ ## Traits and Closures ### Poisson Solver ### Vlasov-Poisson ### Update Potential ### Traits ### Discussion ### Refactoring ] .right-column[ * the integrators are hardly changed (except for accepting only one argument and updating in place) ```julia struct ExplicitEuler{DT, ET <: Equation{DT}} <: Integrator{DT} equ::ET Δt::DT ẋ::Vector{DT} function ExplicitEuler(equ::ET, Δt::DT) where {DT, ET <: Equation{DT}} new{DT,ET}(equ, Δt, zeros(DT, ndims(equ))) end end function (int::ExplicitEuler)(x::AbstractVector) int.equ.f(int.ẋ, x) x .+= int.Δt .* int.ẋ end ``` ] --- .left-column[ ## Traits and Closures ### Poisson Solver ### Vlasov-Poisson ### Update Potential ### Traits ### Discussion ### Refactoring ] .right-column[ * we need to change the initialisation in our sample scripts * in `scripts/pendulum.jl` (and analogous in `scripts/charged_particles.jl`) from ```julia x₀ = hcat(rand(ns) .* 2π, rand(ns) .* 2 .- 1)' ``` to ```julia x₀ = [ [rand() .* 2π, rand() .* 2 .- 1] for i in 1:ns ] ``` * in `scripts/vlasov-poisson.jl` from ```julia # random initial conditions x₀ = randn(np) v₀ = randn(np) # [...] # concatenate x₀ and v₀ z₀ = hcat(x₀, v₀)' ``` to ```julia # random initial conditions x₀ = randn(np) v₀ = randn(np) # [...] 
# concatenate x₀ and v₀ z₀ = [ collect(z) for z in zip(x₀, v₀) ] ``` ] --- .left-column[ ## Traits and Closures ### Poisson Solver ### Vlasov-Poisson ### Update Potential ### Traits ### Discussion ### Refactoring ] .right-column[ * we change the integrator to update the whole particle vector at once ```julia struct ExplicitEuler{DT, ET <: Equation{DT}} <: Integrator{DT} equ::ET Δt::DT ẋ::Vector{Vector{DT}} function ExplicitEuler(equ::ET, Δt::DT) where {DT, ET <: Equation{DT}} ẋ = [ zeros(DT, ndims(equ)) for i in 1:nsamples(equ) ] new{DT,ET}(equ, Δt, ẋ) end end ``` * unfortunately, the `zero` function cannot be applied to vectors of vectors thus initialisation of the temporary vector `ẋ` is somewhat tedious, however arithmetic operations and the dot notation work in the expected way for vectors of vectors * thus nothing changes in the integrator functor, except that we move the pre- and post-processing calls from the simulation functor ```julia function (int::ExplicitEuler)(x::AbstractVector) preprocessing(int.equ, x) int.equ.f(int.ẋ, x) x .+= int.Δt .* int.ẋ postprocessing(int.equ, x) end ``` ```julia function (sim::Simulation{DT})() where {DT} for n in eachtimestep(sim) sim.int(sim.x) end end ``` ] --- .left-column[ ## Traits and Closures ### Poisson Solver ### Vlasov-Poisson ### Update Potential ### Traits ### Discussion ### Refactoring ] .right-column[ * we need to adapt our vector field functions * for `scripts/charged_particles.jl` ```julia function lorentz_force!(ż, z) for i in eachindex(ż, z) x = z[i][1:3] v = z[i][4:6] e = E(x) b = B(x) ż[i][1] = v[1] ż[i][2] = v[2] ż[i][3] = v[3] ż[i][4] = e[1] + v[2] * b[3] - v[3] * b[2] ż[i][5] = e[2] + v[3] * b[1] - v[1] * b[3] ż[i][6] = e[3] + v[1] * b[2] - v[2] * b[1] end end ``` * for `scripts/vlasov_poisson.jl` ```julia function lorentz_force!(ż, z, p::PoissonSolver) for i in eachindex(ż, z) ż[i][1] = z[i][2] ż[i][2] = eval_field(p, z[i][1]) end end ``` ] --- .left-column[ ## Traits and Closures ### Poisson Solver ### Vlasov-Poisson ### Update Potential ### Traits ### Discussion ### Refactoring ### More Callbacks ] .right-column[ * add pre- and post-processing functions to `Simulation` ```julia struct Simulation{DT <: Number, IT <: Integrator{DT}, FPRE <: Union{Function,Nothing}, FPOST <: Union{Function,Nothing}} int::IT nt::Int f_pre::FPRE f_post::FPOST x::Vector{Vector{DT}} function Simulation(x₀::Vector{Vector{DT}}, int::IT, nt::Int, f_preproc::FPRE, f_postproc::FPOST) where {DT, IT <: Integrator{DT}, FPRE, FPOST} new{DT, IT, FPRE, FPOST}(int, nt, f_preproc, f_postproc, copy(x₀)) end end function Simulation(equ::Equation, integrator::Type{<:Integrator}, Δt::Real, nt::Int; f_preproc=nothing, f_postproc=nothing, kwargs...) int = integrator(equ, Δt; kwargs...) 
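# forward the (possibly absent) callback functions to the inner constructor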
Simulation(equ.x₀, int, nt, f_preproc, f_postproc) end ``` ] --- .left-column[ ## Traits and Closures ### Poisson Solver ### Vlasov-Poisson ### Update Potential ### Traits ### Discussion ### Refactoring ### More Callbacks ] .right-column[ * add pre- and post-processing functions to `Simulation` ```julia haspreprocessing(::Simulation{<:Any, <:Any, Nothing}) = false haspreprocessing(::Simulation{<:Any, <:Any, <:Function}) = true haspostprocessing(::Simulation{<:Any, <:Any, <:Any, Nothing}) = false haspostprocessing(::Simulation{<:Any, <:Any, <:Any, <:Function}) = true preprocessing(sim::Simulation, x, n) = haspreprocessing(sim) && sim.f_pre(x, n) postprocessing(sim::Simulation, x, n) = haspostprocessing(sim) && sim.f_post(x, n) function (sim::Simulation{DT})() where {DT} for n in eachtimestep(sim) preprocessing(sim, sim.x, n) sim.int(sim.x) postprocessing(sim, sim.x, n) end end ``` * with this new callback functionality we can add HDF5 output (and restore plotting) ] --- .left-column[ ## Interfaces ] .right-column[ ## Assignment 6: Abstract Types and Interfaces Implement an abstract interface for electromagnetic fields that supports prescribed analytical, prescribed numerical as well as self-consistent numerical fields. Use this interface to generalise the Lorentz force in the charged particle equations. Wrap a package like ApproxFun.jl or FrameFun.jl, which allows projecting analytical fields onto a numerical representation, so that it is compatible with your electromagnetic field interface. Adapt the interface of your electrostatic FFT solver accordingly. ] --- .left-column[ ## Abstract Arrays ] .right-column[ ## Assignment 7: Abstract Arrays Implement a custom array type that holds both particle and field data. Implement a Vlasov-Ampère solver that advances both the particles and the electric field in time using a splitting method. Remark: A convenient feature to consider is the possibility to index your array by symbols `:x`, `:v`, `:e`, `:b` instead of `1:4` to access the various components of the solution. ]
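---

.left-column[
## Abstract Arrays

### Sketch
]
.right-column[
* as a rough starting point for the symbol-indexing remark, the sketch below wraps the solution data in a custom `AbstractVector`; the type name, the `ranges` field and the flat data layout are only illustrative, one of many possible designs

```julia
# minimal sketch of an array type holding particle and field data;
# all names and the flat data layout are illustrative only
struct ParticleFieldVector{DT} <: AbstractVector{DT}
    data::Vector{DT}
    ranges::Dict{Symbol,UnitRange{Int}}
end

# standard AbstractVector interface
Base.size(a::ParticleFieldVector) = size(a.data)
Base.getindex(a::ParticleFieldVector, i::Int) = a.data[i]
Base.setindex!(a::ParticleFieldVector, v, i::Int) = (a.data[i] = v)

# symbol indexing returns a view on the corresponding block of the data,
# e.g. a[:x] for particle positions or a[:e] for the electric field
Base.getindex(a::ParticleFieldVector, s::Symbol) = view(a.data, a.ranges[s])
```

* with such a type, `a[:x]`, `a[:v]`, `a[:e]` and `a[:b]` give views on the respective components, while plain integer indexing and iteration still work as for any other `AbstractVector`
]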