
Boundary conditions in axes? #20

Open
jagot opened this issue Sep 21, 2019 · 10 comments

Comments

@jagot
Member

jagot commented Sep 21, 2019

EDIT: This was my original motivation, see next post for a more generalized take on the problem.

The Schrödinger equation in spherical coordinates

[equation image: the radial Schrödinger equation, with the centrifugal term ℓ(ℓ+1)/(2r²) and the Coulomb term −Z/r]

is singular at the origin. The solution has to obey the following boundary conditions:

[equation image: u(0) = 0; for ℓ = 0, additionally u″(0) + 2Z u′(0) = 0]

In finite differences, it is common to adjust the upper-left corner of the derivative matrices for ℓ = 0 to implement this and preserve the convergence order (see the references below). I wonder how I could accomplish this with ContinuumArrays.jl; namely, I wish to dispatch an expression like

R = ...
D = Derivative(axes(R,1))
Δ = R'D'D*R

correctly, so that I get the proper boundary conditions. One idea could be to introduce

abstract type AbstractDerivative{T} <: LazyQuasiMatrix{T} end

struct Derivative{T,D} <: AbstractDerivative{T}
    axis::Inclusion{T,D}
end
...

# Then I could have my own type
struct SingularDerivative{T,D} <: AbstractDerivative{T}
    axis::Inclusion{T,D}
end
  • If R is aware of SingularDerivative
    • If ℓ == 0, apply the correction
    • Else, return uncorrected matrix
  • If R is not aware of SingularDerivative, proceed as normal

The problem with this approach is that suddenly all basis libraries will need to dispatch on AbstractDerivative.

A better alternative would of course be if I could somehow attach boundary conditions to the derivative operator:

D = Derivative(axes(R,1), :singular, :regular)

Are there better approaches?
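As a plain-Julia illustration of the corner-adjustment idea (independent of ContinuumArrays.jl): the size of the ℓ = 0 correction is scheme-dependent (see the references), so it is taken as an input here rather than derived.

```julia
using LinearAlgebra

# Standard 3-point second-derivative matrix on a uniform radial grid,
# with an optional adjustment of the upper-left corner for ℓ = 0.
# The value of the correction depends on the discretization (see refs);
# here it is a parameter, not a derived quantity.
function radial_laplacian(N, ρ, ℓ; corner_correction = 0.0)
    T = Matrix(SymTridiagonal(fill(-2/ρ^2, N), fill(1/ρ^2, N - 1)))
    ℓ == 0 && (T[1, 1] += corner_correction)  # only ℓ = 0 is corrected
    T
end

radial_laplacian(5, 0.1, 0; corner_correction = -1.0)  # corrected for ℓ = 0
radial_laplacian(5, 0.1, 1)                            # ℓ > 0: unmodified
```

Only the (1,1) entry differs between the two cases; everything else is the usual tridiagonal Laplacian.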

References

  • Schafer, K. J., Gaarde, M. B., Kulander, K. C., Sheehy, B., &
    DiMauro, L. F. (2000). Calculations of strong field multiphoton
    processes in alkali metal atoms. AIP Conference Proceedings, 525(1),
    45–58. http://dx.doi.org/10.1063/1.1291925

  • Schafer, K. J. (2009). Numerical methods in strong field physics. In
    T. Brabec (Ed.), Strong Field Laser Physics (pp. 111–145). Springer.

  • Muller, H. G. (1999). An Efficient Propagation Scheme for the
    Time-Dependent Schrödinger equation in the Velocity Gauge. Laser
    Physics, 9(1), 138–148.

  • Patchkovskii, S., & Muller, H. G. (2016). Simple, accurate, and
    efficient implementation of 1-electron atomic time-dependent
    Schrödinger equation in spherical coordinates. Computer Physics
    Communications, 199, 153–169. http://dx.doi.org/10.1016/j.cpc.2015.10.014

@jagot jagot changed the title Derivatives in singular problems Consistent specification of boundary conditions Sep 21, 2019
@jagot
Member Author

jagot commented Sep 21, 2019

Thinking about the above, and looking at how things are implemented in DiffEqOperators.jl, I believe what I really want is to be able to supply general boundary conditions when instantiating derivative operators, i.e. polynomials of the form α + β*u + γ*u' + δ*u'' + ... = 0 for the left/right BCs. These would then be passed on to the different basis libraries, which would implement them as they see fit.

This would also solve my outstanding gripe about how asymmetric it feels to have to restrict bases such as BSplinesQuasi.jl and FEDVRQuasi.jl to achieve Dirichlet0 conditions: R = BSplines(t)[:,2:end-1], but not for FiniteDifferencesQuasi.jl: R = FiniteDifferences(10, 0.1). Ideally, the user should not need to "prepare" the basis, anticipating the boundary conditions; that should be the responsibility of the basis implementation, given BC polynomials as above. This would then give the correct basis restriction matrices, change coefficients of the matrices, etc.

The question is then how the boundary conditions should be passed to the implementation. DiffEqOperators.jl uses an extra BC operator Q, such that D*Q*u does the right thing. I think I'd prefer D = Derivative(axes(R,1), left_bc_poly, right_bc_poly), where e.g. left_bc_poly = [0,1] implies 0 + u = 0, i.e. Dirichlet0 conditions on the left edge.
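A minimal sketch of what such BC polynomials could look like when lowered to matrix rows, assuming a plain finite-difference discretization (the helper `bc_row` and the first-order one-sided stencil are illustrative assumptions, not DiffEqOperators.jl's or ContinuumArrays.jl's API):

```julia
# A boundary condition given as coefficients (α, β, γ) of
# α + β*u + γ*u′ = 0 is lowered to a single row of a finite-difference
# system, using a first-order one-sided stencil for u′.
function bc_row(α, β, γ, h, N; side = :left)
    row = zeros(N)
    if side == :left
        row[1] = β - γ/h   # β*u₁ + γ*(u₂ - u₁)/h = -α
        row[2] = γ/h
    else
        row[N]     = β + γ/h   # β*u_N + γ*(u_N - u_{N-1})/h = -α
        row[N - 1] = -γ/h
    end
    row, -α   # the constraint reads row ⋅ u == -α
end

bc_row(0, 1, 0, 0.1, 5)                 # Dirichlet0 on the left: u₁ = 0
bc_row(0, 0, 1, 0.1, 5; side = :right)  # Neumann on the right: u′ ≈ 0
```

The same lowering generalizes to higher-derivative terms by widening the stencil.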

@jagot
Member Author

jagot commented Sep 22, 2019

And I guess another approach would be to implement boundary conditions by adding operators that enforce them.

@dlfivefifty
Member

It doesn't really make sense to incorporate boundary conditions into the operator, apart from finite differences, since the derivative is an infinitesimal object. Even with finite differences, it can be interpreted as a restriction, or as side constraints that are incorporated via elimination. So I don't think SingularDerivative is a well-defined concept.

looking at how things are implemented in DiffEqOperators.jl

Their approach is only applicable to finite differences, and can be replicated with linear algebra via other means. ContinuumArrays.jl makes the fundamental decision that everything is a quasi-array; I don't see how DiffEqOperators.jl's approach fits into this interpretation.

D = Derivative(axes(R,1), left_bc_poly, right_bc_poly)

Absolutely not. This makes no mathematical sense, only supports Dirichlet conditions, and pollutes the definition of Derivative so that it is no longer a quasi-matrix.

Ideally, the user should not need to "prepare" the basis, anticipating the boundary conditions; that should be the responsibility of the basis implementation

I view the ContinuumArrays.jl framework as not necessarily user facing. Other packages can add more convenient usages.

And I guess another approach would be to implement boundary conditions by adding operators that enforce them.

Yes this is much better. This already works in OrthogonalPolynomialsQuasi.jl for Dirichlet conditions:

T = Chebyshev()
C = Ultraspherical(2)
D = Derivative(axes(T,1))
A = (C\(D^2*T))+100(C\T)

n = 100
c = [T[[-1,1],1:n]; A[1:n-2,1:n]] \ [1;2;zeros(n-2)]

Neumann, Robin, integral constraints, etc. all fit nicely. For example, the following would do Neumann:

c = [D*T[[-1,1],1:n]; A[1:n-2,1:n]] \ [1;2;zeros(n-2)]

I'm closing this, as I feel this is already a solved problem: existing methods can be interpreted either as bases that satisfy the boundary conditions (e.g. restrictions) or as side constraints, both of which are already possible. Making it "user friendly" or more in line with the way people view finite differences is outside the scope.
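For readers without OrthogonalPolynomialsQuasi.jl, the same side-constraint pattern can be mimicked with plain finite differences: stack the boundary rows on top of the (truncated) operator rows and solve the square system. This is an illustrative sketch assuming a second-order scheme, not the package's API:

```julia
using LinearAlgebra

# Solve u'' + 100u = 0 on [-1, 1] with u(-1) = 1, u(1) = 2, mirroring
# the structure of [T[[-1,1],1:n]; A[1:n-2,1:n]] \ [1; 2; zeros(n-2)].
n = 400
x = range(-1, 1; length = n)
h = step(x)

# n - 2 interior rows of the operator u'' + 100u
A = zeros(n - 2, n)
for i in 1:n-2
    A[i, i]     = 1/h^2
    A[i, i + 1] = -2/h^2 + 100
    A[i, i + 2] = 1/h^2
end

# two boundary (side-constraint) rows: u(-1) = 1 and u(1) = 2
B = zeros(2, n); B[1, 1] = 1; B[2, n] = 1

u = [B; A] \ [1; 2; zeros(n - 2)]
```

The exact solution is a*cos(10x) + b*sin(10x) with a = 3/(2*cos(10)) and b = 1/(2*sin(10)); the discrete solution agrees to second order in h.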

@dlfivefifty
Member

Note that boundary conditions are a property of the space that an operator acts on. So if you really want to incorporate them into Derivative, you could do so via the axes. That is, make a Dirichlet0 type that plays the same role as Inclusion but implies that functions vanish at the endpoints. Note that this might be useful for adjoints; that is, we can have

D = Derivative(Dirichlet0(-1..1))
axes(D) == (Inclusion(-1..1), Dirichlet0(-1..1)) # derivatives don't match conditions
R = Restriction(axes(D)...) # represent restriction from `-1..1` to `Dirichlet0`
(R*D)' === -R*D # this is the usual skew-self-adjoint property of derivatives

But this needs more thought.
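A toy version of such a Dirichlet0 axis, to make the shape of the proposal concrete (entirely speculative, not ContinuumArrays.jl API):

```julia
# Hypothetical sketch: an axis type that behaves like an interval for
# membership tests, but additionally encodes that functions on it vanish
# at the endpoints. All names here are illustrative.
struct Dirichlet0{T}
    a::T
    b::T
end

Base.in(x, d::Dirichlet0) = d.a ≤ x ≤ d.b  # same membership as Inclusion(a..b)

# A concrete restriction from the full grid to the Dirichlet0 interior
# would then drop the endpoint values; e.g. for n grid points:
restriction(n) = [i + 1 == j for i in 1:n-2, j in 1:n]  # (n-2) × n matrix

R = restriction(5)
```

In this picture, `R` is the finite-dimensional analogue of the `Restriction(axes(D)...)` object above.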

@dlfivefifty dlfivefifty reopened this Sep 23, 2019
@dlfivefifty dlfivefifty changed the title Consistent specification of boundary conditions Boundary conditions in axes? Sep 23, 2019
@jagot
Member Author

jagot commented Sep 23, 2019

For sure, I don't wish to skew the focus of ContinuumArrays towards finite differences, and I did not really like the idea of tacking the BCs onto the derivatives. I think I can live with having to specify, when creating a basis, whether it's a Dirichlet or a Neumann boundary condition (i.e. leaving out the first/last elements of the basis for Dirichlet). Conditions on derivatives are then perhaps most easily specified using extra operators.

@jagot
Member Author

jagot commented Sep 23, 2019

Would this be the proper way to represent the boundary condition of my original post as an operator? This actually makes sense to me, so I think that's how I would implement it.

julia> using ContinuumArrays

julia> using FiniteDifferencesQuasi

julia> N = 100
100

julia> ρ = 0.1
0.1

julia> R = RadialDifferences(N,ρ)
Radial finite differences basis {Float64} on 0.0..10.05 with 100 points spaced by ρ = 0.1

julia> D = Derivative(axes(R,1))
Derivative{Float64,Interval{:closed,:closed,Float64}}(Inclusion(0.0..10.05))

julia> δ = DiracDelta(0, axes(R,1))
DiracDelta{Int64,Inclusion{Float64,Interval{:closed,:closed,Float64}}}(0, Inclusion(0.0..10.05))

julia> Z = 1
1

julia> δ.*(2Z*D - D'D)
QuasiArrays.BroadcastQuasiArray{Float64,2,typeof(*),Tuple{DiracDelta{Int64,Inclusion{Float64,Interval{:closed,:closed,Float64}}},QuasiArrays.BroadcastQuasiArray{Float64,2,typeof(-),Tuple{QuasiArrays.BroadcastQuasiArray{Float64,2,typeof(*),Tuple{Int64,Derivative{Float64,Interval{:closed,:closed,Float64}}}},QuasiArrays.ApplyQuasiArray{Float64,2,typeof(*),Tuple{QuasiArrays.QuasiAdjoint{Float64,Derivative{Float64,Interval{:closed,:closed,Float64}}},Derivative{Float64,Interval{:closed,:closed,Float64}}}}}}}}(*, (DiracDelta{Int64,Inclusion{Float64,Interval{:closed,:closed,Float64}}}(0, Inclusion(0.0..10.05)), QuasiArrays.BroadcastQuasiArray{Float64,2,typeof(-),Tuple{QuasiArrays.BroadcastQuasiArray{Float64,2,typeof(*),Tuple{Int64,Derivative{Float64,Interval{:closed,:closed,Float64}}}},QuasiArrays.ApplyQuasiArray{Float64,2,typeof(*),Tuple{QuasiArrays.QuasiAdjoint{Float64,Derivative{Float64,Interval{:closed,:closed,Float64}}},Derivative{Float64,Interval{:closed,:closed,Float64}}}}}}(-, (QuasiArrays.BroadcastQuasiArray{Float64,2,typeof(*),Tuple{Int64,Derivative{Float64,Interval{:closed,:closed,Float64}}}}(*, (2, Derivative{Float64,Interval{:closed,:closed,Float64}}(Inclusion(0.0..10.05)))), QuasiArrays.ApplyQuasiArray{Float64,2,typeof(*),Tuple{QuasiArrays.QuasiAdjoint{Float64,Derivative{Float64,Interval{:closed,:closed,Float64}}},Derivative{Float64,Interval{:closed,:closed,Float64}}}}(*, (QuasiArrays.QuasiAdjoint{Float64,Derivative{Float64,Interval{:closed,:closed,Float64}}}(Derivative{Float64,Interval{:closed,:closed,Float64}}(Inclusion(0.0..10.05))), Derivative{Float64,Interval{:closed,:closed,Float64}}(Inclusion(0.0..10.05))))))))

@dlfivefifty
Member

Your construction doesn't seem right, as the axes don't match, and I think it's an invalid use of the adjoint. I think it should just be

(2Z*D + D^2)[0,:]

though there are equivalent variants.

@jagot
Member Author

jagot commented Sep 23, 2019

Interesting. Then I need to figure out how to materialize this into a matrix.

I guess this is a bug in QuasiArrays?:

julia> D^2
ERROR: MethodError: no method matching copy(::Interval{:closed,:closed,Float64})
Closest candidates are:
  copy(::Expr) at expr.jl:36
  copy(::Core.CodeInfo) at expr.jl:64
  copy(::BitSet) at bitset.jl:46
  ...
Stacktrace:
 [1] copy(::Inclusion{Float64,Interval{:closed,:closed,Float64}}) at /Users/jagot/.julia/packages/QuasiArrays/pYQIU/src/indices.jl:139
 [2] copy(::Derivative{Float64,Interval{:closed,:closed,Float64}}) at /Users/jagot/.julia/packages/ContinuumArrays/C48LP/src/operators.jl:154
 [3] ^(::Derivative{Float64,Interval{:closed,:closed,Float64}}, ::Int64) at /Users/jagot/.julia/packages/QuasiArrays/pYQIU/src/dense.jl:36
 [4] ^(::Derivative{Float64,Interval{:closed,:closed,Float64}}, ::Int64) at /Users/jagot/.julia/packages/QuasiArrays/pYQIU/src/dense.jl:40
 [5] macro expansion at ./none:0 [inlined]
 [6] literal_pow(::typeof(^), ::Derivative{Float64,Interval{:closed,:closed,Float64}}, ::Val{2}) at ./none:0
 [7] top-level scope at REPL[30]:1

@dlfivefifty
Member

That should be fixed on the right branches; I'm working towards merging and tagging this week.

@jagot
Member Author

jagot commented May 26, 2020

How would this be applied? I.e., how does this materialize together with the derivative matrix?

I'm thinking something along these lines:

julia> R = StaggeredFiniteDifferences(10, 0.1)
Staggered finite differences basis {Float64} on 0.0..1.0499999999999998 with 10 points spaced by ρ = 0.1

julia> D = Derivative(axes(R,1))
Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}(Inclusion(0.0..1.0499999999999998))

julia> Z = 1
1

julia> bc = (2*Z*D + D^2)
QuasiArrays.BroadcastQuasiArray{Float64,2,typeof(+),Tuple{QuasiArrays.BroadcastQuasiArray{Float64,2,typeof(*),Tuple{Int64,Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}}},QuasiArrays.ApplyQuasiArray{Float64,2,typeof(*),Tuple{Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}},Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}}}}}(+, (QuasiArrays.BroadcastQuasiArray{Float64,2,typeof(*),Tuple{Int64,Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}}}(*, (2, Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}(Inclusion(0.0..1.0499999999999998)))), QuasiArrays.ApplyQuasiArray{Float64,2,typeof(*),Tuple{Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}},Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}}}(*, (Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}(Inclusion(0.0..1.0499999999999998)), Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}(Inclusion(0.0..1.0499999999999998))))))

julia> R'*(D'*D+bc)*R
QuasiArrays.ApplyQuasiArray{Float64,2,typeof(*),Tuple{QuasiArrays.QuasiAdjoint{Float64,StaggeredFiniteDifferences{Float64,StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}},Array{Float64,1},Array{Float64,1}}},QuasiArrays.BroadcastQuasiArray{Float64,2,typeof(+),Tuple{QuasiArrays.ApplyQuasiArray{Float64,2,typeof(*),Tuple{QuasiArrays.QuasiAdjoint{Float64,Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}},Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}}},QuasiArrays.BroadcastQuasiArray{Float64,2,typeof(+),Tuple{QuasiArrays.BroadcastQuasiArray{Float64,2,typeof(*),Tuple{Int64,Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}}},QuasiArrays.ApplyQuasiArray{Float64,2,typeof(*),Tuple{Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}},Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}}}}}}},StaggeredFiniteDifferences{Float64,StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}},Array{Float64,1},Array{Float64,1}}}}(*, (QuasiArrays.QuasiAdjoint{Float64,StaggeredFiniteDifferences{Float64,StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}},Array{Float64,1},Array{Float64,1}}}(Staggered finite differences basis {Float64} on 0.0..1.0499999999999998 with 10 points spaced by ρ = 0.1), 
QuasiArrays.BroadcastQuasiArray{Float64,2,typeof(+),Tuple{QuasiArrays.ApplyQuasiArray{Float64,2,typeof(*),Tuple{QuasiArrays.QuasiAdjoint{Float64,Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}},Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}}},QuasiArrays.BroadcastQuasiArray{Float64,2,typeof(+),Tuple{QuasiArrays.BroadcastQuasiArray{Float64,2,typeof(*),Tuple{Int64,Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}}},QuasiArrays.ApplyQuasiArray{Float64,2,typeof(*),Tuple{Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}},Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}}}}}}}(+, (QuasiArrays.ApplyQuasiArray{Float64,2,typeof(*),Tuple{QuasiArrays.QuasiAdjoint{Float64,Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}},Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}}}(*, (QuasiArrays.QuasiAdjoint{Float64,Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}}(Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}(Inclusion(0.0..1.0499999999999998))), Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}(Inclusion(0.0..1.0499999999999998)))), QuasiArrays.BroadcastQuasiArray{Float64,2,typeof(+),Tuple{QuasiArrays.BroadcastQuasiArray{Float64,2,typeof(*),Tuple{Int64,Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}}},QuasiArrays.ApplyQuasiArray{Float64,2,typeof(*),Tuple{Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}},Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}}}}}(+, (QuasiArrays.BroadcastQuasiArray{Float64,2,typeof(*),Tuple{Int64,Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}}}(*, (2, Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}(Inclusion(0.0..1.0499999999999998)))), 
QuasiArrays.ApplyQuasiArray{Float64,2,typeof(*),Tuple{Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}},Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}}}(*, (Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}(Inclusion(0.0..1.0499999999999998)), Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}(Inclusion(0.0..1.0499999999999998)))))))), Staggered finite differences basis {Float64} on 0.0..1.0499999999999998 with 10 points spaced by ρ = 0.1))

What's missing from R'*(D'*D+bc)*R is of course that bc should be applied only at r=0; but I don't understand how to materialize (2*Z*D + D^2)[0,:] as a matrix, since it seems to be a vector:

julia> axes((2*Z*D + D^2)[0,:])
(Inclusion(0.0..1.0499999999999998),)

The reason for sandwiching the boundary condition together with the derivative operators is that I want the boundary condition to be applied when the derivative matrix is materialized, not afterwards. For explicit finite differences it does not really matter, but for implicit (or compact) finite differences, the derivative is computed by multiplying by one matrix and solving with another, and both of these matrices need to be changed to apply the boundary conditions (and the latter is of course factorized, so changing it afterwards is not possible).
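To make the implicit/compact point concrete, here is a standalone sketch in plain Julia: the derivative is realized as du = M \ (S*u), so boundary modifications must be written into the corners of both M and S before M is factorized. The one-sided boundary closures below are purely illustrative placeholders, not the corrections from the references:

```julia
using LinearAlgebra

# Interior 4th-order compact scheme:
# (1/6)u′ᵢ₋₁ + (4/6)u′ᵢ + (1/6)u′ᵢ₊₁ = (uᵢ₊₁ − uᵢ₋₁)/(2h)
N, h = 6, 0.1
M = Matrix(SymTridiagonal(fill(4/6, N), fill(1/6, N - 1)))
S = Matrix(Tridiagonal(fill(-1/(2h), N - 1), zeros(N), fill(1/(2h), N - 1)))

# Boundary closures (simple one-sided differences, purely illustrative)
# go into the corners of BOTH matrices, before M is factorized:
M[1, 1:2]   .= (1.0, 0.0);  S[1, 1:2]   .= (-1/h, 1/h)
M[N, N-1:N] .= (0.0, 1.0);  S[N, N-1:N] .= (-1/h, 1/h)

F = factorize(M)             # M is factorized once …
u  = collect(0:h:(N - 1)*h)  # u(r) = r on the grid
du = F \ (S * u)             # … so every derivative is one mul + one solve
```

For the linear function u(r) = r, this recovers du ≈ 1 at every grid point; changing only S (or only M) after the factorization would break that.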
