Boundary conditions in axes? #20
Thinking about the above, and looking at how things are implemented in DiffEqOperators.jl, I believe what I really want is to be able to supply general boundary conditions when instantiating derivative operators, i.e. conditions expressed as polynomials in the derivative operator applied at the boundary. This would also resolve my outstanding gripe with how asymmetric it feels to have to restrict bases such as BSplinesQuasi.jl and FEDVRQuasi.jl to achieve Dirichlet0 conditions. The question is then how the boundary conditions should be passed to the implementation; DiffEqOperators.jl uses an extra BC operator.
And I guess another approach would be to implement boundary conditions by adding operators that enforce them.
It doesn't really make sense to incorporate boundary conditions into the operator, apart from finite differences, since the derivative is an infinitesimal object. Even with finite differences, it can be interpreted as a restriction, or as side constraints that are incorporated via elimination. So I don't think it belongs in the operator itself.
Their approach is only applicable to finite differences, and can be replicated with linear algebra via other means. ContinuumArrays.jl makes the fundamental decision that everything is a quasi-array, I don't see how DiffEqOperators.jl's approach fits into this interpretation.
Absolutely not. This makes no mathematical sense, only supports Dirichlet conditions, and pollutes the definition of `Derivative`.
I view the ContinuumArrays.jl framework as not necessarily user facing. Other packages can add more convenient usages.
Yes, this is much better. This already works in OrthogonalPolynomialsQuasi.jl for Dirichlet conditions:

T = Chebyshev()
C = Ultraspherical(2)
D = Derivative(axes(T,1))
A = (C\(D^2*T)) + 100(C\T)
n = 100
c = [T[[-1,1],1:n]; A[1:n-2,1:n]] \ [1; 2; zeros(n-2)]

Neumann, Robin, integral constraints, etc. all fit nicely. For example, the following would do Neumann:

c = [(D*T)[[-1,1],1:n]; A[1:n-2,1:n]] \ [1; 2; zeros(n-2)]

I'm closing this, as I feel it is already a solved problem: existing methods can be interpreted either as bases satisfying the boundary conditions (e.g. restrictions) or as side constraints, both of which are already possible. Making it "user friendly" or more in line with the way people view finite differences is outside the scope.
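The same pattern (boundary rows stacked on top of operator rows, then one backslash solve) carries over to any discretization. Below is a standalone finite-difference sketch of the Dirichlet case above, using only the standard library; all names and parameters are illustrative, not from any of the packages discussed:

```julia
using LinearAlgebra

# Sketch: solve u'' + 100u = 0 with u(-1) = 1, u(1) = 2 by stacking
# boundary rows on top of operator rows, as in the example above.
n = 400
x = range(-1, 1; length=n)
h = step(x)

# Central-difference approximation of d^2/dx^2 at the n-2 interior points.
D2 = zeros(n-2, n)
# Evaluation of u at the interior points.
E = zeros(n-2, n)
for i in 1:n-2
    D2[i, i], D2[i, i+1], D2[i, i+2] = 1/h^2, -2/h^2, 1/h^2
    E[i, i+1] = 1.0
end

# Boundary rows: point evaluation at the two endpoints.
B = zeros(2, n); B[1, 1] = 1.0; B[2, n] = 1.0

# Stack the boundary rows on the operator rows and solve.
u = [B; D2 + 100E] \ [1.0; 2.0; zeros(n-2)]

# Compare against the exact solution a*cos(10x) + b*sin(10x).
a, b = 3/(2cos(10)), 1/(2sin(10))
uexact = a .* cos.(10 .* x) .+ b .* sin.(10 .* x)
```

The first two rows pin down u(±1) and the remaining rows impose the differential equation at the interior points; replacing the evaluation rows with one-sided derivative stencils would give the Neumann variant.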
Note that boundary conditions are a property of the space that an operator acts on. So if you really want to incorporate them into the operator:

D = Derivative(Dirichlet0(-1..1))
axes(D) == (Inclusion(-1..1), Dirichlet0(-1..1)) # derivatives don't match conditions
R = Restriction(axes(D)...) # represent restriction from `-1..1` to `Dirichlet0`
(R*D)' === -R*D # this is the usual skew-self-adjoint property of derivatives

But this needs more thought.
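The skew-adjointness claim in the last line has a simple finite-dimensional analogue that is easy to check: extending an interior grid function by zero (the adjoint of restriction, i.e. the Dirichlet0 condition) and then applying a central first difference gives a genuinely skew-symmetric matrix. A minimal sketch, with illustrative names:

```julia
using LinearAlgebra

n, h = 10, 0.1
# Central-difference first derivative, evaluated at the n interior
# nodes of an (n+2)-point grid.
D = zeros(n, n+2)
for i in 1:n
    D[i, i]   = -1/(2h)
    D[i, i+2] =  1/(2h)
end

# R restricts a full grid function to the interior; its adjoint R'
# extends an interior function by zero (the Dirichlet condition).
R = zeros(n, n+2)
for i in 1:n
    R[i, i+1] = 1.0
end

A = D * R'   # the derivative acting on the Dirichlet subspace
A' ≈ -A      # skew-symmetry, mirroring (R*D)' == -R*D above
```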
For sure, I don't wish to skew the focus of ContinuumArrays towards finite differences, and I did not really like the idea of tacking the BCs onto the derivatives anyway. I think I can live with having to specify, when creating a basis, whether it satisfies a Dirichlet or a Neumann boundary condition (i.e. leaving out the first/last elements of the basis for Dirichlet). Conditions on derivatives are then perhaps most easily specified using extra operators.
Would this be the proper way to represent the boundary condition of my original post as an operator? This actually makes sense to me, so I think that's how I would implement it.

julia> using ContinuumArrays
julia> using FiniteDifferencesQuasi
julia> N = 100
100
julia> ρ = 0.1
0.1
julia> R = RadialDifferences(N,ρ)
Radial finite differences basis {Float64} on 0.0..10.05 with 100 points spaced by ρ = 0.1
julia> D = Derivative(axes(R,1))
Derivative{Float64,Interval{:closed,:closed,Float64}}(Inclusion(0.0..10.05))
julia> δ = DiracDelta(0, axes(R,1))
DiracDelta{Int64,Inclusion{Float64,Interval{:closed,:closed,Float64}}}(0, Inclusion(0.0..10.05))
julia> Z = 1
1
julia> δ.*(2Z*D - D'D)
QuasiArrays.BroadcastQuasiArray{Float64,2,typeof(*),Tuple{DiracDelta{Int64,Inclusion{Float64,Interval{:closed,:closed,Float64}}},QuasiArrays.BroadcastQuasiArray{Float64,2,typeof(-),Tuple{QuasiArrays.BroadcastQuasiArray{Float64,2,typeof(*),Tuple{Int64,Derivative{Float64,Interval{:closed,:closed,Float64}}}},QuasiArrays.ApplyQuasiArray{Float64,2,typeof(*),Tuple{QuasiArrays.QuasiAdjoint{Float64,Derivative{Float64,Interval{:closed,:closed,Float64}}},Derivative{Float64,Interval{:closed,:closed,Float64}}}}}}}}(*, (DiracDelta{Int64,Inclusion{Float64,Interval{:closed,:closed,Float64}}}(0, Inclusion(0.0..10.05)), QuasiArrays.BroadcastQuasiArray{Float64,2,typeof(-),Tuple{QuasiArrays.BroadcastQuasiArray{Float64,2,typeof(*),Tuple{Int64,Derivative{Float64,Interval{:closed,:closed,Float64}}}},QuasiArrays.ApplyQuasiArray{Float64,2,typeof(*),Tuple{QuasiArrays.QuasiAdjoint{Float64,Derivative{Float64,Interval{:closed,:closed,Float64}}},Derivative{Float64,Interval{:closed,:closed,Float64}}}}}}(-, (QuasiArrays.BroadcastQuasiArray{Float64,2,typeof(*),Tuple{Int64,Derivative{Float64,Interval{:closed,:closed,Float64}}}}(*, (2, Derivative{Float64,Interval{:closed,:closed,Float64}}(Inclusion(0.0..10.05)))), QuasiArrays.ApplyQuasiArray{Float64,2,typeof(*),Tuple{QuasiArrays.QuasiAdjoint{Float64,Derivative{Float64,Interval{:closed,:closed,Float64}}},Derivative{Float64,Interval{:closed,:closed,Float64}}}}(*, (QuasiArrays.QuasiAdjoint{Float64,Derivative{Float64,Interval{:closed,:closed,Float64}}}(Derivative{Float64,Interval{:closed,:closed,Float64}}(Inclusion(0.0..10.05))), Derivative{Float64,Interval{:closed,:closed,Float64}}(Inclusion(0.0..10.05)))))))) |
Your construction doesn't seem right, as the axes don't match, and I think it's an invalid use of adjoint. I think it should just be (2*Z*D + D^2)[0,:], though there are equivalent variants.
Interesting. Then I need to figure out how to materialize this into a matrix. I guess this is a bug in QuasiArrays?

julia> D^2
ERROR: MethodError: no method matching copy(::Interval{:closed,:closed,Float64})
Closest candidates are:
  copy(::Expr) at expr.jl:36
  copy(::Core.CodeInfo) at expr.jl:64
  copy(::BitSet) at bitset.jl:46
  ...
Stacktrace:
 [1] copy(::Inclusion{Float64,Interval{:closed,:closed,Float64}}) at /Users/jagot/.julia/packages/QuasiArrays/pYQIU/src/indices.jl:139
 [2] copy(::Derivative{Float64,Interval{:closed,:closed,Float64}}) at /Users/jagot/.julia/packages/ContinuumArrays/C48LP/src/operators.jl:154
 [3] ^(::Derivative{Float64,Interval{:closed,:closed,Float64}}, ::Int64) at /Users/jagot/.julia/packages/QuasiArrays/pYQIU/src/dense.jl:36
 [4] ^(::Derivative{Float64,Interval{:closed,:closed,Float64}}, ::Int64) at /Users/jagot/.julia/packages/QuasiArrays/pYQIU/src/dense.jl:40
 [5] macro expansion at ./none:0 [inlined]
 [6] literal_pow(::typeof(^), ::Derivative{Float64,Interval{:closed,:closed,Float64}}, ::Val{2}) at ./none:0
 [7] top-level scope at REPL[30]:1
That should be fixed on the right branches; working towards merging and tagging this week.
How would this be applied? I.e., how does this materialize together with the derivative matrix? I'm thinking something along these lines:

julia> R = StaggeredFiniteDifferences(10, 0.1)
Staggered finite differences basis {Float64} on 0.0..1.0499999999999998 with 10 points spaced by ρ = 0.1
julia> D = Derivative(axes(R,1))
Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}(Inclusion(0.0..1.0499999999999998))
julia> Z = 1
1
julia> bc = (2*Z*D + D^2)
QuasiArrays.BroadcastQuasiArray{Float64,2,typeof(+),Tuple{QuasiArrays.BroadcastQuasiArray{Float64,2,typeof(*),Tuple{Int64,Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}}},QuasiArrays.ApplyQuasiArray{Float64,2,typeof(*),Tuple{Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}},Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}}}}}(+, (QuasiArrays.BroadcastQuasiArray{Float64,2,typeof(*),Tuple{Int64,Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}}}(*, (2, Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}(Inclusion(0.0..1.0499999999999998)))), QuasiArrays.ApplyQuasiArray{Float64,2,typeof(*),Tuple{Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}},Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}}}(*, (Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}(Inclusion(0.0..1.0499999999999998)), Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}(Inclusion(0.0..1.0499999999999998))))))
julia> R'*(D'*D+bc)*R
QuasiArrays.ApplyQuasiArray{Float64,2,typeof(*),Tuple{QuasiArrays.QuasiAdjoint{Float64,StaggeredFiniteDifferences{Float64,StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}},Array{Float64,1},Array{Float64,1}}},QuasiArrays.BroadcastQuasiArray{Float64,2,typeof(+),Tuple{QuasiArrays.ApplyQuasiArray{Float64,2,typeof(*),Tuple{QuasiArrays.QuasiAdjoint{Float64,Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}},Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}}},QuasiArrays.BroadcastQuasiArray{Float64,2,typeof(+),Tuple{QuasiArrays.BroadcastQuasiArray{Float64,2,typeof(*),Tuple{Int64,Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}}},QuasiArrays.ApplyQuasiArray{Float64,2,typeof(*),Tuple{Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}},Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}}}}}}},StaggeredFiniteDifferences{Float64,StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}},Array{Float64,1},Array{Float64,1}}}}(*, (QuasiArrays.QuasiAdjoint{Float64,StaggeredFiniteDifferences{Float64,StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}},Array{Float64,1},Array{Float64,1}}}(Staggered finite differences basis {Float64} on 0.0..1.0499999999999998 with 10 points spaced by ρ = 0.1), 
QuasiArrays.BroadcastQuasiArray{Float64,2,typeof(+),Tuple{QuasiArrays.ApplyQuasiArray{Float64,2,typeof(*),Tuple{QuasiArrays.QuasiAdjoint{Float64,Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}},Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}}},QuasiArrays.BroadcastQuasiArray{Float64,2,typeof(+),Tuple{QuasiArrays.BroadcastQuasiArray{Float64,2,typeof(*),Tuple{Int64,Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}}},QuasiArrays.ApplyQuasiArray{Float64,2,typeof(*),Tuple{Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}},Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}}}}}}}(+, (QuasiArrays.ApplyQuasiArray{Float64,2,typeof(*),Tuple{QuasiArrays.QuasiAdjoint{Float64,Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}},Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}}}(*, (QuasiArrays.QuasiAdjoint{Float64,Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}}(Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}(Inclusion(0.0..1.0499999999999998))), Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}(Inclusion(0.0..1.0499999999999998)))), QuasiArrays.BroadcastQuasiArray{Float64,2,typeof(+),Tuple{QuasiArrays.BroadcastQuasiArray{Float64,2,typeof(*),Tuple{Int64,Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}}},QuasiArrays.ApplyQuasiArray{Float64,2,typeof(*),Tuple{Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}},Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}}}}}(+, (QuasiArrays.BroadcastQuasiArray{Float64,2,typeof(*),Tuple{Int64,Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}}}(*, (2, Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}(Inclusion(0.0..1.0499999999999998)))), 
QuasiArrays.ApplyQuasiArray{Float64,2,typeof(*),Tuple{Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}},Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}}}(*, (Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}(Inclusion(0.0..1.0499999999999998)), Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}(Inclusion(0.0..1.0499999999999998)))))))), Staggered finite differences basis {Float64} on 0.0..1.0499999999999998 with 10 points spaced by ρ = 0.1))

What's missing from the following?

julia> axes((2*Z*D + D^2)[0,:])
(Inclusion(0.0..1.0499999999999998),)

The reason for sandwiching the boundary condition together with the derivative operators is that I want the boundary condition to be applied when the derivative matrix is materialized, not afterwards. For explicit finite differences it does not really matter, but for implicit (or compact) finite differences the derivative is computed by multiplying by one matrix and solving with another, and both of these matrices need to be changed to apply the boundary conditions (and the latter is of course factorized, so changing it afterwards is not possible).
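To make the compact-difference point concrete: a fourth-order compact first derivative satisfies (1/4)f'ᵢ₋₁ + f'ᵢ + (1/4)f'ᵢ₊₁ = 3(fᵢ₊₁ − fᵢ₋₁)/(4h) at interior points, so computing f' means solving B*f' = A*f, and any boundary closure has to be written into the first/last rows of both A and B before B is factorized. A rough standalone sketch, using Lele-style one-sided third-order boundary rows (my choice here, not the scheme from the thread):

```julia
using LinearAlgebra

# Sketch of a 4th-order compact first derivative: B*df = A*f.
n = 101
x = range(0, 1; length=n)
h = step(x)

B = Matrix{Float64}(I, n, n)
A = zeros(n, n)
# Interior rows: (1/4)f'_{i-1} + f'_i + (1/4)f'_{i+1} = 3(f_{i+1}-f_{i-1})/(4h)
for i in 2:n-1
    B[i, i-1] = 1/4; B[i, i+1] = 1/4
    A[i, i-1] = -3/(4h); A[i, i+1] = 3/(4h)
end
# One-sided boundary closure: f'_1 + 2f'_2 = (-5f_1/2 + 2f_2 + f_3/2)/h,
# and its mirror image at the right endpoint.
B[1, 2] = 2.0
A[1, 1] = -5/(2h); A[1, 2] = 2/h; A[1, 3] = 1/(2h)
B[n, n-1] = 2.0
A[n, n] = 5/(2h); A[n, n-1] = -2/h; A[n, n-2] = -1/(2h)

F = lu(B)           # factorize once, AFTER the boundary rows are in place
f = sin.(2π .* x)
df = F \ (A * f)    # the derivative: one multiply, one solve
```

Here lu(B) is computed with the boundary rows already in place; retrofitting a boundary condition after the factorization would require refactorizing, which is exactly the point made above.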
EDIT: This was my original motivation; see the next post for a more generalized take on the problem.
The Schrödinger equation in spherical coordinates,

    −u''(r)/2 + [ℓ(ℓ+1)/(2r²) − Z/r] u(r) = E u(r),

is singular at the origin. The solution has to obey the boundary conditions u(0) = 0 and, for ℓ = 0, u''(0) + 2Z u'(0) = 0.
In finite differences, it is common to adjust the upper-left corner of the derivative matrices for ℓ = 0 to implement this and preserve the convergence order (see refs below). I wonder how I could accomplish this with ContinuumArrays.jl, namely how such an expression should be dispatched so that I get the correct boundary conditions. One idea could be to introduce a `SingularDerivative` operator:

- If `R` is aware of `SingularDerivative` and ℓ == 0, apply the correction.
- If `R` is not aware of `SingularDerivative`, proceed as normal.

The problem with this approach is that suddenly all basis libraries would need to dispatch on `AbstractDerivative`. A better alternative would of course be if I could somehow attach boundary conditions to the derivative operator.
Are there better approaches?
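As a numerical sanity check of the boundary operator (2Z*D + D^2) evaluated at the origin, as discussed above: the hydrogenic ℓ = 0 reduced wavefunction u(r) = r·e^(−Zr) satisfies u(0) = 0 and u''(0) + 2Z·u'(0) = 0, which one-sided difference stencils at the origin reproduce (a standalone sketch; the stencils are my choice, not from the thread):

```julia
Z = 1.0
u(r) = r * exp(-Z * r)   # hydrogenic 1s reduced wavefunction

h = 1e-3
r = h .* (0:4)           # a few grid points next to the origin
v = u.(r)

# One-sided, second-order approximations of u'(0) and u''(0).
du  = (-3v[1] + 4v[2] - v[3]) / (2h)
d2u = (2v[1] - 5v[2] + 4v[3] - v[4]) / h^2

abs(v[1])            # u(0) = 0
abs(d2u + 2Z * du)   # u''(0) + 2Z u'(0) ≈ 0
```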
References

Schafer, K. J., Gaarde, M. B., Kulander, K. C., Sheehy, B., & DiMauro, L. F. (2000). Calculations of strong field multiphoton processes in alkali metal atoms. AIP Conference Proceedings, 525(1), 45–58. http://dx.doi.org/10.1063/1.1291925

Schafer, K. J. (2009). Numerical methods in strong field physics. In T. Brabec (Ed.), Strong Field Laser Physics (pp. 111–145). Springer.

Muller, H. G. (1999). An efficient propagation scheme for the time-dependent Schrödinger equation in the velocity gauge. Laser Physics, 9(1), 138–148.

Patchkovskii, S., & Muller, H. (2016). Simple, accurate, and efficient implementation of 1-electron atomic time-dependent Schrödinger equation in spherical coordinates. Computer Physics Communications, 199, 153–169. http://dx.doi.org/10.1016/j.cpc.2015.10.014