Commit
added test directory in examples
broken code for now

changed directory name

necessary fixes

might work

improve kappa model by reducing noise

better formatting

intermediate changes

additional changes

improvement to tutorial

docs build working

fix build

fixed make and added another jl file

removed data from make.jl and updated alt_kappa

added pre-rendered plots

undid changes to Project.toml in docs

small change to comments

added surface flux example
wilsonduan10 committed Jun 30, 2023
1 parent 55ea3fa commit fbb7793
Showing 8 changed files with 564 additions and 1 deletion.
2 changes: 1 addition & 1 deletion docs/Project.toml
@@ -9,4 +9,4 @@ Plots = "91a5bcdd-55d7-5caf-9e0b-520d859cae80"
Thermodynamics = "b60c26fb-14c3-4610-9d3e-2d17fe7ff00c"

[compat]
CloudMicrophysics = "0.5"
CloudMicrophysics = "0.5"
29 changes: 29 additions & 0 deletions docs/make.jl
Expand Up @@ -5,6 +5,8 @@ using EnsembleKalmanProcesses,
Documenter,
Plots, # so that Literate.jl does not capture precompilation output
Literate
using Downloads
using DelimitedFiles

# Gotta set this environment variable when using the GR run-time on CI machines.
# This happens as examples will use Plots.jl to make plots and movies.
@@ -19,6 +21,7 @@ examples_for_literation = [
"LossMinimization/loss_minimization.jl",
"SparseLossMinimization/loss_minimization_sparse_eki.jl",
"AerosolActivation/aerosol_activation.jl",
"SurfaceFluxExample/kappa_calibration.jl",
]

if isempty(get(ENV, "CI", ""))
@@ -42,6 +45,31 @@ for example in examples_for_literation
)
end

# The purpose of the code below is to remove the occurrences of the Documenter @example block
# created by Literate, so that Documenter does not evaluate the code blocks from
# kappa_calibration.jl. The code blocks cannot be evaluated because of a package compatibility
# mismatch: aerosol_activation is only compatible with CloudMicrophysics v0.5, and CloudMicrophysics v0.5
# is only compatible with SurfaceFluxes v0.3, whereas kappa_calibration.jl requires SurfaceFluxes v0.6.
# Evaluating the code blocks in the markdown file kappa_calibration.md would therefore error, so we
# filter out the @example blocks and merely display the code in the docs.

# A second reason the code blocks cannot be evaluated is that kappa_calibration depends on locally
# downloaded data, which is not available to the remote documentation build, so kappa_calibration
# cannot be run remotely.

# read the generated markdown and strip the @example annotations
kappa_md_file = open("docs/src/literated/kappa_calibration.md", "r")
all_lines = ""
while !eof(kappa_md_file)
    line = readline(kappa_md_file) * "\n"
    line = replace(line, "@example kappa_calibration" => "")
    global all_lines *= line
end
close(kappa_md_file)

# write the modified contents back to the same file
kappa_md_file = open("docs/src/literated/kappa_calibration.md", "w")
write(kappa_md_file, all_lines)
close(kappa_md_file)
#----------

api = [
@@ -66,6 +94,7 @@ examples = [
"Aerosol activation" => "literated/aerosol_activation.md",
"TOML interface" => "examples/sinusoid_example_toml.md",
"HPC interfacing example: ClimateMachine" => "examples/ClimateMachine_example.md",
"Surface Fluxes" => "literated/kappa_calibration.md",
"Template" => "examples/template_example.md",
]

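For reference, the same result could be achieved while Literate generates the markdown, rather than by rewriting the generated file afterwards. The sketch below is not part of this commit; it assumes Literate.jl's postprocess keyword (a content-transforming function applied to the generated output) and a hypothetical strip_example helper, with illustrative paths.

using Literate

# Hypothetical helper (illustration only): remove the Documenter @example label
# so the code is displayed in the docs but never evaluated.
strip_example(content) = replace(content, "@example kappa_calibration" => "")

Literate.markdown(
    "examples/SurfaceFluxExample/kappa_calibration.jl",
    "docs/src/literated";
    postprocess = strip_example,
)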
Binary file added docs/src/assets/kappa_calibration_plot1.png
Binary file added docs/src/assets/kappa_calibration_plot2.png
8 changes: 8 additions & 0 deletions examples/SurfaceFluxExample/Project.toml
@@ -0,0 +1,8 @@
[deps]
CLIMAParameters = "6eacf6c3-8458-43b9-ae03-caf5306d3d53"
Distributions = "31c24e10-a181-5473-b8eb-7969acd0382f"
EnsembleKalmanProcesses = "aa8a2aa5-91d8-4396-bcef-d4f2ec43552d"
Plots = "91a5bcdd-55d7-5caf-9e0b-520d859cae80"
StaticArrays = "90137ffa-7385-5640-81b9-e52037218182"
SurfaceFluxes = "49b00bb7-8bd4-4f2b-b78c-51cd0450215f"
Thermodynamics = "b60c26fb-14c3-4610-9d3e-2d17fe7ff00c"
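To work with this example environment locally, one would typically activate and instantiate it before running the scripts; a minimal sketch, assuming the commands are run from the repository root (not part of the commit):

using Pkg

Pkg.activate("examples/SurfaceFluxExample") # use the example's Project.toml
Pkg.instantiate()                           # install the pinned dependencies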
131 changes: 131 additions & 0 deletions examples/SurfaceFluxExample/alt_kappa_calibration.jl
@@ -0,0 +1,131 @@
# # Alternative Kappa Calibration Example
# ## Overview
#=
In this example, just as in kappa_calibration.jl, we solve an inverse problem to calibrate the von Kármán constant κ in
the equation u(z) = u^* / κ * log(z / z0),
which represents the wind profile in Monin-Obukhov
Similarity Theory (MOST) formulations. We use the same dataset: https://turbulence.pha.jhu.edu/Channel_Flow.aspx
Instead of using u^* as an observable, we use the dataset's mean velocity u, and each ensemble member estimates u
through the profile equation u(z) = u^* / κ * log(z / z0).
=#

# ## Prerequisites
#=
[EnsembleKalmanProcesses.jl](https://github.com/CliMA/EnsembleKalmanProcesses.jl)
=#

# ## Example

# First, we import relevant modules.
using LinearAlgebra, Random
using Distributions, Plots
using EnsembleKalmanProcesses
using EnsembleKalmanProcesses.ParameterDistributions
const EKP = EnsembleKalmanProcesses

using Downloads
using DelimitedFiles

FT = Float64

mkpath(joinpath(@__DIR__, "data")) # create the data folder if it does not exist
web_datafile_path = "https://turbulence.oden.utexas.edu/channel2015/data/LM_Channel_5200_mean_prof.dat"
localfile = joinpath(@__DIR__, "data", "profiles.dat")
Downloads.download(web_datafile_path, localfile)
data_mean_velocity = readdlm(localfile, skipstart = 112) ## We skip 72 lines (header) and 40 (laminar layer)

web_datafile_path = "https://turbulence.oden.utexas.edu/channel2015/data/LM_Channel_5200_mean_stdev.dat"
localfile = joinpath(@__DIR__, "data", "vel_stdev.dat")
Downloads.download(web_datafile_path, localfile)
# We skip 72 lines (header) and 40 (laminar layer)
data_stdev_velocity = readdlm(localfile, skipstart = 112)

# We extract the required info for this problem
u_star_obs = 4.14872e-02 # add noise later
z0 = FT(0.0001)
κ = 0.4

# turn u into distributions
u = data_mean_velocity[:, 3] * u_star_obs
z = data_mean_velocity[:, 1]
u = u[1:(length(u) - 1)] # filter out last element because σᵤ is only of length 727, not 728
z = z[1:(length(z) - 1)]

σᵤ = data_stdev_velocity[:, 4] * u_star_obs
dist_u = Array{Normal{Float64}}(undef, length(u))
for i in 1:length(u)
    dist_u[i] = Normal(u[i], σᵤ[i])
end

# u(z) = u^* / κ * log(z / z0)
function physical_model(parameters, inputs)
    κ = parameters[1] # this is the parameter being updated by the EKP iterations
    (; u_star_obs, z, z0) = inputs
    u_profile = u_star_obs ./ κ .* log.(z ./ z0)
    return u_profile
end

function G(parameters, inputs, u_profile = nothing)
    if isnothing(u_profile)
        u_profile = physical_model(parameters, inputs)
    end
    return [maximum(u_profile) - minimum(u_profile), mean(u_profile)]
end

Γ = 0.0001 * I
η_dist = MvNormal(zeros(2), Γ)
noisy_u_profile = [rand(dist_u[i]) for i in 1:length(u)]
y = G(nothing, nothing, noisy_u_profile)

parameters = (; κ)
inputs = (; u_star_obs, z, z0)
# y = G(parameters, inputs) .+ rand(η_dist)

# Assume that users have prior knowledge of approximate truth.
# (e.g. via physical models / subset of obs / physical laws.)
prior_u1 = constrained_gaussian("κ", 0.35, 0.25, 0, Inf);
prior = combine_distributions([prior_u1])

# Set up the initial ensembles
N_ensemble = 5;
N_iterations = 10;

rng_seed = 41
rng = Random.MersenneTwister(rng_seed)
initial_ensemble = EKP.construct_initial_ensemble(rng, prior, N_ensemble);

# Define EKP and run iterative solver for defined number of iterations
ensemble_kalman_process = EKP.EnsembleKalmanProcess(initial_ensemble, y, Γ, Inversion(); rng = rng)

for n in 1:N_iterations
    params_i = get_ϕ_final(prior, ensemble_kalman_process)
    G_ens = hcat([G(params_i[:, m], inputs) for m in 1:N_ensemble]...)
    EKP.update_ensemble!(ensemble_kalman_process, G_ens)
end

# The mean value of the final ensemble for the parameter of interest, κ, reflects the "truth" within some degree of
# uncertainty that we can quantify from the elements of `final_ensemble`.
final_ensemble = get_ϕ_final(prior, ensemble_kalman_process)
mean(final_ensemble[1, :]) # [param, ens_no]


ENV["GKSwstype"] = "nul"
zrange = z
plot(zrange, noisy_u_profile, c = :black, label = "Truth", linewidth = 2, legend = :bottomright)
plot!(zrange, physical_model(parameters, inputs), c = :green, label = "Model truth", linewidth = 2)# reshape to convert from vector to matrix)
plot!(
zrange,
[physical_model(get_ϕ(prior, ensemble_kalman_process, 1)[:, i]..., inputs) for i in 1:N_ensemble],
c = :red,
label = reshape(vcat(["Initial ensemble"], ["" for i in 1:(N_ensemble - 1)]), 1, N_ensemble), # reshape to convert from vector to matrix
)
plot!(
zrange,
[physical_model(final_ensemble[:, i]..., inputs) for i in 1:N_ensemble],
c = :blue,
label = reshape(vcat(["Final ensemble"], ["" for i in 1:(N_ensemble - 1)]), 1, N_ensemble),
)
xlabel!("Z")
ylabel!("U")
png("profile_plot")
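A possible way to run the new script end to end and inspect the calibrated constant, assuming the example environment above is active, the machine can reach the data URLs, and the path is relative to the repository root (a usage sketch, not part of the commit):

include(joinpath("examples", "SurfaceFluxExample", "alt_kappa_calibration.jl"))

# compare the ensemble-mean estimate of κ with the conventional value of ≈ 0.4
κ_estimate = mean(final_ensemble[1, :])
println("calibrated κ ≈ ", round(κ_estimate, digits = 3))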
