Merge pull request #1454 from rstudio/keras3-1.0
Prepare CRAN release of keras3 1.0
t-kalinowski authored May 21, 2024
2 parents 9611031 + 81d920c commit dad84ae
Showing 731 changed files with 3,025 additions and 2,532 deletions.
2 changes: 1 addition & 1 deletion DESCRIPTION
@@ -1,7 +1,7 @@
Package: keras3
Type: Package
Title: R Interface to 'Keras'
-Version: 0.2.0.9000
+Version: 1.0.0
 Authors@R: c(
     person("Tomasz", "Kalinowski", role = c("aut", "cph", "cre"),
            email = "[email protected]"),
15 changes: 14 additions & 1 deletion NEWS.md
@@ -1,10 +1,23 @@
-# keras3 (development version)
+# keras3 1.0.0

- Chains of `layer_*` calls with `|>` now instantiate layers in the
same order as `%>%` pipe chains: left-hand-side first (#1440).

- `iterate()`, `iter_next()` and `as_iterator()` are now reexported from reticulate.


User facing changes with upstream Keras v3.3.3:

- new functions: `op_slogdet()`, `op_psnr()`

- `clone_model()` gains new args: `call_function`, `recursive`
Updated example usage.

- `op_ctc_decode()` strategy argument has new default: `"greedy"`.
Updated docs.

- `loss_ctc()` default name fixed, changed to `"ctc"`

User facing changes with upstream Keras v3.3.2:

- new function: `op_ctc_decode()`
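To illustrate the first NEWS entry above (a `|>` chain now instantiates layers left-hand-side first, matching `%>%`), a minimal sketch; shapes and layer sizes are illustrative:

```r
library(keras3)

inputs <- keras_input(shape = c(16))
# Layers are now created left to right, so auto-generated layer
# names follow chain order: "dense" first, then "dense_1".
outputs <- inputs |>
  layer_dense(units = 8) |>
  layer_dense(units = 1)
model <- keras_model(inputs, outputs)
```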
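A similar sketch for the reexported iterator helpers. The `tfdatasets::range_dataset()` source here is just one convenient iterable and is assumed installed; any Python iterable works:

```r
library(keras3)

ds <- tfdatasets::range_dataset(from = 1, to = 4)
it <- as_iterator(ds)
iter_next(it)   # first element; returns NULL once exhausted
iterate(it)     # collect the remaining elements into a list
```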
4 changes: 2 additions & 2 deletions R/applications.R
@@ -2642,7 +2642,7 @@ function (input_shape = NULL, alpha = 1, include_top = TRUE,
#'
#' # Reference
#' - [Searching for MobileNetV3](
-#' https://arxiv.org/pdf/1905.02244.pdf) (ICCV 2019)
+#' https://arxiv.org/pdf/1905.02244) (ICCV 2019)
#'
#' The following table describes the performance of MobileNets v3:
#' ------------------------------------------------------------------------
@@ -2788,7 +2788,7 @@ function (input_shape = NULL, alpha = 1, minimalistic = FALSE,
#'
#' # Reference
#' - [Searching for MobileNetV3](
-#' https://arxiv.org/pdf/1905.02244.pdf) (ICCV 2019)
+#' https://arxiv.org/pdf/1905.02244) (ICCV 2019)
#'
#' The following table describes the performance of MobileNets v3:
#' ------------------------------------------------------------------------
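Both hunks above only trim the `.pdf` suffix from the MobileNetV3 reference link. For context, a hedged usage sketch of the documented application (downloads ImageNet weights on first call):

```r
library(keras3)

# MobileNetV3-Large classifier with pretrained ImageNet weights;
# all other arguments are left at their documented defaults.
model <- application_mobilenet_v3_large(weights = "imagenet")
```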
10 changes: 5 additions & 5 deletions R/losses.R
@@ -131,7 +131,7 @@ function (y_true, y_pred, from_logits = FALSE, label_smoothing = 0,
#' Computes focal cross-entropy loss between true labels and predictions.
#'
#' @description
-#' According to [Lin et al., 2018](https://arxiv.org/pdf/1708.02002.pdf), it
+#' According to [Lin et al., 2018](https://arxiv.org/pdf/1708.02002), it
#' helps to apply a focal factor to down-weight easy examples and focus more on
#' hard examples. By default, the focal tensor is computed as follows:
#'
@@ -157,7 +157,7 @@ function (y_true, y_pred, from_logits = FALSE, label_smoothing = 0,
#' when `from_logits=TRUE`) or a probability (i.e, value in `[0., 1.]` when
#' `from_logits=FALSE`).
#'
-#' According to [Lin et al., 2018](https://arxiv.org/pdf/1708.02002.pdf), it
+#' According to [Lin et al., 2018](https://arxiv.org/pdf/1708.02002), it
#' helps to apply a "focal factor" to down-weight easy examples and focus more
#' on hard examples. By default, the focal tensor is computed as follows:
#'
@@ -274,13 +274,13 @@ function (y_true, y_pred, from_logits = FALSE, label_smoothing = 0,
#' @param alpha
#' A weight balancing factor for class 1, default is `0.25` as
#' mentioned in reference [Lin et al., 2018](
-#' https://arxiv.org/pdf/1708.02002.pdf). The weight for class 0 is
+#' https://arxiv.org/pdf/1708.02002). The weight for class 0 is
#' `1.0 - alpha`.
#'
#' @param gamma
#' A focusing parameter used to compute the focal factor, default is
#' `2.0` as mentioned in the reference
-#' [Lin et al., 2018](https://arxiv.org/pdf/1708.02002.pdf).
+#' [Lin et al., 2018](https://arxiv.org/pdf/1708.02002).
#'
#' @param from_logits
#' Whether to interpret `y_pred` as a tensor of
@@ -450,7 +450,7 @@ function (y_true, y_pred, from_logits = FALSE, label_smoothing = 0,
#' `class_weights`. We expect labels to be provided in a `one_hot`
#' representation.
#'
-#' According to [Lin et al., 2018](https://arxiv.org/pdf/1708.02002.pdf), it
+#' According to [Lin et al., 2018](https://arxiv.org/pdf/1708.02002), it
#' helps to apply a focal factor to down-weight easy examples and focus more on
#' hard examples. The general formula for the focal loss (FL)
#' is as follows:
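The hunks above update the Lin et al. link throughout the focal-loss docs (the rest of each formula sits outside the visible hunks). A minimal sketch of the parameters they describe, with the documented defaults (`alpha = 0.25`, `gamma = 2`) made explicit; toy probability inputs, and note that `alpha` only takes effect when class balancing is enabled:

```r
library(keras3)

y_true <- op_array(rbind(c(0, 1), c(1, 0)))
y_pred <- op_array(rbind(c(0.1, 0.8), c(0.6, 0.3)))
# gamma down-weights easy examples; alpha re-weights class 1
# when apply_class_balancing = TRUE.
loss_binary_focal_crossentropy(y_true, y_pred,
                               apply_class_balancing = TRUE,
                               alpha = 0.25, gamma = 2)
```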
2 changes: 1 addition & 1 deletion R/metrics.R
@@ -3,7 +3,7 @@
#' Computes the binary focal crossentropy loss.
#'
#' @description
-#' According to [Lin et al., 2018](https://arxiv.org/pdf/1708.02002.pdf), it
+#' According to [Lin et al., 2018](https://arxiv.org/pdf/1708.02002), it
#' helps to apply a focal factor to down-weight easy examples and focus more on
#' hard examples. By default, the focal tensor is computed as follows:
#'
35 changes: 20 additions & 15 deletions R/model-training.R
@@ -269,17 +269,20 @@ function (object, x = NULL, y = NULL, ..., batch_size = NULL,
verbose = as_model_verbose_arg),
ignore = "object",
force = "verbose")
-  args[["return_dict"]] <- FALSE
+
+  ## return_dict=TRUE because object$metrics_names returns wrong value
+  ## (e.g., "compile_metrics" instead of "mae")
+  args[["return_dict"]] <- TRUE

   if(inherits(args$x, "tensorflow.python.data.ops.dataset_ops.DatasetV2") &&
      !is.null(args$batch_size))
     stop("batch_size can not be specified with a TF Dataset")

   result <- do.call(object$evaluate, args)
-  if (length(result) > 1L) {
-    result <- as.list(result)
-    names(result) <- object$metrics_names
-  }
+  # if (length(result) > 1L) { ## if return_dict=FALSE
+  #   result <- as.list(result)
+  #   names(result) <- object$metrics_names
+  # }

tfruns::write_run_metadata("evaluation", unlist(result))

@@ -761,11 +764,12 @@ function (object, x, y = NULL, sample_weight = NULL, ...)
   result <- object$test_on_batch(as_array(x),
                                  as_array(y),
                                  as_array(sample_weight), ...,
-                                 return_dict = FALSE)
-  if (length(result) > 1L) {
-    result <- as.list(result)
-    names(result) <- object$metrics_names
-  } else if (is_scalar(result)) {
+                                 return_dict = TRUE)
+  # if (length(result) > 1L) {
+  #   result <- as.list(result)
+  #   names(result) <- object$metrics_names
+  # } else
+  if (is_scalar(result)) {
     result <- result[[1L]]
   }
   result
@@ -824,11 +828,12 @@ function (object, x, y = NULL, sample_weight = NULL, class_weight = NULL)
                                  as_array(y),
                                  as_array(sample_weight),
                                  class_weight = as_class_weight(class_weight),
-                                 return_dict = FALSE)
-  if (length(result) > 1L) {
-    result <- as.list(result)
-    names(result) <- object$metrics_names
-  } else if (is_scalar(result)) {
+                                 return_dict = TRUE)
+  # if (length(result) > 1L) {
+  #   result <- as.list(result)
+  #   names(result) <- object$metrics_names
+  # } else
+  if (is_scalar(result)) {
     result <- result[[1L]]
   }

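The user-visible effect of the `return_dict = TRUE` switch above: `evaluate()`, `test_on_batch()`, and `train_on_batch()` now take result names from the dict Keras itself returns, rather than from `metrics_names`. A sketch with toy data; the exact metric names depend on how the model is compiled:

```r
library(keras3)

model <- keras_model_sequential(input_shape = 4) |>
  layer_dense(units = 1)
model |> compile(optimizer = "adam", loss = "mse", metrics = "mae")

x <- matrix(rnorm(40), ncol = 4)
y <- rnorm(10)
model |> fit(x, y, epochs = 1, verbose = 0)

str(evaluate(model, x, y, verbose = 0))
# Expect a named list like $loss and $mae, not "compile_metrics".
```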
2 changes: 1 addition & 1 deletion R/optimizers-schedules.R
@@ -7,7 +7,7 @@
#' SGDR: Stochastic Gradient Descent with Warm Restarts.
#'
#' For the idea of a linear warmup of our learning rate,
-#' see [Goyal et al.](https://arxiv.org/pdf/1706.02677.pdf).
+#' see [Goyal et al.](https://arxiv.org/pdf/1706.02677).
#'
#' When we begin training a model, we often want an initial increase in our
#' learning rate followed by a decay. If `warmup_target` is an int, this
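The hunk above only trims the Goyal et al. link, but the doc it touches describes a linear warmup followed by cosine decay. A hedged sketch of that usage (argument names as in the `CosineDecay` docs; values illustrative):

```r
library(keras3)

schedule <- learning_rate_schedule_cosine_decay(
  initial_learning_rate = 0,  # warmup starts here ...
  warmup_target = 1e-3,       # ... and ramps linearly to this peak
  warmup_steps = 1000,
  decay_steps = 10000         # cosine decay applied after the warmup
)
optimizer <- optimizer_adam(learning_rate = schedule)
```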
2 changes: 1 addition & 1 deletion docs/404.html

2 changes: 1 addition & 1 deletion docs/LICENSE-text.html
