
Releases: redpanda-data/connect

v4.21.0

08 Sep 16:21
081144f

For installation instructions check out the getting started guide.

Added

  • Fields client_id and rack_id added to the kafka_franz input and output (see the example after this list).
  • New experimental command processor.
  • Parameter no_cache added to the file and env Bloblang functions.
  • New file_rel function added to Bloblang.
  • Field endpoint_params added to the oauth2 section of HTTP client components.
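
A minimal sketch of the new kafka_franz fields, assuming a typical consumer setup; the broker address, topic and group names are placeholders:

```yaml
input:
  kafka_franz:
    seed_brokers: [ "kafka-0:9092" ] # placeholder broker address
    topics: [ "orders" ]             # placeholder topic
    consumer_group: "orders_group"   # placeholder group
    client_id: "benthos_ingest"      # new in v4.21.0
    rack_id: "rack-a"                # new in v4.21.0
```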

Fixed

  • Comments are now allowed in single root and directly imported Bloblang mappings.
  • The azure_blob_storage input no longer adds blob_storage_content_type and blob_storage_content_encoding metadata values as string pointer types, and instead adds these values as string types only when they are present.
  • The http_server input now returns a more appropriate 503 service unavailable status code during shutdown instead of the previous 404 status.
  • Fixed a potential panic when closing a pusher output that was never initialised.
  • The sftp output now reconnects upon being disconnected by the Azure idle timeout.
  • The switch output now produces error logs when messages fail to pass at least one case while strict_mode is enabled. Previously these rejected messages could be re-processed in a loop without any logs, depending on the config. An inaccuracy in the documentation has also been fixed in order to clarify behaviour when strict mode is not enabled (see the example after this list).
  • The log processor fields_mapping field should no longer reject metadata queries using @ syntax.
  • Fixed an issue where heavily utilised streams with nested resource based outputs could lock up under heavy resource-mutating traffic on the streams mode REST API.
  • The Bloblang zip method no longer produces values that yield an "Unknown data type".
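
A minimal sketch of a switch output with strict_mode enabled, as referenced in the note above; the output targets and check expressions are placeholders:

```yaml
output:
  switch:
    strict_mode: true # messages that match no case are rejected and now produce error logs
    cases:
      - check: this.type == "article"
        output:
          kafka_franz:
            seed_brokers: [ "kafka-0:9092" ] # placeholder broker address
            topic: "articles"
      - check: this.type == "comment"
        output:
          kafka_franz:
            seed_brokers: [ "kafka-0:9092" ]
            topic: "comments"
```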

The full change log can be found here.

v4.20.0

22 Aug 20:32
8d88531

For installation instructions check out the getting started guide.

Added

  • The amqp_1 input now supports anonymous SASL authentication.
  • New JWT Bloblang methods parse_jwt_es256, parse_jwt_es384, parse_jwt_es512, parse_jwt_rs256, parse_jwt_rs384, parse_jwt_rs512, sign_jwt_es256, sign_jwt_es384 and sign_jwt_es512 added.
  • The csv-safe input codec now supports custom delimiters with the syntax csv-safe:x (see the example after this list).
  • The open_telemetry_collector tracer now supports secure connections, enabled via the secure field.
  • Function v0_msg_exists_meta added to the javascript processor.
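
A minimal sketch of the csv-safe codec with a custom delimiter, following the csv-safe:x syntax mentioned above; the file path and the pipe delimiter are placeholders:

```yaml
input:
  file:
    paths: [ "./data/records.psv" ] # placeholder path
    codec: "csv-safe:|"             # the character after the colon is used as the delimiter
```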

Fixed

  • Fixed an issue where saturated output resources could panic under intense CRUD activity.
  • The config linter no longer raises issues with codec fields containing colons within their arguments.
  • The elasticsearch output should no longer fail to send basic authentication passwords; this fixes a regression introduced in v4.19.0 (see the example after this list).
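
For reference, a minimal sketch of an elasticsearch output with basic authentication; the URL, index and credentials are placeholders:

```yaml
output:
  elasticsearch:
    urls: [ "http://localhost:9200" ] # placeholder URL
    index: "events"                   # placeholder index
    basic_auth:
      enabled: true
      username: "elastic"             # placeholder credentials
      password: "${ES_PASSWORD}"
```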

The full change log can be found here.

v4.19.0

17 Aug 20:27
a8e75b0

For installation instructions check out the getting started guide.

Added

  • Field topics_pattern added to the pulsar input.
  • Both the schema_registry_encode and schema_registry_decode processors now support protobuf schemas.
  • Both the schema_registry_encode and schema_registry_decode processors now support references for AVRO and PROTOBUF schemas.
  • New Bloblang method zip (see the example after this list).
  • New Bloblang int8, int16, uint8, uint16, float32 and float64 methods.
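
A minimal sketch of the new zip method alongside one of the sized numeric casts, wrapped in a mapping processor; the field names are placeholders:

```yaml
pipeline:
  processors:
    - mapping: |
        # Pair two arrays element-wise, e.g. ["a","b"].zip([1,2]) yields [["a",1],["b",2]]
        root.pairs = this.keys.zip(this.values)
        # Coerce a numeric field into an unsigned 16-bit integer
        root.port = this.port.uint16()
```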

Fixed

  • Errors encountered by the gcp_pubsub output should now present more specific logs.
  • The sarama client library underlying the kafka input and output has been upgraded to v1.40.0, now published at the new module path github.com/IBM/sarama.
  • The CUE schema for the switch processor now correctly reflects that it takes a list of clauses.
  • Fixed the CUE schema for fields that take a 2d-array such as workflow.order.
  • The snowflake_put output has been added back to 32-bit ARM builds since the build incompatibilities have been resolved.
  • The snowflake_put output and the sql_* components no longer trigger a panic when running on a readonly file system with the snowflake driver. This driver still requires access to write temporary files somewhere, which can be configured via the Go TMPDIR environment variable. Details here.
  • The http_server input and output now follow the same multiplexer rules regardless of whether the general http server block or a custom endpoint is used.
  • Config linting should now respect fields sourced via a merge key (<<) (see the example after this list).
  • The lint subcommand should now lint config files pointed to via -r/--resources flags.

Changed

  • The snowflake_put output is now beta.
  • Endpoints specified by http_server components, whether using the general http server block or their own custom server addresses, are no longer treated as path prefixes unless the path ends with a slash (/), in which case all extensions of the path will match. This corrects a behavioural change introduced in v4.14.0 (see the example after this list).
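
A minimal sketch of the two path forms described above; the address and paths are placeholders:

```yaml
input:
  http_server:
    address: "0.0.0.0:8080"
    path: /ingest     # exact match only: /ingest/extra no longer matches
    # path: /ingest/  # trailing slash: matches the path and any extension of it
```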

The full change log can be found here.

v4.18.0

02 Jul 11:01
be270fc

For installation instructions check out the getting started guide.

Added

  • Field logger.level_name added for customising the name of the log level field in the JSON format (see the example after this list).
  • Methods sign_jwt_rs256, sign_jwt_rs384 and sign_jwt_rs512 added to Bloblang.
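
A minimal sketch of logger.level_name, assuming it renames the level key of JSON-formatted logs; the chosen name is a placeholder:

```yaml
logger:
  level: INFO
  format: json
  level_name: severity # logs carry {"severity":"info",...} rather than {"level":"info",...}
```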

Fixed

  • HTTP components no longer ignore proxy_url settings when OAuth2 is set.
  • The PATCH verb for the streams mode REST API no longer fails to patch over newer components implemented with the latest plugin APIs.
  • The nats_jetstream input no longer fails for configs that set bind to true and do not specify both a stream and durable together.
  • The mongodb processor and output no longer ignore the upsert field (see the example after this list).
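
A minimal sketch of an upserting mongodb output; the connection string, database, collection and mappings are placeholders:

```yaml
output:
  mongodb:
    url: "mongodb://localhost:27017" # placeholder connection string
    database: "app"
    collection: "users"
    operation: update-one
    upsert: true                     # no longer ignored
    filter_map: |
      root._id = this.id
    document_map: |
      # update documents require atomic operators such as $set
      root = {"$set": this}
```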

Changed

  • The old parquet processor (now superseded by parquet_encode and parquet_decode) has been removed from 32-bit ARM builds due to build incompatibilities.
  • The snowflake_put output has been removed from 32-bit ARM builds due to build incompatibilities.
  • Plugin API: The (*BatchError).WalkMessages method has been deprecated in favour of WalkMessagesIndexedBy.

The full change log can be found here.

v4.17.0

13 Jun 09:59
ada9cc9

For installation instructions check out the getting started guide.

Added

  • The dynamic input and output have new endpoints /input/{id}/uptime and /output/{id}/uptime respectively for obtaining the uptime of a given input/output.
  • Field wait_time_seconds added to the aws_sqs input (see the example after this list).
  • Field timeout added to the gcp_cloud_storage output.
  • All NATS components now set the name of each connection to the component label when specified.
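
A minimal sketch of an aws_sqs input using the new long-polling field; the queue URL and region are placeholders:

```yaml
input:
  aws_sqs:
    url: "https://sqs.eu-west-1.amazonaws.com/123456789012/my-queue" # placeholder queue URL
    region: "eu-west-1"
    wait_time_seconds: 20 # long-poll for up to 20 seconds per receive call
```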

Fixed

  • Restored message ordering support to the gcp_pubsub output. This issue was introduced in 4.16.0 as a result of #1836.
  • Specifying structured metadata values (non-strings) in unit test definitions should no longer cause linting errors (see the example after this list).
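
A minimal sketch of a config unit test with a structured (non-string) metadata value, assuming the standard tests block layout; names and values are placeholders:

```yaml
tests:
  - name: structured metadata example
    target_processors: /pipeline/processors
    input_batch:
      - content: '{"id":1}'
        metadata:
          attributes:          # structured value, no longer a lint error
            source: "sensor-a"
            priority: 3
    output_batches:
      - - content_equals: '{"id":1}'
```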

Changed

  • The nats input default value of prefetch_count has been increased from 32 to a more appropriate 524288.

The full change log can be found here.

v4.16.0

28 May 11:21
269f588

For installation instructions check out the getting started guide.

Added

  • Fields auth.user_jwt and auth.user_nkey_seed added to all NATS components.
  • New Bloblang function ulid(encoding, random_source) for generating Universally Unique Lexicographically Sortable Identifiers (ULIDs) (see the example after this list).
  • Field skip_on added to the cached processor.
  • Field nak_delay added to the nats input.
  • New splunk_hec output.
  • Plugin API: New NewMetadataExcludeFilterField function and accompanying FieldMetadataExcludeFilter method added.
  • The pulsar input and output are now included in the main distribution of Benthos again.
  • The gcp_pubsub input now adds the metadata field gcp_pubsub_delivery_attempt to messages when dead lettering is enabled.
  • The aws_s3 input now adds s3_version_id metadata to versioned messages.
  • All compress/decompress components (codecs, bloblang methods, processors) now support pgzip.
  • Field connection.max_retries added to the websocket input.
  • New sentry_capture processor.
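
A minimal sketch of the new ulid function inside a mapping processor; calling it with no arguments, and therefore relying on a default encoding and random source, is an assumption based on the signature above:

```yaml
pipeline:
  processors:
    - mapping: |
        root = this
        # Attach a sortable unique identifier (default encoding/random source assumed)
        root.id = ulid()
```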

Fixed

  • The open_telemetry_collector tracer option no longer blocks service start up when the endpoints cannot be reached, and instead manages connections in the background.
  • The gcp_pubsub output should see significant performance improvements due to a client library upgrade.
  • The stream builder APIs should now follow logger.file config fields.
  • The experimental cue format in the cli list subcommand no longer introduces infinite recursion for #Processors.
  • Config unit tests no longer execute linting rules for missing env var interpolations.

The full change log can be found here.

v4.15.0

05 May 16:16
8277b60

For installation instructions check out the getting started guide.

Added

  • Flag --skip-env-var-check added to the lint subcommand; this disables the new linting behaviour where environment variable interpolations without defaults raise linting errors when the variable is not defined.
  • The kafka_franz input now supports explicit partitions in the field topics.
  • The kafka_franz input now supports batching.
  • New metadata Bloblang function for batch-aware structured metadata queries (see the example after this list).
  • Go API: Running the Benthos CLI with a context set with a deadline now triggers graceful termination before the deadline is reached.
  • Go API: New public/service/servicetest package added for functions useful for testing custom Benthos builds.
  • New lru and ttlru in-memory caches.
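
A minimal sketch of the metadata function in a mapping processor; the metadata key and target fields are placeholders:

```yaml
pipeline:
  processors:
    - mapping: |
        # Fetch a single metadata value, preserving its structure rather than coercing to a string
        root.partition_key = metadata("kafka_key")
        # With no argument the function returns an object containing all metadata
        root.all_meta = metadata()
```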

Fixed

  • Provide msgpack plugins through public/components/msgpack.
  • The kafka_franz input should no longer commit offsets one behind the next during partition yielding.
  • The streams mode HTTP API should no longer route requests to /streams/<stream-ID> to the /streams handler. This issue was introduced in v4.14.0.

The full change log can be found here.

v4.14.0

25 Apr 10:59
891fcb8

For installation instructions check out the getting started guide.

Added

  • The -e/--env-file cli flag can now be specified multiple times.
  • New studio pull cli subcommand for running Benthos Studio session deployments.
  • Metadata field kafka_tombstone_message added to the kafka and kafka_franz inputs.
  • Method SetEnvVarLookupFunc added to the stream builder API.
  • The discord input and output now use the official chat client API and no longer rely on poll-based HTTP requests; this should result in more efficient and less error-prone behaviour.
  • New Bloblang timestamp methods ts_add_iso8601 and ts_sub_iso8601.
  • All SQL components now support the trino driver.
  • New input codec csv-safe.
  • Added base64rawurl scheme to both the encode and decode Bloblang methods.
  • New find_by and find_all_by Bloblang methods (see the example after this list).
  • New skipbom input codec.
  • New javascript processor.
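
A minimal sketch combining find_all_by, the base64rawurl scheme and ts_add_iso8601 in one mapping processor; the field names and duration are placeholders:

```yaml
pipeline:
  processors:
    - mapping: |
        # Collect every element matching a query rather than a literal value
        root.adults = this.users.find_all_by(user -> user.age >= 18)
        # Encode without padding using the new base64rawurl scheme
        root.token = this.token_raw.encode("base64rawurl")
        # Shift a timestamp forwards by an ISO 8601 duration (created_at assumed to be RFC 3339)
        root.expires_at = this.created_at.ts_add_iso8601("P1D")
```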

Fixed

  • The find_all Bloblang method no longer produces results that are of an unknown type.
  • The find_all and find Bloblang methods no longer fail when the value argument is a field reference.
  • Endpoints specified by HTTP server components using either the general http server block or their own custom server addresses should now be treated as path prefixes. This corrects a behavioural change that was introduced when both respective server options were updated to support path parameters.
  • Prevented a panic caused by using the encrypt_aes and decrypt_aes Bloblang methods with mismatched key/IV lengths.
  • The snowpipe field of the snowflake_put output can now be omitted from the config without raising an error.
  • Batch-aware processors such as mapping and mutation should now report correct error metrics.
  • Running benthos blobl server should no longer panic when a mapping with variable read/writes is executed in parallel.
  • Speculative fix for the cloudwatch metrics exporter rejecting metrics due to the minimum field size of 1 for PutMetricDataInput.MetricData[0].Dimensions[0].Value.
  • The snowflake_put output now prevents silent failures under certain conditions. Details here.
  • Reduced the amount of pre-compilation of Bloblang based linting rules for documentation fields, this should dramatically improve the start up time of Benthos (~1s down to ~200ms).
  • Environment variable interpolations with an empty fallback (${FOO:}) are now valid.
  • Fixed an issue where the mongodb output wasn't using bulk send requests according to batching policies.
  • The amqp_1 input now falls back to accessing Message.Value when the data is empty.

Changed

  • When a config contains environment variable interpolations without a default value (i.e. ${FOO}), a linting error is now emitted if that environment variable is not defined. Shutting down due to linting errors can be disabled with the --chilled cli flag, and variables can be specified with an empty default value (${FOO:}) in order to make the previous behaviour explicit and prevent the new linting error (see the example after this list).
  • The find and find_all Bloblang methods no longer support query arguments as they were incompatible with supporting value arguments. For query based arguments use the new find_by and find_all_by methods.
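
A minimal sketch of the interpolation forms described above; the component and variable names are placeholders:

```yaml
input:
  kafka_franz:
    seed_brokers: [ "${BROKER_ADDR}" ] # lint error if BROKER_ADDR is not defined
    topics: [ "${TOPIC_NAME:orders}" ] # falls back to "orders" when unset
    consumer_group: "${GROUP_ID:}"     # empty fallback: an empty value is explicitly allowed
```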

The full change log can be found here.

v4.13.0

15 Mar 15:21
790e755

For installation instructions check out the getting started guide.

Added

  • New nats_kv processor, input and output.
  • Field partition added to the kafka_franz output, allowing for manual partitioning (see the example after this list).
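
A minimal sketch of manual partitioning with the kafka_franz output; pairing the new partition field with partitioner: manual is an assumption, and the broker, topic and metadata key are placeholders:

```yaml
output:
  kafka_franz:
    seed_brokers: [ "kafka-0:9092" ]     # placeholder broker address
    topic: "orders"                      # placeholder topic
    partitioner: manual                  # assumed requirement for explicit partitions
    partition: '${! meta("partition") }' # per-message partition number taken from metadata
```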

Fixed

  • The broker output with the fan_out_sequential pattern no longer abandons error-blocked in-flight requests until the full shutdown timeout has occurred.
  • The broker input no longer reports itself as unavailable when a child input has intentionally closed.
  • Config unit tests that check for structured data should no longer fail in all cases.
  • The http_server input with a custom address now supports path variables (see the example after this list).
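
A minimal sketch of an http_server input with a custom address and a path variable; the address, path and variable name are placeholders:

```yaml
input:
  http_server:
    address: "0.0.0.0:4195" # custom address, bypassing the general server block
    path: /ingest/{tenant}  # path variable, added to message metadata
```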

The full change log can be found here.

v4.12.1

23 Feb 19:05
c999fe1

For installation instructions check out the getting started guide.

Fixed

  • Fixed a regression bug in the nats components where panics could occur during a flood of messages. This issue was introduced in v4.12.0 (45f785a).

The full change log can be found here.