diff --git a/Dockerfile b/Dockerfile
index 8c76e27..6751394 100644
--- a/Dockerfile
+++ b/Dockerfile
@@ -9,9 +9,9 @@ RUN mkdir -p /assets/ && cd /assets && \
curl -OL https://downloads.datastax.com/enterprise/cqlsh-astra.tar.gz && \
tar -xzf ./cqlsh-astra.tar.gz && \
rm ./cqlsh-astra.tar.gz && \
- curl -OL https://archive.apache.org/dist/spark/spark-3.5.2/spark-3.5.2-bin-hadoop3-scala2.13.tgz && \
- tar -xzf ./spark-3.5.2-bin-hadoop3-scala2.13.tgz && \
- rm ./spark-3.5.2-bin-hadoop3-scala2.13.tgz
+ curl -OL https://archive.apache.org/dist/spark/spark-3.5.3/spark-3.5.3-bin-hadoop3-scala2.13.tgz && \
+ tar -xzf ./spark-3.5.3-bin-hadoop3-scala2.13.tgz && \
+ rm ./spark-3.5.3-bin-hadoop3-scala2.13.tgz
RUN apt-get update && apt-get install -y openssh-server vim python3 --no-install-recommends && \
rm -rf /var/lib/apt/lists/* && \
@@ -44,7 +44,7 @@ RUN chmod +x ./get-latest-maven-version.sh && \
rm -rf "$USER_HOME_DIR/.m2"
# Add all migration tools to path
-ENV PATH="${PATH}:/assets/dsbulk/bin/:/assets/cqlsh-astra/bin/:/assets/spark-3.5.2-bin-hadoop3-scala2.13/bin/"
+ENV PATH="${PATH}:/assets/dsbulk/bin/:/assets/cqlsh-astra/bin/:/assets/spark-3.5.3-bin-hadoop3-scala2.13/bin/"
EXPOSE 22
diff --git a/README.md b/README.md
index fb00908..3a209e1 100644
--- a/README.md
+++ b/README.md
@@ -7,7 +7,7 @@
Migrate and Validate Tables between Origin and Target Cassandra Clusters.
-> :warning: Please note this job has been tested with spark version [3.5.2](https://archive.apache.org/dist/spark/spark-3.5.2/)
+> :warning: Please note this job has been tested with Spark version [3.5.3](https://archive.apache.org/dist/spark/spark-3.5.3/)
## Install as a Container
- Get the latest image that includes all dependencies from [DockerHub](https://hub.docker.com/r/datastax/cassandra-data-migrator)
@@ -18,10 +18,10 @@ Migrate and Validate Tables between Origin and Target Cassandra Clusters.
### Prerequisite
- Install **Java11** (minimum) as Spark binaries are compiled with it.
-- Install Spark version [`3.5.2`](https://archive.apache.org/dist/spark/spark-3.5.2/spark-3.5.2-bin-hadoop3-scala2.13.tgz) on a single VM (no cluster necessary) where you want to run this job. Spark can be installed by running the following: -
+- Install Spark version [`3.5.3`](https://archive.apache.org/dist/spark/spark-3.5.3/spark-3.5.3-bin-hadoop3-scala2.13.tgz) on a single VM (no cluster necessary) where you want to run this job. Spark can be installed by running the following:
```
-wget https://archive.apache.org/dist/spark/spark-3.5.2/spark-3.5.2-bin-hadoop3-scala2.13.tgz
-tar -xvzf spark-3.5.2-bin-hadoop3-scala2.13.tgz
+wget https://archive.apache.org/dist/spark/spark-3.5.3/spark-3.5.3-bin-hadoop3-scala2.13.tgz
+tar -xvzf spark-3.5.3-bin-hadoop3-scala2.13.tgz
```
> :warning: If the above Spark and Scala version is not properly installed, you'll then see a similar exception like below when running the CDM jobs,
diff --git a/RELEASE.md b/RELEASE.md
index 8229bf7..e3b604e 100644
--- a/RELEASE.md
+++ b/RELEASE.md
@@ -1,4 +1,7 @@
# Release Notes
+## [4.4.2] - 2024-10-TBD
+- Upgraded to use Spark `3.5.3`.
+
## [4.4.1] - 2024-09-20
- Added two new codecs `STRING_BLOB` and `ASCII_BLOB` to allow migration from `TEXT` and `ASCII` fields to `BLOB` fields. These codecs can also be used to convert `BLOB` to `TEXT` or `ASCII`, but in such cases the `BLOB` value must be TEXT based in nature & fit within the applicable limits.
diff --git a/pom.xml b/pom.xml
index d1101d9..92cc69f 100644
--- a/pom.xml
+++ b/pom.xml
@@ -10,7 +10,7 @@
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<scala.version>2.13.14</scala.version>
<scala.main.version>2.13</scala.main.version>
-	<spark.version>3.5.2</spark.version>
+	<spark.version>3.5.3</spark.version>
<connector.version>3.5.1</connector.version>
<cassandra.version>5.0-rc1</cassandra.version>
<junit.version>5.9.1</junit.version>
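Since this bump touches the same `spark-3.5.x-bin-hadoop3-scala2.13` string in several Dockerfile lines (the download URL, the two `tar`/`rm` steps, and the `PATH` entry), one possible follow-up is to pin the version once with a build argument. This is only a sketch, not part of this PR; the `ARG` names `SPARK_VERSION` and `SPARK_DIST` are illustrative.

```dockerfile
# Hypothetical refactor: declare the Spark version once as a build arg,
# so a future bump is a one-line change.
ARG SPARK_VERSION=3.5.3
ARG SPARK_DIST=spark-${SPARK_VERSION}-bin-hadoop3-scala2.13

# Download, unpack, and clean up the Spark distribution using the arg.
RUN mkdir -p /assets/ && cd /assets && \
    curl -OL https://archive.apache.org/dist/spark/spark-${SPARK_VERSION}/${SPARK_DIST}.tgz && \
    tar -xzf ./${SPARK_DIST}.tgz && \
    rm ./${SPARK_DIST}.tgz

# ARG values are in scope at build time, so the expanded path is baked
# into the image's PATH.
ENV PATH="${PATH}:/assets/${SPARK_DIST}/bin/"
```

A matching `<spark.version>` property already centralizes this in pom.xml; the `ARG` would do the same for the container build.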