
Inconsistent behavior of the EncryptionClient with respect to the regular client for range queries past the file length #301

Closed
NathanEckert opened this issue Jun 20, 2024 · 2 comments

Comments

@NathanEckert

Problem:

I have found an inconsistency between the regular client and the encryption client.
With the regular client, a ranged query past the size of the file raises an exception (software.amazon.awssdk.services.s3.model.S3Exception: The requested range is not satisfiable), while the encryption client does not.
The encryption client instead blocks when calling ResponseInputStream<GetObjectResponse>.read().

package com.test.aws;

import java.io.InputStream;
import java.nio.ByteBuffer;
import java.security.KeyFactory;
import java.security.KeyPair;
import java.security.PrivateKey;
import java.security.PublicKey;
import java.security.spec.PKCS8EncodedKeySpec;
import java.security.spec.X509EncodedKeySpec;
import java.time.Duration;
import java.util.Base64;
import org.junit.jupiter.api.Test;
import software.amazon.awssdk.auth.credentials.DefaultCredentialsProvider;
import software.amazon.awssdk.core.ResponseInputStream;
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.http.apache.ApacheHttpClient;
import software.amazon.awssdk.http.async.SdkAsyncHttpClient;
import software.amazon.awssdk.http.nio.netty.NettyNioAsyncHttpClient;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3AsyncClient;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.DeleteObjectRequest;
import software.amazon.awssdk.services.s3.model.GetObjectRequest;
import software.amazon.awssdk.services.s3.model.GetObjectResponse;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;
import software.amazon.encryption.s3.S3EncryptionClient;

public class TestEndOfStreamBehavior {
	private static final Region DEFAULT_REGION = AwsTestUtil.DEFAULT_REGION;
	private static final String BUCKET = AwsTestUtil.AWS_TEST_BUCKET;
	private static final String KEY = "filename.txt";
	private static final byte[] CONTENT = "abcdefghijklmnopqrstuvwxyz0123456789".repeat(4).getBytes();

	/** The RSA key pair to use in client-side encryption tests. */
	protected static final KeyPair KEY_PAIR;


	static {
		final String publicKeyString = "yourPublicKey";
		final String privateKeyString = "yourPrivateKey";
		try {
			final KeyFactory factory = KeyFactory.getInstance("RSA");
			final PublicKey publicKey =
					factory.generatePublic(
							new X509EncodedKeySpec(Base64.getDecoder().decode(publicKeyString.getBytes())));
			final PrivateKey privateKey =
					factory.generatePrivate(
							new PKCS8EncodedKeySpec(Base64.getDecoder().decode(privateKeyString.getBytes())));
			KEY_PAIR = new KeyPair(publicKey, privateKey);
		} catch (Exception e) {
			throw new RuntimeException(e);
		}
	}

	@Test
	void testEndOfStreamBehavior() throws Exception {

		// Pick the client to use, inconsistent behavior between the two
		final S3Client client = getClient(DEFAULT_REGION);
		// final S3Client client = getEncryptionClient(KEY_PAIR, DEFAULT_REGION);

		// Delete the data if it exists
		final DeleteObjectRequest deleteRequest = DeleteObjectRequest.builder()
				.bucket(BUCKET)
				.key(KEY)
				.build();

		client.deleteObject(deleteRequest);

		// Upload the data
		final PutObjectRequest uploadRequest =
				PutObjectRequest.builder().bucket(BUCKET).key(KEY).build();
		client.putObject(uploadRequest, RequestBody.fromBytes(CONTENT));
		// wait for the data to be uploaded
		Thread.sleep(Duration.ofSeconds(5));

		// Actual test

		final GetObjectRequest downloadRequest =
				GetObjectRequest.builder()
						.bucket(BUCKET)
						.key(KEY)
						.range("bytes=144-160") // files ends at 143
						.build();

		// this throws with the regular client (expected behavior) but not with the encryption client
		final InputStream stream = client.getObject(downloadRequest);

		final ByteBuffer buffer = ByteBuffer.allocate(16);
		final byte[] underlyingBuffer = buffer.array();
		final int capacity = buffer.capacity();

		// with the encryption client this read() blocks instead of throwing or returning -1
		stream.read(underlyingBuffer, 0, capacity);
	}



	public static S3Client getEncryptionClient(final KeyPair keyPair, final Region region) {

		return S3EncryptionClient.builder()
				.rsaKeyPair(keyPair)
				.enableLegacyUnauthenticatedModes(true)
				.wrappedClient(getClient(region))
				.wrappedAsyncClient(getAsyncClient(region))
				.build();
	}


	public static S3Client getClient(final Region region) {

		return S3Client.builder()
				.region(region)
				.credentialsProvider(DefaultCredentialsProvider.create())
				.httpClientBuilder(
						ApacheHttpClient.builder().maxConnections(128) // Default is 50
				)
				.build();
	}


	public static S3AsyncClient getAsyncClient(final Region region) {

		final SdkAsyncHttpClient nettyHttpClient =
				NettyNioAsyncHttpClient.builder().maxConcurrency(100).build();

		return S3AsyncClient.builder()
				.region(region)
				.credentialsProvider(DefaultCredentialsProvider.create())
				.httpClient(nettyHttpClient)
				.build();
	}
}

Workaround

I am not blocked by this issue, as I can check my range against the object length beforehand (see the sketch below). I just wanted to report it, since it is a change of behavior relative to the AWS SDK v1.
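For reference, a minimal sketch of that pre-check, assuming the same bucket/key values and client helpers as in the reproduction above. Note that headObject reports the size of the stored object, which for client-side encrypted objects includes encryption overhead, so substitute the known plaintext length if you have it.

import java.io.InputStream;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.GetObjectRequest;
import software.amazon.awssdk.services.s3.model.HeadObjectRequest;

public final class RangeCheckExample {

	/** Issues a ranged GET only after clamping [start, endInclusive] to the object length. */
	static InputStream getClampedRange(final S3Client client, final String bucket, final String key,
			final long start, final long endInclusive) {
		// headObject returns the stored object size; for client-side encrypted objects this
		// includes encryption overhead, so prefer the known plaintext length where available.
		final long objectLength = client.headObject(
				HeadObjectRequest.builder().bucket(bucket).key(key).build()).contentLength();
		if (start >= objectLength) {
			throw new IllegalArgumentException(
					"Range start " + start + " is past the end of the object (length " + objectLength + ")");
		}
		final long clampedEnd = Math.min(endInclusive, objectLength - 1);
		final GetObjectRequest request = GetObjectRequest.builder()
				.bucket(bucket)
				.key(key)
				.range("bytes=" + start + "-" + clampedEnd)
				.build();
		return client.getObject(request);
	}
}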

@NathanEckert
Author

This is the same issue as #200

@texastony
Contributor

Thanks for sharing. As you have already noted, #200 describes this issue.
Closing as duplicate.
