Currently, the SageMaker Python SDK supports local mode:
```python
import numpy
import sagemaker
from sagemaker.amazon.amazon_estimator import get_image_uri
from sagemaker.mxnet import MXNetModel

sess = sagemaker.Session()

model_location = 's3://mybucket/my_model.tar.gz'
code_location = 's3://mybucket/sourcedir.tar.gz'
image_url = get_image_uri(sess.boto_region_name, 'image', repo_version="latest")

s3_model = MXNetModel(model_data=model_location,
                      role='SageMakerRole',
                      image=image_url,
                      entry_point='mnist.py',
                      source_dir=code_location)

# 'local' runs the serving container on this machine instead of a hosted endpoint
predictor = s3_model.deploy(initial_instance_count=1, instance_type='local')

data = numpy.zeros(shape=(1, 1, 28, 28))
predictor.predict(data)

# Tear down the endpoint container and delete the corresponding endpoint configuration
predictor.delete_endpoint()

# Delete the model
predictor.delete_model()
```
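For what it's worth, local mode there runs the inference container via Docker on the machine where the SDK runs (so Docker must be installed), and swapping `instance_type='local'` for a real instance type such as `'ml.m4.xlarge'` deploys the same code to a hosted endpoint.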
Right now, this SDK (sagemaker-spark) forces us to create an endpoint as soon as we define a model. Is there any plan to support local mode for inference?
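To illustrate, here is roughly what the flow looks like today through the sagemaker_pyspark bindings. This is a minimal sketch, not a verified snippet: the estimator choice, role name, `k`/feature-dim values, and the `training_df`/`test_df` DataFrames are placeholders.

```python
from pyspark.sql import SparkSession
from sagemaker_pyspark import IAMRole, classpath_jars
from sagemaker_pyspark.algorithms import KMeansSageMakerEstimator

# Spark needs the sagemaker-spark jars on its classpath
spark = (SparkSession.builder
         .config("spark.driver.extraClassPath", ":".join(classpath_jars()))
         .getOrCreate())

kmeans = KMeansSageMakerEstimator(
    sagemakerRole=IAMRole("SageMakerRole"),
    trainingInstanceType="ml.m4.xlarge",
    trainingInstanceCount=1,
    # The endpoint settings only accept real ml.* instance types --
    # there is no 'local' option analogous to the Python SDK's local mode.
    endpointInstanceType="ml.m4.xlarge",
    endpointInitialInstanceCount=1)
kmeans.setK(10)
kmeans.setFeatureDim(784)

# fit() trains on SageMaker and, by default, creates the hosted
# endpoint as soon as the resulting model object is constructed.
model = kmeans.fit(training_df)

# transform() sends rows to that hosted endpoint for inference.
predictions = model.transform(test_df)
```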
Hi @miaekim,
sagemaker-spark does not provide hosting/inference in local mode, nor does it support other SageMaker services in local mode the way the Python SDK does.
Got it. Please update this issue when you plan to implement it :)