This project provides comprehensive study materials, code examples, and a robust development environment for preparing for the AWS Certified AI Practitioner (AIF-C01) exam, announced in August 2024. While the primary focus is on the AIF-C01 exam, the project also lays groundwork for the AWS Certified Machine Learning Engineer – Associate (MLA-C01) certification.
Key features of this project:
- Structured learning paths covering all AIF-C01 exam domains
- Supplementary material supporting the MLA-C01 certification
- Hands-on code examples using AWS AI/ML services
- Local development environment with LocalStack for AWS service simulation
- Integration of Python and Clojure for a comprehensive learning experience
- Emphasis on best practices in AI/ML development and responsible AI
Whether you’re an executive looking to understand AI/ML capabilities in AWS or a practitioner aiming for certification, this project provides the resources you need to succeed.
graph TD
A[Start] --> B[direnv allow]
B --> C[nix-shell]
C --> D{Development Path}
D -->|Python| E[poetry shell]
D -->|Clojure| F[lein repl/clj]
D -->|Emacs| G[emacs]
E --> H[Development]
F --> H
G --> H
H --> I[End]
- Enable direnv
- Enter nix-shell
- Choose development path
- Python Development
- Poetry Shell
- AI/ML Libraries
- AWS Integration
- Clojure Development
- REPL Session
- AWS Integration
- Emacs + CIDER
graph TD
A[System] --> B[Nix Shell]
B --> C[direnv]
B --> D[Core Tools]
D --> E[Python Stack]
E --> E1[poetry]
E --> E2[AI/ML libs]
D --> F[Clojure Stack]
F --> F1[leiningen]
F --> F2[REPL]
D --> G[AWS Tools]
G --> G1[aws-cli]
G --> G2[localstack]
C --> I[Environments]
I --> I1[local]
I --> I2[aws]
- Poetry for dependencies
- Virtual environment
- AI/ML libraries
- Leiningen
- REPL-driven development
- Core libraries
# Start LocalStack services
localstack start
# Test AWS access
aws sts get-caller-identity
- Clone this repository
- Run the setup script:
make setup
- Install project dependencies:
make deps
- Initialize the project:
make init
- Choose your profile:
For LocalStack:
make switch-profile-lcl
make localstack-up
For AWS Dev:
make switch-profile-dev
To start exploring the concepts:
- Start the REPL:
make run
- In the REPL, you can require and use the namespaces for each domain:
(require '[aif-c01.d0-setup.environment :as d0])
(d0/check-environment)
(require '[aif-c01.d0-setup.environment :as d0])
(d0/check-aws-credentials)
(require '[aif-c01.d1-fundamentals.basics :as d1])
(d1/explain-ai-term :ml)
(d1/list-ml-types)
(require '[aif-c01.d2-generative-ai.concepts :as d2])
(d2/explain-gen-ai-concept :prompt-engineering)
(d2/list-gen-ai-use-cases)
(require '[aif-c01.d3-foundation-models.applications :as d3])
(d3/describe-rag)
(d3/list-model-selection-criteria)
(require '[aif-c01.d4-responsible-ai.practices :as d4])
(d4/list-responsible-ai-features)
(d4/describe-bias-effects)
(require '[aif-c01.d5-security-compliance.governance :as d5])
(d5/list-aws-security-services)
(d5/describe-data-governance-strategies)
This project uses a Makefile to manage common development tasks. To see all available commands and their descriptions, run:
make help
This will display a list of commands with inline descriptions, making it easy to understand and use the project’s development workflow.
This project supports LocalStack for local development and testing. To use LocalStack:
- Ensure Docker is installed and running on your system.
- Switch to the LocalStack profile:
make switch-profile-lcl
- Start LocalStack:
make localstack-up
- Run the REPL:
make run
- When finished, stop LocalStack:
make localstack-down
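With LocalStack running, SDK calls can be pointed at it instead of AWS. A minimal boto3 sketch, assuming LocalStack’s default edge endpoint http://localhost:4566 (the bucket name is hypothetical):
import boto3

# LocalStack accepts any credentials; these placeholders are fine.
s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:4566",
    region_name="us-east-1",
    aws_access_key_id="test",
    aws_secret_access_key="test",
)
s3.create_bucket(Bucket="aif-c01-local")  # hypothetical bucket name
print([b["Name"] for b in s3.list_buckets()["Buckets"]])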
This project uses Poetry for Python dependency management. The AWS CLI and other Python dependencies are installed within the project’s virtual environment. To use Python or the AWS CLI:
- Activate the Poetry shell:
poetry shell
- Run Python scripts or AWS CLI commands as needed
Example of using boto3 to interact with AWS services:
import boto3

def list_s3_buckets():
    """Return the names of all S3 buckets in the account."""
    s3 = boto3.client('s3')
    response = s3.list_buckets()
    return [bucket['Name'] for bucket in response['Buckets']]

print(list_s3_buckets())
If you encounter issues:
- Ensure your AWS credentials are correctly set up in ~/.aws/credentials or in environment variables.
- For LocalStack issues, check that Docker is running and that ports are not conflicting.
- If REPL startup fails, try running make deps to ensure all dependencies are fetched.
- For Python-related issues, ensure you’re in the Poetry shell (poetry shell) before running commands.
This project includes examples and study materials for the following AWS services relevant to the AIF-C01 exam.
Each service is explored in the context of AI/ML workflows and best practices.
Create a bucket and upload a file:
aws s3 mb s3://aif-c01
aws s3 cp resources/test-image.png s3://aif-c01
List contents of the bucket:
aws s3 ls s3://aif-c01
For more S3 examples, refer to the S3 AWS CLI Examples.
(format "aif-c01-%s" (downcase (or (getenv "USER") (user-login-name))))
Create a bucket and enable versioning:
aws s3 mb s3://$BUCKET
aws s3api put-bucket-versioning --bucket $BUCKET --versioning-configuration Status=Enabled
Upload PDF files to the papers/ prefix:
aws s3 sync resources/papers s3://$BUCKET/papers/ --exclude "*" --include "*.pdf"
List contents of the papers/ prefix:
aws s3 ls s3://$BUCKET/papers/
Upload a new version of a file and list versions:
# Create a markdown file with the content
cat << EOF > example.md
# Example Document
This is a new version of the document with updated content.
## Details
- Filename: 2310.07064.pdf
- Bucket: $BUCKET
- Path: papers/2310.07064.pdf
## Content
New content
EOF
# Convert markdown to PDF
pandoc example.md -o 2310.07064.pdf
# Upload the PDF to S3
aws s3 cp 2310.07064.pdf s3://$BUCKET/papers/
aws s3api list-object-versions \
--bucket "$BUCKET" \
--prefix "papers/" \
--query 'Versions[*].[Key, VersionId, LastModified, Size, ETag, StorageClass, IsLatest]' \
--output json | jq -r '.[] | @tsv'
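The same version listing can be done from Python. A short boto3 sketch, with a hypothetical bucket name standing in for $BUCKET:
import boto3

# List object versions under the papers/ prefix.
s3 = boto3.client("s3")
versions = s3.list_object_versions(Bucket="aif-c01-example", Prefix="papers/")
for v in versions.get("Versions", []):
    print(v["Key"], v["VersionId"], v["IsLatest"])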
- Amazon Bedrock is a fully managed service that provides access to foundation models (FMs) from leading AI companies.
- It offers a single API to work with various FMs for different use cases.
To list available foundation models:
aws bedrock list-foundation-models | jq -r '.modelSummaries[]|.modelId' | head
- Amazon
- AI21 Labs
- Anthropic
- Cohere
- Meta
- Stability AI
To describe a specific base model:
aws bedrock get-foundation-model --model-id anthropic.claude-v2
Bedrock supports custom models through model customization: you fine-tune a supported base model on your own data for a specific use case.
Importing externally trained models is limited (Custom Model Import was in preview at the time of writing); Bedrock focuses on providing access to pre-trained models from its providers. A hedged SDK sketch of a customization job follows.
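A minimal boto3 sketch of a fine-tuning (model customization) job; every name, ARN, and S3 URI here is a hypothetical placeholder:
import boto3

# Hedged sketch: all names, ARNs, and S3 URIs below are hypothetical.
bedrock = boto3.client("bedrock", region_name="us-east-1")
bedrock.create_model_customization_job(
    jobName="aif-c01-finetune-demo",
    customModelName="aif-c01-custom-model",
    roleArn="arn:aws:iam::123456789012:role/BedrockCustomizationRole",
    baseModelIdentifier="amazon.titan-text-express-v1",
    trainingDataConfig={"s3Uri": "s3://my-bucket/train.jsonl"},
    outputDataConfig={"s3Uri": "s3://my-bucket/output/"},
    hyperParameters={"epochCount": "1"},
)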
Bedrock provides a chat interface for interactive model testing, but this is primarily accessed through the AWS Console.
For text generation using the CLI (note that invoke-model lives in the bedrock-runtime command group and writes its response to a file):
aws bedrock-runtime invoke-model --model-id anthropic.claude-v2 --body '{"prompt": "\n\nHuman: Tell me a joke\n\nAssistant:", "max_tokens_to_sample": 100}' --cli-binary-format raw-in-base64-out response.json
cat response.json
For image generation (example with Stable Diffusion; the response file contains base64-encoded images):
aws bedrock-runtime invoke-model --model-id stability.stable-diffusion-xl-v0 --body '{"text_prompts":[{"text":"A serene landscape with mountains and a lake"}]}' --cli-binary-format raw-in-base64-out image-response.json
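The same call from Python; a minimal sketch that assumes access to anthropic.claude-v2 has been granted in us-east-1:
import json
import boto3

# Invoke a model through the Bedrock runtime API. The region and model
# access are assumptions; the model IDs available to you may differ.
client = boto3.client("bedrock-runtime", region_name="us-east-1")
body = json.dumps({
    "prompt": "\n\nHuman: Tell me a joke\n\nAssistant:",
    "max_tokens_to_sample": 100,
})
response = client.invoke_model(modelId="anthropic.claude-v2", body=body)
print(json.loads(response["body"].read())["completion"])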
Prompt management is typically done through the AWS Console. CLI operations for this feature are limited.
Guardrails are configured in the AWS Console. They help ensure responsible AI use.
Watermark detection helps identify AI-generated content. This feature is accessed through the AWS Console.
To create a provisioned throughput configuration (this purchases dedicated model units and is billed accordingly):
aws bedrock create-provisioned-model-throughput --provisioned-model-name my-provisioned-claude --model-id anthropic.claude-v2 --model-units 1
Batch inference jobs can be created using the AWS SDK or through integrations with services like AWS Batch.
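A hedged boto3 sketch of a Bedrock batch inference job; the job name, role ARN, and S3 URIs are hypothetical, and the input is expected to be JSONL records:
import boto3

# Hedged sketch: names, the role ARN, and S3 URIs are placeholders.
bedrock = boto3.client("bedrock", region_name="us-east-1")
bedrock.create_model_invocation_job(
    jobName="aif-c01-batch-demo",
    roleArn="arn:aws:iam::123456789012:role/BedrockBatchRole",
    modelId="anthropic.claude-v2",
    inputDataConfig={"s3InputDataConfig": {"s3Uri": "s3://my-bucket/batch-input/"}},
    outputDataConfig={"s3OutputDataConfig": {"s3Uri": "s3://my-bucket/batch-output/"}},
)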
Model evaluation is typically performed using custom scripts or through the AWS Console. There are no direct CLI commands for this in Bedrock.
Model access is requested per model in the AWS Console (Bedrock → Model access); there is no public CLI command for requesting it. Verify which models are available to you with:
aws bedrock list-foundation-models | jq -r '.modelSummaries[]|.modelId'
Bedrock settings are primarily managed through the AWS Console. CLI operations for general settings are limited.
Some features like Bedrock Studio, Knowledge bases, Agents, Prompt flows, and Cross-region inference are marked as Preview or New. These features may have limited CLI support and are best accessed through the AWS Console.
List applications:
aws qbusiness list-applications | jq .applications
Detect sentiment in text:
aws comprehend detect-sentiment --text "I love using AWS services" --language-code en | jq -r .Sentiment
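The equivalent call with boto3:
import boto3

# Detect sentiment in a short text string.
comprehend = boto3.client("comprehend")
result = comprehend.detect_sentiment(Text="I love using AWS services", LanguageCode="en")
print(result["Sentiment"])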
For more Comprehend examples, see the Comprehend AWS CLI Examples.
Translate text:
aws translate translate-text --text "Hello, world" --source-language-code en --target-language-code es | jq -r '.TranslatedText'
For more Translate examples, check the Translate AWS CLI Examples.
List transcription jobs:
aws transcribe list-transcription-jobs | jq -r '.TranscriptionJobSummaries[]|.TranscriptionJobName'
Start a new transcription job:
aws transcribe start-transcription-job --transcription-job-name "AIFC03TranscriptionJob$((RANDOM % 9000 + 1000))" --language-code en-US --media-format mp3 --media '{"MediaFileUri": "s3://aif-c01/test-audio.mp3"}' | jq
For more Transcribe examples, refer to the Transcribe AWS CLI Examples.
Start a speech synthesis task:
aws polly start-speech-synthesis-task --output-format mp3 --output-s3-bucket-name aif-c01 --text "Hello, welcome to AWS AI services" --voice-id Joanna
List speech synthesis tasks and check the output in S3:
aws polly list-speech-synthesis-tasks | jq .SynthesisTasks
For more Polly examples, see the Polly AWS CLI Examples.
Detect labels in an image:
aws rekognition detect-labels \
--image '{"S3Object":{"Bucket":"aif-c01","Name":"test-image.png"}}' \
--max-labels 10 \
--region us-east-1 \
--output json | jq -r '.Labels[]|.Name'
Create a Rekognition collection (used for face indexing and search):
aws rekognition create-collection --collection-id mla-collection-01 | jq -r 'keys[]'
For more Rekognition examples, check the Rekognition AWS CLI Examples.
List Kendra indices:
aws kendra list-indices | jq .IndexConfigurationSummaryItems
For more Kendra examples, see the Kendra AWS CLI Examples.
List notebook instances whose names match exam-related prefixes:
aws sagemaker list-notebook-instances | jq -r '.NotebookInstances[] | select(.NotebookInstanceName | test("aif|mla|iacs")) | .NotebookInstanceName'
List training jobs:
aws sagemaker list-training-jobs | jq -r '.TrainingJobSummaries[] | .TrainingJobName'
List models:
aws sagemaker list-models | jq -r '.Models[].ModelName'
List endpoints:
aws sagemaker list-endpoints | jq -r '.Endpoints[].EndpointName'
List pipelines:
aws sagemaker list-pipelines | jq .PipelineSummaries
Create a SageMaker execution role. First, define a trust policy (trust-policy-sagemaker.json):
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "sagemaker.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
Create the role and attach the managed SageMaker policy:
aws iam create-role --role-name mla-sagemaker-role --assume-role-policy-document file://trust-policy-sagemaker.json
aws iam attach-role-policy --role-name mla-sagemaker-role --policy-arn arn:aws:iam::aws:policy/AmazonSageMakerFullAccess
Create a model, an endpoint configuration, and an endpoint:
aws sagemaker create-model --model-name <model-name> --primary-container file://container-config.json --execution-role-arn <role-arn>
aws sagemaker create-endpoint-config --endpoint-config-name <config-name> --production-variants file://production-variant.json
aws sagemaker create-endpoint --endpoint-name <endpoint-name> --endpoint-config-name <config-name>
Check endpoint status, inspect a training job, and fetch its CloudWatch logs:
aws sagemaker describe-endpoint --endpoint-name <endpoint-name>
aws sagemaker describe-training-job --training-job-name <job-name>
aws logs get-log-events --log-group-name /aws/sagemaker/TrainingJobs --log-stream-name <training-job-name>/algo-1-<timestamp>
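Instead of polling describe-endpoint by hand, boto3 provides a waiter for this. A short sketch with a hypothetical endpoint name:
import boto3

# Block until the endpoint reaches InService (raises on failure states).
sagemaker = boto3.client("sagemaker")
sagemaker.get_waiter("endpoint_in_service").wait(EndpointName="my-endpoint")
print(sagemaker.describe_endpoint(EndpointName="my-endpoint")["EndpointStatus"])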
Run and inspect a batch transform job:
aws sagemaker create-transform-job --transform-job-name <job-name> --model-name <model-name> --transform-input file://transform-input.json --transform-output file://transform-output.json --transform-resources file://transform-resources.json
aws sagemaker describe-transform-job --transform-job-name <job-name>
Create and list hyperparameter tuning jobs:
aws sagemaker create-hyper-parameter-tuning-job --hyper-parameter-tuning-job-name <job-name> --hyper-parameter-tuning-job-config file://tuning-job-config.json --training-job-definition file://training-job-definition.json
aws sagemaker list-hyper-parameter-tuning-jobs
Create a pipeline and list its executions:
aws sagemaker create-pipeline --pipeline-name <pipeline-name> --pipeline-definition file://pipeline-definition.json --role-arn <role-arn>
aws sagemaker list-pipeline-executions --pipeline-name <pipeline-name>
Clean up by deleting the endpoint:
aws sagemaker delete-endpoint --endpoint-name <endpoint-name>
For more SageMaker examples, refer to the SageMaker AWS CLI Examples.
List Lambda functions:
aws lambda list-functions | jq -r '.Functions[]|.FunctionName'
List Lambda functions with certification prefixes in the name:
aws lambda list-functions | jq '.Functions[] | select(.FunctionName | test("mla|aif"))'
For more Lambda examples, check the Lambda AWS CLI Examples.
List metrics for SageMaker:
aws cloudwatch list-metrics --namespace "AWS/SageMaker" | jq .Metrics
For more CloudWatch examples, see the CloudWatch AWS CLI Examples.
List Kinesis streams:
aws kinesis list-streams | jq .StreamNames
For more Kinesis examples, refer to the Kinesis AWS CLI Examples.
List Glue databases:
aws glue get-databases | jq .DatabaseList
Create the required role. First, define a trust policy (trust-policy-glue.json):
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "glue.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
cat trust-policy-glue.json | jq -r 'keys[]'
aws iam create-role --role-name AWSGlueServiceRole --assume-role-policy-document file://trust-policy-glue.json | jq -r 'keys[]'
aws iam attach-role-policy --role-name AWSGlueServiceRole --policy-arn arn:aws:iam::aws:policy/service-role/AWSGlueServiceRole | jq -r 'keys[]'
Capture the Glue role ARN (written earlier to /tmp/role_arn_glue.txt) into an Emacs variable via Org Babel:
(setq role_arn (org-babel-eval "sh" "cat /tmp/role_arn_glue.txt"))
(message "role_arn value: %s" role_arn)
echo "$role_arn"
An example Glue ETL script (glue-script.py) that reads a table from the Data Catalog, remaps its columns, and writes Parquet to S3:
import sys
from pyspark.context import SparkContext
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
## @params: [JOB_NAME]
args = getResolvedOptions(sys.argv, ['JOB_NAME'])
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)
## @type: DataSource
## @args: [database = "default", table_name = "legislators", transformation_ctx = "datasource0"]
## @return: datasource0
## @inputs: []
datasource0 = glueContext.create_dynamic_frame.from_catalog(database = "default", table_name = "legislators", transformation_ctx = "datasource0")
## @type: ApplyMapping
## @args: [mapping = [("leg_id", "long", "leg_id", "long"), ("full_name", "string", "full_name", "string"), ("first_name", "string", "first_name", "string"), ("last_name", "string", "last_name", "string"), ("gender", "string", "gender", "string"), ("type", "string", "type", "string"), ("state", "string", "state", "string"), ("party", "string", "party", "string")], transformation_ctx = "applymapping1"]
## @return: applymapping1
## @inputs: [frame = datasource0]
applymapping1 = ApplyMapping.apply(frame = datasource0, mappings = [("leg_id", "long", "leg_id", "long"), ("full_name", "string", "full_name", "string"), ("first_name", "string", "first_name", "string"), ("last_name", "string", "last_name", "string"), ("gender", "string", "gender", "string"), ("type", "string", "type", "string"), ("state", "string", "state", "string"), ("party", "string", "party", "string")], transformation_ctx = "applymapping1")
## @type: DataSink
## @args: [connection_type = "s3", connection_options = {"path": "s3://aif-c01-jasonwalsh/legislators_data"}, format = "parquet", transformation_ctx = "datasink2"]
## @return: datasink2
## @inputs: [frame = applymapping1]
datasink2 = glueContext.write_dynamic_frame.from_options(frame = applymapping1, connection_type = "s3", connection_options = {"path": "s3://aif-c01-jasonwalsh/legislators_data"}, format = "parquet", transformation_ctx = "datasink2")
job.commit()
Upload the script and create the Glue job:
aws s3 cp glue-script.py s3://aif-c01-jasonwalsh/scripts/glue-script.py
aws glue create-job \
--name mla-job \
--role arn:aws:iam::107396990521:role/AWSGlueServiceRole \
--command Name=glueetl,ScriptLocation=s3://aif-c01-jasonwalsh/scripts/glue-script.py \
--output text
For more Glue examples, check the Glue AWS CLI Examples.
List DynamoDB tables:
aws dynamodb list-tables | jq -r '.TableNames[] | select(. | test("mla|aif"))'
Create a table:
aws dynamodb create-table --table-name mla-test-01 --attribute-definitions AttributeName=Id,AttributeType=S --key-schema AttributeName=Id,KeyType=HASH --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5 | jq -r 'keys[]'
For more DynamoDB examples, see the DynamoDB AWS CLI Examples.
List Forecast datasets:
aws forecast list-datasets | jq .Datasets
List Lex bots:
aws lexv2-models list-bots | jq .botSummaries
List Personalize datasets:
aws personalize list-datasets | jq .datasets
Analyze a document (replace `YOUR_BUCKET_NAME` and `YOUR_DOCUMENT_NAME` with actual values):
aws textract analyze-document --document '{"S3Object":{"Bucket":"YOUR_BUCKET_NAME","Name":"YOUR_DOCUMENT_NAME"}}' --feature-types "TABLES" "FORMS"
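The boto3 form of the same call; bucket and document names remain placeholders, as in the CLI example:
import boto3

# Analyze a document stored in S3 for tables and form fields.
textract = boto3.client("textract")
response = textract.analyze_document(
    Document={"S3Object": {"Bucket": "YOUR_BUCKET_NAME", "Name": "YOUR_DOCUMENT_NAME"}},
    FeatureTypes=["TABLES", "FORMS"],
)
print(len(response["Blocks"]), "blocks detected")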
Detect entities in medical text:
aws comprehendmedical detect-entities --text "The patient was prescribed 500mg of acetaminophen for fever."
List IAM roles with “SageMaker” in the name:
aws iam list-roles | jq '.Roles[] | select(.RoleName | contains("SageMaker"))'
Describe EC2 instances with GPU (useful for ML workloads):
aws ec2 describe-instances --filters "Name=instance-type,Values=p*,g*" | jq .Reservations[].Instances[]
List IAM users, create a user, and attach a policy:
aws iam list-users
aws iam create-user --user-name newuser
aws iam attach-user-policy --user-name newuser --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
Check the Macie session status and create a custom data identifier for US Social Security Numbers:
aws macie2 get-macie-session
aws macie2 create-custom-data-identifier --name "Custom-PII" --regex "(\d{3}-\d{2}-\d{4})" --description "Identifies Social Security Numbers"
List Inspector Classic assessment targets and create one (newer accounts use Inspector v2 via aws inspector2):
aws inspector list-assessment-targets
aws inspector create-assessment-target --assessment-target-name "MyTarget" --resource-group-arn arn:aws:inspector:us-west-2:123456789012:resourcegroup/0-AB6DMKnv
List CloudTrail trails and create a trail:
aws cloudtrail list-trails
aws cloudtrail create-trail --name my-trail --s3-bucket-name my-bucket
AWS Artifact agreements are managed through the AWS Console; the artifact CLI exposes report operations, for example:
aws artifact list-reports
List Audit Manager assessments and create one (requires a framework ID and an assessment owner role):
aws auditmanager list-assessments
aws auditmanager create-assessment \
--name "MyAssessment" \
--assessment-reports-destination '{"destinationType":"S3","destination":"s3://my-bucket"}' \
--scope '{"awsAccounts":[{"id":"123456789012"}]}' \
--roles '[{"roleType":"PROCESS_OWNER","roleArn":"arn:aws:iam::123456789012:role/MyAuditRole"}]' \
--framework-id <framework-id>
List Trusted Advisor checks and fetch a check result (the Support API requires a Business, Enterprise On-Ramp, or Enterprise support plan):
aws support describe-trusted-advisor-checks --language en
aws support describe-trusted-advisor-check-result --check-id checkId
Describe VPCs, create a VPC, and add a subnet:
aws ec2 describe-vpcs
aws ec2 create-vpc --cidr-block 10.0.0.0/16
aws ec2 create-subnet --vpc-id vpc-1234567890abcdef0 --cidr-block 10.0.1.0/24
Verify and configure the AWS CLI:
aws --version
aws configure
Install kubectl:
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
kubectl version --client
Install eksctl:
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin
eksctl version
Create an EKS cluster:
eksctl create cluster --name my-cluster --region us-west-2 --nodegroup-name standard-workers --node-type t3.medium --nodes 3 --nodes-min 1 --nodes-max 4
Inspect the cluster and update your kubeconfig:
eksctl get cluster --name my-cluster --region us-west-2
aws eks update-kubeconfig --name my-cluster --region us-west-2
Inspect and scale node groups:
eksctl get nodegroup --cluster my-cluster --region us-west-2
eksctl scale nodegroup --cluster my-cluster --name standard-workers --nodes 5 --region us-west-2
Deploy and expose a test workload:
kubectl create deployment nginx --image=nginx
kubectl get deployments
kubectl expose deployment nginx --port=80 --type=LoadBalancer
kubectl get services
Inspect the cluster’s CloudFormation stacks and write the kubeconfig:
eksctl utils describe-stacks --cluster my-cluster --region us-west-2
eksctl utils write-kubeconfig --cluster my-cluster --region us-west-2
View logs and clean up:
kubectl logs deployment/nginx
kubectl delete deployment nginx
kubectl delete service nginx
eksctl delete cluster --name my-cluster --region us-west-2
For more detailed information and advanced configurations, refer to the following resources:
- Get started with Amazon EKS – eksctl
- Get started with Amazon EKS – AWS Management Console and AWS CLI
- EKS Cluster Setup on AWS Community
List Step Functions state machines:
aws stepfunctions list-state-machines | jq -r '.stateMachines[]|.name'
Create a state machine:
aws stepfunctions create-state-machine \
--name "MyStateMachine" \
--definition '{"Comment":"A Hello World example of the Amazon States Language using a Pass state","StartAt":"HelloWorld","States":{"HelloWorld":{"Type":"Pass","Result":"Hello World!","End":true}}}' \
--role-arn arn:aws:iam::123456789012:role/service-role/StepFunctions-MyStateMachine-role-0123456789
Start an execution:
aws stepfunctions start-execution \
--state-machine-arn arn:aws:states:us-west-2:123456789012:stateMachine:MyStateMachine \
--input '{"key1": "value1", "key2": "value2"}'
List Athena workgroups:
aws athena list-work-groups | jq '.WorkGroups[]|.Name'
Create a workgroup:
aws athena create-work-group \
--name "MyWorkGroup" \
--configuration '{"ResultConfiguration":{"OutputLocation":"s3://my-athena-results/"}}'
Start a query execution:
aws athena start-query-execution \
--query-string "SELECT * FROM my_database.my_table LIMIT 10" \
--query-execution-context Database=my_database \
--result-configuration OutputLocation=s3://my-athena-results/
Fetch query results (replace QueryExecutionId with the ID returned by start-query-execution):
aws athena get-query-results --query-execution-id QueryExecutionId
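The full asynchronous flow in boto3: start the query, poll until it reaches a terminal state, then read results. A sketch in which the database and output location are hypothetical:
import time
import boto3

athena = boto3.client("athena")
# Start the query; Athena returns immediately with an execution ID.
qid = athena.start_query_execution(
    QueryString="SELECT * FROM my_database.my_table LIMIT 10",
    QueryExecutionContext={"Database": "my_database"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)["QueryExecutionId"]

# Poll until the query finishes.
while True:
    state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    for row in athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]:
        print([col.get("VarCharValue") for col in row["Data"]])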
List QuickSight users:
aws quicksight list-users --aws-account-id 123456789012 --namespace default
Create a dataset:
aws quicksight create-data-set \
--aws-account-id 123456789012 \
--data-set-id MyDataSet \
--name "My Data Set" \
--physical-table-map file://physical-table-map.json \
--logical-table-map file://logical-table-map.json \
--import-mode SPICE
Create an analysis:
aws quicksight create-analysis \
--aws-account-id 123456789012 \
--analysis-id MyAnalysis \
--name "My Analysis" \
--source-entity file://source-entity.json
List Neptune clusters:
aws neptune describe-db-clusters | jq .DBClusters
Create a cluster:
aws neptune create-db-cluster \
--db-cluster-identifier my-neptune-cluster \
--engine neptune \
--vpc-security-group-ids sg-1234567890abcdef0 \
--db-subnet-group-name my-db-subnet-group
Create an instance in the cluster:
aws neptune create-db-instance \
--db-instance-identifier my-neptune-instance \
--db-instance-class db.r5.large \
--engine neptune \
--db-cluster-identifier my-neptune-cluster
Query the cluster’s Gremlin endpoint:
curl -X POST \
-H 'Content-Type: application/json' \
https://your-neptune-endpoint:8182/gremlin \
-d '{"gremlin": "g.V().limit(1)"}'
List Data Exchange data sets:
aws dataexchange list-data-sets | jq .DataSets
Create a data set and an initial revision:
aws dataexchange create-data-set \
--asset-type "S3_SNAPSHOT" \
--description "My sample data set" \
--name "My Data Set"
aws dataexchange create-revision \
--data-set-id "data-set-id" \
--comment "Initial revision"
Bulk load data from S3 through the Neptune loader endpoint (there is no load-from-s3 CLI command; the role ARN below is a placeholder for an IAM role that can read the bucket):
curl -X POST \
-H 'Content-Type: application/json' \
https://your-neptune-endpoint:8182/loader \
-d '{"source":"s3://bucket-name/object-key-name","format":"csv","iamRoleArn":"arn:aws:iam::123456789012:role/NeptuneLoadFromS3","region":"us-west-2"}'
Run a SPARQL query against the cluster:
curl -X POST \
-H 'Content-Type: application/x-www-form-urlencoded' \
https://your-neptune-endpoint:8182/sparql \
-d 'query=SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10'
AWS DeepLens reached end of life on January 31, 2024; these commands are retained for reference only:
aws deeplens list-projects
aws deeplens create-project \
--project-name "MyProject" \
--project-description "My DeepLens project"
Associate a repository with CodeGuru Reviewer and list CodeGuru Profiler profiling groups:
aws codeguru-reviewer associate-repository \
--repository CodeCommit={Name=my-repo}
aws codeguruprofiler list-profiling-groups
List Greengrass (V1) groups, create a group, and create a core definition:
aws greengrass list-groups
aws greengrass create-group --name "MyGreengrassGroup"
aws greengrass create-core-definition --name "MyCoreDefinition"
Create a Forecast dataset group:
aws forecast create-dataset-group \
--dataset-group-name my-dataset-group \
--domain CUSTOM \
--dataset-arns arn:aws:forecast:us-west-2:123456789012:dataset/my-dataset
Train a predictor:
aws forecast create-predictor \
--predictor-name my-predictor \
--algorithm-arn arn:aws:forecast:::algorithm/ARIMA \
--forecast-horizon 10 \
--input-data-config '{"DatasetGroupArn":"arn:aws:forecast:us-west-2:123456789012:dataset-group/my-dataset-group"}' \
--featurization-config '{"ForecastFrequency": "D"}'
Generate a forecast:
aws forecast create-forecast \
--forecast-name my-forecast \
--predictor-arn arn:aws:forecast:us-west-2:123456789012:predictor/my-predictor
Create a Personalize dataset group:
aws personalize create-dataset-group --name my-dataset-group
Train a solution:
aws personalize create-solution \
--name my-solution \
--dataset-group-arn arn:aws:personalize:us-west-2:123456789012:dataset-group/my-dataset-group \
--recipe-arn arn:aws:personalize:::recipe/aws-user-personalization
Deploy a campaign:
aws personalize create-campaign \
--name my-campaign \
--solution-version-arn arn:aws:personalize:us-west-2:123456789012:solution/my-solution/1 \
--min-provisioned-tps 1
Get recommendations:
aws personalize-runtime get-recommendations \
--campaign-arn arn:aws:personalize:us-west-2:123456789012:campaign/my-campaign \
--user-id user123
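The boto3 counterpart to get-recommendations; the campaign ARN and user ID are placeholders:
import boto3

# Fetch ranked item recommendations for a user from a deployed campaign.
personalize_runtime = boto3.client("personalize-runtime")
response = personalize_runtime.get_recommendations(
    campaignArn="arn:aws:personalize:us-west-2:123456789012:campaign/my-campaign",
    userId="user123",
)
for item in response["itemList"]:
    print(item["itemId"])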
View data lake settings, grant table permissions, and register an S3 location:
aws lakeformation get-data-lake-settings
aws lakeformation grant-permissions \
--principal DataLakePrincipalIdentifier=arn:aws:iam::123456789012:user/data-analyst \
--resource '{"Table":{"DatabaseName":"my_database","Name":"my_table"}}' \
--permissions SELECT
aws lakeformation register-resource \
--resource-arn arn:aws:s3:::my-bucket \
--use-service-linked-role
List MSK clusters, create a cluster, and describe it (replace ClusterArn with the ARN returned by create-cluster):
aws kafka list-clusters
aws kafka create-cluster \
--cluster-name MyMSKCluster \
--kafka-version 2.6.2 \
--number-of-broker-nodes 3 \
--broker-node-group-info file://broker-node-group-info.json \
--encryption-info file://encryption-info.json
aws kafka describe-cluster --cluster-arn ClusterArn
A key focus of this project is on responsible AI practices. We cover:
- Ethical considerations in AI/ML development
- Bias detection and mitigation strategies
- Fairness and inclusivity in AI systems
- Robustness and safety measures
- Compliance and governance in AI projects
In addition to code examples, this project includes:
- Curated lists of AWS documentation and whitepapers
- Links to relevant AWS training materials
- Practice questions for each exam domain
- Glossary of key AI/ML terms in the context of AWS
This project is licensed under the MIT License - see the LICENSE file for details.
This project is not affiliated with or endorsed by Amazon Web Services. All AWS service names and trademarks are property of Amazon.com, Inc. or its affiliates.