AWS Certified AI Practitioner (AIF-C01) Exam Preparation

Introduction

This project provides comprehensive study materials, code examples, and a robust development environment for preparing for the AWS Certified AI Practitioner (AIF-C01) exam, which became generally available in August 2024. While the primary focus is on the AIF-C01 exam, the project also lays groundwork for the AWS Certified Machine Learning Engineer – Associate (MLA-C01) certification.

Key features of this project:

  • Structured learning paths covering all AIF-C01 exam domains
  • Supplementary material supporting the AWS Certified Machine Learning Engineer – Associate (MLA-C01) exam
  • Hands-on code examples using AWS AI/ML services
  • Local development environment with LocalStack for AWS service simulation
  • Integration of Python and Clojure for a comprehensive learning experience
  • Emphasis on best practices in AI/ML development and responsible AI

Whether you’re an executive looking to understand AI/ML capabilities in AWS or a practitioner aiming for certification, this project provides the resources you need to succeed.


Project Workflow

Development Flow

graph TD
    A[Start] --> B[direnv allow]
    B --> C[nix-shell]
    C --> D{Development Path}
    D -->|Python| E[poetry shell]
    D -->|Clojure| F[lein repl/clj]
    D -->|Emacs| G[emacs]
    E --> H[Development]
    F --> H
    G --> H
    H --> I[End]

Core Steps

Environment Setup

  • Enable direnv
  • Enter nix-shell
  • Choose development path

Development Paths

  • Python Development
    • Poetry Shell
    • AI/ML Libraries
    • AWS Integration
  • Clojure Development
    • REPL Session
    • AWS Integration
    • Emacs + CIDER

Architecture

graph TD
    A[System] --> B[Nix Shell]
    B --> C[direnv]
    B --> D[Core Tools]
    
    D --> E[Python Stack]
    E --> E1[poetry]
    E --> E2[AI/ML libs]
    
    D --> F[Clojure Stack]
    F --> F1[leiningen]
    F --> F2[REPL]
    
    D --> G[AWS Tools]
    G --> G1[aws-cli]
    G --> G2[localstack]
    
    C --> I[Environments]
    I --> I1[local]
    I --> I2[aws]

Core Components

Development Tools

Python Environment
  • Poetry for dependencies
  • Virtual environment
  • AI/ML libraries
Clojure Environment
  • Leiningen
  • REPL-driven development
  • Core libraries

AWS Integration

Local Development
# Start LocalStack services
localstack start
Cloud Development
# Test AWS access
aws sts get-caller-identity

Setup

  1. Clone this repository
  2. Run the setup script:
make setup
  3. Install project dependencies:
make deps
  4. Initialize the project:
make init
  5. Choose your profile:

    For LocalStack:

make switch-profile-lcl
make localstack-up

    For AWS Dev:

make switch-profile-dev

Usage

To start exploring the concepts:

  1. Start the REPL:
make run
  2. In the REPL, you can require and use the namespaces for each domain:
(require '[aif-c01.d0-setup.environment :as d0])
(d0/check-environment)

Example Usage for Each Domain

Domain 0: Environment Setup and Connection Checks

(require '[aif-c01.d0-setup.environment :as d0])
(d0/check-aws-credentials)

Domain 1: Fundamentals of AI and ML

(require '[aif-c01.d1-fundamentals.basics :as d1])
(d1/explain-ai-term :ml)
(d1/list-ml-types)

Domain 2: Fundamentals of Generative AI

(require '[aif-c01.d2-generative-ai.concepts :as d2])
(d2/explain-gen-ai-concept :prompt-engineering)
(d2/list-gen-ai-use-cases)

Domain 3: Applications of Foundation Models

(require '[aif-c01.d3-foundation-models.applications :as d3])
(d3/describe-rag)
(d3/list-model-selection-criteria)

Domain 4: Guidelines for Responsible AI

(require '[aif-c01.d4-responsible-ai.practices :as d4])
(d4/list-responsible-ai-features)
(d4/describe-bias-effects)

Domain 5: Security, Compliance, and Governance for AI Solutions

(require '[aif-c01.d5-security-compliance.governance :as d5])
(d5/list-aws-security-services)
(d5/describe-data-governance-strategies)

Development

This project uses a Makefile to manage common development tasks. To see all available commands and their descriptions, run:

make help

This will display a list of commands with inline descriptions, making it easy to understand and use the project’s development workflow.

LocalStack Usage

This project supports LocalStack for local development and testing. To use LocalStack:

  1. Ensure Docker is installed and running on your system.
  2. Switch to the LocalStack profile: make switch-profile-lcl
  3. Start LocalStack: make localstack-up
  4. Run the REPL: make run
  5. When finished, stop LocalStack: make localstack-down
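Once LocalStack is up, SDK clients work against it by overriding the endpoint. A minimal boto3 sketch, assuming LocalStack's default edge port 4566 (the bucket name here is illustrative):

import boto3

# LocalStack accepts dummy credentials; nothing here touches a real AWS account.
s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:4566",
    region_name="us-east-1",
    aws_access_key_id="test",
    aws_secret_access_key="test",
)

s3.create_bucket(Bucket="aif-c01-local")
print([b["Name"] for b in s3.list_buckets()["Buckets"]])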

Python Integration

This project uses Poetry for Python dependency management. The AWS CLI and other Python dependencies are installed within the project’s virtual environment. To use Python or the AWS CLI:

  1. Activate the Poetry shell: poetry shell
  2. Run Python scripts or AWS CLI commands as needed

Example of using boto3 to interact with AWS services:

import boto3

def list_s3_buckets():
    s3 = boto3.client('s3')
    response = s3.list_buckets()
    return [bucket['Name'] for bucket in response['Buckets']]

print(list_s3_buckets())

Troubleshooting

If you encounter issues:

  1. Ensure your AWS credentials are correctly set up in ~/.aws/credentials or environment variables (a quick programmatic check follows this list).
  2. For LocalStack issues, check that Docker is running and ports are not conflicting.
  3. If REPL startup fails, try running make deps to ensure all dependencies are fetched.
  4. For Python-related issues, ensure you’re in the Poetry shell (poetry shell) before running commands.
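A quick way to confirm which identity your credentials resolve to, mirroring aws sts get-caller-identity:

import boto3

# Raises an error if credentials are missing or invalid.
identity = boto3.client("sts").get_caller_identity()
print(identity["Account"], identity["Arn"])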

AWS Services Covered

This project includes examples and study materials for the following AWS services relevant to the AIF-C01 exam.

Each service is explored in the context of AI/ML workflows and best practices.

Amazon S3 (static)

Create a bucket and upload a file:

aws s3 mb s3://aif-c01
aws s3 cp resources/test-image.png s3://aif-c01

List contents of the bucket:

aws s3 ls s3://aif-c01

For more S3 examples, refer to the S3 AWS CLI Examples.

Amazon S3 (dynamic)

Generate a per-user bucket name (Emacs Lisp, evaluated via org-babel; the result is referenced below as $BUCKET):

(format "aif-c01-%s" (downcase (or (getenv "USER") (user-login-name))))

Create a bucket and enable versioning:

aws s3 mb s3://$BUCKET
aws s3api put-bucket-versioning --bucket $BUCKET --versioning-configuration Status=Enabled

Upload PDF files to the papers/ prefix:

aws s3 sync resources/papers s3://$BUCKET/papers/ --exclude "*" --include "*.pdf"

List contents of the papers/ prefix:

aws s3 ls s3://$BUCKET/papers/

Upload a new version of a file and list versions:

# Create a markdown file with the content
cat << EOF > example.md
# Example Document

This is a new version of the document with updated content.

## Details
- Filename: 2310.07064.pdf
- Bucket: $BUCKET
- Path: papers/2310.07064.pdf

## Content
New content
EOF

# Convert markdown to PDF
pandoc example.md -o 2310.07064.pdf

# Upload the PDF to S3
aws s3 cp 2310.07064.pdf s3://$BUCKET/papers/
aws s3api list-object-versions \
    --bucket "$BUCKET" \
    --prefix "papers/" \
    --query 'Versions[*].[Key, VersionId, LastModified, Size, ETag, StorageClass, IsLatest]' \
    --output json | jq -r '.[] | @tsv'

Amazon Bedrock

Getting Started

Overview

  • Amazon Bedrock is a fully managed service that provides access to foundation models (FMs) from leading AI companies.
  • It offers a single API to work with various FMs for different use cases.
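A minimal boto3 sketch of that single-API pattern, assuming anthropic.claude-v2 is enabled for your account; swapping providers changes only the model id and the provider-specific request body:

import json
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Claude v2 expects the Human/Assistant prompt format; other providers
# define their own body schemas behind the same invoke_model call.
response = bedrock_runtime.invoke_model(
    modelId="anthropic.claude-v2",
    body=json.dumps({
        "prompt": "\n\nHuman: Explain RAG in one sentence.\n\nAssistant:",
        "max_tokens_to_sample": 100,
    }),
)
print(json.loads(response["body"].read())["completion"])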

Examples

To list available foundation models:

aws bedrock list-foundation-models | jq -r '.modelSummaries[]|.modelId' | head

Providers

  • Amazon
  • AI21 Labs
  • Anthropic
  • Cohere
  • Meta
  • Stability AI

Foundation Models

Base Models

To describe a specific base model:

aws bedrock get-foundation-model --model-identifier anthropic.claude-v2

Custom Models

Bedrock supports custom models through fine-tuning: you customize a supported base model on your own training data via a model-customization job, rather than uploading an arbitrary model.
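A hedged boto3 sketch of kicking off a fine-tuning job; the job and model names, role ARN, S3 paths, and hyperparameters are placeholders:

import boto3

bedrock = boto3.client("bedrock")

# All names, ARNs, and S3 paths below are hypothetical placeholders.
bedrock.create_model_customization_job(
    jobName="my-finetune-job",
    customModelName="my-custom-model",
    roleArn="arn:aws:iam::123456789012:role/BedrockCustomizationRole",
    baseModelIdentifier="amazon.titan-text-express-v1",
    trainingDataConfig={"s3Uri": "s3://my-bucket/train.jsonl"},
    outputDataConfig={"s3Uri": "s3://my-bucket/output/"},
    hyperParameters={"epochCount": "1"},
)

# List the resulting custom models.
print(bedrock.list_custom_models()["modelSummaries"])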

Imported Models

Bedrock's Custom Model Import feature (introduced in 2024) lets you bring externally trained weights for supported architectures into Bedrock; outside that feature, Bedrock centers on hosted, pre-trained models from the providers above.

Playgrounds

Chat

Bedrock provides a chat interface for interactive model testing, but this is primarily accessed through the AWS Console.
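For a programmatic equivalent of the chat playground, the Converse API provides a uniform multi-turn interface across models. A minimal boto3 sketch, assuming the model is enabled for your account:

import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Converse normalizes request and response shapes across providers.
response = bedrock_runtime.converse(
    modelId="anthropic.claude-v2",
    messages=[{"role": "user", "content": [{"text": "Tell me a joke"}]}],
    inferenceConfig={"maxTokens": 100},
)
print(response["output"]["message"]["content"][0]["text"])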

Text

For text generation using the CLI (invoke-model lives in the bedrock-runtime namespace and writes the response to a file):

aws bedrock-runtime invoke-model --model-id anthropic.claude-v2 --body '{"prompt": "\n\nHuman: Tell me a joke\n\nAssistant:", "max_tokens_to_sample": 100}' --cli-binary-format raw-in-base64-out output.json

Image

For image generation (example with Stable Diffusion; the base64-encoded image lands in the output file):

aws bedrock-runtime invoke-model --model-id stability.stable-diffusion-xl-v0 --body '{"text_prompts":[{"text":"A serene landscape with mountains and a lake"}]}' --cli-binary-format raw-in-base64-out image-output.json

Builder Tools

Prompt Management

Prompt management is typically done through the AWS Console. CLI operations for this feature are limited.

Safeguards

Guardrails

Guardrails are configured in the AWS Console. They help ensure responsible AI use.
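Once a guardrail exists, it can also be exercised at runtime. A hedged boto3 sketch of the ApplyGuardrail check; the guardrail identifier and version are hypothetical placeholders:

import boto3

bedrock_runtime = boto3.client("bedrock-runtime")

# Guardrail id and version below are placeholders for illustration.
result = bedrock_runtime.apply_guardrail(
    guardrailIdentifier="gr1234567890",
    guardrailVersion="1",
    source="INPUT",
    content=[{"text": {"text": "User input to screen before inference"}}],
)
print(result["action"])  # GUARDRAIL_INTERVENED or NONE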

Watermark Detection

Watermark detection helps identify AI-generated content. This feature is accessed through the AWS Console.

Inference

Provisioned Throughput

To create a provisioned throughput configuration (the capacity flag is --model-units, and a name for the provisioned model is required):

aws bedrock create-provisioned-model-throughput --provisioned-model-name my-provisioned-claude --model-id anthropic.claude-v2 --model-units 1

Batch Inference

Batch inference jobs can be created using the AWS SDK or through integrations with services like AWS Batch.
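A hedged boto3 sketch of creating a batch (model-invocation) job; the job name, role ARN, and S3 URIs are placeholders:

import boto3

bedrock = boto3.client("bedrock")

# Names, ARNs, and S3 paths are hypothetical placeholders.
bedrock.create_model_invocation_job(
    jobName="my-batch-job",
    modelId="anthropic.claude-v2",
    roleArn="arn:aws:iam::123456789012:role/BedrockBatchRole",
    inputDataConfig={"s3InputDataConfig": {"s3Uri": "s3://my-bucket/batch-input/"}},
    outputDataConfig={"s3OutputDataConfig": {"s3Uri": "s3://my-bucket/batch-output/"}},
)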

Assessment

Model Evaluation

Model evaluation is typically performed through the AWS Console or with custom scripts; recent CLI versions also expose evaluation-job operations for Bedrock.

Bedrock Configurations

Model Access

Model access is requested through the Model access page of the Bedrock console; there is no public CLI command for granting it. The list-foundation-models command shown earlier reflects which models are visible to your account.

Settings

Bedrock settings are primarily managed through the AWS Console. CLI operations for general settings are limited.

Note

Some features like Bedrock Studio, Knowledge bases, Agents, Prompt flows, and Cross-region inference are marked as Preview or New. These features may have limited CLI support and are best accessed through the AWS Console.

Amazon Q Business

List applications:

aws qbusiness list-applications | jq .applications

Amazon Comprehend

Detect sentiment in text:

aws comprehend detect-sentiment --text "I love using AWS services" --language-code en | jq -r .Sentiment

For more Comprehend examples, see the Comprehend AWS CLI Examples.

Amazon Translate

Translate text:

aws translate translate-text --text "Hello, world" --source-language-code en --target-language-code es | jq -r '.TranslatedText'

For more Translate examples, check the Translate AWS CLI Examples.

Amazon Transcribe

List transcription jobs:

aws transcribe list-transcription-jobs | jq -r '.TranscriptionJobSummaries[]|.TranscriptionJobName'

Start a new transcription job:

aws transcribe start-transcription-job --transcription-job-name "AIFC03TranscriptionJob$((RANDOM % 9000 + 1000))" --language-code en-US --media-format mp3 --media '{"MediaFileUri": "s3://aif-c01/test-audio.mp3"}' | jq

For more Transcribe examples, refer to the Transcribe AWS CLI Examples.

Amazon Polly

Start a speech synthesis task:

aws polly start-speech-synthesis-task --output-format mp3 --output-s3-bucket-name aif-c01 --text "Hello, welcome to AWS AI services" --voice-id Joanna

List speech synthesis tasks and check the output in S3:

aws polly list-speech-synthesis-tasks | jq .SynthesisTasks

For more Polly examples, see the Polly AWS CLI Examples.

Amazon Rekognition

Detect labels in an image:

aws rekognition detect-labels \
    --image '{"S3Object":{"Bucket":"aif-c01","Name":"test-image.png"}}' \
    --max-labels 10 \
    --region us-east-1 \
    --output json | jq -r '.Labels[]|.Name'
Create a collection (used for face indexing and search):

aws rekognition create-collection --collection-id mla-collection-01 | jq -r 'keys[]'

For more Rekognition examples, check the Rekognition AWS CLI Examples.

Amazon Kendra

List Kendra indices:

aws kendra list-indices | jq .IndexConfigurationSummaryItems

For more Kendra examples, see the Kendra AWS CLI Examples.

Amazon SageMaker

List Resources

List notebook instances

aws sagemaker list-notebook-instances | jq -r '.NotebookInstances[] | select(.NotebookInstanceName | test("aif|mla|iacs")) | .NotebookInstanceName'

List training jobs

aws sagemaker list-training-jobs | jq -r '.TrainingJobSummaries[] | .TrainingJobName'

List models

aws sagemaker list-models | jq -r '.Models[].ModelName'

List endpoints

aws sagemaker list-endpoints | jq -r '.Endpoints[].EndpointName'

List SageMaker pipelines

aws sagemaker list-pipelines | jq .PipelineSummaries

Create and Manage Resources

Create the required IAM role. Save the following trust policy as trust-policy-sagemaker.json:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "sagemaker.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

Create IAM role and attach policy

aws iam create-role --role-name mla-sagemaker-role --assume-role-policy-document file://trust-policy-sagemaker.json
aws iam attach-role-policy --role-name mla-sagemaker-role --policy-arn arn:aws:iam::aws:policy/AmazonSageMakerFullAccess

Create a model

aws sagemaker create-model --model-name <model-name> --primary-container file://container-config.json --execution-role-arn <role-arn>

Create an endpoint configuration

aws sagemaker create-endpoint-config --endpoint-config-name <config-name> --production-variants file://production-variant.json

Create an endpoint

aws sagemaker create-endpoint --endpoint-name <endpoint-name> --endpoint-config-name <config-name>
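The file:// payloads above hide the interesting structure. For illustration, a hedged boto3 equivalent with a minimal inline container configuration; the image URI, bucket, names, and role ARN are placeholders:

import boto3

sm = boto3.client("sagemaker")

# All names, ARNs, and paths below are hypothetical placeholders.
sm.create_model(
    ModelName="my-model",
    PrimaryContainer={
        "Image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-image:latest",
        "ModelDataUrl": "s3://my-bucket/model.tar.gz",
    },
    ExecutionRoleArn="arn:aws:iam::123456789012:role/mla-sagemaker-role",
)

sm.create_endpoint_config(
    EndpointConfigName="my-config",
    ProductionVariants=[{
        "VariantName": "AllTraffic",
        "ModelName": "my-model",
        "InstanceType": "ml.m5.large",
        "InitialInstanceCount": 1,
    }],
)

sm.create_endpoint(EndpointName="my-endpoint", EndpointConfigName="my-config")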

Describe and Monitor

Describe a specific endpoint

aws sagemaker describe-endpoint --endpoint-name <endpoint-name>

Describe training job (includes logs)

aws sagemaker describe-training-job --training-job-name <job-name>

Get CloudWatch logs for a training job

aws logs get-log-events --log-group-name /aws/sagemaker/TrainingJobs --log-stream-name <training-job-name>/algo-1-<timestamp>

Batch Transform

Create a batch transform job

aws sagemaker create-transform-job --transform-job-name <job-name> --model-name <model-name> --transform-input file://transform-input.json --transform-output file://transform-output.json --transform-resources file://transform-resources.json

Check batch transform job status

aws sagemaker describe-transform-job --transform-job-name <job-name>
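The JSON files again carry the structure; a hedged boto3 sketch of the same call with inline configuration (names and S3 paths are placeholders):

import boto3

sm = boto3.client("sagemaker")

# Names and S3 paths are hypothetical placeholders.
sm.create_transform_job(
    TransformJobName="my-transform-job",
    ModelName="my-model",
    TransformInput={
        "DataSource": {"S3DataSource": {
            "S3DataType": "S3Prefix",
            "S3Uri": "s3://my-bucket/batch-input/",
        }},
        "ContentType": "text/csv",
    },
    TransformOutput={"S3OutputPath": "s3://my-bucket/batch-output/"},
    TransformResources={"InstanceType": "ml.m5.large", "InstanceCount": 1},
)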

Hyperparameter Tuning

Create a hyperparameter tuning job

aws sagemaker create-hyper-parameter-tuning-job --hyper-parameter-tuning-job-name <job-name> --hyper-parameter-tuning-job-config file://tuning-job-config.json --training-job-definition file://training-job-definition.json

List hyperparameter tuning jobs

aws sagemaker list-hyper-parameter-tuning-jobs

SageMaker Pipeline

Create a pipeline

aws sagemaker create-pipeline --pipeline-name <pipeline-name> --pipeline-definition file://pipeline-definition.json --role-arn <role-arn>

List pipeline executions

aws sagemaker list-pipeline-executions --pipeline-name <pipeline-name>

Cleanup

Delete an endpoint

aws sagemaker delete-endpoint --endpoint-name <endpoint-name>

Additional Resources

For more SageMaker examples, refer to the SageMaker AWS CLI Examples.

AWS Lambda

List Lambda functions:

aws lambda list-functions | jq -r '.Functions[]|.FunctionName'

List Lambda functions with certification prefixes in the name:

aws lambda list-functions | jq '.Functions[] | select(.FunctionName | test("mla|aif"))'

For more Lambda examples, check the Lambda AWS CLI Examples.

Amazon CloudWatch

List metrics for SageMaker:

aws cloudwatch list-metrics --namespace "AWS/SageMaker" | jq .Metrics

For more CloudWatch examples, see the CloudWatch AWS CLI Examples.

Amazon Kinesis

List Kinesis streams:

aws kinesis list-streams | jq .StreamNames

For more Kinesis examples, refer to the Kinesis AWS CLI Examples.

AWS Glue

List Glue databases:

aws glue get-databases | jq .DatabaseList

Create the required IAM role. Save the following trust policy as trust-policy-glue.json:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "glue.amazonaws.com" 
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
aws iam create-role --role-name AWSGlueServiceRole --assume-role-policy-document file://trust-policy-glue.json | jq -r 'keys[]'
aws iam attach-role-policy --role-name AWSGlueServiceRole --policy-arn arn:aws:iam::aws:policy/service-role/AWSGlueServiceRole | jq -r 'keys[]'

Example Glue ETL script (glue-script.py); it reads a catalog table, remaps columns, and writes Parquet to S3:
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

## @params: [JOB_NAME]
args = getResolvedOptions(sys.argv, ['JOB_NAME'])

sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)

## @type: DataSource
## @args: [database = "default", table_name = "legislators", transformation_ctx = "datasource0"]
## @return: datasource0
## @inputs: []
datasource0 = glueContext.create_dynamic_frame.from_catalog(database = "default", table_name = "legislators", transformation_ctx = "datasource0")

## @type: ApplyMapping
## @args: [mapping = [("leg_id", "long", "leg_id", "long"), ("full_name", "string", "full_name", "string"), ("first_name", "string", "first_name", "string"), ("last_name", "string", "last_name", "string"), ("gender", "string", "gender", "string"), ("type", "string", "type", "string"), ("state", "string", "state", "string"), ("party", "string", "party", "string")], transformation_ctx = "applymapping1"]
## @return: applymapping1
## @inputs: [frame = datasource0]
applymapping1 = ApplyMapping.apply(frame = datasource0, mappings = [("leg_id", "long", "leg_id", "long"), ("full_name", "string", "full_name", "string"), ("first_name", "string", "first_name", "string"), ("last_name", "string", "last_name", "string"), ("gender", "string", "gender", "string"), ("type", "string", "type", "string"), ("state", "string", "state", "string"), ("party", "string", "party", "string")], transformation_ctx = "applymapping1")

## @type: DataSink
## @args: [connection_type = "s3", connection_options = {"path": "s3://aif-c01-jasonwalsh/legislators_data"}, format = "parquet", transformation_ctx = "datasink2"]
## @return: datasink2
## @inputs: [frame = applymapping1]
datasink2 = glueContext.write_dynamic_frame.from_options(frame = applymapping1, connection_type = "s3", connection_options = {"path": "s3://aif-c01-jasonwalsh/legislators_data"}, format = "parquet", transformation_ctx = "datasink2")

job.commit()
Upload the script and create the Glue job:

aws s3 cp glue-script.py s3://aif-c01-jasonwalsh/scripts/glue-script.py
aws glue create-job \
  --name mla-job \
  --role arn:aws:iam::107396990521:role/AWSGlueServiceRole \
  --command Name=glueetl,ScriptLocation=s3://aif-c01-jasonwalsh/scripts/glue-script.py \
  --output text

For more Glue examples, check the Glue AWS CLI Examples.

Amazon DynamoDB

List DynamoDB tables:

aws dynamodb list-tables | jq -r '.TableNames[] | select(. | test("mla|aif"))'
Create a table:

aws dynamodb create-table --table-name mla-test-01 --attribute-definitions AttributeName=Id,AttributeType=S --key-schema AttributeName=Id,KeyType=HASH --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5 | jq -r 'keys[]'

For more DynamoDB examples, see the DynamoDB AWS CLI Examples.

Amazon Forecast

List Forecast datasets:

aws forecast list-datasets | jq .Datasets

Amazon Lex

List Lex bots:

aws lexv2-models list-bots | jq .botSummaries

Amazon Personalize

List Personalize datasets:

aws personalize list-datasets | jq .datasets

Amazon Textract

Analyze a document (replace `YOUR_BUCKET_NAME` and `YOUR_DOCUMENT_NAME` with actual values):

aws textract analyze-document --document '{"S3Object":{"Bucket":"YOUR_BUCKET_NAME","Name":"YOUR_DOCUMENT_NAME"}}' --feature-types "TABLES" "FORMS"

Amazon Comprehend Medical

Detect entities in medical text:

aws comprehendmedical detect-entities --text "The patient was prescribed 500mg of acetaminophen for fever."

AWS Security Services for AI/ML

List IAM roles with “SageMaker” in the name:

aws iam list-roles | jq '.Roles[] | select(.RoleName | contains("SageMaker"))'

Describe EC2 instances with GPU (useful for ML workloads):

aws ec2 describe-instances --filters "Name=instance-type,Values=p*,g*" | jq '.Reservations[].Instances[]'

IAM (Identity and Access Management)

List IAM users

aws iam list-users

Create a new IAM user

aws iam create-user --user-name newuser

Attach a policy to a user

aws iam attach-user-policy --user-name newuser --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess

Amazon Macie

Describe the Macie session for the account

aws macie2 get-macie-session

Create a custom data identifier

aws macie2 create-custom-data-identifier --name "Custom-PII" --regex "(\d{3}-\d{2}-\d{4})" --description "Identifies Social Security Numbers"

Amazon Inspector

List Inspector assessment targets

aws inspector list-assessment-targets

Create an assessment target

aws inspector create-assessment-target --assessment-target-name "MyTarget" --resource-group-arn arn:aws:inspector:us-west-2:123456789012:resourcegroup/0-AB6DMKnv

AWS CloudTrail

List trails

aws cloudtrail list-trails

Create a trail

aws cloudtrail create-trail --name my-trail --s3-bucket-name my-bucket

AWS Artifact

List compliance reports

aws artifact list-reports

List customer agreements

aws artifact list-customer-agreements

AWS Audit Manager

List assessments

aws auditmanager list-assessments

Create an assessment (scope, roles, and the reports destination are structured parameters; IDs and ARNs below are placeholders)

aws auditmanager create-assessment \
    --name "MyAssessment" \
    --assessment-reports-destination destinationType=S3,destination=s3://my-bucket \
    --scope '{"awsAccounts":[{"id":"123456789012"}]}' \
    --roles '[{"roleType":"PROCESS_OWNER","roleArn":"arn:aws:iam::123456789012:role/AuditOwner"}]' \
    --framework-id <framework-id>

AWS Trusted Advisor

List Trusted Advisor checks (the support API requires a Business, Enterprise On-Ramp, or Enterprise support plan)

aws support describe-trusted-advisor-checks --language en

Get results of a specific check

aws support describe-trusted-advisor-check-result --check-id checkId

VPC (Virtual Private Cloud)

List VPCs

aws ec2 describe-vpcs

Create a VPC

aws ec2 create-vpc --cidr-block 10.0.0.0/16

Create a subnet

aws ec2 create-subnet --vpc-id vpc-1234567890abcdef0 --cidr-block 10.0.1.0/24

Amazon EKS

Prerequisites

Install and configure AWS CLI

aws --version
aws configure

Install kubectl

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
kubectl version --client

Install eksctl

curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin
eksctl version

Create and Manage EKS Cluster

Create EKS cluster

eksctl create cluster --name my-cluster --region us-west-2 --nodegroup-name standard-workers --node-type t3.medium --nodes 3 --nodes-min 1 --nodes-max 4

Get cluster information

eksctl get cluster --name my-cluster --region us-west-2

Update kubeconfig

aws eks update-kubeconfig --name my-cluster --region us-west-2

Manage Node Groups

List node groups

eksctl get nodegroup --cluster my-cluster --region us-west-2

Scale node group

eksctl scale nodegroup --cluster my-cluster --name standard-workers --nodes 5 --region us-west-2

Deploy and Manage Applications

Deploy a sample application

kubectl create deployment nginx --image=nginx
kubectl get deployments

Expose the deployment

kubectl expose deployment nginx --port=80 --type=LoadBalancer
kubectl get services

Monitor and Troubleshoot

Get cluster health

eksctl utils describe-stacks --cluster my-cluster --region us-west-2

View cluster logs

eksctl utils write-kubeconfig --cluster my-cluster --region us-west-2
kubectl logs deployment/nginx

Clean Up

Delete the sample application

kubectl delete deployment nginx
kubectl delete service nginx

Delete the EKS cluster

eksctl delete cluster --name my-cluster --region us-west-2

Additional Resources

For more detailed information and advanced configurations, refer to the Amazon EKS User Guide and the eksctl documentation.

AWS Step Functions

List state machines

aws stepfunctions list-state-machines | jq -r '.stateMachines[]|.name'

Create a state machine

aws stepfunctions create-state-machine \
    --name "MyStateMachine" \
    --definition '{"Comment":"A Hello World example of the Amazon States Language using a Pass state","StartAt":"HelloWorld","States":{"HelloWorld":{"Type":"Pass","Result":"Hello World!","End":true}}}' \
    --role-arn arn:aws:iam::123456789012:role/service-role/StepFunctions-MyStateMachine-role-0123456789

Start execution of a state machine

aws stepfunctions start-execution \
    --state-machine-arn arn:aws:states:us-west-2:123456789012:stateMachine:MyStateMachine \
    --input '{"key1": "value1", "key2": "value2"}'

Amazon Athena

List workgroups

aws athena list-work-groups | jq '.WorkGroups[]|.Name'

Create a workgroup

aws athena create-work-group \
    --name "MyWorkGroup" \
    --configuration '{"ResultConfiguration":{"OutputLocation":"s3://my-athena-results/"}}'

Run a query

aws athena start-query-execution \
    --query-string "SELECT * FROM my_database.my_table LIMIT 10" \
    --query-execution-context Database=my_database \
    --result-configuration OutputLocation=s3://my-athena-results/

Get query results

aws athena get-query-results --query-execution-id QueryExecutionId

Amazon QuickSight

List users

aws quicksight list-users --aws-account-id 123456789012 --namespace default

Create a dataset

aws quicksight create-data-set \
    --aws-account-id 123456789012 \
    --data-set-id MyDataSet \
    --name "My Data Set" \
    --physical-table-map file://physical-table-map.json \
    --logical-table-map file://logical-table-map.json \
    --import-mode SPICE

Create an analysis

aws quicksight create-analysis \
    --aws-account-id 123456789012 \
    --analysis-id MyAnalysis \
    --name "My Analysis" \
    --source-entity file://source-entity.json

Amazon Neptune

List Neptune clusters

aws neptune describe-db-clusters | jq .DBClusters

Create a Neptune cluster

aws neptune create-db-cluster \
    --db-cluster-identifier my-neptune-cluster \
    --engine neptune \
    --vpc-security-group-ids sg-1234567890abcdef0 \
    --db-subnet-group-name my-db-subnet-group

Create a Neptune instance

aws neptune create-db-instance \
    --db-instance-identifier my-neptune-instance \
    --db-instance-class db.r5.large \
    --engine neptune \
    --db-cluster-identifier my-neptune-cluster

Run a Gremlin query (using curl)

curl -X POST \
     -H 'Content-Type: application/json' \
     https://your-neptune-endpoint:8182/gremlin \
     -d '{"gremlin": "g.V().limit(1)"}'

AWS Data Exchange

List data sets

aws dataexchange list-data-sets | jq .DataSets

Create a data set

aws dataexchange create-data-set \
    --asset-type "S3_SNAPSHOT" \
    --description "My sample data set" \
    --name "My Data Set"

Create a revision

aws dataexchange create-revision \
    --data-set-id "data-set-id" \
    --comment "Initial revision"

Amazon Neptune (Additional Examples)

Load data into Neptune (bulk load via the cluster's loader endpoint; the IAM role and paths are placeholders)

curl -X POST \
     -H 'Content-Type: application/json' \
     https://your-neptune-endpoint:8182/loader \
     -d '{"source": "s3://bucket-name/object-key-name", "format": "csv", "iamRoleArn": "arn:aws:iam::123456789012:role/NeptuneLoadFromS3", "region": "us-west-2"}'

Run a SPARQL query (using curl)

curl -X POST \
     -H 'Content-Type: application/x-www-form-urlencoded' \
     https://your-neptune-endpoint:8182/sparql \
     -d 'query=SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10'

AWS DeepLens

AWS DeepLens reached end of life on January 31, 2024; device and project management was handled through the AWS Console rather than dedicated CLI commands. It may still surface in older study materials but is no longer available for hands-on practice.

Amazon CodeGuru

Create a CodeGuru Reviewer association

aws codeguru-reviewer associate-repository \
    --repository CodeCommit={Name=my-repo}

List CodeGuru Profiler profiling groups

aws codeguruprofiler list-profiling-groups

AWS IoT Greengrass

List Greengrass groups

aws greengrass list-groups

Create a Greengrass group

aws greengrass create-group --name "MyGreengrassGroup"

Create a Greengrass core definition

aws greengrass create-core-definition --name "MyCoreDefinition"

Amazon Forecast (Expanded)

Create a dataset group

aws forecast create-dataset-group \
    --dataset-group-name my-dataset-group \
    --domain CUSTOM \
    --dataset-arns arn:aws:forecast:us-west-2:123456789012:dataset/my-dataset

Create a predictor

aws forecast create-predictor \
    --predictor-name my-predictor \
    --algorithm-arn arn:aws:forecast:::algorithm/ARIMA \
    --forecast-horizon 10 \
    --input-data-config '{"DatasetGroupArn":"arn:aws:forecast:us-west-2:123456789012:dataset-group/my-dataset-group"}' \
    --featurization-config '{"ForecastFrequency": "D"}'

Create a forecast

aws forecast create-forecast \
    --forecast-name my-forecast \
    --predictor-arn arn:aws:forecast:us-west-2:123456789012:predictor/my-predictor

Amazon Personalize (Expanded)

Create a dataset group

aws personalize create-dataset-group --name my-dataset-group

Create a solution

aws personalize create-solution \
    --name my-solution \
    --dataset-group-arn arn:aws:personalize:us-west-2:123456789012:dataset-group/my-dataset-group \
    --recipe-arn arn:aws:personalize:::recipe/aws-user-personalization

Create a campaign

aws personalize create-campaign \
    --name my-campaign \
    --solution-version-arn arn:aws:personalize:us-west-2:123456789012:solution/my-solution/1 \
    --min-provisioned-tps 1

Get recommendations

aws personalize-runtime get-recommendations \
    --campaign-arn arn:aws:personalize:us-west-2:123456789012:campaign/my-campaign \
    --user-id user123

AWS Lake Formation

Get data lake settings

aws lakeformation get-data-lake-settings

Grant permissions

aws lakeformation grant-permissions \
    --principal DataLakePrincipalIdentifier=arn:aws:iam::123456789012:user/data-analyst \
    --resource '{"Table":{"DatabaseName":"my_database","Name":"my_table"}}' \
    --permissions SELECT

Register a new location

aws lakeformation register-resource \
    --resource-arn arn:aws:s3:::my-bucket \
    --use-service-linked-role

Amazon Managed Streaming for Apache Kafka (MSK)

List MSK clusters

aws kafka list-clusters

Create an MSK cluster

aws kafka create-cluster \
    --cluster-name MyMSKCluster \
    --kafka-version 2.6.2 \
    --number-of-broker-nodes 3 \
    --broker-node-group-info file://broker-node-group-info.json \
    --encryption-info file://encryption-info.json

Describe a cluster

aws kafka describe-cluster --cluster-arn ClusterArn

Responsible AI

A key focus of this project is on responsible AI practices. We cover:

  • Ethical considerations in AI/ML development
  • Bias detection and mitigation strategies
  • Fairness and inclusivity in AI systems
  • Robustness and safety measures
  • Compliance and governance in AI projects

Study Resources

In addition to code examples, this project includes:

  • Curated lists of AWS documentation and whitepapers
  • Links to relevant AWS training materials
  • Practice questions for each exam domain
  • Glossary of key AI/ML terms in the context of AWS

Workshops

Best practices for prompt engineering with Meta Llama 3 for Text-to-SQL use cases

https://aws.amazon.com/blogs/machine-learning/best-practices-for-prompt-engineering-with-meta-llama-3-for-text-to-sql-use-cases/

Using Amazon Bedrock Agents to interactively generate infrastructure as code

https://aws.amazon.com/blogs/machine-learning/using-agents-for-amazon-bedrock-to-interactively-generate-infrastructure-as-code/

Evaluating prompts at scale with Prompt Management and Prompt Flows for Amazon Bedrock

https://aws.amazon.com/blogs/machine-learning/evaluating-prompts-at-scale-with-prompt-management-and-prompt-flows-for-amazon-bedrock/

Build an ecommerce product recommendation chatbot with Amazon Bedrock Agents

https://aws.amazon.com/blogs/machine-learning/build-an-ecommerce-product-recommendation-chatbot-with-amazon-bedrock-agents/

Secure RAG applications using prompt engineering on Amazon Bedrock

https://aws.amazon.com/blogs/machine-learning/secure-rag-applications-using-prompt-engineering-on-amazon-bedrock/

License

This project is licensed under the MIT License - see the LICENSE file for details.

Disclaimer

This project is not affiliated with or endorsed by Amazon Web Services. All AWS service names and trademarks are property of Amazon.com, Inc. or its affiliates.
