runbook.md

# Running prompts

## help

```sh
bb -m prompts --help
```

## run without --host-dir

```sh
bb -m prompts
```

## Plain prompt generation

```sh
bb -m prompts /Users/slim/docker/labs-ai-tools-for-devs jimclark106 darwin prompts/docker
bb -m prompts /Users/slim/docker/labs-ai-tools-for-devs jimclark106 darwin prompts/docker --pretty-print-prompts
bb -m prompts --host-dir /Users/slim/docker/labs-ai-tools-for-devs \
              --platform darwin \
              --prompts-dir prompts/docker \
              --pretty-print-prompts
bb -m prompts --host-dir /Users/slim/docker/labs-ai-tools-for-devs \
              --platform darwin \
              --prompts-dir prompts/project_type/ \
              --pretty-print-prompts
```

# Running prompts/dockerfiles conversation loops

## test prompts/project_type

Make sure the prompts/project_type prompts work on their own.

```sh
bb -m prompts run /Users/slim/docker/labs-make-runbook jimclark106 darwin prompts/project_type --debug
bb -m prompts run /Users/slim/docker/labs-make-runbook jimclark106 darwin prompts/project_type --nostream
bb -m prompts run \
              --host-dir /Users/slim/docker/labs-make-runbook \
              --platform darwin \
              --prompts-dir prompts/project_type \
              --nostream \
              --model "llama3.1" \
              --url http://localhost:11434/v1/chat/completions
```

TODO: this should fail more gracefully, because the prompts-dir is not valid.

```sh
bb -m prompts run \
              --host-dir /Users/slim/docker/labs-make-runbook \
              --platform darwin \
              --prompts-dir prompts \
              --nostream \
              --model "llama3.1" \
              --url http://localhost:11434/v1/chat/completions
```
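Until the tool validates this itself, a wrapper can fail fast. This is a hedged sketch: the README.md check is an assumption based on the single-file prompt layout used later in this runbook (prompts/qrencode/README.md), not a documented contract.

```sh
# Sketch of a pre-flight check; assumes a valid prompts directory
# contains a README.md (true of the qrencode and curl examples below).
check_prompts_dir() {
  if [ ! -f "$1/README.md" ]; then
    echo "error: '$1' does not look like a prompts directory" >&2
    return 1
  fi
}

check_prompts_dir /tmp/no-such-prompts-dir || echo "refusing to run bb"
```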

## test prompts/dockerfiles (which uses prompts/project_type)

Now, verify that the prompts/dockerfiles prompts work with gpt-4.

```sh
bb -m prompts run /Users/slim/docker/labs-make-runbook jimclark106 darwin prompts/dockerfiles
```

Now, let's do the same thing using gpt-4 but without streaming.

```sh
bb -m prompts run /Users/slim/docker/labs-make-runbook jimclark106 darwin prompts/dockerfiles --nostream
```

Now, let's try with llama3.1.

```sh
# docker:command=llama
bb -m prompts run \
              --host-dir /Users/slim/docker/labs-make-runbook \
              --user jimclark106 \
              --platform darwin \
              --prompts-dir prompts/dockerfiles_llama3.1 \
              --url http://localhost:11434/v1/chat/completions \
              --model "llama3.1" \
              --nostream
```
Now, let's try with mistral-nemo.

```sh
bb -m prompts run \
              --host-dir /Users/slim/docker/labs-make-runbook \
              --user jimclark106 \
              --platform darwin \
              --prompts-dir prompts/dockerfiles_mistral-nemo \
              --url http://localhost:11434/v1/chat/completions \
              --model "mistral-nemo" \
              --nostream
```

Mistral produces something resembling function calls, but they are not OpenAI compatible: it lists a set of functions to call and does not get the arguments right.

```sh
bb -m prompts run /Users/slim/docker/labs-make-runbook jimclark106 darwin prompts/dockerfiles \
              --url http://localhost:11434/v1/chat/completions \
              --model "mistral:latest" \
              --pretty-print-prompts
```

llama3-groq-tool-use:latest writes function calls, but with a mix of XML and JSON markup, so it is not currently OpenAI compatible. Also, its finish_reason is "stop" instead of "tool_calls", so the conversation loop ends too early.

```sh
bb -m prompts run \
              --host-dir /Users/slim/docker/labs-make-runbook \
              --user jimclark106 \
              --platform darwin \
              --prompts-dir prompts/dockerfiles \
              --url http://localhost:11434/v1/chat/completions \
              --model "llama3-groq-tool-use:latest"
```
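The finish_reason problem can be checked directly on a captured response. The payload below is a hypothetical illustration of what a non-compliant model returns; a real payload would be captured with curl against http://localhost:11434/v1/chat/completions.

```sh
# Hypothetical response for illustration only: tool-like markup in the
# content, but finish_reason is "stop" rather than "tool_calls".
cat > /tmp/response.json <<'EOF'
{"choices":[{"finish_reason":"stop","message":{"content":"<tool>...</tool>"}}]}
EOF

# The conversation loop should only continue on finish_reason "tool_calls";
# a "stop" here ends it even though the model emitted tool-like markup.
if grep -q '"finish_reason":"tool_calls"' /tmp/response.json; then
  echo "continue conversation loop"
else
  echo "conversation loop ends"
fi
```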

## Test single-file prompts

```sh
rm ~/docker/labs-make-runbook/qrcode.png
bb -m prompts run \
              --host-dir /Users/slim/docker/labs-make-runbook \
              --user jimclark106 \
              --platform darwin \
              --prompts-file prompts/qrencode/README.md
bb -m prompts run \
              --host-dir /Users/slim/docker/labs-make-runbook \
              --user jimclark106 \
              --platform darwin \
              --prompts-file /Users/slim/docker/labs-ai-tools-for-devs/prompts/curl/README.md \
              --debug
open ~/docker/labs-make-runbook/qrcode.png
bb -m prompts run \
              --host-dir /Users/slim/docker/labs-make-runbook \
              --user jimclark106 \
              --platform darwin \
              --prompts-file prompts/qrencode/README.md \
              --url http://localhost:11434/v1/chat/completions \
              --model "llama3.1" \
              --nostream \
              --debug
```
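A quick way to assert that the qrencode run actually produced its artifact, rather than eyeballing the `open` result (the path matches the rm/open commands above; the helper name is ours):

```sh
# Report whether a prompt run produced its expected output file.
check_output() {
  if [ -f "$1" ]; then
    echo "generated: $1"
  else
    echo "missing: $1"
  fi
}

check_output ~/docker/labs-make-runbook/qrcode.png
```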

# Using the containerized runner

```sh
docker run \
  --rm \
  --pull=always \
  -it \
  -v /var/run/docker.sock:/var/run/docker.sock \
  --mount type=bind,source=$PWD,target=/app/local \
  --workdir /app \
  --mount type=volume,source=docker-prompts,target=/prompts \
  --mount type=bind,source=$HOME/.openai-api-key,target=/root/.openai-api-key \
  vonwig/prompts:local \
    run \
    --host-dir /Users/slim/docker/labs-make-runbook \
    --user jimclark106 \
    --platform "$(uname -o)" \
    --prompts-dir local/prompts/dockerfiles
```

# Clean up local images

```sh
#docker:command=clean-local-images
bb -m clean-local-images
docker run --rm \
           -it \
           -v /var/run/docker.sock:/var/run/docker.sock \
           --mount type=bind,source=$PWD,target=/app/local \
           --workdir /app \
           --mount type=volume,source=docker-prompts,target=/prompts \
           --mount type=bind,source=$HOME/.openai-api-key,target=/root/.openai-api-key \
           vonwig/prompts:local \
               run \
               /Users/slim/repo/labs-eslint-violations \
               jimclark106 \
               "$(uname -o)" \
               local/prompts/eslint \
               --pat "$(cat ~/.secrets/dockerhub-pat-ai-tools-for-devs.txt)" \
               --thread-id "something"
```

# Test bad commands

1. remove the OpenAI key
2. break the url with the --url flag
3. choose a bad prompts dir

```sh
docker run --rm \
           -it \
           -v /var/run/docker.sock:/var/run/docker.sock \
           --mount type=volume,source=docker-prompts,target=/prompts \
           --mount type=bind,source=$PWD,target=/app/local \
           --mount type=bind,source=$HOME/.openai-api-key,target=/root/.openai-api-key \
           --workdir /app \
           vonwig/prompts:local \
               run \
               --host-dir $PWD \
               --user $USER \
               --platform "$(uname -o)" \
               --prompts-dir local/prompts/poem
```
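The three scenarios can be driven from one loop. The sketch below stubs out the docker invocation (a stub stands in for the real command so only the expected control flow is visible); each bad-input case should exit non-zero with a clear error message.

```sh
# Stub standing in for the docker run ... vonwig/prompts:local command;
# substitute the real invocation when running the scenarios by hand.
run_prompts() {
  false
}

for scenario in "missing OpenAI key" "broken --url" "bad --prompts-dir"; do
  if run_prompts; then
    echo "UNEXPECTED: '$scenario' succeeded"
  else
    echo "ok: '$scenario' failed as expected"
  fi
done
```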