diff --git a/docs/tutorial_llamaspeak.md b/docs/tutorial_llamaspeak.md
new file mode 100644
index 00000000..93ca885b
--- /dev/null
+++ b/docs/tutorial_llamaspeak.md
@@ -0,0 +1,15 @@
+# Tutorial - llamaspeak
+
+Talk live with Llama using Riva ASR/TTS, and chat about images with LLaVA!
+
+
+
+* [`llamaspeak:v1`](https://github.com/dusty-nv/jetson-containers/tree/master/packages/llm/llamaspeak) - uses the text-generation-webui model loaders for LLMs (llama.cpp, ExLlama, AutoGPTQ, Transformers)
+* [`llamaspeak:v2`](https://github.com/dusty-nv/jetson-containers/tree/master/packages/llm/local_llm) - uses AWQ/MLC quantization from the [`local_llm`](https://github.com/dusty-nv/jetson-containers/tree/master/packages/llm/local_llm) package, with a web-based voice chat agent
+
+llamaspeak v2 has multimodal support for chatting about images with a quantized LLaVA-1.5 model:
+
+
+> [Multimodal Voice Chat with LLaVA-1.5 13B on NVIDIA Jetson AGX Orin](https://www.youtube.com/watch?v=9ObzbbBTbcc) (container: [`local_llm`](https://github.com/dusty-nv/jetson-containers/tree/master/packages/llm/local_llm))
+
+See the [`Voice Chat`](https://github.com/dusty-nv/jetson-containers/tree/master/packages/llm/local_llm#voice-chat) section of the [`local_llm`](https://github.com/dusty-nv/jetson-containers/tree/master/packages/llm/local_llm) documentation to run llamaspeak v2.
\ No newline at end of file
diff --git a/mkdocs.yml b/mkdocs.yml
index 6bb1797c..45f83ada 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -69,8 +69,7 @@ nav:
- About NVIDIA Jetson: tutorial-intro.md
- Text (LLM):
- text-generation-webui: tutorial_text-generation.md
- # - LLM + ASR/TTS:
- # - LlamaSpeak: tutorial_llamaspeak.md
+ - llamaspeak: tutorial_llamaspeak.md
- Text + Vision (VLM):
- Mini-GPT4: tutorial_minigpt4.md
- LLaVA: tutorial_llava.md