This project demonstrates how to chat with your architecture using Amazon Bedrock's Converse API, tool use, and a knowledge base. Implemented in Python, the demo allows users to analyze architecture diagrams, evaluate effectiveness, get recommendations, and make informed decisions about their system architecture.
The application interacts with a foundation model on Amazon Bedrock to provide information based on an architecture diagram and user input. It utilizes three custom tools to gather information:
- Audit Info Tool: Provides audit information about a system based on the system name inferred from the architecture diagram file name.
- Joy Count Tool: Provides joy count data about a system.
- Best Practices Tool: Provides a company's best practices information, including best practices around how much joy the application is generating.
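For context, each of these tools is declared to the Converse API through a `toolSpec` entry in the request's `toolConfig`. The sketch below shows roughly what such a declaration could look like for the Audit Info Tool; the tool name, description, and parameter names are illustrative, not the demo's actual identifiers.

```python
# Illustrative toolConfig for the Bedrock Converse API. The demo's real tool
# names, descriptions, and input schemas may differ from this sketch.
tool_config = {
    "tools": [
        {
            "toolSpec": {
                "name": "get_audit_info",  # hypothetical tool name
                "description": "Returns audit information for a system.",
                "inputSchema": {
                    "json": {
                        "type": "object",
                        "properties": {
                            "system_name": {
                                "type": "string",
                                "description": "System name inferred from the diagram file name.",
                            }
                        },
                        "required": ["system_name"],
                    }
                },
            }
        }
    ]
}
```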
This demo is based on the Amazon Bedrock Tool Use Demo and on parts of the Amazon Bedrock: Enhance HR Support with Function Calling & Knowledge Bases blog post.
Running this app may result in charges to your AWS account.
- `architecture_chat_demo.py`: Main entry point for the demo application.
- `audit_info_tool.py`: Implementation of the Audit Info Tool.
- `best_practices_tool.py`: Implementation of the Best Practices Tool.
- `joy_count_tool.py`: Implementation of the Joy Count Tool.
- `demo/`: Directory containing sample data files.
  - `audit-info.json`: Sample audit information for the Fluffy Puppy Joy Generator system.
  - `best-practices-data.md`: Sample best practices data for the organization.
  - `joy-count.json`: Sample joy count data for the Fluffy Puppy Joy Generator system.
  - `fluffy-puppy-joy-generator.png`: Sample architecture diagram image for the Fluffy Puppy Joy Generator system.
  - `fluffy-puppy-joy-generator.drawio`: Sample architecture diagram in Draw.io format for the Fluffy Puppy Joy Generator system.
- `util/`: Directory containing utility functions.
  - `demo_print_utils.py`: Utility functions for printing demo-related messages.
- `README.md`: This file, containing project documentation.
To run this demo, you'll need a few bits set up first:
- An AWS account. You can create your account here.
- Access to an AI model on Amazon Bedrock (we'll use Claude Sonnet); you must request model access before you can use it. Learn about model access here.
- Python 3.10.16 or later installed and configured on your system.
- A Python virtual environment set up with the packages from requirements.txt installed.
Set up the required environment variables by creating a `.env` file in the project root directory with the following content:
AWS_REGION=<your-aws-region>
KNOWLEDGE_BASE_ID=<your-knowledge-base-id>
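Assuming the demo follows the common python-dotenv pattern, these values end up in the process environment roughly like this (a sketch, not the demo's exact code):

```python
import os

from dotenv import load_dotenv  # provided by the python-dotenv package

load_dotenv()  # reads the .env file in the project root

aws_region = os.environ["AWS_REGION"]
knowledge_base_id = os.environ["KNOWLEDGE_BASE_ID"]
```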
- To run the app, run the following command in your virtual environment: `python architecture_chat_demo.py`
- When prompted for a diagram to chat with, enter `fluffy-puppy-joy-generator.png` (or check out the next section to use your own).
- Then enter one of the example queries to interact with the diagram, or ask your own questions about the architecture.
- To exit the demo, type `x` and press Enter.
Want to chat with your own diagram? Drop an image file (jpg, jpeg, or png) into the `demo/` folder and rerun the app. When prompted, enter the full file name (without the path) of that diagram to chat with.
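For reference, a diagram image is typically passed to the model as an image content block alongside the text prompt in a Converse API request. The snippet below is a minimal sketch of that pattern; the region, model ID, and prompt are placeholders, not the demo's exact code.

```python
import boto3

# Region and model ID are examples; substitute your own configuration.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

with open("demo/fluffy-puppy-joy-generator.png", "rb") as f:
    image_bytes = f.read()

response = bedrock.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    messages=[
        {
            "role": "user",
            "content": [
                {"image": {"format": "png", "source": {"bytes": image_bytes}}},
                {"text": "List the AWS services used in this architecture diagram."},
            ],
        }
    ],
)
print(response["output"]["message"]["content"][0]["text"])
```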
Below are some sample queries you could use to chat with an architecture diagram in this app:
- List the AWS Services used in the architecture diagram by official AWS name and excluding any sub-titles.
- What are the recommended strategies for unit testing this architecture?
- How well does this architecture adhere to the AWS Well Architected Framework?
- What improvements should be made to the resiliency of this architecture?
- Convert the data flow from this architecture into a Mermaid formatted sequence diagram.
- What are the quotas or limits in this architecture?
Depending on the type of diagram you're chatting with, you could also enter the following query to generate the infrastructure code:
Can you generate the Terraform code to provision this architecture?
- User Input: The user provides input through the command-line interface.
- Architecture Chat Demo: The main `ArchitectureChatDemo` class processes the user input and manages the conversation flow.
- Amazon Bedrock: The user's input is sent to Amazon Bedrock's Converse API along with the system prompt and tool configurations.
- Tool Invocation: Based on the model's response, the appropriate tool (Audit Info, Joy Count, or Best Practices) is invoked.
- Tool Processing: The invoked tool fetches data from its respective source (JSON files or knowledge base).
- Response Generation: The tool's output is sent back to Amazon Bedrock for further processing and response generation.
- User Output: The final response is displayed to the user through the command-line interface.
See sequence diagram.
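The tool-invocation steps above follow the standard Converse API tool-use loop: call `converse`, check for a `tool_use` stop reason, run the requested tool, and return its output as a `toolResult` block. The sketch below illustrates that pattern under the assumption of a simple name-to-handler mapping; it is not the `ArchitectureChatDemo` implementation itself.

```python
def chat_turn(bedrock, model_id, messages, tool_config, tool_handlers):
    """Run one conversation turn, resolving any tool requests from the model.

    tool_handlers is an assumed dict mapping tool names to Python callables.
    """
    response = bedrock.converse(
        modelId=model_id, messages=messages, toolConfig=tool_config
    )

    while response["stopReason"] == "tool_use":
        assistant_message = response["output"]["message"]
        messages.append(assistant_message)

        # Invoke each tool the model requested and collect the results.
        tool_results = []
        for block in assistant_message["content"]:
            if "toolUse" in block:
                tool_use = block["toolUse"]
                result = tool_handlers[tool_use["name"]](tool_use["input"])
                tool_results.append(
                    {
                        "toolResult": {
                            "toolUseId": tool_use["toolUseId"],
                            "content": [{"json": result}],
                        }
                    }
                )

        # Send the tool results back to the model as a user message.
        messages.append({"role": "user", "content": tool_results})
        response = bedrock.converse(
            modelId=model_id, messages=messages, toolConfig=tool_config
        )

    return response["output"]["message"]["content"][0]["text"]
```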
- If you encounter authentication errors, ensure your AWS credentials are correctly set up in your environment or AWS credentials file.
- If the demo fails to start, check that all required environment variables are set in the `.env` file.
- For issues with tool invocations, verify that the JSON files in the `demo/` directory are present and correctly formatted.
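A quick way to check that a sample file parses as valid JSON is to load it with Python's standard json module, for example:

```python
import json

# Raises json.JSONDecodeError, pointing at the offending line, if the file is malformed.
with open("demo/audit-info.json") as f:
    json.load(f)
```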
To enable debug mode, set the logging level to DEBUG in the `architecture_chat_demo.py` file:
logging.basicConfig(level=logging.DEBUG, format="%(message)s")
This will provide more detailed output about the conversation flow and tool invocations.
See CONTRIBUTING for more information.
This library is licensed under the MIT-0 License. See the LICENSE file.