PDL (Prompt Declaration Language)


Overview

PDL is a declarative language designed for developers to create reliable, composable LLM prompts and integrate them into software systems. It provides a structured way to specify prompt templates, enforce validation, and compose LLM calls with traditional rule-based systems.

Basic LLM Call

Minimal installation:

pip install prompt-declaration-language

See the PDL documentation for instructions on how to install an Ollama model locally.

You can create a PDL file (YAML format):

description: Simple LLM interaction
text:
- "write a hello world example\n"
- model: ollama/granite-code:8b
  parameters:
    stop_sequences: '!'
    temperature: 0

and run it:

pdl <path/to/example.pdl>

Key Features

  • LLM Integration: Compatible with any LLM, including IBM watsonx
  • Prompt Engineering:
    • Template system for single/multi-shot prompting
    • Composition of multiple LLM calls
    • Integration with tools (code execution & APIs)
  • Development Tools:
    • Type checking for model I/O
    • Python SDK
    • Chat API support
    • Live document visualization for debugging
  • Control Flow: Variables, conditionals, loops, and functions (see the sketch after this list)
  • I/O Operations: File/stdin reading, JSON parsing
  • API Integration: Native REST API support (Python)
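
As an example of the control-flow blocks, here is a minimal sketch of a loop containing a conditional. It is untested and assumes the if/then/else and for/repeat block syntax from the PDL documentation:

description: Control flow sketch
defs:
  numbers:
    data: [1, 2, 3]
text:
- for:
    n: ${ numbers }
  repeat:
    if: ${ n % 2 == 0 }
    then: "${ n } is even\n"
    else: "${ n } is odd\n"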

Documentation

Quick Reference

PDL Quick Reference

Quick Start Guide

Installation

Requires Python 3.11+ (Windows users should use WSL)

# Basic installation
pip install prompt-declaration-language

# Development installation with examples
pip install 'prompt-declaration-language[examples]'

Environment Setup

You can run PDL with LLM models locally using Ollama, or with a cloud service such as watsonx or Replicate.

If you use watsonx:

export WATSONX_URL="https://{region}.ml.cloud.ibm.com"
export WATSONX_APIKEY="your-api-key"
export WATSONX_PROJECT_ID="your-project-id"

If you use Replicate:

export REPLICATE_API_TOKEN="your-token"

IDE Configuration

VSCode setup for syntax highlighting and validation:

// .vscode/settings.json
{
    "yaml.schemas": {
        "https://ibm.github.io/prompt-declaration-language/dist/pdl-schema.json": "*.pdl"
    },
    "files.associations": {
        "*.pdl": "yaml",
    }
}

Code Examples

Variable Definition & Template Usage

In this example, we read external input from data.yaml and use watsonx as the LLM provider.

description: Template with variables
defs:
  USER_INPUT:
    read: ../examples/code/data.yaml
    parser: yaml
text:
- model: watsonx/ibm/granite-34b-code-instruct
  input: |
    Process this input: ${USER_INPUT}
    Format the output as JSON.
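
For illustration only, the referenced data file might contain something like the following (hypothetical contents, not the actual file in the repository; any YAML the parser can read works):

# hypothetical contents of ../examples/code/data.yaml
source: "user request"
entries:
- "first item"
- "second item"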

Python Code Integration

description: Code execution example
text:
- "\nFind a random number between 1 and 20\n"
- def: N
  lang: python
  code: |
    # PDL uses the value assigned to `result` as the block's result
    import random
    result = random.randint(1, 20)
- "\nthe result is (${ N })\n"

Chat

A simple chatbot that loops until the user types /bye. Here `contribute: [context]` adds the user's input to the chat context without repeating it in the output:

description: chatbot
text:
- read:
  def: user_input
  message: "hi? [/bye to exit]\n"
  contribute: [context]
- repeat:
    text:
    - model: ollama/granite-code:8b
    - read:
      def: user_input
      message: "> "
      contribute: [context]
  until: ${ user_input == '/bye' }

Debugging Tools

Log Inspection

pdl --log <my-logfile> <my-example.pdl>

Trace Generation and Live Document Visualization

pdl --trace <file.json> <my-example.pdl> 

Upload trace files to the Live Document Viewer for visual debugging.

Contributing

See the contribution guidelines for details on:

  • Code style
  • Testing requirements
  • PR process
  • Issue reporting

References