
Fixes a bug in fix_memory_keys, Adds OpenAI ConversationalAgent #753

Merged
merged 16 commits into main on Aug 10, 2023

Conversation

ogabrielluiz
Contributor

No description provided.

…instance of AgentExecutor to prevent unnecessary fix

🔀 chore(base.py): merge changes from langchain.agents.agent module to base module
…rt custom memory implementation in Langchain interface

📝 WHY: Adding BaseMemory to LANGCHAIN_BASE_TYPES makes the memory component of the LangChain interface customizable, so users can plug in memory implementations tailored to their needs.
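As a rough illustration of what this registration amounts to (the actual contents of langflow's `LANGCHAIN_BASE_TYPES` dictionary are not shown in this PR, so the other entries below are assumptions), the change boils down to adding `BaseMemory` alongside the existing LangChain base classes:

```python
# constants.py (sketch): register BaseMemory so custom memory classes are
# treated as valid LangChain base types by the interface layer.
# The "Chain" entry is illustrative only; it is not taken from the PR.
from langchain.chains.base import Chain
from langchain.schema import BaseMemory

LANGCHAIN_BASE_TYPES = {
    "Chain": Chain,
    "BaseMemory": BaseMemory,  # new entry: enables custom memory components
}
```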
…handle conversational interactions using OpenAI's function calling API

This commit adds a new file `ConversationalAgent.py` to the `src/backend/langflow/components/agents` directory. The `ConversationalAgent` class is a custom component that represents a conversational agent capable of using OpenAI's function calling API.

The `ConversationalAgent` class has the following features:
- It inherits from the `CustomComponent` class.
- It has a `display_name` attribute set to "OpenAI Conversational Agent".
- It has a `description` attribute set to "Conversational Agent that can use OpenAI's function calling API".
- It implements the `build_config` method to define the configuration options for the agent.
- It implements the `build` method to create an instance of the `AgentExecutor` class, which represents the agent's execution environment.
- The `build` method takes several parameters, including `model_name`, `tools`, `memory`, `system_message`, and `max_token_limit`.
- It uses the `ChatOpenAI` class from the `langchain.chat_models` module to create an instance of the OpenAI language model.
- It uses the `ConversationTokenBufferMemory` class from the `langchain.memory.token_buffer` module to handle conversation history and token buffering.
- It uses the `OpenAIFunctionsAgent` class from the `langchain.agents.openai_functions_agent.base` module to create an instance of the OpenAI functions agent.
- It returns an instance of the `AgentExecutor` class with the agent, tools, memory, verbose, and return_intermediate_steps parameters set (a sketch of this wiring follows below).
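To make that wiring concrete, here is a minimal sketch of how such a `build` method could assemble the pieces with the langchain 0.0.x API available at the time of this PR. It is an illustration, not the code merged here: the parameter defaults, the `"chat_history"` memory key, and the `output_key` handling are assumptions.

```python
# ConversationalAgent.py (sketch, not the merged implementation)
from typing import List, Optional

from langchain.agents import AgentExecutor
from langchain.agents.openai_functions_agent.base import OpenAIFunctionsAgent
from langchain.chat_models import ChatOpenAI
from langchain.memory.token_buffer import ConversationTokenBufferMemory
from langchain.prompts import MessagesPlaceholder
from langchain.schema import SystemMessage
from langchain.tools import BaseTool


def build(
    model_name: str,
    tools: List[BaseTool],
    memory: Optional[ConversationTokenBufferMemory] = None,
    system_message: str = "You are a helpful assistant.",
    max_token_limit: int = 2000,
) -> AgentExecutor:
    # LLM backing both the agent and the token-buffer memory.
    llm = ChatOpenAI(model_name=model_name, temperature=0)

    # Keep the recent conversation history within a token budget.
    if memory is None:
        memory = ConversationTokenBufferMemory(
            llm=llm,
            memory_key="chat_history",
            return_messages=True,
            max_token_limit=max_token_limit,
            output_key="output",  # ignore intermediate_steps when saving turns
        )

    # OpenAI functions agent, with the chat history injected into the prompt.
    agent = OpenAIFunctionsAgent.from_llm_and_tools(
        llm=llm,
        tools=tools,
        system_message=SystemMessage(content=system_message),
        extra_prompt_messages=[MessagesPlaceholder(variable_name="chat_history")],
    )

    # Execution environment: runs the agent and records intermediate steps.
    return AgentExecutor.from_agent_and_tools(
        agent=agent,
        tools=tools,
        memory=memory,
        verbose=True,
        return_intermediate_steps=True,
    )
```

In langflow terms, a `CustomComponent` subclass would expose this logic as its `build` method and surface `model_name`, `tools`, `memory`, `system_message`, and `max_token_limit` through `build_config`, as described above.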

📝 feat(__init__.py): add empty __init__.py file to the agents directory

This commit adds an empty `__init__.py` file to the `src/backend/langflow/components/agents` directory. The `__init__.py` file is necessary to make the `agents` directory a Python package.
…ionalAgent class for OpenAI conversational agent
…_name to model to improve clarity and consistency
…t_memory module to add support for chat memory in custom interfaces
…y, system_message, prompt, agent, and tools variables

✨ feat(OpenAIConversationalAgent.py): add support for return_intermediate_steps parameter in AgentExecutor constructor to enable returning intermediate steps during conversation
… 'memory', 'system_message', and 'max_token_limit' parameters to improve readability and user experience
…t module to use it in LANGCHAIN_BASE_TYPES dictionary

🔀 chore(constants.py): remove unnecessary import statements
…and maintainability

🐛 fix(base.py): change the error message when _built_object is None to provide more specific information and handle the case when _built_object is an instance of UnbuiltObject
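For illustration only, the guard described in this fix might look like the following; apart from `_built_object` and `UnbuiltObject`, which the commit message names, the class, function, and message wording here are hypothetical:

```python
# base.py (sketch): raise a more specific error when the build result is
# missing or is still an UnbuiltObject placeholder. Only _built_object and
# UnbuiltObject come from the PR; everything else is assumed for the example.
class UnbuiltObject:
    """Placeholder for a vertex whose build step has not produced a value."""


def get_built_object(vertex) -> object:
    built = getattr(vertex, "_built_object", None)
    if built is None or isinstance(built, UnbuiltObject):
        raise ValueError(
            f"{vertex.__class__.__name__} has not been built correctly: "
            "its _built_object is missing. Check the component's inputs and rebuild."
        )
    return built
```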
@ogabrielluiz added the Release label ("Label to be set only on release PR") on Aug 10, 2023
@ogabrielluiz merged commit 9bc6166 into main on Aug 10, 2023
1 of 2 checks passed