Conversation Buffer Memory¶
The "base memory class" seen in the previous example is now put to use in a higher-level abstraction provided by LangChain:
In [1]:
from langchain.memory import CassandraChatMessageHistory
from langchain.memory import ConversationBufferMemory
In [2]:
from cqlsession import getCQLSession, getCQLKeyspace

# Open the database connection ('astra_db' for Astra DB, 'local' for a Cassandra cluster)
cqlMode = 'astra_db'  # 'astra_db'/'local'
session = getCQLSession(mode=cqlMode)
keyspace = getCQLKeyspace(mode=cqlMode)
In [3]:
message_history = CassandraChatMessageHistory(
    session_id='conversation-0123',
    session=session,
    keyspace=keyspace,
    ttl_seconds=3600,  # stored messages expire after one hour
)
message_history.clear()
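Note that the history is keyed by its session_id: creating another CassandraChatMessageHistory with the same identifier later on (even from another process) would re-attach to the same stored messages, until the TTL expires them. A minimal sketch, reusing the session and keyspace from above:

# Hypothetical later re-attachment to the same persisted conversation:
resumed_history = CassandraChatMessageHistory(
    session_id='conversation-0123',  # same id => same stored messages
    session=session,
    keyspace=keyspace,
    ttl_seconds=3600,
)
resumed_history.messages  # the stored exchanges, if any survive the TTL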
Use in a ConversationChain¶
Create a Memory¶
The Cassandra message history is specified:
In [4]:
cassBuffMemory = ConversationBufferMemory(
    chat_memory=message_history,
)
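To check what this memory would inject into a prompt at any given moment, you can query the generic memory interface directly. A quick sketch (ConversationBufferMemory's default memory key is 'history'):

# Peek at the memory's current rendering of the conversation:
cassBuffMemory.load_memory_variables({})
# => {'history': ''} at this point, since the history was just cleared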
Language model¶
Below is the logic to instantiate the LLM of choice. It is kept directly in the notebook for clarity.
In [5]:
from llm_choice import suggestLLMProvider

llmProvider = suggestLLMProvider()
# (Alternatively set llmProvider to 'GCP_VertexAI', 'OpenAI' ... manually if you have credentials)

if llmProvider == 'GCP_VertexAI':
    from langchain.llms import VertexAI
    llm = VertexAI()
    print('LLM from VertexAI')
elif llmProvider == 'OpenAI':
    from langchain.llms import OpenAI
    llm = OpenAI()
    print('LLM from OpenAI')
else:
    raise ValueError('Unknown LLM provider.')
LLM from OpenAI
Create a chain¶
As the conversation proceeds, a growing history of past exchanges automatically finds its way into the prompt the LLM receives:
In [6]:
from langchain.chains import ConversationChain

conversation = ConversationChain(
    llm=llm,
    verbose=True,
    memory=cassBuffMemory,
)
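If you are curious about the default prompt the chain wraps around this memory, it can be inspected before any turn is run; a small sketch using the chain's prompt attribute:

# Print the chain's built-in prompt, with its 'history' and 'input' placeholders:
print(conversation.prompt.template)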
In [7]:
conversation.predict(input="Hello, how can I roast an apple?")
conversation.predict(input="Hello, how can I roast an apple?")
> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
Human: Hello, how can I roast an apple?
AI:

> Finished chain.
Out[7]:
' Hi there! Roasting an apple is a great way to bring out a delicious flavor. To roast an apple, you will need an oven-safe dish, a few tablespoons of butter, and some spices of your choice. Preheat the oven to 375 degrees Fahrenheit and place the apple in the dish. Cut a few slits in the top of the apple, and then add the butter and spices. Bake the apple in the oven for about 20 minutes or until the skin begins to brown. Enjoy!'
In [8]:
conversation.predict(input="Can I do it on a bonfire?")
conversation.predict(input="Can I do it on a bonfire?")
> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
Human: Hello, how can I roast an apple?
AI: Hi there! Roasting an apple is a great way to bring out a delicious flavor. To roast an apple, you will need an oven-safe dish, a few tablespoons of butter, and some spices of your choice. Preheat the oven to 375 degrees Fahrenheit and place the apple in the dish. Cut a few slits in the top of the apple, and then add the butter and spices. Bake the apple in the oven for about 20 minutes or until the skin begins to brown. Enjoy!
Human: Can I do it on a bonfire?
AI:

> Finished chain.
Out[8]:
' Unfortunately, roasting an apple on a bonfire is not recommended. While it is possible to do, it is much more difficult to maintain an even temperature and you may end up burning the apple or not roasting it enough. It is much easier to roast an apple in an oven.'
In [9]:
conversation.predict(input="What about a microwave, would the apple taste good?")
conversation.predict(input="What about a microwave, would the apple taste good?")
> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
Human: Hello, how can I roast an apple?
AI: Hi there! Roasting an apple is a great way to bring out a delicious flavor. To roast an apple, you will need an oven-safe dish, a few tablespoons of butter, and some spices of your choice. Preheat the oven to 375 degrees Fahrenheit and place the apple in the dish. Cut a few slits in the top of the apple, and then add the butter and spices. Bake the apple in the oven for about 20 minutes or until the skin begins to brown. Enjoy!
Human: Can I do it on a bonfire?
AI: Unfortunately, roasting an apple on a bonfire is not recommended. While it is possible to do, it is much more difficult to maintain an even temperature and you may end up burning the apple or not roasting it enough. It is much easier to roast an apple in an oven.
Human: What about a microwave, would the apple taste good?
AI:

> Finished chain.
Out[9]:
' Unfortunately, roasting an apple in a microwave is not recommended. The microwave does not get hot enough to properly roast an apple, and the result may be a soggy texture. The oven is much better for roasting an apple, as it is able to reach higher temperatures and create a delicious, roasted flavor.'
In [10]:
message_history.messages
Out[10]:
[HumanMessage(content='Hello, how can I roast an apple?', additional_kwargs={}, example=False),
 AIMessage(content=' Hi there! Roasting an apple is a great way to bring out a delicious flavor. To roast an apple, you will need an oven-safe dish, a few tablespoons of butter, and some spices of your choice. Preheat the oven to 375 degrees Fahrenheit and place the apple in the dish. Cut a few slits in the top of the apple, and then add the butter and spices. Bake the apple in the oven for about 20 minutes or until the skin begins to brown. Enjoy!', additional_kwargs={}, example=False),
 HumanMessage(content='Can I do it on a bonfire?', additional_kwargs={}, example=False),
 AIMessage(content=' Unfortunately, roasting an apple on a bonfire is not recommended. While it is possible to do, it is much more difficult to maintain an even temperature and you may end up burning the apple or not roasting it enough. It is much easier to roast an apple in an oven.', additional_kwargs={}, example=False),
 HumanMessage(content='What about a microwave, would the apple taste good?', additional_kwargs={}, example=False),
 AIMessage(content=' Unfortunately, roasting an apple in a microwave is not recommended. The microwave does not get hot enough to properly roast an apple, and the result may be a soggy texture. The oven is much better for roasting an apple, as it is able to reach higher temperatures and create a delicious, roasted flavor.', additional_kwargs={}, example=False)]
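The memory can also be written to directly, bypassing the chain, through the standard save_context interface. A sketch with made-up example strings:

# Hypothetical manual write to the memory (example strings only):
cassBuffMemory.save_context(
    {"input": "One more thing: which apples roast best?"},
    {"output": "Firm varieties tend to hold their shape when roasted."},
)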
Manually tinkering with the prompt¶
You can craft your own prompt (through a PromptTemplate object) and still take advantage of LangChain's chat-memory handling:
In [11]:
from langchain import LLMChain, PromptTemplate
In [12]:
template = """You are a quirky chatbot having a
conversation with a human, riddled with puns and silly jokes.
{chat_history}
Human: {human_input}
AI:"""
prompt = PromptTemplate(
input_variables=["chat_history", "human_input"],
template=template
)
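Before wiring the template into a chain, it can be rendered by hand to preview exactly what the LLM would receive; the values below are illustrative only:

# Preview the formatted prompt with sample (made-up) values:
print(prompt.format(
    chat_history="Human: Hi!\nAI: Hello there!",
    human_input="Tell me a joke.",
))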
In [13]:
f_message_history = CassandraChatMessageHistory(
    session_id='conversation-funny-a001',
    session=session,
    keyspace=keyspace,
)
f_message_history.clear()
In [14]:
f_memory = ConversationBufferMemory(
    memory_key="chat_history",  # matches the {chat_history} variable in the prompt
    chat_memory=f_message_history,
)
In [15]:
llm_chain = LLMChain(
    llm=llm,
    prompt=prompt,
    verbose=True,
    memory=f_memory,
)
In [16]:
llm_chain.predict(human_input="Tell me about springs")
llm_chain.predict(human_input="Tell me about springs")
> Entering new LLMChain chain...
Prompt after formatting:
You are a quirky chatbot having a conversation with a human, riddled with puns and silly jokes.

Human: Tell me about springs
AI:

> Finished chain.
Out[16]:
" Springs are full of bouncy energy! They make me feel so alive! They also have a great way of keeping things together. You could say they're the glue that holds the world together!"
In [17]:
llm_chain.predict(human_input='Er ... I mean the other type actually.')
> Entering new LLMChain chain...
Prompt after formatting:
You are a quirky chatbot having a conversation with a human, riddled with puns and silly jokes.

Human: Tell me about springs
AI: Springs are full of bouncy energy! They make me feel so alive! They also have a great way of keeping things together. You could say they're the glue that holds the world together!
Human: Er ... I mean the other type actually.
AI:

> Finished chain.
Out[17]:
' Oh, you mean the kind of springs that are like coiled metal? Well, those are great for storing energy and using it to power things like toys and clocks. They can also help to absorb shocks and vibrations.'
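As with the first conversation, these exchanges are now persisted in Cassandra under their own session id and can be read back through the history object:

# The 'funny' conversation is stored under its own session id:
f_message_history.messages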