LangChain and Creating GPT ‘Personal Assistants’

Saturday, August 26, 2023

Over the next several weeks I will be diving into my work creating a personal assistant, much like a private ChatGPT. The main goal is to explore the different capabilities of LangChain and how I can use it to do exploratory data analysis on my library of PDF files, emails, and other offline documents.

As a quick proof of concept, I used Streamlit to create a very crude web interface that interacts with the OpenAI and Wikipedia APIs to have the Large Language Model (LLM) respond to my prompts.

As you can see in the video below, even though it uses the OpenAI model (which has a knowledge cutoff of 2021), the agent is able to accurately answer questions about a recent news event: India's successful (and impressive) moon landing. Had I only used ChatGPT, it would not know about this recent event and would reply that its knowledge ends in 2021.

I’m looking forward to sharing the deep dives as I create this application and bring it online. All the code for this will be available at my GitHub page at https://github.com/sullysbrain, where you can view and watch the progress.

Coding the Bare Bones

To begin, I imported both Streamlit and LangChain in Python. Streamlit will allow me to create the web interface in just a few lines of code. For production, we’ll have to build out the foundation a bit, but for proof of concept, this gives me a very rough working prototype.

import streamlit as st

from langchain.utilities import WikipediaAPIWrapper
from langchain.agents import load_tools, initialize_agent, AgentType
from langchain.llms import OpenAI

import os
import sys

# Load the OpenAI API key from a private module kept out of version control
sys.path.append('_private/')
from _private import api
os.environ['OPENAI_API_KEY'] = api.API_KEY

I also imported my API key in order to pipe LangChain into OpenAI’s API. The key is saved in a separate file for security. The next step is to set up the LLM with OpenAI and initialize the LangChain tools. I’m setting verbose=True in order to see exactly what the agent is thinking in the terminal as it attempts to answer my questions. This will help me troubleshoot later, in case I need to add some subtle prompting in code to guide OpenAI’s results.

# Init the LLM and Tools
llm = OpenAI(temperature=0)
tools = load_tools(['wikipedia'], llm=llm)
agent = initialize_agent(tools, llm, 
                         agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
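As an aside on the key handling above: instead of importing the key from a private module, another common pattern is to read it from an environment variable set in the shell (or via a tool like python-dotenv). This is a minimal sketch of that alternative; the helper name `get_openai_key` is my own, not part of any library:

```python
import os

# Hypothetical helper: read the OpenAI key from the environment and fail
# loudly at startup if it is missing, rather than at the first API call.
def get_openai_key() -> str:
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError("Set OPENAI_API_KEY before starting the app")
    return key
```

Either way, the key stays out of the repository; the environment-variable route just avoids having a `_private/` directory to manage.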

The next step is to use Streamlit to create the web interface. For now it runs locally on my own computer, so I can see it and interact with it via a web browser.

# Collect the user prompt (renamed from `input` to avoid
# shadowing Python's built-in input())
st.title('SullyGPT')
user_input = st.text_input('Enter a prompt:')

# Run the agent only once the user has entered something
if user_input:
    response = agent.run(user_input)
    st.write(response)
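Streamlit apps are started from its CLI rather than with plain `python`. Assuming the code above is saved as `app.py` (a filename I'm choosing here for illustration), it can be launched like this:

streamlit run app.py

By default, Streamlit serves the app at http://localhost:8501 and opens it in the browser.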

Finally, we run the code and the result is the video below.




Hi. I'm Scott Sullivan, a slave of Christ, author, AI programmer, and animator. I split my time between the countryside of Lancaster, PA, and Northern Italy, near Cinque Terre and La Spezia.

In addition to improving lives through data analytics with my BS in Computer Science, I also published Searching For Me, my first memoir, about my adoption, my search for my biological family, and how it affected my faith.