Efficient Few-Shot Prompting in LangChain - Part 1

Jayant Pal
6 min read · Feb 25, 2024


Photo by Mojahid Mottakin on Unsplash

Rapid advancements are underway in the AI and LLM space, driven not only by the underlying algorithms but also by the effective use of prompting techniques. Prompting serves as an important interface that guides AI models to unlock their full potential. It becomes even more important in cases where fine-tuning is difficult because of the computational cost associated with it.

What is Prompt Engineering?

Prompt Engineering is a technique as well as an art in which we craft a specific set of instructions and context for the LLM in order to obtain the desired result. It's like giving details and directions to an assistant (in this case, an LLM), ensuring that it understands the intent and generates the best possible response. There are several elements associated with prompt engineering; a short example follows the list below.

  • Instruction: Clearly defines the task that the LLM needs to perform, such as language translation, question answering, etc.
  • Context: Provides relevant information and examples to help the model understand the specific domain.
  • Input Data: Can be text, images, audio, video, etc. Usually, this serves as the starting point for the LLM's content generation.
  • Output Format: Specifies the desired format of the LLM's response.
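
To make these elements concrete, here is a small, purely illustrative sketch that assembles a prompt from the four pieces above (the task and the review text are made up for the example):

# Illustrative only: composing a prompt from the four elements above
instruction = "Translate the following customer review from French to English."
context = "The review is for a budget hotel in Paris."
input_data = 'Review: "La chambre était propre mais très bruyante."'
output_format = "Return only the English translation, as a single sentence."

prompt = "\n".join([instruction, context, input_data, output_format])
print(prompt)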

Different types of Prompt Engineering

  • Few Shot Prompting: Providing the LLM with a few examples of desired outputs
  • Template Based Prompting: Using pre-defined templates with placeholders, streamlining prompt creation and reuse.
  • Instructional Prompting: Explicitly instructing the LLM how to perform the task, including steps and guidelines.
  • Contrastive Prompting: Providing the LLM with contrastive examples of good and bad outputs for it to distinguish between desired and undesired results.
  • Meta-Prompting: Training a separate model to generate the prompts for the main LLM.

In this article, we will focus on a combination of few-shot, template-based, and instructional prompting.

LangChain

LangChain is an open-source framework that provides tools and abstractions that make it easier to develop applications leveraging the capabilities of LLMs. Its focus on simplicity and modularity enables us to build intelligent, conversational applications very quickly.

Now, let's dive into the main topic of interest.

We need an OpenAI API key to call the model. You can get one on the OpenAI website. Note that OpenAI gives you $5 worth of API credits when you first create an account.

# Setting the API key as an environment variable
import os

with open('openai_api_key.txt', 'r') as f:
    api_key = f.read().strip()  # strip any trailing newline from the key file
os.environ['OPENAI_API_KEY'] = api_key

Let's load a chat model, which we will use as a question-answering model, and create an instance of it.

# Loading the chat model
from langchain.chat_models import ChatOpenAI

chat = ChatOpenAI()
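
By default, ChatOpenAI picks up the key from the OPENAI_API_KEY environment variable we just set. If you want to pin a specific model or a default temperature, you can pass them explicitly; a minimal sketch (the model name here is just an assumption):

# Optional: pin the model and a default temperature (values are illustrative)
chat = ChatOpenAI(model_name='gpt-3.5-turbo', temperature=0.7)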

The chat model interface is based around messages rather than raw text. We will be using the following message types for our use case.

  • SystemMessage: Sets the role or persona assigned to the AI model.
  • HumanMessage: The request, or prompt, sent to the model.
  • AIMessage: The AI's response, given the request and the assigned role.

Let's first pass only the human message, that is, the prompt, to the model. We pass the messages as a list to the model instance.

from langchain.schema import SystemMessage, HumanMessage, AIMessage

messages = [HumanMessage(content='Tell me something about the rising impact of AI over GDP of a country.')]
response = chat(messages=messages, max_tokens=100)
print(response.content)

The rising impact of AI on the GDP of a country is becoming increasingly significant as AI technologies continue to advance and become more integrated into various industries. AI has the potential to increase productivity, efficiency, and innovation in sectors such as healthcare, finance, manufacturing

Now, with the same human message, let’s add a system message to the prompt, and see the response. We explicitly instruct the LLM to behave as a movie director.

messages = [
    SystemMessage(content='Consider yourself as a movie director'),
    HumanMessage(content='Tell me something about the rising impact of AI over GDP of a country.')
]
response = chat(messages=messages, max_tokens=50)
print(response.content)

As a movie director, I would frame this topic as a futuristic sci-fi thriller. The movie would be set in a world where artificial intelligence has become so advanced that it begins to significantly impact the GDP of a country.

We can see that the response closely follows the instructions, or rules, that we set for the LLM, even though the request was the same.

Now, there are some parameters we can experiment with that can further modify the responses; a configuration sketch follows the list.

  • temperature: Influences the creativity and randomness of the model's outputs. A lower temperature (around 0) prioritizes determinism, selecting the most probable tokens, potentially at the cost of creativity. A higher temperature (closer to or above 1) makes the model more exploratory, leading to more diverse and creative outputs.
  • presence_penalty: Acts as a control mechanism that penalizes the model for repeatedly generating the same words or phrases. A higher presence penalty makes the model stricter, reducing the chance of repetition, while a lower penalty allows more flexibility.
  • max_tokens: Limits the number of tokens in the LLM's response.
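
These parameters can be passed per call, as in the snippets above, or set once on the model instance. A minimal sketch with purely illustrative values (presence_penalty is not a named constructor argument on ChatOpenAI, so it goes through model_kwargs):

# Setting the parameters on the model instance (values are illustrative)
deterministic_chat = ChatOpenAI(
    temperature=0,                           # near-deterministic output
    max_tokens=100,                          # cap the response length
    model_kwargs={'presence_penalty': 0.5},  # discourage repetition
)
print(deterministic_chat(messages).content)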

Alright, now let's add AI messages, wherein we show the LLM some examples of user messages and the corresponding AI responses.

system_message = "Consider yourself as a movie director"

user_dialogue1 = "Sci-fi film, opening scene. Introduce a lone astronaut exploring a deserted spaceship."
sample_response1 = "We see the astronaut's gloved hand slowly open a hatch, revealing the desolate interior of the spaceship. Dust motes dance in the faint light filtering through cracks in the hull. Silence hangs heavy, broken only by the rhythmic hiss of the failing life support system."

user_dialogue2 = "Comedy, two detectives with contrasting styles meet for the first time."
sample_response2 = "The door swings open, revealing a gruff detective in a rumpled trench coat. He's followed by a tech-savvy partner, their brightly colored gadgets contrasting sharply with the detective's old-school demeanor. A beat of awkward silence hangs in the air before the techie breaks the tension with a quip, eliciting a gruff chuckle from the seasoned detective."

user_dialogue3 = "Emotional scene, two friends reunite after a long estrangement."
sample_response3 = "A hesitant knock at the door. It creaks open, revealing a weary face etched with years of unspoken longing. Tears well up in their eyes as they meet the gaze of their old friend, a silent understanding passing between them. A warm embrace, a lifetime of unspoken words exchanged in a single moment."

# Each (HumanMessage, AIMessage) pair serves as one worked example for the model
messages = [
    SystemMessage(content=system_message),

    HumanMessage(content=user_dialogue1),
    AIMessage(content=sample_response1),

    HumanMessage(content=user_dialogue2),
    AIMessage(content=sample_response2),

    HumanMessage(content=user_dialogue3),
    AIMessage(content=sample_response3),

    HumanMessage(content='Tell me something about the rising impact of AI over GDP of a country.')
]

response = chat(messages=messages, temperature=1, presence_penalty=0, max_tokens=100)
print(response.content)

Here, we provide three example pairs of HumanMessage and AIMessage. We expect the model to generate its response along similar lines.

We open with sweeping shots of a bustling city, its skyline dominated by sleek skyscrapers and bustling factories. Overlay graphics show data charts rising steadily as AI technologies become increasingly integrated into every aspect of the economy. Narration highlights the transformative impact of AI on productivity, innovation, and economic growth as the country’s GDP skyrockets to new heights. Cut to a montage of futuristic AI-powered industries, showcasing how the country’s economy has been revolutionized by the unstoppable rise of artificial intelligence.

So this is how we can do few-shot prompting by leveraging LangChain and its modules.
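
As a preview of the template-based prompting mentioned earlier, LangChain can also express the same few-shot pattern with prompt templates instead of a hand-built message list. A minimal sketch reusing the director examples above (the import path may vary across LangChain versions):

from langchain.prompts import ChatPromptTemplate, FewShotChatMessagePromptTemplate

# Reuse the dialogues above as templated examples
examples = [
    {'input': user_dialogue1, 'output': sample_response1},
    {'input': user_dialogue2, 'output': sample_response2},
    {'input': user_dialogue3, 'output': sample_response3},
]

# How each example is rendered into a (human, ai) message pair
example_prompt = ChatPromptTemplate.from_messages([
    ('human', '{input}'),
    ('ai', '{output}'),
])

few_shot_prompt = FewShotChatMessagePromptTemplate(
    example_prompt=example_prompt,
    examples=examples,
)

# System message + examples + the actual question, as before
final_prompt = ChatPromptTemplate.from_messages([
    ('system', system_message),
    few_shot_prompt,
    ('human', '{input}'),
])

messages = final_prompt.format_messages(
    input='Tell me something about the rising impact of AI over GDP of a country.'
)
print(chat(messages=messages, max_tokens=100).content)

This builds the same message sequence as the hand-built list, but the examples now live in plain data, making them easy to store, swap, or select dynamically.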

The entire code can be found in this GitHub link. Let me know if you have any questions or comments.

Thank you for reading this article. Keep learning!

Connect with me on LinkedIn: Jayanta Kumar Pal

References

https://js.langchain.com/docs/modules/



Written by Jayant Pal

Data Scientist @ Euromonitor | Learner | Investor | Ardent Sports Fan | Github: https://github.com/jayantkp
