Deploy a Live Text Translation App using Gradio and Hugging Face Spaces
Often, building a model is not enough. To make it genuinely useful, we need to deploy it so that end users can try it. Deployment can be difficult, with bottlenecks on both the front end and the back end.
Gradio, a Python library, simplifies UI creation for your machine learning models. Hugging Face Spaces provides a seamless deployment platform. This efficient combination empowers rapid iteration and streamlines your development workflow.
Let’s build a real-time text translation app using LangChain and deploy it with Gradio and Hugging Face Spaces!
First, we create a script app.py. Before that, make sure you have all the required packages installed. We will use the following imports:
import gradio as gr
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain.prompts import HumanMessagePromptTemplate, ChatPromptTemplate
from langchain.output_parsers import PydanticOutputParser
from langchain_openai import ChatOpenAI
We will use a Pydantic output parser and some prompt templates from LangChain. I have briefly covered all the LangChain components we need in my previous articles, and I highly recommend giving them a read.
First, we create an instance of the chat model we will be using. Next, we define a Pydantic model that specifies the output format. Finally, we use the output parser to parse the LLM's response into that format.
chat = ChatOpenAI()

class TextTranslator(BaseModel):
    output: str = Field(description="Python string containing the output text translated in the desired language")

output_parser = PydanticOutputParser(pydantic_object=TextTranslator)
format_instructions = output_parser.get_format_instructions()
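Under the hood, format_instructions tells the model to reply with JSON matching the TextTranslator schema, and the parser validates that JSON into a typed object. Conceptually, the round trip looks something like this stdlib-only sketch (TextTranslatorStub and parse_llm_reply are hypothetical stand-ins for illustration, not the real LangChain internals):

```python
import json
from dataclasses import dataclass

@dataclass
class TextTranslatorStub:
    # Mirrors the single "output" field of the Pydantic model above.
    output: str

def parse_llm_reply(raw: str) -> TextTranslatorStub:
    # The LLM is instructed to answer with JSON like {"output": "..."};
    # parsing then just validates and loads that JSON into the typed object.
    data = json.loads(raw)
    return TextTranslatorStub(output=data["output"])

reply = '{"output": "Bonjour le monde"}'
print(parse_llm_reply(reply).output)  # Bonjour le monde
```

The real parser additionally embeds the model's JSON schema into format_instructions so the LLM knows the expected shape.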
Now we define the text_translator function, which takes two inputs: the input text and the target language. We first fill the prompt template and add the format instructions; this becomes the input to the chat model (LLM). Finally, we return the parsed output.
def text_translator(input_text: str, language: str) -> str:
    human_template = """Enter the text that you want to translate:
{input_text}, and enter the language that you want it to translate to {language}. {format_instructions}"""
    human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
    chat_prompt = ChatPromptTemplate.from_messages([human_message_prompt])
    prompt = chat_prompt.format_prompt(input_text=input_text, language=language, format_instructions=format_instructions)
    messages = prompt.to_messages()
    response = chat(messages=messages)
    output = output_parser.parse(response.content)
    output_text = output.output
    return output_text
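The templating step above is essentially string substitution: the placeholders in the template are replaced with the user's values before the message is sent to the model. A minimal sketch with plain str.format (the format-instructions text here is a made-up placeholder, not what LangChain actually generates):

```python
human_template = """Enter the text that you want to translate:
{input_text}, and enter the language that you want it to translate to {language}. {format_instructions}"""

# Fill the template the same way chat_prompt.format_prompt(...) does.
filled = human_template.format(
    input_text="Hello",
    language="French",
    format_instructions="Reply as JSON with a single 'output' key.",  # placeholder
)
print(filled)
```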
We are almost done with the back-end code. Now let's create an interface for the app: two text boxes for the inputs, one text box for the translated text, and a button to generate the output.
with gr.Blocks() as demo:
    gr.HTML("<h1 align = 'center'> Text Translator </h1>")
    gr.HTML("<h4 align = 'center'> Translate to any language </h4>")
    inputs = [
        gr.Textbox(label="Enter the text that you want to translate"),
        gr.Textbox(label="Enter the language that you want it to translate to", placeholder="Example: Hindi, French, Bengali, etc."),
    ]
    generate_btn = gr.Button(value='Generate')
    outputs = [gr.Textbox(label="Translated text")]
    generate_btn.click(fn=text_translator, inputs=inputs, outputs=outputs)

if __name__ == '__main__':
    demo.launch()
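For intuition, the click handler roughly does the following: read the current values of the input components, call the function with them positionally, and write the return value into the output component. A toy sketch of that wiring (text_translator_stub is a hypothetical canned function standing in for the real LLM call):

```python
def text_translator_stub(input_text: str, language: str) -> str:
    # Hypothetical stand-in for the LLM-backed translator.
    return f"[{language}] {input_text}"

def on_click(fn, input_values):
    # Gradio passes component values positionally and maps the
    # return value onto the declared output component(s).
    return fn(*input_values)

print(on_click(text_translator_stub, ["Hello", "French"]))  # [French] Hello
```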
After testing it locally, we can deploy to Hugging Face Spaces. We first create a Space, then upload two files, app.py and requirements.txt, in the Files section of the Space. Since ChatOpenAI reads the OpenAI API key from the OPENAI_API_KEY environment variable, add it as a secret in the Space settings. If the build succeeds, our interface is ready.
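A minimal requirements.txt for this app could look like the following (package names match the imports used above; left unpinned here for brevity, but pinning versions gives reproducible builds):

```
gradio
langchain
langchain-openai
```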
You can try out the app here:
So, that is how quickly we can build and deploy an application. I have added all the scripts here.
Let me know if you have any questions.