Building LLM Applications with Gradio: Building Generative AI Applications with Gradio

Building Generative AI Applications with Gradio

These are my study notes for the short course https://www.deeplearning.ai/short-courses/building-generative-ai-applications-with-gradio/.


What you’ll learn in this course

Join our new short course, Building Generative AI Applications with Gradio! Learn from Apolinário Passos, Machine Learning Art Engineer at Hugging Face.

What you’ll do:

  • With a few lines of code, create a user-friendly app (usable for non-coders) to take input text, summarize it with an open-source large language model, and display the summary.
  • Create an app that allows the user to upload an image, which uses an image to text (image captioning) to describe the uploaded image, and display both the image and the caption in the app.
  • Create an app that takes text and generates an image with a diffusion model, then displays the generated image within the app.
  • Combine what you learned in the previous two lessons: Upload an image, caption the image, and use the caption to generate a new image.
  • Create an interface to chat with an open source LLM using Falcon, the best-ranking open source LLM on the Open LLM Leaderboard.

By the end of the course, you’ll gain the practical knowledge to rapidly build interactive apps and demos to validate your project and ship faster.

Table of Contents

  • Building Generative AI Applications with Gradio
    • What you’ll learn in this course
  • L1: NLP tasks with a simple interface 🗞️
    • Building a text summarization app
    • Building a Named Entity Recognition app
        • gr.Interface()
      • Adding a helper function to merge tokens
  • L2: Image captioning app 🖼️📝
    • Building an image captioning app
    • Captioning with `gr.Interface()`
  • L3: Image generation app 🎨
    • Building an image generation app
    • Generating with `gr.Interface()`
    • Building a more advanced interface
        • gr.Slider()
        • `gr.Blocks()`
        • scale
        • gr.Accordion()
  • L4: Describe-and-Generate game 🖍️
    • Building your game with `gr.Blocks()`
      • First attempt, just captioning
      • Let's add generation
      • Doing it all at once
  • L5: Chat with any LLM! 💬
    • Building an app to chat with any LLM
    • `gr.Chatbot()`
      • Adding other advanced features
      • Streaming
  • Afterword

L1: NLP tasks with a simple interface 🗞️


Load your HF API key and relevant Python libraries.

import os
import io
from IPython.display import Image, display, HTML
from PIL import Image  # note: PIL's Image shadows the IPython Image imported above
import base64 
from dotenv import load_dotenv, find_dotenv
_ = load_dotenv(find_dotenv()) # read local .env file
hf_api_key = os.environ['HF_API_KEY']
# Helper function
import requests, json

#Summarization endpoint
def get_completion(inputs, parameters=None,ENDPOINT_URL=os.environ['HF_API_SUMMARY_BASE']): 
    headers = {
      "Authorization": f"Bearer {hf_api_key}",
      "Content-Type": "application/json"
    }
    data = { "inputs": inputs }
    if parameters is not None:
        data.update({"parameters": parameters})
    response = requests.request("POST",
                                ENDPOINT_URL, headers=headers,
                                data=json.dumps(data)
                               )
    return json.loads(response.content.decode("utf-8"))
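
The optional parameters argument is forwarded to the endpoint together with the inputs. A small sketch (the values here are illustrative; check the model card for the parameters it accepts):

# Sketch: pass summarization knobs through the optional parameters argument
get_completion("Some long article text ...", parameters={"min_length": 30, "max_length": 120})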

How about running it locally?


The code would look very similar if you were running it locally instead of calling an API. The same is true for all the models in the rest of the course, so make sure to check the Pipelines documentation page.

from transformers import pipeline

get_completion = pipeline("summarization", model="shleifer/distilbart-cnn-12-6")

def summarize(input):
    output = get_completion(input)
    return output[0]['summary_text']
    

Building a text summarization app

text = ('''The tower is 324 metres (1,063 ft) tall, about the same height
        as an 81-storey building, and the tallest structure in Paris. 
        Its base is square, measuring 125 metres (410 ft) on each side. 
        During its construction, the Eiffel Tower surpassed the Washington 
        Monument to become the tallest man-made structure in the world,
        a title it held for 41 years until the Chrysler Building
        in New York City was finished in 1930. It was the first structure 
        to reach a height of 300 metres. Due to the addition of a broadcasting 
        aerial at the top of the tower in 1957, it is now taller than the 
        Chrysler Building by 5.2 metres (17 ft). Excluding transmitters, the 
        Eiffel Tower is the second tallest free-standing structure in France 
        after the Millau Viaduct.''')

get_completion(text)

Output

[{'summary_text': ' The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building . It is the tallest structure in Paris and the second tallest free-standing structure in France after the Millau Viaduct . It was the first structure in the world to reach a height of 300 metres .'}]

Getting started with Gradio gr.Interface

import gradio as gr
def summarize(input):
    output = get_completion(input)
    return output[0]['summary_text']
    
gr.close_all()
demo = gr.Interface(fn=summarize, inputs="text", outputs="text")
demo.launch(share=True, server_port=int(os.environ['PORT1']))

demo.launch(share=True) lets you create a public link to share with your team or friends.
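
A small housekeeping sketch: besides gr.close_all(), which closes every running demo, demo.close() shuts down just this one and frees its port.

demo.close() # closes only this demo; gr.close_all() closes all running demos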

import gradio as gr

def summarize(input):
    output = get_completion(input)
    return output[0]['summary_text']

gr.close_all()
demo = gr.Interface(fn=summarize, 
                    inputs=[gr.Textbox(label="Text to summarize", lines=6)],
                    outputs=[gr.Textbox(label="Result", lines=3)],
                    title="Text summarization with distilbart-cnn",
                    description="Summarize any text using the `shleifer/distilbart-cnn-12-6` model under the hood!"
                   )
demo.launch(share=True, server_port=int(os.environ['PORT2']))

Output

(screenshot of the summarization app)

Building a Named Entity Recognition app

We are using this Inference Endpoint for dslim/bert-base-NER, a 108M-parameter BERT model fine-tuned on the NER task.


How about running it locally?

from transformers import pipeline

get_completion = pipeline("ner", model="dslim/bert-base-NER")

def ner(input):
    output = get_completion(input)
    return {"text": input, "entities": output}
    
API_URL = os.environ['HF_API_NER_BASE'] #NER endpoint (this call uses the API helper get_completion from the top of the lesson, not the local pipeline above)
text = "My name is Andrew, I'm building DeepLearningAI and I live in California"
get_completion(text, parameters=None, ENDPOINT_URL= API_URL)

Output

[{'entity': 'B-PER',
  'score': 0.9990625,
  'index': 4,
  'word': 'Andrew',
  'start': 11,
  'end': 17},
 {'entity': 'B-ORG',
  'score': 0.9927857,
  'index': 10,
  'word': 'Deep',
  'start': 32,
  'end': 36},
 {'entity': 'I-ORG',
  'score': 0.99677867,
  'index': 11,
  'word': '##L',
  'start': 36,
  'end': 37},
 {'entity': 'I-ORG',
  'score': 0.9954496,
  'index': 12,
  'word': '##ear',
  'start': 37,
  'end': 40},
 {'entity': 'I-ORG',
  'score': 0.9959293,
  'index': 13,
  'word': '##ning',
  'start': 40,
  'end': 44},
 {'entity': 'I-ORG',
  'score': 0.8917463,
  'index': 14,
  'word': '##A',
  'start': 44,
  'end': 45},
 {'entity': 'I-ORG',
  'score': 0.50361204,
  'index': 15,
  'word': '##I',
  'start': 45,
  'end': 46},
 {'entity': 'B-LOC',
  'score': 0.99969244,
  'index': 20,
  'word': 'California',
  'start': 61,
  'end': 71}]
gr.Interface()
  • Notice below that we pass in a list [] to inputs and to outputs because the function fn (in this case, ner()) can take in more than one input and return more than one output.
  • The number of objects passed to inputs list should match the number of parameters that the fn function takes in, and the number of objects passed to the outputs list should match the number of objects returned by the fn function.
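
As a minimal sketch of this one-to-one mapping (the function and components below are hypothetical, not part of the course):

import gradio as gr

def repeat_and_count(text, times):        # two parameters -> two input components
    return text * int(times), len(text)  # two return values -> two output components

demo = gr.Interface(fn=repeat_and_count,
                    inputs=[gr.Textbox(), gr.Number()],
                    outputs=[gr.Textbox(), gr.Number()])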
def ner(input):
    output = get_completion(input, parameters=None, ENDPOINT_URL=API_URL)
    return {"text": input, "entities": output}

gr.close_all()
demo = gr.Interface(fn=ner,
                    inputs=[gr.Textbox(label="Text to find entities", lines=2)],
                    outputs=[gr.HighlightedText(label="Text with entities")],
                    title="NER with dslim/bert-base-NER",
                    description="Find entities using the `dslim/bert-base-NER` model under the hood!",
                    allow_flagging="never",
                    #Here we introduce a new parameter, examples: ready-made example inputs for your application
                    examples=["My name is Andrew and I live in California", "My name is Poli and work at HuggingFace"])
demo.launch(share=True, server_port=int(os.environ['PORT3']))

Output

(screenshot of the NER app)

Adding a helper function to merge tokens

def merge_tokens(tokens):
    merged_tokens = []
    for token in tokens:
        if merged_tokens and token['entity'].startswith('I-') and merged_tokens[-1]['entity'].endswith(token['entity'][2:]):
            # If current token continues the entity of the last one, merge them
            last_token = merged_tokens[-1]
            last_token['word'] += token['word'].replace('##', '')
            last_token['end'] = token['end']
            last_token['score'] = (last_token['score'] + token['score']) / 2
        else:
            # Otherwise, add the token to the list
            merged_tokens.append(token)

    return merged_tokens

def ner(input):
    output = get_completion(input, parameters=None, ENDPOINT_URL=API_URL)
    merged_tokens = merge_tokens(output)
    return {"text": input, "entities": merged_tokens}

gr.close_all()
demo = gr.Interface(fn=ner,
                    inputs=[gr.Textbox(label="Text to find entities", lines=2)],
                    outputs=[gr.HighlightedText(label="Text with entities")],
                    title="NER with dslim/bert-base-NER",
                    description="Find entities using the `dslim/bert-base-NER` model under the hood!",
                    allow_flagging="never",
                    examples=["My name is Andrew, I'm building DeeplearningAI and I live in California", "My name is Poli, I live in Vienna and work at HuggingFace"])

demo.launch(share=True, server_port=int(os.environ['PORT4']))
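
For example, in the raw output shown earlier, the sub-tokens Deep, ##L, ##ear, ##ning, ##A and ##I are merged into a single DeepLearningAI organization entity spanning characters 32–46, which gr.HighlightedText() then renders as one highlighted span.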

Output

(screenshot of the NER app with merged entities)

L2: Image captioning app 🖼️📝

import os
import io
import IPython.display
from PIL import Image
import base64 
from dotenv import load_dotenv, find_dotenv
_ = load_dotenv(find_dotenv()) # read local .env file
hf_api_key = os.environ['HF_API_KEY']
# Helper functions
import requests, json

#Image-to-text endpoint
def get_completion(inputs, parameters=None, ENDPOINT_URL=os.environ['HF_API_ITT_BASE']):
    headers = {
      "Authorization": f"Bearer {hf_api_key}",
      "Content-Type": "application/json"
    }
    data = { "inputs": inputs }
    if parameters is not None:
        data.update({"parameters": parameters})
    response = requests.request("POST",
                                ENDPOINT_URL,
                                headers=headers,
                                data=json.dumps(data))
    return json.loads(response.content.decode("utf-8"))

Building an image captioning app

Here we’ll be using an Inference Endpoint for Salesforce/blip-image-captioning-base, a 14M-parameter captioning model.
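
As with the earlier lessons, you could also run this model locally through the transformers pipeline instead of the endpoint (a sketch, assuming the weights are downloadable on your machine):

from transformers import pipeline

get_completion = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")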

The free images are available on: https://free-images.com/

image_url = "https://free-images.com/sm/9596/dog_animal_greyhound_983023.jpg"
display(IPython.display.Image(url=image_url))
get_completion(image_url)

Output

(the example dog image and its generated caption)

Captioning with gr.Interface()

gr.Image()

  • The type parameter is the format that the fn function expects to receive as its input. If type is numpy or pil, gr.Image() will convert the uploaded file to this format before sending it to the fn function.
  • If type is filepath, gr.Image() will temporarily store the image and provide a string path to that image location as input to the fn function.
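
A small sketch of the difference (the function below is hypothetical): with type="pil" your function receives a PIL.Image directly, while with type="filepath" it receives a plain string path and must open the file itself.

from PIL import Image

def describe_from_path(image_path):  # would be paired with gr.Image(type="filepath")
    image = Image.open(image_path)   # we open the file ourselves
    return f"An image of size {image.size[0]}x{image.size[1]}"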
import gradio as gr 

def image_to_base64_str(pil_image):
    byte_arr = io.BytesIO()
    pil_image.save(byte_arr, format='PNG')
    byte_arr = byte_arr.getvalue()
    return str(base64.b64encode(byte_arr).decode('utf-8'))

def captioner(image):
    base64_image = image_to_base64_str(image)
    result = get_completion(base64_image)
    return result[0]['generated_text']

gr.close_all()
demo = gr.Interface(fn=captioner,
                    inputs=[gr.Image(label="Upload image", type="pil")],
                    outputs=[gr.Textbox(label="Caption")],
                    title="Image Captioning with BLIP",
                    description="Caption any image using the BLIP model",
                    allow_flagging="never",
                    examples=["christmas_dog.jpeg", "bird_flight.jpeg", "cow.jpeg"])

demo.launch(share=True, server_port=int(os.environ['PORT1']))

Output

(screenshot of the image captioning app)

L3: Image generation app 🎨

import os
import io
import IPython.display
from PIL import Image
import base64 
from dotenv import load_dotenv, find_dotenv
_ = load_dotenv(find_dotenv()) # read local .env file
hf_api_key = os.environ['HF_API_KEY']
# Helper function
import requests, json

#Text-to-image endpoint
def get_completion(inputs, parameters=None, ENDPOINT_URL=os.environ['HF_API_TTI_BASE']):
    headers = {
      "Authorization": f"Bearer {hf_api_key}",
      "Content-Type": "application/json"
    }   
    data = { "inputs": inputs }
    if parameters is not None:
        data.update({"parameters": parameters})
    response = requests.request("POST",
                                ENDPOINT_URL,
                                headers=headers,
                                data=json.dumps(data))
    return json.loads(response.content.decode("utf-8"))

Building an image generation app

Here we are going to run runwayml/stable-diffusion-v1-5 using the 🧨 diffusers library.
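
If you would rather generate locally than call the endpoint, a minimal diffusers sketch looks like this (assumes a CUDA GPU and that diffusers, transformers and accelerate are installed):

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5",
                                               torch_dtype=torch.float16).to("cuda")
image = pipe("a dog in a park").images[0]  # the pipeline returns PIL images
image.save("dog_in_park.png")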

prompt = "a dog in a park"

result = get_completion(prompt)
IPython.display.HTML(f'<img src="data:image/png;base64,{result}" />')

Output

(the generated image for "a dog in a park")

Generating with gr.Interface()

import gradio as gr 

#A helper function to convert base64 to a PIL image
#so you can display the API's response
def base64_to_pil(img_base64):
    base64_decoded = base64.b64decode(img_base64)
    byte_stream = io.BytesIO(base64_decoded)
    pil_image = Image.open(byte_stream)
    return pil_image

def generate(prompt):
    output = get_completion(prompt)
    result_image = base64_to_pil(output)
    return result_image

gr.close_all()
demo = gr.Interface(fn=generate,
                    inputs=[gr.Textbox(label="Your prompt")],
                    outputs=[gr.Image(label="Result")],
                    title="Image Generation with Stable Diffusion",
                    description="Generate any image with Stable Diffusion",
                    allow_flagging="never",
                    examples=["the spirit of a tamagotchi wandering in the city of Vienna","a mecha robot in a favela"])

demo.launch(share=True, server_port=int(os.environ['PORT1']))

Input

A photo of a lovely bike

Output

(the generated image for "A photo of a lovely bike")

Building a more advanced interface

import gradio as gr 

#A helper function to convert base64 to a PIL image
# so you can display the API's response
def base64_to_pil(img_base64):
    base64_decoded = base64.b64decode(img_base64)
    byte_stream = io.BytesIO(base64_decoded)
    pil_image = Image.open(byte_stream)
    return pil_image

def generate(prompt, negative_prompt, steps, guidance, width, height):
    params = {
        "negative_prompt": negative_prompt,
        "num_inference_steps": steps,
        "guidance_scale": guidance,
        "width": width,
        "height": height
    }
    
    output = get_completion(prompt, params)
    pil_image = base64_to_pil(output)
    return pil_image
gr.Slider()
  • You can set the minimum, maximum, and starting value for a gr.Slider().
  • If you want the slider to increment by integer values, you can set step=1.
gr.close_all()
demo = gr.Interface(fn=generate,
                    inputs=[
                        gr.Textbox(label="Your prompt"),
                        gr.Textbox(label="Negative prompt"),
                        gr.Slider(label="Inference Steps", minimum=1, maximum=100, value=25,
                                 info="In how many steps will the denoiser denoise the image?"),
                        gr.Slider(label="Guidance Scale", minimum=1, maximum=20, value=7, 
                                  info="Controls how much the text prompt influences the result"),
                        gr.Slider(label="Width", minimum=64, maximum=512, step=64, value=512),
                        gr.Slider(label="Height", minimum=64, maximum=512, step=64, value=512),
                    ],
                    outputs=[gr.Image(label="Result")],
                    title="Image Generation with Stable Diffusion",
                    description="Generate any image with Stable Diffusion",
                    allow_flagging="never"
                    )

demo.launch(share=True, server_port=int(os.environ['PORT2']))

Output

(screenshot of the advanced interface)

gr.Blocks()
  • Within gr.Blocks(), you can define multiple gr.Row()s, or multiple gr.Column()s.

  • Note that if the Jupyter notebook is very narrow, the layout may change to better display the objects. If you define two columns but don’t see them in the app, try widening your web browser window and the screen containing this Jupyter notebook.

  • When using gr.Blocks(), you’ll need to explicitly define the “Submit” button using gr.Button(), whereas the ‘Clear’ and ‘Submit’ buttons are automatically added when using gr.Interface().

with gr.Blocks() as demo:
    gr.Markdown("# Image Generation with Stable Diffusion")
    prompt = gr.Textbox(label="Your prompt")
    with gr.Row():
        with gr.Column():
            negative_prompt = gr.Textbox(label="Negative prompt")
            steps = gr.Slider(label="Inference Steps", minimum=1, maximum=100, value=25,
                      info="In many steps will the denoiser denoise the image?")
            guidance = gr.Slider(label="Guidance Scale", minimum=1, maximum=20, value=7,
                      info="Controls how much the text prompt influences the result")
            width = gr.Slider(label="Width", minimum=64, maximum=512, step=64, value=512)
            height = gr.Slider(label="Height", minimum=64, maximum=512, step=64, value=512)
            btn = gr.Button("Submit")
        with gr.Column():
            output = gr.Image(label="Result")

    btn.click(fn=generate, inputs=[prompt,negative_prompt,steps,guidance,width,height], outputs=[output])
gr.close_all()
demo.launch(share=True, server_port=int(os.environ['PORT2']))
scale
  • To choose how much relative width to give to each column, set the scale parameter of each gr.Column().
  • If one column has scale=4 and the second column has scale=1, then the first column takes up 4/5 of the total width, and the second column takes up 1/5 of the total width.
gr.Accordion()
  • The gr.Accordion() can show/hide the app options with a mouse click.
  • Set open=True to show the contents of the Accordion by default, or False to hide it by default.
with gr.Blocks() as demo:
    gr.Markdown("# Image Generation with Stable Diffusion")
    with gr.Row():
        with gr.Column(scale=4):
            prompt = gr.Textbox(label="Your prompt") #Give prompt some real estate
        with gr.Column(scale=1, min_width=50):
            btn = gr.Button("Submit") #Submit button side by side!
    with gr.Accordion("Advanced options", open=False): #Let's hide the advanced options!
            negative_prompt = gr.Textbox(label="Negative prompt")
            with gr.Row():
                with gr.Column():
                    steps = gr.Slider(label="Inference Steps", minimum=1, maximum=100, value=25,
                      info="In many steps will the denoiser denoise the image?")
                    guidance = gr.Slider(label="Guidance Scale", minimum=1, maximum=20, value=7,
                      info="Controls how much the text prompt influences the result")
                with gr.Column():
                    width = gr.Slider(label="Width", minimum=64, maximum=512, step=64, value=512)
                    height = gr.Slider(label="Height", minimum=64, maximum=512, step=64, value=512)
    output = gr.Image(label="Result") #Move the output up too
            
    btn.click(fn=generate, inputs=[prompt,negative_prompt,steps,guidance,width,height], outputs=[output])

gr.close_all()
demo.launch(share=True, server_port=int(os.environ['PORT4']))

Output

(screenshot of the app with advanced options hidden in an Accordion)

L4: Describe-and-Generate game 🖍️

Construct a game app


import os
import io
from IPython.display import Image, display, HTML
from PIL import Image
import base64 

from dotenv import load_dotenv, find_dotenv
_ = load_dotenv(find_dotenv()) # read local .env file
hf_api_key = os.environ['HF_API_KEY']
#### Helper function
import requests, json

#Here we are going to call multiple endpoints!
def get_completion(inputs, parameters=None, ENDPOINT_URL=""):
    headers = {
      "Authorization": f"Bearer {hf_api_key}",
      "Content-Type": "application/json"
    }   
    data = { "inputs": inputs }
    if parameters is not None:
        data.update({"parameters": parameters})
    response = requests.request("POST",
                                ENDPOINT_URL,
                                headers=headers,
                                data=json.dumps(data))
    return json.loads(response.content.decode("utf-8"))
#text-to-image
TTI_ENDPOINT = os.environ['HF_API_TTI_BASE']
#image-to-text
ITT_ENDPOINT = os.environ['HF_API_ITT_BASE']

Building your game with gr.Blocks()

#Bringing the functions from lessons 2 and 3!
def image_to_base64_str(pil_image):
    byte_arr = io.BytesIO()
    pil_image.save(byte_arr, format='PNG')
    byte_arr = byte_arr.getvalue()
    return str(base64.b64encode(byte_arr).decode('utf-8'))

def base64_to_pil(img_base64):
    base64_decoded = base64.b64decode(img_base64)
    byte_stream = io.BytesIO(base64_decoded)
    pil_image = Image.open(byte_stream)
    return pil_image

def captioner(image):
    base64_image = image_to_base64_str(image)
    result = get_completion(base64_image, None, ITT_ENDPOINT)
    return result[0]['generated_text']

def generate(prompt):
    output = get_completion(prompt, None, TTI_ENDPOINT)
    result_image = base64_to_pil(output)
    return result_image

First attempt, just captioning

import gradio as gr 
with gr.Blocks() as demo:
    gr.Markdown("# Describe-and-Generate game 🖍️")
    image_upload = gr.Image(label="Your first image",type="pil")
    btn_caption = gr.Button("Generate caption")
    caption = gr.Textbox(label="Generated caption")
    
    btn_caption.click(fn=captioner, inputs=[image_upload], outputs=[caption])

gr.close_all()
demo.launch(share=True, server_port=int(os.environ['PORT1']))

Output

(screenshot of the captioning step)

Let’s add generation

with gr.Blocks() as demo:
    gr.Markdown("# Describe-and-Generate game 🖍️")
    image_upload = gr.Image(label="Your first image",type="pil")
    btn_caption = gr.Button("Generate caption")
    caption = gr.Textbox(label="Generated caption")
    btn_image = gr.Button("Generate image")
    image_output = gr.Image(label="Generated Image")
    btn_caption.click(fn=captioner, inputs=[image_upload], outputs=[caption])
    btn_image.click(fn=generate, inputs=[caption], outputs=[image_output])

gr.close_all()
demo.launch(share=True, server_port=int(os.environ['PORT2']))

Output

(screenshot of the two-step caption-then-generate app)

Doing it all at once

def caption_and_generate(image):
    caption = captioner(image)
    image = generate(caption)
    return [caption, image]

with gr.Blocks() as demo:
    gr.Markdown("# Describe-and-Generate game 🖍️")
    image_upload = gr.Image(label="Your first image",type="pil")
    btn_all = gr.Button("Caption and generate")
    caption = gr.Textbox(label="Generated caption")
    image_output = gr.Image(label="Generated Image")

    btn_all.click(fn=caption_and_generate, inputs=[image_upload], outputs=[caption, image_output])

gr.close_all()
demo.launch(share=True, server_port=int(os.environ['PORT3']))

Output

(screenshot of the one-click caption-and-generate app)

L5: Chat with any LLM! 💬

import os
import io
import IPython.display
from PIL import Image
import base64 
import requests 
requests.adapters.DEFAULT_TIMEOUT = 60

from dotenv import load_dotenv, find_dotenv
_ = load_dotenv(find_dotenv()) # read local .env file
hf_api_key = os.environ['HF_API_KEY']
# Helper function
import requests, json
from text_generation import Client

#Falcon-instruct endpoint via the text_generation library
client = Client(os.environ['HF_API_FALCOM_BASE'], headers={"Authorization": f"Basic {hf_api_key}"}, timeout=120)

Building an app to chat with any LLM

Here we’ll be using an Inference Endpoint for falcon-40b-instruct, the best-ranking open-source LLM on the 🤗 Open LLM Leaderboard.

prompt = "Has math been invented or discovered?"
client.generate(prompt, max_new_tokens=256).generated_text

Output

'\nMath has been both invented and discovered. It is a human invention in the sense that it is a system of rules and concepts that we have created to help us understand the world around us. However, it is also a discovery in the sense that it is a fundamental aspect of the universe that we have uncovered through our observations and experiments.'
#Back to Lesson 2, time flies!
import gradio as gr
def generate(input, slider):
    output = client.generate(input, max_new_tokens=slider).generated_text
    return output

demo = gr.Interface(fn=generate, 
                    inputs=[gr.Textbox(label="Prompt"), 
                            gr.Slider(label="Max new tokens", 
                                      value=20,  
                                      maximum=1024, 
                                      minimum=1)], 
                    outputs=[gr.Textbox(label="Completion")])

gr.close_all()
demo.launch(share=True, server_port=int(os.environ['PORT1']))

Output

(screenshot of the completion app)

gr.Chatbot()

  • gr.Chatbot() allows you to save the chat history (between the user and the LLM) as well as display the dialogue in the app.

  • Define your fn to take in a gr.Chatbot() object.

    • Within your defined fn function, append a tuple (or a list) containing the user message and the LLM’s response:
      chatbot_object.append( (user_message, llm_message) )
  • Include the chatbot object in both the inputs and the outputs of the app.

import random

def respond(message, chat_history):
        #No LLM here, just respond with a random pre-made message
        bot_message = random.choice(["Tell me more about it", 
                                     "Cool, but I'm not interested", 
                                     "Hmmmm, ok then"]) 
        chat_history.append((message, bot_message))
        return "", chat_history

with gr.Blocks() as demo:
    chatbot = gr.Chatbot(height=240) #just to fit the notebook
    msg = gr.Textbox(label="Prompt")	
    btn = gr.Button("Submit")
    clear = gr.ClearButton(components=[msg, chatbot], value="Clear console")

    btn.click(respond, inputs=[msg, chatbot], outputs=[msg, chatbot])
    msg.submit(respond, inputs=[msg, chatbot], outputs=[msg, chatbot]) #Press enter to submit

gr.close_all()
demo.launch(share=True, server_port=int(os.environ['PORT2']))

Output

(screenshot of the chatbot demo)

Format the prompt with the chat history

  • You can iterate through the chatbot object with a for loop.
  • Each item is a tuple containing the user message and the LLM’s message.
for turn in chat_history:
    user_msg, bot_msg = turn
    ...
def format_chat_prompt(message, chat_history):
    prompt = ""
    for turn in chat_history:
        user_message, bot_message = turn
        prompt = f"{prompt}\nUser: {user_message}\nAssistant: {bot_message}"
    prompt = f"{prompt}\nUser: {message}\nAssistant:"
    return prompt

def respond(message, chat_history):
        formatted_prompt = format_chat_prompt(message, chat_history)
        bot_message = client.generate(formatted_prompt,
                                     max_new_tokens=1024,
                                     stop_sequences=["\nUser:", "<|endoftext|>"]).generated_text
        chat_history.append((message, bot_message))
        return "", chat_history

with gr.Blocks() as demo:
    chatbot = gr.Chatbot(height=240) #just to fit the notebook
    msg = gr.Textbox(label="Prompt")
    btn = gr.Button("Submit")
    clear = gr.ClearButton(components=[msg, chatbot], value="Clear console")

    btn.click(respond, inputs=[msg, chatbot], outputs=[msg, chatbot])
    msg.submit(respond, inputs=[msg, chatbot], outputs=[msg, chatbot]) #Press enter to submit

gr.close_all()
demo.launch(share=True, server_port=int(os.environ['PORT3']))
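
To make the prompt format concrete, after one prior turn format_chat_prompt produces a string like the following (turn contents illustrative). The stop_sequences argument is what keeps the model from also generating the next "User:" turn itself.

User: Hi
Assistant: Hello! How can I help?
User: Has math been invented or discovered?
Assistant: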

Adding other advanced features

def format_chat_prompt(message, chat_history, instruction):
    prompt = f"System:{instruction}"
    for turn in chat_history:
        user_message, bot_message = turn
        prompt = f"{prompt}\nUser: {user_message}\nAssistant: {bot_message}"
    prompt = f"{prompt}\nUser: {message}\nAssistant:"
    return prompt
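
The only change from before is that the prompt now opens with a System: line carrying the instruction. In the app below, that instruction comes from a "System message" textbox, so the user can steer the assistant's behavior without touching the code.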

Streaming

  • If your LLM can provide its tokens one at a time in a stream, you can accumulate those tokens in the chatbot object.
  • The for loop in the following function goes through all the tokens that are in the stream and appends them to the most recent conversational turn in the chatbot’s message history.
def respond(message, chat_history, instruction, temperature=0.7):
    prompt = format_chat_prompt(message, chat_history, instruction)
    chat_history = chat_history + [[message, ""]]
    stream = client.generate_stream(prompt,
                                      max_new_tokens=1024,
                                      stop_sequences=["\nUser:", "<|endoftext|>"],
                                      temperature=temperature)
                                      #stop_sequences to not generate the user answer
    acc_text = ""
    #Streaming the tokens
    for idx, response in enumerate(stream):
            text_token = response.token.text

            if response.details:
                return

            if idx == 0 and text_token.startswith(" "):
                text_token = text_token[1:]

            acc_text += text_token
            last_turn = list(chat_history.pop(-1))
            last_turn[-1] += acc_text
            chat_history = chat_history + [last_turn]
            yield "", chat_history
            acc_text = ""
with gr.Blocks() as demo:
    chatbot = gr.Chatbot(height=240) #just to fit the notebook
    msg = gr.Textbox(label="Prompt")
    with gr.Accordion(label="Advanced options",open=False):
        system = gr.Textbox(label="System message", lines=2, value="A conversation between a user and an LLM-based AI assistant. The assistant gives helpful and honest answers.")
        temperature = gr.Slider(label="temperature", minimum=0.1, maximum=1, value=0.7, step=0.1)
    btn = gr.Button("Submit")
    clear = gr.ClearButton(components=[msg, chatbot], value="Clear console")

    btn.click(respond, inputs=[msg, chatbot, system, temperature], outputs=[msg, chatbot]) #wire the temperature slider in so it takes effect
    msg.submit(respond, inputs=[msg, chatbot, system, temperature], outputs=[msg, chatbot]) #Press enter to submit

gr.close_all()
demo.queue().launch(share=True, server_port=int(os.environ['PORT4']))    

Output

(screenshot of the streaming chatbot with advanced options)

Notice that in the cell above you used demo.queue().launch() instead of demo.launch(). Queueing boosts your demo's performance and is needed for generator functions like respond to stream their output. You can read setting up a demo for maximum performance for more details.
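
One related knob (a sketch; availability depends on your Gradio version): queue() accepts max_size to cap how many requests may wait in line.

demo.queue(max_size=20).launch(share=True, server_port=int(os.environ['PORT4']))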

Afterword

Finished this course at 19:52 on June 1, 2024; I now understand the basics of Gradio.
