1. What is phidata?
An AI app development framework: it lets you quickly build agents or AI assistants with memory, knowledge bases, and tool use.
2. What does the framework look like?
3. Why choose phidata?
Problem: LLMs have limited context and cannot perform concrete actions.
Solution: add memory, knowledge, and tools.
- Memory: stores chat history in a database so LLMs can hold long-term conversations.
- Knowledge: stores information in a vector database to give LLMs business context.
- Tools: let LLMs take actions such as pulling data from an API, sending emails, or querying a database.
Memory and knowledge make LLMs smarter; tools make them autonomous.
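The tool idea above can be sketched without phidata: the model proposes a function call, and the runtime executes the matching Python function and feeds the result back. A minimal, hypothetical dispatcher (plain stdlib, no real LLM; all names here are invented for illustration):

```python
import json

def get_stock_price(symbol: str) -> str:
    """Toy stand-in for a real API-backed tool."""
    prices = {"META": 470.0, "NVDA": 900.0}
    return json.dumps({"symbol": symbol, "price": prices.get(symbol)})

# Hypothetical tool registry, playing the role of phidata's tools=[...]
TOOLS = {"get_stock_price": get_stock_price}

def dispatch(tool_call: dict) -> str:
    """Execute the function the model asked for and return its result as text."""
    fn = TOOLS[tool_call["name"]]
    return fn(**tool_call["arguments"])

# A mock model response requesting a tool call
result = dispatch({"name": "get_stock_price", "arguments": {"symbol": "NVDA"}})
print(result)  # → {"symbol": "NVDA", "price": 900.0}
```

In a real framework the model's JSON tool call is parsed from its response and the tool result is appended to the conversation before the next generation step.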
4. How to use it?
- Step 1: install: pip install -U phidata
- Step 2: create an Assistant
- Step 3: add tools (functions), knowledge (a vector db), and memory (a database)
- Step 4: build your AI application with Streamlit, FastApi, or Django
5. Examples
(1) Web search
from phi.assistant import Assistant
from phi.llm.ollama import Ollama
from phi.tools.duckduckgo import DuckDuckGo

assistant = Assistant(
    llm=Ollama(model="llama3"),
    # Let the Assistant search the web using DuckDuckGo
    tools=[DuckDuckGo()],
    show_tool_calls=True,
    debug_mode=True,
    # Return answers in Markdown format
    markdown=True,
)
assistant.print_response("What's happening in France?")
(2) API calls
import json

import httpx
from phi.assistant import Assistant
from phi.llm.ollama import Ollama

def get_top_hackernews_stories(num_stories: int = 10) -> str:
    """Use this function to get top stories from Hacker News.

    Args:
        num_stories (int): Number of stories to return. Defaults to 10.

    Returns:
        str: JSON string of top stories.
    """
    # Fetch top story IDs
    response = httpx.get("https://hacker-news.firebaseio.com/v0/topstories.json")
    story_ids = response.json()
    # Fetch story details
    stories = []
    for story_id in story_ids[:num_stories]:
        story_response = httpx.get(f"https://hacker-news.firebaseio.com/v0/item/{story_id}.json")
        story = story_response.json()
        # Drop the long article text to keep the payload small
        story.pop("text", None)
        stories.append(story)
    return json.dumps(stories)

assistant = Assistant(
    llm=Ollama(model="llama3"),
    tools=[get_top_hackernews_stories],
    show_tool_calls=False,
    markdown=True,
    debug_mode=False,
)
assistant.print_response("Summarize the top stories on hackernews?")
(3) Tool use
from phi.assistant import Assistant
from phi.llm.ollama import Ollama
from phi.tools.file import FileTools
from phi.tools.shell import ShellTools
from phi.tools.sql import SQLTools

db_url = "postgresql+psycopg://ai:ai@localhost:5532/ai"

assistant = Assistant(
    llm=Ollama(model="llama3"),
    tools=[
        SQLTools(db_url=db_url),
        ShellTools(),
        FileTools(),
    ],
    show_tool_calls=True,
)
# assistant.print_response("List the tables in the database. Tell me about contents of one of the tables", markdown=True)
assistant.print_response("What is the most advanced LLM currently? Save the answer to a file.", markdown=True)
assistant.print_response("Show me the contents of the current directory", markdown=True)
(4) Using a knowledge base
from phi.assistant import Assistant
from phi.knowledge.website import WebsiteKnowledgeBase
from phi.llm.openai import OpenAIChat
from phi.vectordb.pgvector import PgVector2

db_url = "postgresql+psycopg://ai:ai@localhost:5532/ai"

# Create a knowledge base with the seed URLs
knowledge_base = WebsiteKnowledgeBase(
    urls=["https://docs.phidata.com/introduction"],
    # Number of links to follow from the seed URLs
    max_links=10,
    # Table name: ai.website_documents
    vector_db=PgVector2(collection="website_documents", db_url=db_url),
)
# Load the knowledge base
knowledge_base.load(recreate=False)

# Create an assistant with the knowledge base
assistant = Assistant(
    llm=OpenAIChat(model="gpt-3.5-turbo"),
    knowledge_base=knowledge_base,
    add_references_to_prompt=True,
)
# Ask the assistant about the knowledge base
assistant.print_response("How does phidata work?")
(5) Multi-assistant collaboration
"""
Please install dependencies using:
pip install openai newspaper4k lxml_html_clean phidata
"""
from pathlib import Path
from shutil import rmtree

from phi.assistant import Assistant
from phi.llm.ollama import Ollama
from phi.tools.file import FileTools
from phi.tools.newspaper4k import Newspaper4k
from phi.tools.yfinance import YFinanceTools

reports_dir = Path(__file__).parent.joinpath("junk", "reports")
if reports_dir.exists():
    rmtree(path=reports_dir, ignore_errors=True)
reports_dir.mkdir(parents=True, exist_ok=True)

stock_analyst = Assistant(
    llm=Ollama(model="llama3"),
    name="Stock Analyst",
    role="Get current stock price, analyst recommendations and news for a company.",
    tools=[
        YFinanceTools(stock_price=True, analyst_recommendations=True, company_news=True),
        Newspaper4k(),
        FileTools(base_dir=reports_dir),
    ],
    description="You are a stock analyst tasked with producing factual reports on companies.",
    instructions=[
        "The investment lead will provide you with a list of companies to write reports on.",
        "Get the current stock price, analyst recommendations and news for the company.",
        "If you find any news urls, read the article and include it in the report.",
        "Save your report to a file in markdown format with the name `company_name.md` in lower case.",
        "Let the investment lead know the file name of the report.",
    ],
    # debug_mode=True,
)

research_analyst = Assistant(
    llm=Ollama(model="llama3"),
    name="Research Analyst",
    role="Writes research reports on stocks.",
    tools=[FileTools(base_dir=reports_dir)],
    description="You are an investment research analyst tasked with producing a ranked list of companies based on their investment potential.",
    instructions=[
        "You will write your research report based on the information available in files produced by the stock analyst.",
        "The investment lead will provide you with the files saved by the stock analyst.",
        "If no files are provided, list all files in the entire folder and read the files with names matching company names.",
        "Read each file 1 by 1.",
        "Then think deeply about whether a stock is valuable or not. Be discerning, you are a skeptical investor focused on maximising growth.",
        "Finally, save your research report to a file called `research_report.md`.",
    ],
    # debug_mode=True,
)

investment_lead = Assistant(
    llm=Ollama(model="llama3"),
    name="Investment Lead",
    team=[stock_analyst, research_analyst],
    tools=[FileTools(base_dir=reports_dir)],
    description="You are an investment lead tasked with producing a research report on companies for investment purposes.",
    instructions=[
        "Given a list of companies, first ask the stock analyst to get the current stock price, analyst recommendations and news for these companies.",
        "Ask the stock analyst to write its results to files in markdown format with the name `company_name.md`.",
        "If the stock analyst has not saved the file or saved it with an incorrect name, ask them to save the file again before proceeding.",
        "Then ask the research_analyst to write a report on these companies based on the information provided by the stock analyst.",
        "Make sure to provide the research analyst with the files saved by the stock analyst and ask it to read the files directly.",
        "The research analyst should save its report to a file called `research_report.md`.",
        "Finally, review the research report and answer the user's question. Make sure to answer their question correctly, in a clear and concise manner.",
        "If the research analyst has not completed the report, ask them to complete it before you can answer the user's question.",
        "Produce a nicely formatted response to the user, use markdown to format the response.",
    ],
    # debug_mode=True,
)

investment_lead.print_response(
    "How would you invest $10000 in META, GOOG, NVDA and TSLA? Tell me the exact amount you'd invest in each.",
    markdown=True,
)
(6) More
For workflows, template-based project creation, and more, see the official documentation.
6. Other notes
(1) Supported LLMs
- OpenAI
- OpenAI Like
- Anthropic Claude
- Ollama
- Hermes2
- AWS Bedrock
- Together
- Anyscale
- Gemini
- Groq
- Fireworks
- Azure
- Mistral
- Cohere
(2) Built-in toolkits
- ApifyTools to use Apify Actors.
- ArxivTools to read arXiv papers.
- DuckDbTools to run SQL using DuckDb.
- DuckDuckGoTools to search the web using DuckDuckGo.
- EmailTools to send emails.
- ExaTools to search the web using Exa.
- FileTools to read and write files.
- Newspaper4kTools to read articles using Newspaper4k.
- OpenBBTools to search for stock data using OpenBB.
- PubmedTools to search Pubmed.
- PythonTools to write and run python code.
- ResendTools to send emails using Resend.
- SerpapiTools to search Google, Youtube, and more using Serpapi.
- ShellTools to run shell commands.
- SQLTools to run SQL queries.
- TavilyTools to search the web using Tavily.
- WebsiteTools to scrape websites.
- WikipediaTools to search Wikipedia.
- YFinanceTools to search Yahoo Finance.
- YoutubeTools to search Youtube.
- ZendeskTools to search Zendesk.
(3) Built-in knowledge base loaders
- PDF knowledge base: load local PDF files into the knowledge base
- PDF URL knowledge base: load PDF files from URLs into the knowledge base
- Text knowledge base: load text/docx files into the knowledge base
- JSON knowledge base: load JSON files into the knowledge base
- Website knowledge base: load website data into the knowledge base
- Wikipedia knowledge base: load Wikipedia articles into the knowledge base
- ArXiv knowledge base: load ArXiv papers into the knowledge base
- Combined knowledge base: combine multiple knowledge bases into one
- LangChain knowledge base: use a LangChain retriever as a knowledge base
- Load knowledge base manually: load the knowledge base by hand
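All of these loaders feed the same retrieval idea: documents are embedded as vectors, and the ones closest to the query are injected into the prompt as references. A toy, phidata-free sketch that uses word-overlap similarity in place of real embeddings (function and variable names here are invented for illustration):

```python
def similarity(a: str, b: str) -> float:
    """Jaccard word overlap: a crude stand-in for cosine similarity of embeddings."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

# Tiny in-memory "vector store"
documents = [
    "phidata adds memory, knowledge and tools to LLMs",
    "PostgreSQL is a relational database",
    "pgvector stores embeddings inside PostgreSQL",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    return sorted(documents, key=lambda d: similarity(query, d), reverse=True)[:k]

refs = retrieve("how does phidata give knowledge to LLMs", k=1)
print(refs[0])  # the phidata document scores highest
```

A real knowledge base swaps the word-overlap score for embedding-vector distance computed inside the vector database, but the retrieve-then-prompt flow is the same.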
(4) Supported vector databases
- PgVector
- SingleStore
- Pinecone
- LanceDB
(5) Supported database storage
- PostgreSQL
- SingleStore
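Storage here means persisting assistant runs (chat history and state) between sessions. What the storage backend does can be sketched with the stdlib sqlite3 module standing in for PostgreSQL (the table and column names below are invented for illustration, not phidata's actual schema):

```python
import sqlite3

# In-memory DB stands in for PostgreSQL; the schema is a made-up illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE chat_history (run_id TEXT, role TEXT, content TEXT)")

def save_message(run_id: str, role: str, content: str) -> None:
    """Persist one message so a later session can resume the conversation."""
    conn.execute("INSERT INTO chat_history VALUES (?, ?, ?)", (run_id, role, content))

def load_history(run_id: str) -> list[tuple[str, str]]:
    """Reload the full conversation for a run, in insertion order."""
    rows = conn.execute(
        "SELECT role, content FROM chat_history WHERE run_id = ? ORDER BY rowid",
        (run_id,),
    )
    return rows.fetchall()

save_message("run-1", "user", "What's happening in France?")
save_message("run-1", "assistant", "Here is a summary...")
print(load_history("run-1"))
# → [('user', "What's happening in France?"), ('assistant', 'Here is a summary...')]
```

On startup, an assistant configured with storage reloads this history and prepends it to the prompt, which is how the "memory" from section 3 survives restarts.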