The prompt template in the SearchTool class in tools.py often has no effect and only works occasionally; it probably needs fixing #4

@vivisol

Description

I tried replacing the Google search API with Baidu's search API. After the change, get_search_result returns normally, but _call_api does not process the query with the new prompt; it still runs with the intent-classification prompt that was defined first. Details below.
I modified the SearchTool class, changing get_search_result to call the Baidu search API:

# search tool (excerpt from tools.py; requests, PromptTemplate, LLMChain,
# BaseLanguageModel, and APITool are imported/defined elsewhere in the module) #
class SearchTool(APITool):
    llm: BaseLanguageModel

    # tool description
    name = "搜索问答"
    description = "根据用户问题搜索最新的结果,并返回Json格式的结果"

    # search params
    top_k = 5

    # QA params
    qa_template = """
    请根据下面带```分隔符的文本来回答问题。
    如果该文本中没有相关内容可以回答问题,请直接回复:“抱歉,该问题需要更多上下文信息。”
    ```{text}```
    问题:{query}
    """
    prompt = PromptTemplate.from_template(qa_template)
    llm_chain: LLMChain = None
    
    def _call_api(self, query) -> str:
        self.get_llm_chain()
        context = self.get_search_result(query)
    
        print("[DEBUG]context:", context, "\n")
        print("[DEBUG]query:", query, "\n")
        print("[DEBUG]prompt:", self.prompt, "\n")

        resp = self.llm_chain.predict(text=context, query=query)
        
        print("[DEBUG]resp from LLM:",resp,"\n")
        return resp

    def get_search_result(self, query):
        headers = {
            'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3',
            'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
            'Referer': ''
        }
        self.url = f"https://www.baidu.com/s?wd={query}&tn=json"
        data = requests.get(self.url, headers=headers).json()
        results = data['feed']['entry'][:self.top_k]

        snippets = []
        if len(results) == 0:
            return "No search result was found"
        for result in results:
            text = ""
            if "title" in result:
                text += result["title"] + "。"
            if "abs" in result:
                text += result["abs"]
            snippets.append(text)
        return "\n\n".join(snippets)

    def get_llm_chain(self):
        if not self.llm_chain:
            self.llm_chain = LLMChain(llm=self.llm, prompt=self.prompt)
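One thing worth noting about get_llm_chain is that it builds the chain only on the first call and caches it in self.llm_chain; if self.prompt is ever reassigned after that first call, the cached chain silently keeps the old prompt. The sketch below illustrates this caching pitfall with toy stand-ins (ToyChain and ToyTool are hypothetical substitutes for LLMChain and SearchTool, not the real langchain classes):

```python
class ToyChain:
    """Stand-in for LLMChain: captures the prompt it was built with."""
    def __init__(self, prompt):
        self.prompt = prompt

    def predict(self, **kwargs):
        return self.prompt.format(**kwargs)


class ToyTool:
    """Stand-in for SearchTool with the same lazy-build-and-cache pattern."""
    prompt = "intent: {query}"
    chain = None

    def get_chain(self):
        if not self.chain:          # built once, then cached
            self.chain = ToyChain(self.prompt)

    def run(self, query):
        self.get_chain()
        return self.chain.predict(query=query)


tool = ToyTool()
first = tool.run("hello")           # chain built from the original prompt
tool.prompt = "qa: {query}"         # later edit is ignored by the cached chain
second = tool.run("hello")
print(first)   # uses the original prompt
print(second)  # still the original prompt, despite the reassignment
```

If your code ever swaps prompts at runtime, resetting self.llm_chain to None (or rebuilding it) after changing self.prompt avoids this.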

run.py is as follows:

# run example

from langchain.agents import AgentExecutor

from llm import ChatGLM
from tools import  DrawTool, SearchTool
from agent import IntentAgent


# baidu translate api key
BAIDU_APPID = "*************"
BAIDU_APPKEY = "*************"


llm = ChatGLM(model_path=r"D:\Program Files\AI\Models\THUDM\chatglm3-6b")  # raw string for the Windows path
llm.load_model()

tools = [SearchTool(llm=llm), DrawTool(baidu_appid=BAIDU_APPID, baidu_appkey=BAIDU_APPKEY)]

agent = IntentAgent(tools=tools, llm=llm)
agent_exec = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True, max_iterations=1)
agent_exec.run("世界上最长的江是?")

Most of the time, a run produces the following result:
(screenshot)
As the debug output shows, the context, query, and prompt passed to the LLM are all correct, yet the returned answer still follows the format of the intent-classification prompt.
Only occasionally does a run produce the correct result:
(screenshot)
I also tried the question from the author's example, and it still did not give the correct answer:
(screenshot)
I've been poking at this for a long time and can't figure out where the problem is. I'd appreciate it if someone could help explain.
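The symptom described above (correct prompt going in, intent-style answer coming out, with only occasional correct runs) is consistent with a chat wrapper that carries conversation history across calls, so that the earlier intent-classification exchange biases the later QA call. Whether the ChatGLM wrapper in llm.py actually keeps such a history is an assumption here, since that file isn't shown. The toy sketch below (ToyStatefulLLM is entirely hypothetical) illustrates the failure mode and a per-call reset:

```python
class ToyStatefulLLM:
    """Hypothetical stand-in for a chat wrapper that accumulates history.
    To exaggerate the effect, it always answers in the style of the FIRST
    prompt it ever saw, mimicking a model biased by accumulated context."""
    def __init__(self):
        self.history = []

    def chat(self, prompt):
        self.history.append(prompt)
        return f"answered in the style of: {self.history[0]}"

    def reset(self):
        self.history = []


llm = ToyStatefulLLM()
llm.chat("intent-classification prompt")   # the agent's intent call runs first
biased = llm.chat("qa prompt")             # QA call, but old history still leads
llm.reset()                                # clear history between tool calls
clean = llm.chat("qa prompt")              # now the QA prompt dominates
print(biased)
print(clean)
```

If your ChatGLM wrapper exposes something like a history list, clearing it before the QA call in _call_api (or running the QA chain on a fresh session) would be worth trying.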
