Question: Ruby's langchainrb gem and custom model configuration
Background:
I am working on a prototype using the langchainrb gem. I am using the assistant module to implement a basic RAG (Retrieval-Augmented Generation) architecture.
Everything works, and now I would like to customize the model configuration.
In the documentation there is no clear way of setting up the model. In my case, I would like to use OpenAI with:
- temperature: 0.1
- Model: gpt-4o
In the README, there is a mention of using llm_options.
If I go to the OpenAI Module documentation:
- Class: Langchain::LLM::OpenAI — Documentation for langchainrb (0.13.4)
It says I have to check here:
- ruby-openai/lib/openai/client.rb at main · alexrudall/ruby-openai · GitHub
But there is no mention of temperature, for example. Also, in the example in the Langchain::LLM::OpenAI documentation, the options are totally different.
# ruby-openai options:
CONFIG_KEYS = %i[
  api_type
  api_version
  access_token
  log_errors
  organization_id
  uri_base
  request_timeout
  extra_headers
].freeze
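For context, these CONFIG_KEYS configure the ruby-openai client itself (authentication, transport, logging), not sampling parameters like temperature. A minimal ruby-openai sketch using a few of them, assuming a recent version of the gem; the values are placeholder illustrations:

require "openai"

# Client-level configuration only; there is no temperature key here.
client = OpenAI::Client.new(
  access_token: ENV["OPENAI_API_KEY"],
  request_timeout: 240,
  log_errors: true
)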
# Example in Class: Langchain::LLM::OpenAI documentation:
{
  n: 1,
  temperature: 0.0,
  chat_completion_model_name: "gpt-3.5-turbo",
  embeddings_model_name: "text-embedding-3-small"
}.freeze
Solution:
I had a conflict between llm_options and default_options. I thought they were the same thing with different priorities.
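In short, the two hashes go to different places. A minimal sketch of the distinction, assuming llm_options is forwarded to the underlying ruby-openai client (the CONFIG_KEYS above) while default_options supplies per-request parameters; the request_timeout value is only an illustration:

llm = Langchain::LLM::OpenAI.new(
  api_key: ENV["OPENAI_API_KEY"],
  # Client/transport settings, passed through to OpenAI::Client.new
  llm_options: { request_timeout: 240 },
  # Request parameters, merged into each chat/completion call
  default_options: { temperature: 0.1 }
)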
For the needs expressed in the question, I have to use default_options, as in here:
llm = Langchain::LLM::OpenAI.new(
  api_key: ENV["OPENAI_API_KEY"], # placeholder for your OpenAI key
  default_options: {
    temperature: 0.0,
    chat_completion_model_name: "gpt-4o"
  }
)
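From here the configured llm can be handed to the assistant module mentioned in the question. A hypothetical usage sketch; the exact Assistant API (instructions:, add_message, run) may differ across langchainrb versions:

assistant = Langchain::Assistant.new(
  llm: llm,
  instructions: "Answer questions using the retrieved context."
)
assistant.add_message(content: "What model and temperature are in use?")
assistant.run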
- Langchain.rb version: 0.13.4