OpenAI
```python
OpenAI(
    model: str,
    base_url: Optional[str] = "https://api.openai.com/v1",
    api_key: Optional[str] = None,
    key_name: Optional[str] = None,
    client: Optional[OpenAI] = None,
    verbose: Optional[bool] = False,
    logger: Optional[Union[Logger, Callable]] = None,
    **kwargs: Any
)
```
Initialize a new instance of the OpenAI client.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model` | `str` | The model to use for completion. | *required* |
| `base_url` | `Optional[str]` | The base URL of the API endpoint. | `'https://api.openai.com/v1'` |
| `api_key` | `Optional[str]` | The API key used for authentication. | `None` |
| `key_name` | `Optional[str]` | The name of the API key used for authentication. If not provided, the first API key found in the environment variables is used. | `None` |
| `client` | `Optional[OpenAI]` | An existing client instance used to make requests to the API. | `None` |
| `verbose` | `Optional[bool]` | Whether to enable verbose output. When `True`, additional debugging information and logs are displayed. | `False` |
| `logger` | `Optional[Union[Logger, Callable]]` | A logger instance or callable used for logging messages. | `None` |
| `**kwargs` | `Any` | Additional keyword arguments. | `{}` |
Examples:

```python
from lumix.llm import OpenAI

base_url = "https://open.bigmodel.cn/api/paas/v4"
llm = OpenAI(model="glm-4-flash", base_url=base_url, api_key="your_api_key")
```
Source code in lumix\llm\completion\openai.py
completion
```python
completion(
    prompt: Optional[str] = None,
    messages: Optional[Union[List[TypeMessage], List[Dict]]] = None,
    stream: Optional[bool] = False,
    tools: List[Dict] = None,
    **kwargs
) -> Union[ChatCompletion, Stream[ChatCompletionChunk]]
```
Call the OpenAI API to get a completion.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `prompt` | `Optional[str]` | The prompt to generate a completion from. | `None` |
| `messages` | `Optional[Union[List[TypeMessage], List[Dict]]]` | The messages to generate a completion from. | `None` |
| `stream` | `Optional[bool]` | Whether to stream the response. | `False` |
| `tools` | `List[Dict]` | The tool definitions available to the model. | `None` |
| `**kwargs` | | Additional keyword arguments. | `{}` |
Returns:

| Type | Description |
|---|---|
| `Union[ChatCompletion, Stream[ChatCompletionChunk]]` | The chat completion, or a stream of completion chunks when `stream=True`. |
Examples:
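A minimal sketch of the two input styles and the streaming path. The message dicts follow the standard OpenAI chat format; the stream helper assumes each chunk contributes a text delta to concatenate (standard for the OpenAI SDK's `ChatCompletionChunk`, not verified against lumix). The helpers are hypothetical and stdlib-only so the snippet runs offline:

```python
from typing import Dict, Iterator, List

def as_messages(prompt: str) -> List[Dict]:
    # completion() accepts either a bare `prompt` or OpenAI-style `messages`;
    # this builds the messages form equivalent to a plain prompt.
    return [{"role": "user", "content": prompt}]

def join_stream(deltas: Iterator[str]) -> str:
    # With stream=True, completion() returns an iterator of chunks;
    # concatenating the per-chunk text deltas rebuilds the full reply.
    return "".join(deltas)

messages = as_messages("Say hello.")
reply = join_stream(iter(["Hel", "lo", "!"]))  # simulated stream deltas
```

With a configured client this would look like `llm.completion(messages=messages, stream=True)`; the helpers above only illustrate the data shapes involved.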
sse
sync
structured_schema
parse_dict
structured_output
```python
structured_output(
    schema: ModelMetaclass,
    prompt: Optional[str] = None,
    messages: Optional[Union[List[TypeMessage], List[Dict]]] = None,
    **kwargs
) -> Dict
```
Generate structured output.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `schema` | `ModelMetaclass` | The schema of the structured output (a Pydantic model class). | *required* |
| `prompt` | `Optional[str]` | The prompt. | `None` |
| `messages` | `Optional[Union[List[TypeMessage], List[Dict]]]` | The messages. | `None` |
| `**kwargs` | | Additional keyword arguments. | `{}` |
Returns:

| Type | Description |
|---|---|
| `Dict` | The structured data. |
Examples:
```python
from pprint import pprint
from pydantic import BaseModel, Field

class Joke(BaseModel):
    """Joke to tell user."""
    setup: str = Field(description="The setup of the joke")
    punchline: str = Field(description="The punchline to the joke")
    rating: int = Field(description="How funny the joke is, from 1 to 10")

data = llm.structured_output(schema=Joke, prompt="Tell me a simple joke")
pprint(data)
```
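Internally, `structured_output` presumably has the model emit JSON matching the schema and then parses it into a `Dict` (the `parse_dict` method above suggests as much). A stdlib-only sketch of that parse step, under that assumption and not lumix's actual implementation:

```python
import json

def parse_structured_reply(text: str) -> dict:
    # Models often wrap their JSON answer in a markdown fence such as
    # ```json ... ```; strip the fence before handing it to json.loads.
    cleaned = text.strip()
    if cleaned.startswith("```"):
        cleaned = cleaned.strip("`").removeprefix("json").strip()
    return json.loads(cleaned)

reply = '```json\n{"setup": "Why do programmers prefer dark mode?", "punchline": "Because light attracts bugs.", "rating": 7}\n```'
data = parse_structured_reply(reply)
```

A real implementation would additionally validate the parsed dict against the Pydantic schema before returning it.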