OPENAICHAT
Overview
The OPENAICHAT workflow application lets you interact with an OpenAI chat model.
How it works
The application lets you send chat messages to an OpenAI chat completion model and retrieve the generated response.
Application logs are available. These can be configured by setting the value of the OpenAIChatLogLevel parameter in the web.config file to 0 to deactivate logs, 1 for error logs, 2 for information logs, or 3 for debug logs; the default value is 0.
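For example, to enable information-level logs, a minimal web.config sketch (assuming OpenAIChatLogLevel is declared as a standard appSettings entry) would be:

```xml
<configuration>
  <appSettings>
    <!-- OpenAIChatLogLevel: 0 = logs deactivated (default), 1 = errors, 2 = information, 3 = debug -->
    <add key="OpenAIChatLogLevel" value="2" />
  </appSettings>
</configuration>
```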
Required parameters
| Parameter | Type | Direction | Description |
| --- | --- | --- | --- |
| | TEXT | IN | ID of the model to use. You can find available models at https://platform.openai.com/docs/models/model-endpoint-compatibility; the endpoint used by default is the chat completions endpoint. |
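For example, a chat-capable model ID such as gpt-4 or gpt-3.5-turbo can be used; the models actually available depend on your OpenAI account.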
You can use any one of the following configurations: with system/user messages, with a message number, or with a JSON message array.
With system/user messages
| Parameter | Type | Direction | Description |
| --- | --- | --- | --- |
| | TEXT | IN | The system message content |
| | TEXT | IN | The user message content |
With a message number
| Parameter | Type | Direction | Description |
| --- | --- | --- | --- |
| | TEXT | IN | The type of the message, for the corresponding message number |
| | TEXT | IN | The content of the message, for the corresponding message number |
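As an illustration, assuming a hypothetical MESSAGE_1_TYPE/MESSAGE_1 naming pattern for the numbered parameters (the actual names are those defined by the application), a two-message conversation could be expressed as:

```
MESSAGE_1_TYPE = system     (hypothetical parameter name)
MESSAGE_1      = You are a helpful assistant.
MESSAGE_2_TYPE = user       (hypothetical parameter name)
MESSAGE_2      = Summarize my request.
```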
With a JSON message array
| Parameter | Type | Direction | Description |
| --- | --- | --- | --- |
| | TEXT | IN | The JSON message array; the structure should match the example shown after this table |
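A minimal sketch of the expected structure, based on the standard OpenAI chat message format (roles and contents are illustrative):

```json
[
  { "role": "system", "content": "You are a helpful assistant." },
  { "role": "user", "content": "Summarize my request." }
]
```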
Optional parameters
| Parameter | Type | Direction | Description |
| --- | --- | --- | --- |
| | TEXT | IN | OpenAI API key; by default, this value comes from the application configuration |
| | TEXT | IN | API endpoint; defaults to the OpenAI chat completions endpoint |
| | NUMERIC | IN | Sampling temperature, between 0 and 2; higher values (e.g. 0.8) make the output more random, while lower values (e.g. 0.2) make it more focused and deterministic |
| | NUMERIC | IN | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass; so 0.1 means only the tokens comprising the top 10% probability mass are considered |
| | NUMERIC | IN | Number between -2.0 and 2.0; positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim |
| | NUMERIC | IN | Maximum number of tokens that can be generated in the chat completion; if not specified, a default value is used |
| | NUMERIC | IN | Number between -2.0 and 2.0; positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics |
| | TEXT | IN | Format of the response; when the value specifies the JSON format, the model is constrained to generate valid JSON |
| | TEXT | IN | Specifies the error-handling behavior; in case of error, depending on this parameter's value, the error message is returned in the response output parameters instead of an exception being raised |
| | TEXT | OUT | Result of the chat completion call |
| | TEXT | OUT | Content of the assistant message |
| | NUMERIC | OUT | Total number of tokens used for the generation (prompt + completion) |
| | NUMERIC | OUT | Number of tokens used for the completion |
| | NUMERIC | OUT | Number of tokens used for the prompt |
| | TEXT | OUT | Response status code |
| | TEXT | OUT | Response payload or error message |
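For reference, the OUT parameters correspond to the fields of a standard OpenAI chat completion payload; a minimal sketch (all values illustrative):

```json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "model": "gpt-4",
  "choices": [
    {
      "index": 0,
      "message": { "role": "assistant", "content": "Here is a summary of your request..." },
      "finish_reason": "stop"
    }
  ],
  "usage": { "prompt_tokens": 25, "completion_tokens": 12, "total_tokens": 37 }
}
```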