OPENAICHAT Workflow Application
Overview
The OPENAICHAT workflow application lets you interact with an OpenAI chat model.
How it works
The application allows you to interact with OpenAI chat completion models.
Application logs are available. You can configure them by setting the value of the OpenAiChatLogLevel parameter in the web.config file to 0 to deactivate logs, 1 for error logs, 2 for information logs, or 3 for debug logs; the default value is 0.
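For reference, a web.config entry for this setting would look like the following sketch (the key name comes from the parameter above; its placement under appSettings is an assumption):

```xml
<configuration>
  <appSettings>
    <!-- 0 = logs off (default), 1 = error logs, 2 = information logs, 3 = debug logs -->
    <add key="OpenAiChatLogLevel" value="2" />
  </appSettings>
</configuration>
```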
Required parameters
MODEL
TEXT
IN
ID of the model to use.
You can find available models at https://platform.openai.com/docs/models/model-endpoint-compatibility; the endpoint used by default is /v1/chat/completions.
You can use one of the following configurations: with system/user messages, with a message number, or with a JSON message array.
With system/user messages
SYSTEM_MESSAGE
TEXT
IN
The system message content
USER_MESSAGE
TEXT
IN
The user message content
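Under this configuration, the two parameters presumably map onto a standard two-message array sent to the chat endpoint, along these lines:

```json
[
  { "role": "system", "content": "You are a helpful assistant." },
  { "role": "user", "content": "Summarize this document in one sentence." }
]
```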
With a message number
MESSAGE_ROLEx
TEXT
IN
The role of the message, where x corresponds to the message number; the value should be assistant, system, or user
MESSAGE_CONTENTx
TEXT
IN
The message content, where x corresponds to the message number
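As an illustration of how the numbered parameters likely combine into a message array (the helper below is hypothetical, not part of the application):

```python
def build_messages(params):
    """Collect MESSAGE_ROLEx / MESSAGE_CONTENTx pairs into an ordered
    chat messages array, stopping at the first missing number."""
    messages = []
    x = 1
    while f"MESSAGE_ROLE{x}" in params:
        role = params[f"MESSAGE_ROLE{x}"]
        if role not in ("assistant", "system", "user"):
            raise ValueError(f"MESSAGE_ROLE{x}: unsupported role {role!r}")
        messages.append({"role": role, "content": params[f"MESSAGE_CONTENT{x}"]})
        x += 1
    return messages

params = {
    "MESSAGE_ROLE1": "system", "MESSAGE_CONTENT1": "You are concise.",
    "MESSAGE_ROLE2": "user",   "MESSAGE_CONTENT2": "Hello!",
}
```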
With a JSON message array
MESSAGE_JSON
TEXT
IN
A JSON array of message objects.
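An example value, assuming the standard OpenAI chat messages format (each object carries a role and a content field):

```json
[
  { "role": "system", "content": "You are a helpful assistant." },
  { "role": "user", "content": "What is a workflow application?" },
  { "role": "assistant", "content": "A reusable step in a workflow." },
  { "role": "user", "content": "Give me an example." }
]
```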
Optional parameters
API_KEY
TEXT
IN
OpenAI API key.
By default, this value comes from the OpenAiApiKey parameter in the web.config file.
URL
TEXT
IN
API endpoint; defaults to https://api.openai.com/v1/chat/completions
TEMPERATURE
NUMERIC
IN
Sampling temperature, between 0 and 1; defaults to 1.
Higher values (e.g. 0.8) will make the output more random, while lower values (e.g. 0.2) will make it more focused and deterministic.
TOP_P
NUMERIC
IN
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
Defaults to 1
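To make the top_p description concrete, here is a toy sketch of nucleus sampling over a small token distribution (illustrative only; the real filtering happens server-side in the model):

```python
def nucleus_filter(probs, top_p):
    """Keep the smallest set of tokens whose cumulative probability
    reaches top_p, scanning from most to least likely."""
    kept, cumulative = [], 0.0
    for token, p in sorted(probs.items(), key=lambda kv: -kv[1]):
        kept.append(token)
        cumulative += p
        if cumulative >= top_p:
            break
    return kept

# A toy next-token distribution (made up for illustration).
probs = {"the": 0.5, "a": 0.25, "an": 0.15, "this": 0.10}
```

With top_p = 0.1 only the single most likely token survives; with top_p = 1 every token remains a candidate.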
FREQUENCY_PENALTY
NUMERIC
IN
Number between -2.0 and 2.0; defaults to 0.
Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
MAX_TOKENS
NUMERIC
IN
Maximum number of tokens that can be generated in the chat completion; defaults to 256
PRESENCE_PENALTY
NUMERIC
IN
Number between -2.0 and 2.0; defaults to 0.
Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
RESPONSE_FORMAT
TEXT
IN
Format of the response: json_object or text; defaults to text.
When the value is json_object, the system prompt should contain the JSON keyword.
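Putting the parameters above together, the request body the application presumably sends to /v1/chat/completions would look like the sketch below (the builder function and the model name are illustrative, not the application's actual code):

```python
def build_payload(model, messages, temperature=1, top_p=1,
                  frequency_penalty=0, max_tokens=256,
                  presence_penalty=0, response_format="text"):
    """Assemble a chat-completions request body from the workflow
    parameters, using the defaults documented above."""
    return {
        "model": model,
        "messages": messages,
        "temperature": temperature,
        "top_p": top_p,
        "frequency_penalty": frequency_penalty,
        "presence_penalty": presence_penalty,
        "max_tokens": max_tokens,
        "response_format": {"type": response_format},
    }

payload = build_payload(
    "gpt-4o-mini",  # example model ID; see the MODEL parameter
    [{"role": "user", "content": "Hello!"}],
    temperature=0.2,
)
```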
APP_RESPONSE_IGNORE_ERROR
TEXT
IN
Specifies (Y or N) whether errors should be ignored; defaults to N.
In case of error, if the parameter has Y as its value, the error will be ignored and the defined OUT parameters (APP_RESPONSE_STATUS and APP_RESPONSE_CONTENT) will be mapped. Otherwise, an exception will be thrown.
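This error-handling behavior can be sketched as follows (a hypothetical helper, not the application's code):

```python
def handle_response(status, content, ignore_error="N"):
    """Return (APP_RESPONSE_STATUS, APP_RESPONSE_CONTENT) on success,
    or when errors are ignored; otherwise raise an exception."""
    if status >= 400 and ignore_error != "Y":
        raise RuntimeError(f"OpenAI call failed with status {status}: {content}")
    return str(status), content
```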
RESULT
TEXT
OUT
Raw result of the chat completion call
RESULT_CONTENT
TEXT
OUT
Content of the assistant message
RESULT_TOTAL_TOKENS
NUMERIC
OUT
Total number of tokens used for the generation (prompt + completion)
RESULT_COMPLETION_TOKENS
NUMERIC
OUT
Number of tokens used for the completion
RESULT_PROMPT_TOKENS
NUMERIC
OUT
Number of tokens used for the prompt
APP_RESPONSE_STATUS
TEXT
OUT
Response status code
APP_RESPONSE_CONTENT
TEXT
OUT
Response payload or error message