Chat Stream

Streamed version of the Chat feature: the raw text is streamed chunk by chunk.

NOTE: For this feature, you can only request one provider at a time.
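Since only one provider may be requested per streamed call, the request body can be assembled with a small helper. This is a minimal sketch, not part of the API: the helper name is illustrative, and the fields follow the parameters documented below.

```python
def build_chat_stream_payload(text, provider, **options):
    # Illustrative helper (not part of the API): assembles the JSON
    # body for a Chat Stream request. This feature accepts only one
    # provider per request, so reject comma-separated lists early.
    if "," in provider:
        raise ValueError("Chat Stream accepts only one provider per request")
    payload = {"providers": provider, "text": text}
    payload.update(options)  # e.g. temperature, max_tokens
    return payload
```

The resulting dictionary can then be sent as the JSON body of the request.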

Body Params
string

A dictionary or JSON object specifying particular models to use for certain providers.
It can be in the following format: {"google": "google_model", "ibm": "ibm_model", ...}.

providers
array of strings
required

It can be one provider (e.g. 'amazon' or 'google') or multiple providers (e.g. 'amazon,microsoft,google') to which the data will be redirected in order to get the processed results.
Providers can also be invoked with specific models (e.g. providers: 'amazon/model1, amazon/model2, google/model3').

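The provider/model syntax above can be split into (provider, model) pairs. The following is an illustrative sketch (the function is not part of the API):

```python
def parse_providers(spec):
    # Split a providers string like 'amazon/model1, google/model3'
    # into (provider, model) pairs; model is None when omitted.
    pairs = []
    for item in spec.split(","):
        provider, _, model = item.strip().partition("/")
        pairs.append((provider, model or None))
    return pairs
```

For example, parse_providers("google") yields a single pair with no model, while parse_providers("amazon/model1, google/model3") yields one pair per entry.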
fallback_providers
array of strings
length ≤ 5

Providers in this list will be used as fallbacks if the call to the provider in the providers parameter fails. To use this feature, you must specify only one provider in the providers parameter, but you can list up to 5 fallbacks.

They will be tried in the order they are given, stopping at the first provider that doesn't fail.

Doesn't work with async subfeatures.

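The two constraints above (a single primary provider, at most 5 fallbacks) can be checked client-side before sending the request. This helper is illustrative, not part of the API:

```python
def build_fallback_body(provider, fallbacks):
    # Illustrative check mirroring the documented constraints:
    # exactly one primary provider, at most 5 fallbacks.
    if "," in provider:
        raise ValueError("only one provider may be set when using fallback_providers")
    if len(fallbacks) > 5:
        raise ValueError("at most 5 fallback providers are allowed")
    return {"providers": provider, "fallback_providers": list(fallbacks)}
```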
boolean
Defaults to true

Optional: when set to true (default), the response is an object of responses keyed by provider name:
{"google": {"status": "success", ...}}
When set to false, the response is a list of response objects:
[{"status": "success", "provider": "google", ...}]
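The list form can be converted to the dict form by re-keying on the provider name. As an illustration (this helper is not part of the API):

```python
def responses_to_dict(response_list):
    # Re-key the list form by provider name, dropping the
    # now-redundant 'provider' field from each entry.
    return {
        item["provider"]: {k: v for k, v in item.items() if k != "provider"}
        for item in response_list
    }
```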

boolean
Defaults to false

Optional: when set to false (default), the extracted items are returned as a list of objects, each carrying its own attributes:
{"items": [{"attribute_1": "x1", "attribute_2": "y2"}, ...]}
When set to true, the response contains an object with each attribute as a list:
{"attribute_1": ["x1", "x2", ...], "attribute_2": ["y1", "y2", ...]}
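The second shape is a transpose of the first. A minimal sketch of that conversion (illustrative, not part of the API):

```python
def items_to_attribute_lists(items):
    # Transpose [{'attribute_1': 'x1', ...}, {'attribute_1': 'x2', ...}]
    # into {'attribute_1': ['x1', 'x2', ...], ...}.
    out = {}
    for item in items:
        for key, value in item.items():
            out.setdefault(key, []).append(value)
    return out
```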

boolean
Defaults to true
boolean
Defaults to false

Optional: shows the provider's original response.
When set to true, a new attribute original_response will appear in the response object.

string | null

Start your conversation here...

string | null

A system message that helps set the behavior of the assistant. For example, 'You are a helpful assistant'.

previous_history
array of objects

A list of the previous exchanges between the user and the chatbot AI. Each item in the list should be a dictionary with two keys: 'role' and 'message'. The 'role' key specifies the speaker and can be 'user' or 'assistant'; the 'message' key contains the text of the conversation from that role. For example: [{'role': 'user', 'message': 'Hello'}, {'role': 'assistant', 'message': 'Hi, how can I help you?'}, ...].

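A history list in the documented format can be maintained with a small helper. This sketch is illustrative (the function is not part of the API); it only enforces the two documented roles:

```python
def add_turn(history, role, message):
    # Append one conversation turn in the documented
    # {'role': ..., 'message': ...} format.
    if role not in ("user", "assistant"):
        raise ValueError("role must be 'user' or 'assistant'")
    history.append({"role": role, "message": message})
    return history
```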
double
0 to 2
Defaults to 0

Higher values mean the model will take more risks; a value of 0 (argmax sampling) works better for scenarios with a well-defined answer.

integer
≥ 1
Defaults to 4096

The maximum number of tokens to generate in the completion. The token count of your prompt plus max_tokens cannot exceed the model's context length.
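The documented ranges for the two sampling parameters (temperature in 0 to 2, max_tokens ≥ 1) can be validated before a request is sent. A minimal sketch, assuming only the constraints and defaults stated above:

```python
def sampling_options(temperature=0.0, max_tokens=4096):
    # Mirror the documented ranges: temperature must lie in [0, 2],
    # max_tokens must be at least 1; defaults taken from the docs.
    if not 0 <= temperature <= 2:
        raise ValueError("temperature must be between 0 and 2")
    if max_tokens < 1:
        raise ValueError("max_tokens must be at least 1")
    return {"temperature": temperature, "max_tokens": max_tokens}
```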

string
enum
Defaults to auto
  • auto - auto
  • required - required
  • none - none
available_tools
array of objects

A list of tools for which the model may generate the right arguments.

tool_results
array of objects

List of results obtained from applying the tool_call arguments to your own tool.

string
enum

Choices:

  • 'low': Minimal reasoning, quick responses
  • 'medium': Balanced reasoning approach
  • 'high': In-depth, comprehensive reasoning

Example: 'high' for complex problem-solving tasks

string
enum
Defaults to continue
  • rerun - Rerun
  • continue - Continue
Response

Credentials: Bearer JWT

text/plain