Model:
Context Tokens:
TRAINING DATA:
Parameter
Continuous conversation
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both.
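The value set here is passed straight through as the `temperature` (or, alternatively, `top_p`) field of the chat completion request. A minimal sketch, assuming the official `openai` Node SDK; the model name and prompt are illustrative placeholders:

```ts
import OpenAI from "openai";

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function ask(question: string): Promise<string> {
  const response = await client.chat.completions.create({
    model: "gpt-3.5-turbo",   // illustrative model name
    messages: [{ role: "user", content: question }],
    temperature: 0.2,         // low value: more focused, deterministic output
    // top_p: 0.9,            // alternative to temperature; avoid setting both
  });
  return response.choices[0]?.message?.content ?? "";
}
```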
Plugins
Note: The networking (web-browsing) feature is currently in testing and may return inaccurate information or errors. When it is enabled, the retrieved web content is added to the context, so more tokens are consumed; the feature itself does not incur any token charge beyond that. The Claude model is not supported yet.
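A hedged sketch of why browsing increases token usage: retrieved page snippets are injected into the conversation as extra context, and that added text counts toward the context tokens. `searchWeb` is a hypothetical helper and `"gpt-3.5-turbo"` is an illustrative model name; the sketch again assumes the `openai` Node SDK:

```ts
import OpenAI from "openai";

async function askWithBrowsing(
  client: OpenAI,
  searchWeb: (query: string) => Promise<string[]>, // hypothetical search helper
  question: string,
) {
  const snippets = await searchWeb(question);
  return client.chat.completions.create({
    model: "gpt-3.5-turbo",
    messages: [
      // The longer this injected block of web results, the more context
      // tokens the request consumes for the same user question.
      { role: "system", content: `Web search results:\n${snippets.join("\n---\n")}` },
      { role: "user", content: question },
    ],
  });
}
```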
Text-to-speech
Note: Please select the language that matches the text, otherwise speech synthesis cannot be performed. The AI voice may not play on some mobile devices.
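A minimal sketch of language-aware speech synthesis using the browser's standard Web Speech API (an assumption; the project's actual TTS backend is not shown here). Picking a voice whose `lang` matches the text is what the note above asks for, since many engines silently refuse to speak mismatched text:

```ts
function speak(text: string, lang: string = "en-US"): void {
  const utterance = new SpeechSynthesisUtterance(text);
  utterance.lang = lang;
  // Prefer an installed voice that matches the requested language, if any.
  const voice = window.speechSynthesis
    .getVoices()
    .find((v) => v.lang.startsWith(lang.split("-")[0]));
  if (voice) utterance.voice = voice;
  // Some mobile browsers only allow this call inside a user gesture
  // (e.g. a button click), which is one reason playback can fail on mobile.
  window.speechSynthesis.speak(utterance);
}
```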
