@Noel Wilson Do you mean increasing the maximum number of input and output tokens for a particular model? The max request tokens documented on the models page is the model's context window limit, and I don't think it can be increased beyond that maximum.
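To illustrate that point, here is a minimal Python sketch (the 4096-token limit and the helper function are hypothetical, chosen only for illustration): the input tokens plus the requested output tokens must fit within the model's documented context window, and the window itself cannot be raised.

```python
# Hypothetical documented max request tokens (context window) for some model.
# Real values come from the models page and differ per model.
MAX_CONTEXT_TOKENS = 4096

def fits_context_window(prompt_tokens: int, max_output_tokens: int) -> bool:
    """Return True if input plus requested output tokens stay within
    the model's context window. The limit itself is fixed."""
    return prompt_tokens + max_output_tokens <= MAX_CONTEXT_TOKENS

print(fits_context_window(3000, 1000))  # True: 4000 <= 4096
print(fits_context_window(3500, 1000))  # False: 4500 > 4096
```

If a request exceeds the window, the only options are to shorten the prompt or lower the requested `max_tokens`; the limit cannot be configured upward.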
What is configurable is the tokens-per-minute (TPM) limit, available on the Quotas page in Azure OpenAI Studio. You can raise it there up to the maximum TPM allowed for your subscription. Is this what you are looking for?
If this answers your query, please click Accept Answer and select Yes for "Was this answer helpful". And if you have any further questions, do let us know.