@S-A There are a couple of things you could try, but their success will depend on your base prompt, context, and user query.
Since your results aren't hallucinations in most cases, your prompt presumably emphasizes sticking to the provided context, which is pushing the model in that direction.
If your responses, while brief, still cover all the context you're giving the model, that itself is a win, and getting longer (more descriptive) responses might work by simply asking it to be more elaborate.
It could be as simple as ending the base prompt with: "Provide a detailed and comprehensive answer to the question using the information below."
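A minimal sketch of what that might look like when assembling the prompt (the variable names and placeholder strings here are illustrative, not from your setup):

```python
# Placeholder values standing in for your actual retrieved context and query.
context = "[retrieved passages go here]"
question = "[the user's question goes here]"

# Lead with the elaboration instruction, then supply the question and context.
base_prompt = (
    "Provide a detailed and comprehensive answer to the question "
    "using the information below.\n\n"
    f"Question: {question}\n\n"
    f"Context:\n{context}"
)

print(base_prompt)
```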
Another thing you could try is increasing top_p and max_tokens, though I can't really give you specific numbers to try here.
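As a rough sketch, these are usually just request parameters; the exact names and sensible values depend on your provider's API, so treat the numbers below as starting points to experiment with, not recommendations:

```python
# Hypothetical request parameters for a chat/completion call.
# "your-model" is a placeholder; parameter names may differ by SDK.
request_params = {
    "model": "your-model",
    "max_tokens": 1024,  # raise this so longer answers aren't truncated
    "top_p": 0.95,       # widen nucleus sampling slightly for more varied output
}

# These would typically be unpacked into the API call, e.g.
# client.chat.completions.create(messages=messages, **request_params)
print(request_params)
```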