Hi @Jelle Holtkamp,
I have checked internally on your second ask; it is not possible, as this is not how LLMs work: the model does not "remember" the initial conversation. You can, however, work towards crafting your prompts with Chain-of-Thought prompting to lower the likelihood of inaccurate outcomes, as in the sketch below.
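A Chain-of-Thought style prompt restates the relevant context in the request itself and asks the model to reason step by step before answering. The snippet below is a minimal sketch only, assuming the openai Python package (v1.x) against an Azure OpenAI chat deployment; the endpoint, API version, key, and deployment name are placeholders, not values from this thread.

```python
# Minimal Chain-of-Thought prompting sketch (placeholders, not real values).
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com/",  # placeholder
    api_key="<your-api-key>",                                    # placeholder
    api_version="2024-02-01",
)

# Ask the model to restate the facts from the prompt and reason step by step,
# instead of relying on it to "remember" earlier context on its own.
messages = [
    {
        "role": "system",
        "content": (
            "Answer only from the facts given in the user message. "
            "First list the relevant facts, then reason step by step, "
            "then give the final answer."
        ),
    },
    {
        "role": "user",
        "content": (
            "Facts: the order was placed on 3 March and shipping takes "
            "5 business days. Question: by when should it arrive? "
            "Think step by step."
        ),
    },
]

response = client.chat.completions.create(
    model="<your-deployment-name>",  # Azure deployment name, placeholder
    messages=messages,
)
print(response.choices[0].message.content)
```

Because the facts and the reasoning instructions travel with every request, the model does not need to "remember" anything from earlier turns, which is what helps reduce inaccurate outcomes.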
Please let me know if you have any other questions.
Thanks
Saurabh
Please do not forget to "Accept the answer" wherever the information provided helps you, so that it can help others in the community as well.