Regarding your first issue, the context window: the product team has confirmed that this is a UI limitation. I am still waiting for their confirmation on whether they will update the studio UX to allow a higher input token limit by default.
Regarding your second issue, about training data: this is expected behavior. Part of the problem is that the model doesn't know about itself (see: "When I ask GPT-4 what model it's running, it tells me it's running GPT-3. Why does this happen?"), and without RAG augmentation it isn't a fully reliable source of facts in general, though it will often get things right if it has seen enough instances of something in its training data.
For newer information, it isn't surprising that older information, which has more occurrences in the training data, can crowd out newer information depending on how a question is asked. The training data definitely goes beyond 2021.
Here is an example question you can ask to confirm that the training data goes beyond 2021:
The product team is also working internally to get this information updated under the FAQs.
Please let me know if you have any questions.
Please 'Accept as answer' and upvote if this helped, so that it can help others in the community looking for help on similar topics.