The theory is that you can leave "max server memory" at its default, and SQL Server will manage the memory for you. That is, it will grab as much as it can, and then yield memory if there is memory pressure on the machine, that is, if some other process wants the memory.
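For what it is worth, you can see how much memory the SQL Server process has actually grabbed at any point in time from sys.dm_os_process_memory. A minimal example (the DMV reports the values in KB, so I divide to get MB):

    SELECT physical_memory_in_use_kb / 1024 AS physical_memory_in_use_mb,
           memory_utilization_percentage
    FROM   sys.dm_os_process_memory;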
In practice, many DBAs always set this value to make sure that the OS has room to breathe. And if there are other services or applications running on the machine, there is every reason to cap SQL Server. Even more so if there are multiple instances of SQL Server running on the machine.
If SQL Server is the only thing running on this box, I would probably set max server memory to 700000, although there is not much science behind that number.
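For reference, you set it with sp_configure, and the value is in megabytes. Something like this (700000 here is simply the number I suggested above, adjust to taste):

    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max server memory (MB)', 700000;
    RECONFIGURE;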
You mention page faults. But are they page faults in general, or for the SQL Server process specifically? SQL Server should preferably not page-fault, and there is a cure for this, to wit, granting the service account for SQL Server the Windows permission "Lock pages in memory". I think that with this setting, it is even more important to set "max server memory".
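The same DMV as above, sys.dm_os_process_memory, exposes page_fault_count and locked_page_allocations_kb for the SQL Server process, and if memory serves, from SQL 2016 SP1 on you can also check sys.dm_os_sys_info to see whether locked pages are actually in effect. For instance:

    -- LOCK_PAGES (or LARGE_PAGES) means "Lock pages in memory" is in effect;
    -- CONVENTIONAL means it is not.
    SELECT sql_memory_model_desc
    FROM   sys.dm_os_sys_info;

    SELECT page_fault_count,
           locked_page_allocations_kb / 1024 AS locked_page_allocations_mb
    FROM   sys.dm_os_process_memory;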
But your real problem seems to be that some update operations are taking longer. I don't think these considerations about memory are related to that problem. Rather, this is likely to be due to a change in query plans. Are you on SQL 2016 or later, and have you enabled Query Store for the database? In that case you have a wealth of material to dig into.
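If Query Store is not on yet, you can enable it, and once it has collected some data, look at the per-plan statistics; an update query that suddenly shows up with a new plan_id and a worse average duration is a prime suspect. A sketch, with YourDB as a placeholder for your database name (avg_duration in the runtime-stats view is in microseconds):

    ALTER DATABASE YourDB SET QUERY_STORE = ON (OPERATION_MODE = READ_WRITE);

    -- One row per query and plan, with execution count and average duration in ms.
    -- A query that appears with more than one plan_id is worth a closer look.
    SELECT q.query_id, p.plan_id, qt.query_sql_text,
           rs.executions, rs.avg_duration_ms
    FROM   sys.query_store_query q
    JOIN   sys.query_store_query_text qt ON qt.query_text_id = q.query_text_id
    JOIN   sys.query_store_plan p        ON p.query_id = q.query_id
    JOIN   (SELECT plan_id,
                   SUM(count_executions)    AS executions,
                   AVG(avg_duration) / 1000 AS avg_duration_ms
            FROM   sys.query_store_runtime_stats
            GROUP  BY plan_id) AS rs      ON rs.plan_id = p.plan_id
    ORDER  BY q.query_id, p.plan_id;

There is also a built-in "Regressed Queries" report under the Query Store node in SSMS, which does much of this digging for you.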