Graph auditLogs/signIns is slow and gets slower with paging

McAninch, Robin 51 Reputation points

I am trying to pull a day's worth of sign-ins from auditLogs/signIns like this:

    $filter=createdDateTime ge 2023-03-07

I get data back the first time (6 seconds, and it gets slower to the point of being unresponsive) along with an @odata.nextLink value, so I run the request again (and again) from C# using HttpClient. Is there any way to speed this up?

using (HttpClient client = new HttpClient())
{
    client.DefaultRequestHeaders.Authorization =
        new System.Net.Http.Headers.AuthenticationHeaderValue("Bearer", GetToken());

    // List signIns endpoint with the day filter; subsequent page URLs arrive via @odata.nextLink.
    string URL = "https://graph.microsoft.com/v1.0/auditLogs/signIns?$filter=createdDateTime ge 2023-03-07";
    bool doContinue = true;

    // We are looping because we need to page based on size.
    while (doContinue)
    {
        HttpResponseMessage response = await client.GetAsync(URL);

        if (response != null && response.IsSuccessStatusCode)
        {
            string r = await response.Content.ReadAsStringAsync();
            // Deserialize into a DTO (t) exposing the value[] array and the @odata.nextLink property.
            var t = System.Text.Json.JsonSerializer.Deserialize<SignInPage>(r);

            if (t.value != null && t.value.Length > 0)
            {
                // [Add data list of objects]
            }

            if (!string.IsNullOrEmpty(t.odatanextLink))
                URL = t.odatanextLink;   // follow the next page
            else
                doContinue = false;      // no more pages
        }
        else
        {
            doContinue = false;
        }
    }
}


2 answers

  1. HarmeetSingh7172 4,811 Reputation points

    Hello McAninch, Robin

    Thanks for reaching out!

    The List signIns Graph API retrieves the Azure AD user sign-ins for your tenant. Sign-ins where a username and password are passed as part of the auth token, as well as successful federated sign-ins, are currently included in the sign-in logs. The maximum and default page size is 1,000 objects, and by default the most recent sign-ins are returned first.
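    Because results come back one page at a time, the client has to read each page's item array and its @odata.nextLink. A minimal sketch of that step, assuming System.Text.Json; the helper name InspectPage is mine, not from the original post:

```csharp
using System.Text.Json;

// Hypothetical helper: report how many items one Graph collection page holds
// and where (if anywhere) the next page lives.
static (int Count, string? NextLink) InspectPage(string json)
{
    using JsonDocument doc = JsonDocument.Parse(json);
    JsonElement root = doc.RootElement;

    int count = root.TryGetProperty("value", out JsonElement value)
        ? value.GetArrayLength()
        : 0;

    string? next = root.TryGetProperty("@odata.nextLink", out JsonElement link)
        ? link.GetString()
        : null;

    return (count, next);
}
```

    A caller loops, requesting NextLink until it comes back null; at the default page size of 1,000, a busy tenant's day of sign-ins can span many pages.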

    There is a possibility that the number of records in the API response is too high. Since this API supports the $top query parameter, you can use it to limit the number of results per page, which can improve performance.

    I tried the API below in my test tenant and it works smoothly for me:

        $filter=createdDateTime+ge+2023-02-10&$top=200

    C# Code Snippet:

    var graphClient = new GraphServiceClient(requestAdapter);
    var result = await graphClient.AuditLogs.SignIns.GetAsync((requestConfiguration) =>
    {
        requestConfiguration.QueryParameters.Filter = "createdDateTime ge 2023-02-10";
        requestConfiguration.QueryParameters.Top = 200;
    });


    Hope this helps.

    If the answer is helpful, please click Accept Answer and kindly upvote. If you have any further questions about this answer, please click Comment.


  2. McAninch, Robin 51 Reputation points

    Good morning, and thank you for taking the time. The issue is/was that we are pulling a lot of information, which necessitated many round trips and caused the system to degrade. Using the signins endpoint, I couldn't select only the fields I wanted (a limitation of the endpoint's OData support) to shrink the payload and hopefully speed up performance; the loop executed at 7-second intervals at first and bogged down to 30+ seconds as I kept hitting it. When I tried to thread it out, I started to get 429 errors. All told, it took 3 hours to pull all the data. Ultimately we took this to Microsoft for guidance in the form of a ticket. Their recommendation was to use their Beta endpoint:

        $select=userPrincipalName,signInActivity&$top=999
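    The 429 responses described above are Graph throttling. The usual remedy is to honor the Retry-After header on the 429 and fall back to exponential backoff when it is absent; a sketch (the helper name and the 60-second cap are my assumptions, not from the thread):

```csharp
using System;

// Hypothetical helper: choose how long to wait after a 429. Honor Retry-After
// when the service supplies it; otherwise double a 2-second base per attempt,
// capped at 60 seconds (cap is an assumption).
static TimeSpan ThrottleDelay(int attempt, int? retryAfterSeconds)
{
    if (retryAfterSeconds.HasValue)
        return TimeSpan.FromSeconds(retryAfterSeconds.Value);

    double seconds = Math.Min(60, 2 * Math.Pow(2, attempt));
    return TimeSpan.FromSeconds(seconds);
}
```

    Before retrying, the caller would await Task.Delay(ThrottleDelay(attempt, retryAfter)) instead of hammering the endpoint from multiple threads.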

    I was confused by the recommendation, given the non-guaranteed nature of the Beta endpoint, but I was assured that in this case it was my best option. This endpoint allowed me to use $select, and they had added a top of 999 rather than the default 1,000 page size I was getting before. At any rate, I was able to pull approximately 200K rows in a fraction of the time on a single thread with no 429 errors. So far that works for what we need to do, which is gather the info to double-check against other processes.
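    For anyone following the same route: the query string above selects user properties, which suggests the beta users collection. Assuming that endpoint (an inference on my part, not stated explicitly above), a single-threaded paged pull looks roughly like this, with the bearer token set on the client as in the original question:

```csharp
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

// Assumed first-page URL builder; the beta users endpoint is inferred from the
// $select fields in the recommendation above.
static string BuildFirstPageUrl(string select, int top) =>
    $"https://graph.microsoft.com/beta/users?$select={select}&$top={top}";

// Follow @odata.nextLink page by page on a single thread, counting rows.
static async Task<int> PullAllRowsAsync(HttpClient client)
{
    int rows = 0;
    string? url = BuildFirstPageUrl("userPrincipalName,signInActivity", 999);

    while (url != null)
    {
        using JsonDocument doc = JsonDocument.Parse(await client.GetStringAsync(url));

        if (doc.RootElement.TryGetProperty("value", out JsonElement value))
            rows += value.GetArrayLength();

        url = doc.RootElement.TryGetProperty("@odata.nextLink", out JsonElement next)
            ? next.GetString()
            : null;
    }

    return rows;
}
```

    Because @odata.nextLink preserves the $select and $top parameters, each page stays small, which matches the reported speedup over pulling full sign-in objects.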