Hello,
To download a file, Invoke-WebRequest isn't the most efficient option: in Windows PowerShell the HTTP response stream is buffered into memory, and the file is only flushed to disk once it has been fully downloaded. This can hurt performance with large files.
I would suggest using the System.Net.WebClient .NET class to download files from your GitHub source. You can refactor your code to something like this:
$url = "<URLpath>"
$output = "C:\SomePath\filename"
(New-Object System.Net.WebClient).DownloadFile($url, $output)
How is this better than Invoke-WebRequest, you might ask?
With System.Net.WebClient (a .NET class rather than a cmdlet), speed and performance improve considerably because the HTTP response stream is written to disk as it arrives, instead of the work being split into separate fetch-into-memory and flush-to-disk steps.
Note: Make sure the folder that will contain the output file (the path you provide in $output) already exists. DownloadFile creates or overwrites the file itself, but it throws an error if the destination directory is missing.
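To guard against that error, you can create the destination directory up front. A minimal sketch (the paths here are hypothetical placeholders; substitute your own):

```powershell
# Hypothetical output path for illustration
$output = Join-Path $env:TEMP "Downloads\filename.txt"

# Derive the parent directory from the output path
$dir = Split-Path -Path $output -Parent

# Create it if missing; -Force makes this a no-op when it already exists
New-Item -ItemType Directory -Path $dir -Force | Out-Null

# Now the download can safely write to $output, e.g.:
# (New-Object System.Net.WebClient).DownloadFile($url, $output)
```
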
Additionally, since the solution above only downloads the file and doesn't handle compressed archives, here's another workaround you can use in PowerShell when the file is a zip that you also want to extract:
$url = "<URLpath>"
# Name the local zip after the last segment of the URL
$zipOutput = "C:\Output\" + $(Split-Path -Path $url -Leaf)
$extractedOutput = "C:\ExtractedOutput\"
(New-Object System.Net.WebClient).DownloadFile($url, $zipOutput)
# Shell.Application can browse a zip archive as if it were a folder
$shellObj = New-Object -ComObject Shell.Application
$files = $shellObj.Namespace($zipOutput).Items()
# 0x10 = "Yes to All": overwrite existing files without prompting
$shellObj.Namespace($extractedOutput).CopyHere($files, 0x10)
Start-Process $extractedOutput
The zip file will be downloaded to the path provided in $zipOutput, and the script will then extract its contents into the path provided in $extractedOutput. Make sure that the 'C:\Output' and 'C:\ExtractedOutput' folders already exist on the machine where you run this script; the Namespace method returns $null for a folder that doesn't exist.
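On PowerShell 5.0 or later, the built-in Expand-Archive cmdlet is a simpler alternative to the Shell.Application COM approach. A self-contained sketch (it builds a small zip locally with Compress-Archive just so the extraction step has something to work on; all paths are hypothetical):

```powershell
# Hypothetical working folder for the demonstration
$work = Join-Path $env:TEMP "demo-expand"
New-Item -ItemType Directory -Path $work -Force | Out-Null

# Create a sample file and zip it up, standing in for the downloaded archive
$src = Join-Path $work "hello.txt"
"hello" | Set-Content -Path $src
$zip = Join-Path $work "archive.zip"
Compress-Archive -Path $src -DestinationPath $zip -Force

# Expand-Archive replaces the Shell.Application steps in one call;
# -Force overwrites files that already exist in the destination
$dest = Join-Path $work "extracted"
Expand-Archive -Path $zip -DestinationPath $dest -Force
```

Unlike the COM approach, Expand-Archive creates the destination folder for you if it doesn't exist.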
--If the reply is helpful, please Upvote and Accept as answer--