Try this if it helps.
Create the upload session:
```python
import os
import requests

filename = 'Large File.txt'
result = requests.post(
    f'{ENDPOINT}/createUploadSession',
    headers={'Authorization': 'Bearer ' + access_token},
    json={
        '@microsoft.graph.conflictBehavior': 'replace',
        'description': 'A large test file',
        'fileSystemInfo': {'@odata.type': 'microsoft.graph.fileSystemInfo'},
        'name': filename
    }
)
upload_session = result.json()
upload_url = upload_session['uploadUrl']
```
This request returns information about the session, including the URL where we will send the file chunks.
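For reference, the session response body looks roughly like this (field names are from the Graph documentation; the values here are illustrative, not real):

```python
# Sketch of a createUploadSession response; values are made up for illustration.
upload_session = {
    'uploadUrl': 'https://sn3302.up.1drv.com/up/fe6987415ace7X4e1eF866337',
    'expirationDateTime': '2021-01-29T09:21:55.523Z',  # session dies after this
    'nextExpectedRanges': ['0-'],  # server has received no bytes yet
}

upload_url = upload_session['uploadUrl']
print(upload_session['expirationDateTime'])
```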
Now calculate the number of chunks you will need to send:
```python
st = os.stat(filename)
size = st.st_size
CHUNK_SIZE = 10485760  # 10 MiB
# Ceiling division: one extra chunk for any trailing partial chunk.
chunks = size // CHUNK_SIZE + (1 if size % CHUNK_SIZE else 0)
```

Note the parentheses around the conditional: without them, Python parses the expression as `(int(size / CHUNK_SIZE) + 1) if size % CHUNK_SIZE > 0 else 0`, which wrongly yields zero chunks when the file size is an exact multiple of the chunk size.
The documentation recommends a chunk size of about 10 MiB for most cases, but you may want to adjust this; keep it a multiple of 320 KiB (327,680 bytes) as the docs require, which 10485760 is.
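The chunk count above is just a ceiling division; as a quick sanity check (the file sizes here are arbitrary):

```python
import math

CHUNK_SIZE = 10485760  # 10 MiB

def num_chunks(size: int) -> int:
    # One chunk per full CHUNK_SIZE, plus one for any trailing partial chunk.
    return size // CHUNK_SIZE + (1 if size % CHUNK_SIZE else 0)

for size in (1, CHUNK_SIZE, CHUNK_SIZE + 1, 3 * CHUNK_SIZE):
    # Equivalent to ceiling division.
    assert num_chunks(size) == math.ceil(size / CHUNK_SIZE)
    print(size, num_chunks(size))
```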
Now upload the chunks:
```python
with open(filename, 'rb') as fd:
    start = 0
    for chunk_num in range(chunks):
        chunk = fd.read(CHUNK_SIZE)
        bytes_read = len(chunk)
        upload_range = f'bytes {start}-{start + bytes_read - 1}/{size}'
        print(f'chunk: {chunk_num} bytes read: {bytes_read} upload range: {upload_range}')
        result = requests.put(
            upload_url,
            headers={
                'Content-Length': str(bytes_read),
                'Content-Range': upload_range
            },
            data=chunk
        )
        result.raise_for_status()
        start += bytes_read
```
Follow Examples 2 and 3 of this documentation to continue uploading byte ranges until the entire file has been uploaded.
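If the upload is interrupted, you can GET the upload URL to query the session; its `nextExpectedRanges` field tells you where to resume. A minimal sketch of turning that field into a seek offset — the helper name is mine and the sample value is illustrative:

```python
def resume_offset(next_expected_ranges):
    # Entries look like '26214400-' or '26214400-39321599'; the start of the
    # first range is the next byte offset the server expects.
    first_range = next_expected_ranges[0]
    return int(first_range.split('-')[0])

print(resume_offset(['26214400-']))  # → 26214400
```

You would then `fd.seek(resume_offset(...))`, set `start` to the same offset, and continue the chunk loop from there.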
Thanks!