Assuming that you know beforehand which columns to aggregate on, I would consider loading the data into an nvarchar(MAX) column, so that the load is quick. Then I would have a background task - an Agent job, or maybe something Service Broker based - that takes a few rows at a time, extracts the fields you need to aggregate on, and loads them into a table; this routine would also compress the data.
An alternative is to load the JSON into the binary column directly, but uncompressed, which would be indicated by a flag, and have the background job work on that column.
Of course, this approach assumes that you don't need those aggregations on the spot, but can wait until the background job has completed.
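To make the idea concrete, here is a minimal sketch (in Python, outside the database) of the transformation the background job would perform on each row: parse the JSON, pull out the aggregation columns, and compress the original document. The field names (`customer_id`, `amount`) are hypothetical placeholders for whatever your schema actually aggregates on; gzip is used here since SQL Server's COMPRESS() also produces gzip-format output.

```python
import gzip
import json

def process_row(raw_json: str) -> tuple[dict, bytes]:
    """One step of the background job: extract the fields needed
    for aggregation, then compress the original JSON document."""
    doc = json.loads(raw_json)
    # Hypothetical aggregation columns - adjust to your actual schema.
    extracted = {
        "customer_id": doc.get("customer_id"),
        "amount": doc.get("amount"),
    }
    # Compress the raw document for storage in the binary column.
    compressed = gzip.compress(raw_json.encode("utf-8"))
    return extracted, compressed

row = '{"customer_id": 42, "amount": 19.95, "details": {"sku": "A1"}}'
fields, blob = process_row(row)
```

In the database itself, the same extraction could be done with JSON_VALUE() and the compression with COMPRESS(), with the flag column flipped once the row has been processed.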