To get a number close to 100, change
, (1.0*sp.rows_sampled)/(1.0*sp.rows) AS sample_pct
to
, (100.0*sp.rows_sampled)/(1.0*sp.rows) AS sample_pct
That is, your query computes fractions, but calls the column sample_pct, which is misleading.
As for how important it is to have fullscan statistics (which is what a sample rate of 100% is referred to as), there is no denying that sampled statistics can sometimes result in less optimal plans. This is particularly true if you query for a condition at the edges of the histogram. To take a simple example, consider a query like SELECT * FROM tbl WHERE col IS NULL. The table has a sizable number of rows where col indeed is NULL, but when statistics were sampled - and for a big table the sample may be only around 1% of the rows - no NULL values were found, so the optimizer says "I guess 1 row!" when in fact there are 98234 rows. This can lead to a plan which is not good at all if you are joining to other tables. (Since NULL values sort first, they are at the edge, so to speak.)
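If you suspect you are in this situation, one way to check is to look at the histogram for the statistics on the column, and then resample just that statistics object with fullscan. A sketch of what that could look like - the table, column and statistics names here are placeholders, not from your system:

```sql
-- Inspect the histogram; the first step (RANGE_HI_KEY = NULL) shows how many
-- NULL rows the optimizer believes there are. Names are hypothetical.
DBCC SHOW_STATISTICS ('dbo.tbl', IX_tbl_col) WITH HISTOGRAM;

-- If the NULL count is far from reality, refresh this one statistic with fullscan:
UPDATE STATISTICS dbo.tbl IX_tbl_col WITH FULLSCAN;
```

Updating a single statistics object this way is much cheaper than a fullscan update of the whole table.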
The problem with fullscan statistics is that they are costly to produce for big tables. If you are rebuilding the index, you get the fullscan for free. It can also be fairly cheap to get fullscan statistics for non-clustered indexes, since they are smaller than the base table. But if you say
UPDATE STATISTICS WideTable WITH FULLSCAN
and that table has hundreds of millions of rows and lots of non-indexed columns, this can take many hours, since each non-indexed column requires a full scan of the table.
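A middle road, if it is mainly the indexed columns that matter for your plans, is to restrict the fullscan to the index statistics, or to target a single statistics object. Again, the names below are hypothetical:

```sql
-- Fullscan only for statistics that belong to indexes; the index b-trees are
-- typically much smaller to scan than the full width of the base table:
UPDATE STATISTICS dbo.WideTable WITH FULLSCAN, INDEX;

-- Or update just one statistics object with fullscan:
UPDATE STATISTICS dbo.WideTable IX_WideTable_SomeCol WITH FULLSCAN;
```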