I think Olaf jumped the gun when he said there is nothing to optimize.
First of all, what data type do you have, and which collation? If you have nvarchar or a Windows collation, you can reduce the execution time by 90% or so by forcing a binary collation. This is because it is not only a matter of scanning the table, but also of scanning the strings in full, and the complex Unicode comparison rules start to take their toll.
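As a sketch of what forcing a binary collation looks like (table and column names here are made up for illustration, and Latin1_General_BIN2 is just one possible binary collation):

```sql
-- Original pattern: case-insensitivity rolled by hand with lower().
-- Forcing a binary collation on both sides keeps that behaviour,
-- but makes the string comparison itself a plain byte comparison.
SELECT CustomerID, CustomerName
FROM   dbo.Customers
WHERE  lower(CustomerName) COLLATE Latin1_General_BIN2
       LIKE '%' + lower(@search) + '%' COLLATE Latin1_General_BIN2;
```

The COLLATE clause overrides the column's collation for this comparison only; the data itself is unchanged.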
But beware that this may change the results. Since you are already applying lower, you are rolling your own case-insensitivity, so that part is covered. But if you also want accent-insensitive search, a binary collation is not what you want.
If you have an SQL collation and varchar, you can still make some gains by switching to a binary collation, but nowhere near as dramatic; it's rather 30% or so.
If the binary collation is not feasible for you, or you need even better speed, there are other solutions, but they are heavy artillery. You chop the strings into fragments and store a mapping between each fragment and the PK value. This is something I've written about, and it appears in the book _SQL Server MVP Deep Dives_, which is a collection of chapters of 10-25 pages in length from a number of SQL Server MVPs. My chapter is called Build Your Own Index. Our royalties for the book go to War Child International. In total, my chapter has three different solutions to the problem.
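To give a rough idea of the fragment approach (this is a simplified sketch, not the exact solutions from the chapter; all table, column, and variable names are invented for the example):

```sql
-- Hypothetical fragment table: every lowercased three-character
-- substring of CustomerName, mapped back to the primary key.
CREATE TABLE dbo.CustomerNameFragments (
    fragment   char(3) NOT NULL,
    CustomerID int     NOT NULL,
    CONSTRAINT pk_namefragments PRIMARY KEY (fragment, CustomerID)
);

-- A '%search%' query can then seek on the fragment table to find
-- candidate rows, instead of scanning the whole Customers table...
SELECT DISTINCT f.CustomerID
FROM   dbo.CustomerNameFragments f
WHERE  f.fragment = lower(left(@search, 3));
-- ...and only the candidate rows are then verified with the full
-- LIKE condition against dbo.Customers.
```

The price is that you must maintain the fragment table yourself, for instance with triggers, as rows in the base table are inserted, updated, and deleted.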