So if I understand correctly, the only way to avoid duplicates without first querying is to have a pre-defined unique index value.
Creating a unique constraint is a very common way to stop duplicate data from entering the database at the table level. This is not a new concept, and the documentation is openly published. It is entirely up to you whether you want to take advantage of a unique constraint.
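To make the idea concrete, here is a minimal runnable sketch using SQLite through Python's `sqlite3` module. The `OrderItem` table and its columns are hypothetical names borrowed from this discussion; the point is only that a `UNIQUE` constraint on the pair rejects a duplicate at the table level, before any application-side check runs.

```python
import sqlite3

# Hypothetical schema for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE OrderItem (
        CustomerID INTEGER NOT NULL,
        ItemNo     INTEGER NOT NULL,
        Qty        INTEGER NOT NULL,
        UNIQUE (CustomerID, ItemNo)   -- the unique constraint
    )
""")

conn.execute("INSERT INTO OrderItem VALUES (1, 100, 2)")
try:
    # Same (CustomerID, ItemNo) pair: the table itself refuses it.
    conn.execute("INSERT INTO OrderItem VALUES (1, 100, 5)")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

The same `UNIQUE` (or unique index) syntax exists in every mainstream DBMS, though the exact error raised on violation varies by driver.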
So, assuming the CustomerID and ItemNo may repeat, there is really no way to accomplish this without first querying the database. Am I understanding you correctly?
You have not explained what constitutes a duplicate record in your application. More importantly, there is nothing stopping you from querying a table to figure out if the data already exists.
I would at the very least create a unique constraint, because doing so stops duplicates at the table level. That way, if someone writes an ad-hoc insert/update or another application has access to the table, the unique constraint will still stop duplicate entries.
Checking for duplicates at the application level is perfectly fine as well, especially if a unique constraint exists. A unique constraint violation will cause an exception in the application. You have the option of handling the exception and/or checking for duplicates.
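A sketch of that application-level handling, again using SQLite as a stand-in for whatever DBMS you are on (the table name and helper function are hypothetical): the insert is simply attempted, and the constraint violation surfaces as an exception the application can handle.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE OrderItem (
        CustomerID INTEGER NOT NULL,
        ItemNo     INTEGER NOT NULL,
        Qty        INTEGER NOT NULL,
        UNIQUE (CustomerID, ItemNo)
    )
""")

def insert_item(conn, customer_id, item_no, qty):
    """Attempt the insert; let the unique constraint report duplicates."""
    try:
        conn.execute(
            "INSERT INTO OrderItem VALUES (?, ?, ?)",
            (customer_id, item_no, qty),
        )
        return True
    except sqlite3.IntegrityError:
        # The constraint violation lands here; the application decides
        # what to do with it (skip, log, update the existing row, ...).
        return False

insert_item(conn, 1, 100, 2)   # returns True: first insert succeeds
insert_item(conn, 1, 100, 5)   # returns False: duplicate rejected
```

Note that this "try it and catch the exception" style avoids the race a pre-check query has: between your SELECT and your INSERT another session can slip a row in, but the constraint still catches it.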
Your original question is concerned with efficiency. Do you have efficiency specifications, and if so, what are they? If you are worried about moving data between the web and DB servers, consider crafting a stored procedure that does the duplicate check and then performs the insert/update if the check passes.
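SQLite has no stored procedures, but the same single-round-trip idea can be sketched with an upsert statement: the server does the duplicate check and chooses insert-or-update itself, so only one statement crosses the wire. The `ON CONFLICT ... DO UPDATE` syntax below is SQLite/PostgreSQL flavored; in SQL Server the equivalent logic would live in a stored procedure or a `MERGE` statement. Table and column names remain hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE OrderItem (
        CustomerID INTEGER NOT NULL,
        ItemNo     INTEGER NOT NULL,
        Qty        INTEGER NOT NULL,
        UNIQUE (CustomerID, ItemNo)
    )
""")

UPSERT = """
    INSERT INTO OrderItem (CustomerID, ItemNo, Qty) VALUES (?, ?, ?)
    ON CONFLICT (CustomerID, ItemNo) DO UPDATE SET Qty = excluded.Qty
"""

# First call inserts; second call hits the conflict and updates instead.
# Either way it is one statement, one round trip, no separate pre-check.
conn.execute(UPSERT, (1, 100, 2))
conn.execute(UPSERT, (1, 100, 7))
```

Whether this beats a check-then-insert round trip in practice depends on your workload, which is why pinning down the efficiency specs first matters.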