... this might have to do with table statistics not having been updated yet!
I was banging my head against this problem today: one and the same query took minutes when run within a transaction, and only seconds when run right after the transaction commit. What's special about that case is that the query reads from a table that the statement just before it fills with a massive number of rows (an insert-by-select).
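To make the setup concrete, here is a minimal sketch of the pattern; the table and column names are invented, and the real query was far more complex:

```sql
BEGIN TRANSACTION;

-- Massive insert-by-select into the table the follow-up query reads from
INSERT INTO dbo.ReportData (CustomerId, Amount)
SELECT CustomerId, Amount
FROM dbo.StagingData;

-- Complex follow-up query: minutes inside the transaction,
-- seconds when run right after COMMIT
SELECT r.CustomerId, SUM(r.Amount) AS Total
FROM dbo.ReportData AS r
WHERE r.CustomerId IN (SELECT CustomerId FROM dbo.Customers WHERE Active = 1)
GROUP BY r.CustomerId;

COMMIT TRANSACTION;
```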
As I was testing in an isolated environment without any other connections, it was pretty clear from the start that this was not a locking issue. I suspected a query optimizer problem from the beginning and tried "option(recompile)", rephrasing the query (it was pretty complex, three levels of subqueries and the like), and even indexed views, but to no avail.
I was about to dig into a huge profiled execution plan of the transaction batch (the same query's post-transaction execution plan had looked fine) when I found this posting, and the scales fell from my eyes. My immediate guess: this was not an index issue (as the original poster had presumed), but an "outdated statistics" issue! See, every index has statistics attached to it, and rebuilding an index automatically updates those statistics as well.
But instead of calling some heavyweight "dbcc dbreindex" as explained in the posting (an index rebuild belongs in a maintenance plan, not in an application's runtime SQL batch!), I simply invoked "update statistics" on the table in question, right between the insert statement and the query. Voilà! The query optimizer is back on track and now executes the same query about a hundred times faster. I should note, though, that "update statistics" also comes with a certain overhead (two to three seconds in our case).
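In code terms, the fix boils down to one extra statement between the insert and the query (same invented table name as in the sketch above):

```sql
-- Massive insert-by-select as before
INSERT INTO dbo.ReportData (CustomerId, Amount)
SELECT CustomerId, Amount
FROM dbo.StagingData;

-- Refresh the statistics on the freshly filled table so the optimizer
-- plans the follow-up query against realistic row counts
UPDATE STATISTICS dbo.ReportData;

-- ... complex follow-up query runs here, still within the same transaction
```

By default UPDATE STATISTICS samples the table; adding WITH FULLSCAN scans every row and is more accurate but also more expensive, so the sampled default is usually the better fit inside an application batch.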
I was aware that SQL Server decides on its own when to update statistics. It seems that 50,000 new rows and a complex follow-up query aren't necessarily considered sufficient reason for a statistics update, at least not when everything happens within the same transaction.
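For anyone who wants to verify this on their own tables, SQL Server records when each statistics object was last updated; a quick check (again using the invented table name) could look like this:

```sql
-- List all statistics on the table and when they were last updated
SELECT s.name                                AS statistics_name,
       STATS_DATE(s.object_id, s.stats_id)  AS last_updated
FROM sys.stats AS s
WHERE s.object_id = OBJECT_ID('dbo.ReportData');
```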