Richard,
By "range" I assume you mean distance, and not location (as in my particular rifle range). In some ways, it would be best to test at the distance of the match. However, there are at least two reasons not to. 1st, we don't often have those distances available to us. My longest shooting distance is only 300 yds, and most of my shooting begins beyond that distance (turkeys at 385 meters out to 1000 yds).
The 2nd reason is that there are so many different things that can go wrong at longer distances during the flight of an individual bullet. Even if you can see every single wind current and eddy out to the 1000 yard line and can accurately adjust for all of them, the reality is that the bullet doesn't arrive at the target until about 4 seconds after you decide to pull the trigger, so lots of things can happen with respect to conditions while the bullet is in the air. Things you could not predict unless you are clairvoyant. So, by restricting your testing to shorter distances (I usually like 200 yds), you can remove some of the environmental variance, and that increases your statistical power to detect differences in load accuracy.
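If you'd like to see that power argument in numbers, here's a toy simulation in Python. Every figure in it (the 0.6 in baseline group size, the 0.1 in true difference between loads, the two noise levels) is made up purely for illustration; the point is only that, with the same number of groups, halving the condition-driven noise makes a real difference far easier to detect.

```python
import random

random.seed(1)

def detect_rate(noise_sd, n_groups=10, trials=2000, diff=0.1):
    """Fraction of simulated test sessions in which load B's mean group
    size visibly exceeds load A's, at a given level of environmental noise.
    All parameter values are hypothetical."""
    hits = 0
    for _ in range(trials):
        # n_groups group sizes per load, in inches; B is truly worse by `diff`
        a = [random.gauss(0.6, noise_sd) for _ in range(n_groups)]
        b = [random.gauss(0.6 + diff, noise_sd) for _ in range(n_groups)]
        # crude detection rule: observed difference beats half the true one
        if sum(b) / n_groups - sum(a) / n_groups > diff / 2:
            hits += 1
    return hits / trials

r_long = detect_rate(noise_sd=0.25)   # noisy, long-range-like conditions
r_short = detect_rate(noise_sd=0.10)  # calmer, short-range-like conditions
print(r_long, r_short)
```

Same loads, same number of groups; only the environmental variance changes, and the detection rate climbs accordingly.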
Maybe I should add a 3rd reason as well. By whatever method you attempt to discover the most accurate load, you will face exactly the same problems. The use of a proper, formal statistical procedure only minimizes these problems to the best extent possible. Shooting a 10-shot group or a 3-shot group with each load and picking the winner has exactly the same problems, only much MORE so, not less.
There are, of course, a few precision issues that really must be tested at the range of your matches, the most important of which is bullet stability. The reason being that instability increases exponentially with distance, not linearly. So, imagine you have two bullets, one of which is very stable all the way to 1000 yds, and one that is not. But you don't know which is the stable one. At 200 yds, the differences in precision might be extremely small, so small that you cannot determine the winner, even after shooting 20 2-shot groups of each one. In such a case, you wouldn't be able to determine the winner with a couple of traditional 10-shot groups either. But at 1000 yds, that unstable bullet may reveal itself pretty easily. Yet you can imagine that if conditions are a bit breezy, it will be harder to assess instability than if it is dead calm (oh so rare at 1000 yds, but it happens sometimes).
On to wind effect. Wind effects on any testing method are a problem. If they weren't, we could shoot just two bullets of each load, measure, and go home (assuming we are perfect aimers and trigger breakers). But that's not the case. So, we try for ideal and consistent conditions, but we also recognize that there will be differences in group size caused by stochastic factors beyond our ability to control or even detect. And that is why we shoot more than one group. You hope the 5 or 10 or 20 2-shot groups you shoot with each load will all experience about the same range of conditions over the course of the testing event. In that way, the variation in conditions is controlled for by replication, and the average group sizes of the two loads can be compared to each other statistically.
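As a sketch of what that comparison might look like in practice (the group sizes below are invented for illustration, and Welch's two-sample t-test is my choice of test here, not the only defensible one):

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t statistic and approximate degrees of freedom
    for comparing the mean group sizes of two loads (unequal variances OK)."""
    na, nb = len(a), len(b)
    va, vb = variance(a), variance(b)          # sample variances (n-1)
    se2 = va / na + vb / nb                    # squared standard error of the difference
    t = (mean(a) - mean(b)) / math.sqrt(se2)
    # Welch-Satterthwaite degrees of freedom
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical group sizes (inches) from ten 2-shot groups per load,
# fired alternately so both loads see roughly the same conditions.
load_a = [0.61, 0.54, 0.70, 0.48, 0.66, 0.58, 0.72, 0.50, 0.63, 0.57]
load_b = [0.75, 0.69, 0.81, 0.64, 0.77, 0.70, 0.85, 0.62, 0.74, 0.71]

t, df = welch_t(load_a, load_b)
print(f"t = {t:.2f}, df = {df:.1f}")
```

A large-magnitude t against the appropriate critical value is what lets you say one load really does shoot smaller groups, rather than just picking whichever load got lucky that day.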
I don't always use a formal statistical approach when I begin to work up a load. For gross changes in load precision, the traditional, informal statistical methods that shooters apply will suffice. But once you get down to the nitty gritty - Federal 210s vs CCI-BR2s or LDPE wads vs fiber wads - now you are talking small improvements, if any, and formal statistical testing is really the only way (besides getting lucky) of determining the best selection.
There are few sports where statistical testing makes so much sense as it does in competitive shooting, but there seems to be no sport where statistics are more cavalierly dismissed.