This is a short guideline inspired by an article by J.B. Rainsberger in the Nov/Dec issue of IEEE Software.
The motivation is that whenever your team spends more than 20 minutes in a meeting trying to reach a decision on a design issue, you would be better off simply running experiments and trying the alternatives.
Identify and state a clear question the experiment should answer. Most of the time this is half the work, as people love arguing about 10 things at once, confusing separate matters and mixing up causes and effects.
Fail fast. Identify and state clearly how you'll know early on that the experiment has failed. The whole point of an experiment is to answer a question quickly.
Choose a reasonable amount of time to spend on the experiment and stop when that limit is reached. The "cone of uncertainty" suggests that it takes up to 20% of a project's timeline to get a good estimate of the work remaining (i.e. it takes about a day of experimentation to evaluate an idea for a week-long task). Running out of time on an experiment is probably a sign that you've asked the wrong question. Go back to your team, discuss the issue, come up with another question, set a new time limit and start a new experiment.
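The timeboxing rule above can be made mechanical. Here is a minimal sketch in Python of a timeboxed spike harness; the names (`run_experiment`, `try_alternative`, `TIME_BUDGET_SECONDS`) are hypothetical, chosen only for illustration:

```python
import time

# Hypothetical budget: e.g. half a day of spiking for a week-long task.
TIME_BUDGET_SECONDS = 60 * 60 * 4

def run_experiment(question, try_alternative, time_budget=TIME_BUDGET_SECONDS):
    """Run trial iterations until the question is answered or the budget runs out.

    `try_alternative` is any callable that returns an answer, or None if
    this iteration was inconclusive.
    """
    deadline = time.monotonic() + time_budget
    while time.monotonic() < deadline:
        answer = try_alternative()
        if answer is not None:
            # Question answered: record the answer, then throw the spike code away.
            return answer
    # Out of time: probably the wrong question; regroup with the team.
    return None
```

The point of the hard deadline is that hitting it is itself a result: it tells you to go back and reformulate the question rather than keep digging.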
Throw the experimental code away once the question is answered. The goal is to gather information rapidly, so you may write the code with less care, focusing only on the question and its answer. Most people are tempted to use the experimental code as a foundation for production code. Resist doing that!