Every team loves running experiments. Launch a new feature, test an idea, or try a big change. Then, check the data. Did it work? Did users love it?

But sometimes, people go digging for answers in the results. They search for any number that looks “good.” This is called metric fishing, and it’s a problem.

Why? Because metric fishing leads to bad science. It’s like throwing a dart first and then drawing the target around it. Looks great — but it’s not real.

What is an Experiment Review Board?

An Experiment Review Board (ERB) is a group of people who check your experiments. They make sure you follow best practices. They help you ask the right questions before you look at the answers.

Think of them like the referees of fair testing. They want your results to be real — not just lucky guesses.

The Problem with Metric Fishing

Imagine this. You run an A/B test. Then you measure:

  • Click-through rate
  • Conversion rate
  • Time on page
  • Bounce rate
  • Revenue per user
  • Number of visits per user

That’s six metrics. The more things you measure, the more likely something looks “statistically significant” just by chance. With six independent metrics each checked at the usual 5% threshold, the odds of at least one false alarm are roughly one in four, even if your change does nothing. But it’s not real. It’s noise.

This is metric fishing. You cast a wide net in a sea of data, hoping to catch something. But you might hook a false positive. That means you think your change worked — but it didn’t. You got tricked by randomness.
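
How bad is the risk? A quick simulation makes it concrete. The sketch below (plain Python with NumPy and SciPy; the sample sizes and metric count are made up for illustration) runs thousands of A/A tests, where the “treatment” changes nothing at all, checks six metrics each time, and counts how often at least one of them looks significant at p < 0.05.

# Sketch: how often does an A/A test (no real change) produce at least one
# "significant" metric when six metrics are checked at p < 0.05?
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)
n_users = 2_000        # users per group (illustrative)
n_metrics = 6          # click-through rate, conversion rate, time on page, ...
n_experiments = 2_000  # simulated A/A tests

false_alarm_count = 0
for _ in range(n_experiments):
    p_values = []
    for _ in range(n_metrics):
        # Both groups come from the SAME distribution: the change does nothing.
        group_a = rng.normal(loc=0.0, scale=1.0, size=n_users)
        group_b = rng.normal(loc=0.0, scale=1.0, size=n_users)
        p_values.append(stats.ttest_ind(group_a, group_b).pvalue)
    if min(p_values) < 0.05:  # at least one metric "looks significant"
        false_alarm_count += 1

# Expect roughly 26% (about 1 - 0.95 ** 6), even though nothing changed.
print(f"A/A tests with at least one false alarm: {false_alarm_count / n_experiments:.1%}")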

How ERBs Stop Metric Fishing

ERBs stop the madness. They help you:

  1. Pre-register your hypothesis: What are you trying to learn?
  2. Choose your primary metric: What will count as success?
  3. Set your time frame: When will you stop and check results?

This prevents teams from “reading the tea leaves” in 100 dashboards. It’s a bit like sealing your test plan in an envelope. You promise what you’re going to do. Then you do it. No changing the rules halfway.
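
What does “sealing the envelope” look like in practice? Here is a minimal sketch of a pre-registered plan written as code. The field names and example values are hypothetical, not a standard ERB format, but they capture the three commitments above.

# Sketch of a pre-registered test plan, written down BEFORE any results are seen.
# Field names and example values are hypothetical, for illustration only.
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)  # frozen: no changing the rules halfway
class ExperimentPlan:
    hypothesis: str      # 1. what you are trying to learn
    primary_metric: str  # 2. the one metric that counts as success
    decision_rule: str   #    what result means "launch" vs "don't launch"
    start: date          # 3. the time frame: when the test starts...
    end: date            #    ...and when you stop and check results

plan = ExperimentPlan(
    hypothesis="The new homepage layout increases video starts per user.",
    primary_metric="video_starts_per_user",
    decision_rule="Launch only if video_starts_per_user shows a significant lift at alpha = 0.05.",
    start=date(2024, 3, 1),
    end=date(2024, 3, 15),
)
print(plan)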

Why People Fish for Metrics

Sometimes teams really want their idea to work. They believe in it. They need to prove it to a boss or a board.

Other times, they just don’t know better. Data is seductive. Dashboards are addictive. “Let’s just take a quick look” sounds harmless… until the wrong decision gets launched to millions of users.

Real-World Example

A video app tested a new homepage layout. The goal? Boost video starts. After a week, video starts showed no change. But session length? It went slightly up!

“Victory!” they shouted. “Let’s launch.”

The ERB stepped in. “Wait — was session length your goal from the start?”

“No,” said the team. “But it’s something!”

That’s the red flag. That’s metric fishing.

The Role of Metrics in Science

Metrics are powerful. But only when used right. You wouldn’t change your car’s engine settings based on how loud the radio is, would you? Of course not. That’s the wrong measure.

Good experiments follow the Scientific Method:

  1. Ask a question
  2. Form a hypothesis
  3. Test with a plan
  4. Check your result
  5. Draw a conclusion

Skipping the plan breaks the process. Metric fishing jumps straight to shopping for a result that looks shiny.
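
To make steps 3 through 5 concrete, here is a small sketch of checking a single pre-registered, conversion-style primary metric with a two-proportion z-test. The counts are invented for illustration, and a real analysis would also plan its sample size up front.

# Sketch: testing ONE pre-registered primary metric (a conversion-style rate)
# with a two-proportion z-test. The counts below are invented for illustration.
import math
from scipy.stats import norm

def two_proportion_z_test(successes_a, n_a, successes_b, n_b):
    """Two-sided z-test comparing the treatment rate (b) against control (a)."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    std_err = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / std_err
    return z, 2 * norm.sf(abs(z))  # two-sided p-value

# 4. Check your result: only the metric named in the plan.
z, p = two_proportion_z_test(successes_a=1_000, n_a=20_000,   # control conversions
                             successes_b=1_080, n_b=20_000)   # treatment conversions
# 5. Draw a conclusion using the decision rule written down in advance.
print(f"z = {z:.2f}, p = {p:.4f} ->", "launch" if p < 0.05 else "do not launch")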

Good Questions Every ERB Asks

  • What is your hypothesis?
  • What data will you collect?
  • What metrics will matter?
  • What would make you say “this worked” — or didn’t?
  • Who will review this before results are seen?

ERBs often hold review meetings before and after launches. This accountability helps prevent fishing.

Benefits of Using an ERB

Aside from cleaner science, using an ERB brings many advantages:

  • Fewer false launches: fewer bad changes go live
  • More team trust: people know experiments are honest
  • Faster learning: fewer detours chasing random results
  • Better metrics: teams learn which indicators matter

But Wait — What About Learning?

Some might ask: “If we can’t explore, how do we learn?” Good question!

You can explore. But you must label it correctly. Don’t pretend your exploration is proof. Call it what it is: an idea, a clue, a signal.

Follow up with a new experiment. This time with a real plan. Let the ERB take a look. That’s how you explore responsibly.
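
One lightweight habit that keeps exploration honest: tag every unplanned metric as exploratory and apply a multiple-comparison correction before calling anything a signal. The sketch below uses a simple Bonferroni correction; the metric names and p-values are invented for illustration.

# Sketch: label exploratory results and correct for the number of "looks"
# before treating anything as a signal. Values are invented for illustration.
ALPHA = 0.05

# Raw p-values from metrics that were NOT pre-registered.
exploratory_p_values = {
    "session_length": 0.04,   # looks exciting on its own...
    "time_on_page": 0.12,
    "bounce_rate": 0.30,
    "revenue_per_user": 0.47,
    "visits_per_user": 0.55,
}

m = len(exploratory_p_values)
for metric, p in sorted(exploratory_p_values.items(), key=lambda item: item[1]):
    corrected = min(p * m, 1.0)  # Bonferroni: multiply by the number of looks
    verdict = "a clue worth a new, pre-registered test" if corrected < ALPHA else "probably noise"
    print(f"[exploratory] {metric}: raw p = {p:.2f}, corrected p = {corrected:.2f} -> {verdict}")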

How to Build Your Own ERB

You don’t need a giant committee. Start small. Invite:

  • One data scientist
  • One product manager
  • One unbiased engineer

Use templates. Ask others to submit test plans in advance. Keep meetings short. Discuss just the essentials. Leave time for questions.

Keep things friendly. The goal is not to catch people cheating. It’s to support good science.

Final Thoughts

Experiments are amazing. They help us test ideas and learn quickly. But only if we stay honest. Metric fishing looks like learning, but it’s not. It leads to wasted time and bad decisions.

Experiment Review Boards are the guardians of fairness, rigor, and truth. They help teams stay focused, stay honest, and—most importantly—stay effective.

So next time you run a test, remember: Pre-register. Pick your metric. Trust the process.

And stay out of the fishing pond.
