How can I run my own experiments on LabintheWild?

*** As of May 2023, we’re taking a break from offering this service to external researchers to focus our efforts on building new infrastructure and streamlining the addition of new studies. We hope to be able to serve you again in the fall of 2023. In the meantime, please reach out to us by email if you’d like to get on the waitlist! ***

We’re happy to link to your experiment if you can host it on your own server. Please get in touch with us to ask whether your experiment is suitable for LabintheWild and to inquire about a time frame for testing and launching the experiment and for collecting the data you need.

To ensure that participants have a good experience, we ask you to go through the following steps:

  1. Familiarize yourself with LabintheWild experiments. Take our tutorial and have a look at our experiment template. (It’s a cat study!)
  2. Design your study to be engaging. We spend about a third of our experiment-design time on making our studies more engaging, fun, usable, and visually appealing. The more fun and interesting the study, the more participants will want to take it! (Our tutorial has more information on this topic.)
  3. Design the study to provide meaningful feedback to participants. Make sure participants can learn something about themselves from your study (this, after all, is their compensation!). Our results pages (see this page for a few examples) often include graphs, an explanation for why we are doing this research, how a participant’s data contributes to it, and what the results mean for them. In most cases, participants can compare their own results to those of others; this social comparison is often what leads participants to share the study and encourage others to take it.
  4. The feedback page should link back to two other LabintheWild experiments (we will develop a convenient widget for this sometime soon). Have a look at our previous experiments and their results pages to see how we do this.
  5. Include a comments box on the feedback page to collect participants’ opinions about your study. We have found these comments to be an invaluable help for debugging and improving the user experience.
  6. Obtain IRB approval.
  7. Rigorously test the implementation of the study to ensure that it runs in different browsers and on different devices.
  8. Send us a short slogan (e.g., “Test your social skills!”, or “Are you more Eastern or Western?”), a short description of what people can get out of participating in your experiment (see LabintheWild’s front page for examples), plus the link to your experiment. You can send this to and CC (make sure you do this if you want a reasonably quick response!).
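For the social comparison described in step 3, one common approach is to tell participants what percentage of previous participants they outperformed. A minimal JavaScript sketch (the function name and data are illustrative, not actual LabintheWild code):

```javascript
// Percentile-based social comparison for a feedback page (illustrative sketch,
// not actual LabintheWild code). Given the scores of previous participants,
// report what fraction of them the current participant outperformed.
function percentileRank(previousScores, ownScore) {
  if (previousScores.length === 0) return null; // no comparison data yet
  const below = previousScores.filter((s) => s < ownScore).length;
  return Math.round((below / previousScores.length) * 100);
}

// Example: feedback message for a participant who scored 45.
const prior = [12, 18, 25, 31, 40, 44, 52, 60];
const rank = percentileRank(prior, 45);
console.log(`You scored higher than ${rank}% of participants!`);
```

On a real results page, this number would typically accompany a graph of the score distribution, so participants can see where they fall relative to everyone else.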

Once you’ve followed these steps, we will first check whether your study is suitable for LabintheWild. Please remember that the study should be fun for our participants! We will then post your study to our LabintheWild Facebook page to ask our “followers” for feedback. If the comments indicate that the study is usable, interesting, and bug-free, we will advertise it on our front page. In the past, this process of testing, gathering feedback, revising the study, and posting it on LabintheWild has taken 1-4 weeks, so please allow for some time.

We will retain the right to decide whether your experiment will be linked from the front page, including the right to unlink your experiment at any point in time. You are required to acknowledge the use of LabintheWild in any publication (including blog posts) that reports on the data by using the following citation:

Reinecke, K., & Gajos, K. Z. (2015). LabintheWild: Conducting large-scale online experiments with uncompensated samples. In Proceedings of the 18th ACM conference on computer supported cooperative work & social computing (pp. 1364-1378). 

What is LabintheWild?

LabintheWild is an online experiment platform for conducting behavioral research studies with volunteers.  


Who is behind LabintheWild?

We are researchers at the University of Washington and at Harvard University. Katharina Reinecke and Krzysztof Gajos founded LabintheWild, and are developing the experiments together with their students and postdocs. You can read more about us and our team on the about page.

How does the data quality of LabintheWild experiments differ from that of experiments conducted in a controlled in-lab environment?

LabintheWild participants are intrinsically motivated to participate since they want to learn about themselves and compare themselves to others. As such, we have found that LabintheWild experiments collect data of extremely high quality, sometimes exceeding that of controlled laboratory studies. For example, when we replicated three prior laboratory studies on LabintheWild, our results matched those obtained in laboratory settings (download the publication). These studies represented a variety of tasks requiring different amounts of cognitive effort and included both objective and subjective measures. We found that the data quality is comparable despite the different incentive structure on LabintheWild and despite the fact that our participants complete the tests without direct supervision. We have since replicated a number of other experiments and extended them with new results from much more diverse participant samples (check out this paper). In addition, we have invented new methods that allow us to gain a bit more control over participants’ diverse environments, such as our Virtual Chinrest (you can read about it in this publication).


How is LabintheWild different from Mechanical Turk?

LabintheWild relies on a different incentive structure for participant recruitment. Mechanical Turk is a micro labor market in which workers (participants) receive financial compensation at the end of a task. In contrast, LabintheWild does not pay participants but instead provides participants with the ability to compare themselves to others and learn something about themselves.

There are many other differences: LabintheWild is only meant to host behavioral experiments, while Mechanical Turk is primarily aimed at hosting micro tasks. LabintheWild does not require participants to sign up, and this might be one of the reasons why its subject population is more diverse than that on Mechanical Turk. For example, participation in LabintheWild experiments is not restricted to people over 18 who have a social security number and a means of receiving monetary payment, as is required on Mechanical Turk.

Importantly, we have repeatedly found that the data quality on LabintheWild outperforms that on Mechanical Turk. For example, in this paper, we describe how LabintheWild participants exerted themselves more (and also scored higher) than participants recruited through Mechanical Turk.  

How many participants do you have and where do they come from?

We get a bit more than a thousand participants a day on LabintheWild on average (this number only includes participants who finished the whole experiment and did not report having taken the same experiment before). However, the number of participants fluctuates widely depending on the availability of experiments, their popularity, and probably many other factors that we do not have control over.

Our participants come from more than 200 countries and regions around the world, with the plurality (roughly 36%) accessing the site from the US. They are between 5 and 99 years old and have a variety of educational backgrounds.


How do participants hear about LabintheWild?

LabintheWild leverages the fact that participants recruit each other. People spread the word via social networks, blogs, or newspaper sites. According to Google Analytics, more than 5,000 websites currently mention LabintheWild or one of its experiments and lead visitors to the site. Despite this variety, Facebook is the primary source of traffic, referring about 19% of participants.


What makes LabintheWild experiments fun and successful?

All our experiments provide some kind of social comparison. Previously successful LabintheWild experiments (where success means high participation) were 5-10 minutes long, did not require a huge amount of cognitive effort (although there can be some), and gave participants a meaningful take-away. If you want to learn more about the perspective of participants, why they participate in LabintheWild experiments, and what their experience was like, you can download our paper to read more.

How long does it take to gather data on LabintheWild?

It depends. We’ve previously launched experiments that received 15,000 participants in only a few days, but we’ve also had experiments that did not reach this number after a year. We believe that this has to do with the fact that some experiments are more rewarding and more fun than others.

Where can I sign up?

You can participate in any LabintheWild experiment without signing up. If you are a researcher who would like to run experiments on LabintheWild, please read the entry at the top of this page.

Can you help me design and implement my experiment for LabintheWild?

LabintheWild is run by a small team, so we currently do not offer these services. In some cases, we collaborate with other researchers on designing and implementing experiments for LabintheWild if the research is related to our own, and if we agree that a joint project makes sense. In most other cases, however, we do not have enough (wo)man power to support others.

Can I get the source code to your experiments?

We will gladly share the source code for any experiment that we are not currently using to gather data for publications (well, almost: the code for some of our earliest experiments was not particularly pretty). Please send us an email (and CC) to inquire about the experiment that you are interested in.

Where can I see the findings and data from your previous experiments?

We have a Findings & Datasets page on which you will find publications, short summaries of the findings, as well as datasets, analysis source code, and other experiment details.

How are LabintheWild experiments implemented?

We mostly use JavaScript and PHP to implement our experiments.
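As a rough illustration of that stack (a hypothetical sketch, not actual LabintheWild code), a study might buffer trial data in the browser with JavaScript and send it as JSON to a PHP endpoint when the participant finishes. The class, field names, and endpoint below are all assumptions made for the example:

```javascript
// Hypothetical sketch of client-side data collection (not actual LabintheWild
// code). Trials are buffered in memory and serialized to JSON, which the
// browser would then POST to a PHP endpoint at the end of the study.
class TrialLogger {
  constructor(experimentId) {
    this.experimentId = experimentId; // illustrative identifier
    this.trials = [];
  }

  // Record one trial: the stimulus shown, the participant's response,
  // and the reaction time in milliseconds.
  logTrial(stimulus, response, reactionTimeMs) {
    this.trials.push({ stimulus, response, reactionTimeMs });
  }

  // Build the JSON payload a browser might send, e.g. via
  // fetch('/save_results.php', { method: 'POST', body: logger.toPayload() }).
  toPayload() {
    return JSON.stringify({
      experimentId: this.experimentId,
      trialCount: this.trials.length,
      trials: this.trials,
    });
  }
}

const logger = new TrialLogger('cat-study-demo');
logger.logTrial('cat-1.jpg', 'cute', 812);
logger.logTrial('cat-2.jpg', 'very cute', 645);
console.log(logger.toPayload());
```

On the server side, a PHP script would decode the JSON and store the trials in a database; the exact schema and endpoint depend on the experiment.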