Gorilla Integration Guide

Gorilla’s support centre provides lots of useful information to help you set up your study, including specific advice on how to recruit participants using Prolific.
 
Here, we'll list some important things to consider when running Gorilla experiments on Prolific, as well as instructions for the changes you’ll need to make before publishing your study.
 
  1. Record participant IDs

Gorilla makes it easy to record Prolific IDs in your experiment, as they can be recorded automatically via the URL.
 
To do this, open your experiment and go to the Recruitment tab. Here, click ‘Change Recruitment Policy’ > ‘Recruitment Service’ and choose Prolific.
 
 
Then, all you need to do is use the newly created Unique URL as your Study URL on Prolific, and participant IDs will be recorded automatically in your Gorilla data.
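Under the hood, the unique URL simply carries each participant’s Prolific ID as a query parameter, which is how Gorilla can record it automatically. As a rough illustration only (Gorilla handles this step for you, and the URL and ID below are placeholders), here is how such a parameter could be read:

```python
from typing import Optional
from urllib.parse import urlparse, parse_qs

def extract_prolific_pid(url: str) -> Optional[str]:
    """Return the PROLIFIC_PID query parameter from a study URL, if present."""
    params = parse_qs(urlparse(url).query)
    pid = params.get("PROLIFIC_PID")
    return pid[0] if pid else None

# Hypothetical unique URL with the participant's ID appended as a parameter:
url = "https://research.sc/participant/login/dynamic/abcd1234?PROLIFIC_PID=5f9a1b2c3d4e5f6a7b8c9d0e"
print(extract_prolific_pid(url))  # -> 5f9a1b2c3d4e5f6a7b8c9d0e
```

This is only a sketch of the mechanism; you never need to parse the URL yourself when using the Recruitment Service integration.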
 
  2. Redirect participants to your Completion URL at the end of your experiment

You will find your Completion URL in your study’s basic details section on Prolific:
 
 
Participants should be redirected here upon completion, so that you can quickly review which submissions have completed successfully by checking their completion code.
 
To do this, go to any Finish nodes in your experiment, and enter your completion URL as the Onward URL: 
 
 
IMPORTANT: Participants should only be redirected to your completion URL if you wish to include them in your data and pay them the study reward. Any participants who you do not want to complete your study (e.g. those who do not give consent, or who do not match the prescreening you’ve selected) should be redirected to a reject node using branches, and asked to return their submission.
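The branching logic above can be sketched as follows. This is a minimal illustration of the decision, not anything you would run inside Gorilla, and the completion URL shown is a placeholder; copy the real one from your study’s basic details on Prolific.

```python
# Placeholder completion URL -- use the one shown on your Prolific study page.
COMPLETION_URL = "https://app.prolific.com/submissions/complete?cc=ABC123"
RETURN_MESSAGE = "Please return your submission on Prolific."

def onward(gave_consent: bool, matches_prescreen: bool) -> str:
    # Only participants you intend to include and pay reach the completion
    # URL; everyone else goes to a reject node and is asked to return.
    if gave_consent and matches_prescreen:
        return COMPLETION_URL
    return RETURN_MESSAGE

print(onward(True, True))   # redirected to the completion URL
print(onward(False, True))  # reject branch: asked to return
```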
 
  3. Set an Experiment Time Limit and use Checkpoint Nodes

On Prolific, participants who decide to leave the study early without completing (returned submissions) or who time-out after becoming inactive (timed-out submissions) will be excluded from your total number of submissions automatically. The spaces that were previously filled then become available to new participants.
 
However, Gorilla does not automatically reject these returned or timed-out submissions, so they will take up one of your participant tokens until you manually reject them. This could lead to Prolific sending new participants to your experiment only for it to already be filled!
 
Therefore, we (and Gorilla) strongly recommend setting a Time Limit for your experiment. This will mean that any participants who go over the time limit will be automatically rejected on Gorilla, and the space will be available again for a new Prolific participant.
 
As with the Maximum Allowed Time on Prolific, this time limit should give participants plenty of time to complete all tasks in your study: it is there to catch inactive participants, not slow ones. As such, it should not be too close to your study’s estimated completion time, to avoid rejecting a diligent participant who has simply taken a little longer than average.
 
To minimise problems with returned submissions, you should follow Gorilla’s guidance to use Checkpoint Nodes and regularly check your data as it’s coming in to manually reject any incomplete submissions. 
 
Setting your recruitment target to Unlimited will avoid this problem altogether, but your licence may not allow you to do so. Alternatively, setting the recruitment target higher than your Maximum Submissions on Prolific will also help to account for a few returned or timed-out participants taking up a space on Gorilla (e.g. 200 on Prolific/250 on Gorilla). This requires buying more participant tokens in advance, but you won’t have to use up the additional tokens, because you can reject the incomplete submissions at the end.
 
  4. Consider the risk of server downtime

Participants experiencing a server outage during your study on Gorilla is a rare occurrence, but it is still an important consideration. If a participant makes it 80% of the way through your study but a server outage means they cannot continue, you will not have access to their data on Gorilla until you manually include them by expending a participant token.
 
Our policy when technical issues occur on the side of the survey software is that participants should be at least partially compensated, to reflect the time they have invested in your study, or otherwise approved as normal. To avoid having to compensate participants for potentially incomplete or missing data, here are some points to consider:
 
  • Run your study in batches which are not too large. For example, if you were recruiting 200 participants for a long study (60 mins+), we would recommend running this in batches of ~20 to avoid too many participants being affected by a possible server outage. Gorilla recommend running studies “in small enough batches that you can afford to lose every participant that is currently active.”
  • Include checkpoint nodes at various stages in your experiment. This way, you’ll be able to see how far participants were able to get through your study before any server outage occurred, without having to expend a participant token and inspect the data manually. The ‘Current Node’ column on your participants tab will also be informative here.
  • If a server outage does occur and you wish to inspect the affected participants’ data without expending participant tokens, consider contacting Gorilla’s support team with a list of the participant IDs that were affected. Should they find evidence of a technical issue, they’ll be happy to help you out.
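The batching advice above is easy to plan out in advance. As a quick sketch (the numbers follow the 200-participant example; pick a batch size you can afford to lose entirely):

```python
def batch_sizes(total: int, batch_size: int):
    # Split a recruitment target into batches small enough that losing
    # every currently active participant would be affordable.
    full, remainder = divmod(total, batch_size)
    sizes = [batch_size] * full
    if remainder:
        sizes.append(remainder)
    return sizes

print(batch_sizes(200, 20))  # -> ten batches of 20
```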
 

Additional points to consider:

  1. Running an experiment with multiple conditions

If you’re running a study which requires participants to be assigned to one of multiple conditions (e.g. a treatment group vs. a control), you can program this into your Gorilla experiment by using multiple Start Nodes. This will generate N Unique URLs, where N is your number of conditions. In this example, I have two paths in my experiment - one for my treatment group, and one for controls:
 
 
You can then flexibly manage how many participants you wish to recruit for each condition by publishing separate studies on Prolific, one condition at a time. Once you’ve collected the data for one group, you can prevent these participants from taking part in the next study by adding a Custom Blacklist or Previous Studies filter to your prescreening.
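One simple way to keep track of this setup is a small per-condition plan: each Start Node gives you its own unique URL, and each condition becomes its own Prolific study. Everything below (URLs, counts, condition names) is a hypothetical example, not values from Gorilla or Prolific.

```python
# Hypothetical mapping from condition to its Gorilla unique URL and the
# number of places for its separate Prolific study.
conditions = {
    "treatment": {"url": "https://research.sc/participant/login/dynamic/aaaa1111", "places": 100},
    "control":   {"url": "https://research.sc/participant/login/dynamic/bbbb2222", "places": 100},
}

for name, cfg in conditions.items():
    print(f"{name}: publish a Prolific study with {cfg['places']} places via {cfg['url']}")
```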
 
  2. Running a longitudinal (multi-part) study

You can run longitudinal studies on Prolific by setting up multiple studies, linked together using a Custom Whitelist containing the Prolific IDs of the participants from the first stage. On Gorilla, you can build all stages of your study into one experiment, so you can use the same Study URL for each part.
 
To do this, add a Redirect Node at the point in your experiment that you want each stage to end. Your redirect URL should be your Prolific study’s completion URL, so that participants are redirected back to Prolific at the end of each stage. If you have a specific time period set before you will invite participants to the next stage, you can change the Completion setting to ‘Delay’ and specify this time here. Otherwise, the default setting is fine.
 
When a participant reaches the end of a stage, the Redirect Node will mark their active Prolific submission as Awaiting Review. This gives you the list of IDs to include in the Custom Whitelist for the next Prolific study. Participants will next access your Gorilla experiment when you publish the subsequent stage on Prolific, where they’ll pick up straight from where they left off, at the next node of your experiment tree.
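Building the whitelist for the next stage usually comes down to filtering your stage-1 export by submission status. A minimal sketch, assuming column names like Prolific’s demographic export (“Participant id”, “Status”) and the status values “APPROVED”/“AWAITING REVIEW”; check these against your own file’s header before relying on them:

```python
import csv
import io

def whitelist_ids(export_csv: str):
    """Collect participant IDs to carry forward to the next stage.

    Column names and status strings are assumptions modelled on Prolific's
    export format -- verify them against your actual file.
    """
    reader = csv.DictReader(io.StringIO(export_csv))
    return [row["Participant id"] for row in reader
            if row["Status"] in ("APPROVED", "AWAITING REVIEW")]

# Illustrative export with made-up IDs:
sample = """Participant id,Status
pid_001,APPROVED
pid_002,RETURNED
pid_003,AWAITING REVIEW
"""
print(whitelist_ids(sample))  # -> ['pid_001', 'pid_003']
```

You can then paste the resulting IDs into the Custom Whitelist of the next Prolific study.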