Aim
In surveys, non-responders may introduce bias and lower the validity of the studies. Ways to increase response rates are therefore important. The purpose of this study was to investigate whether an unconditional monetary incentive can increase the response rate for vulnerable children and youths in a postal questionnaire survey.

Methods
The study was designed as a randomized controlled trial. The study population consisted of 262 children and youths who participated in an established intervention study aimed at creating networks for different groups of vulnerable children and youths. The mean age of the participants was 16.7 years (range 11–28) and 67.9% were female. The questionnaire was adapted to three different age groups and covered different aspects of the participants’ life situation, including the dimensions of the Strengths and Difficulties Questionnaire (SDQ). In the follow-up survey, participants were randomly allocated to two groups that either received a €15 supermarket voucher together with the questionnaire or received only the questionnaire. We used Poisson regression to estimate the difference in response rate (rate ratio, RR) between the intervention group and the control group.

Results
The response rate was 75.5% in the intervention group and 42.9% in the control group. The response rate in the intervention group was significantly higher than in the control group when adjusting for age and gender (RR 1.73; 95% CI 1.38–2.17). We did not find any significant differences in scale scores between the two groups for the five scales of the SDQ. In stratified analyses, we found the effect of the incentive to be higher for males (RR 2.81; 95% CI 1.61–4.91) than for females (RR 1.43; 95% CI 1.12–1.84).

Conclusions
Monetary incentives can increase the response rate for vulnerable children and youths in surveys.

Trial registration
The trial was retrospectively registered at ClinicalTrials.gov, Identifier: NCT01741675.
In questionnaire surveys, non-responders reduce the effective sample size and may introduce bias, lowering the validity of the studies. Bias can occur when the non-responders are systematically different from those who participate in the survey. The likelihood of bias decreases with increasing response rate. Cohort studies show that response rates in epidemiological surveys have declined over time [1,2]. Moreover, Galea & Tracy suggest that this decline may be due to several factors, for example a dramatic increase in the number of surveys, especially telemarketing surveys, and a decrease in people’s general willingness to participate in community activities. Consequently, finding ways to improve response rates has become increasingly important.
A Cochrane review by Edwards et al. identified trials that evaluated different ways of increasing response rates. The trials evaluated more than 110 different strategies for increasing response to postal questionnaires. Incentives of different types were examined in most of the trials: monetary incentives not only significantly increased the response rate to postal questionnaires (odds ratio (OR) 1.87), they also had a greater effect than non-monetary incentives (OR 1.62). Moreover, the response rate was higher when the incentive was given unconditionally together with the questionnaire than when it was given on the condition that the respondents return their questionnaires (OR 1.61). More recent studies are in line with the review by Edwards et al. [4–7].
Previous studies of the effect of incentives on response rates in surveys have been conducted within various groups, for example, adults in the general population, employees, residents, households, businesses, patients, health personnel, consumers and students [3–7]. However, to our knowledge, none of the studies concerned vulnerable children and youths.
While much research has focused on the effect of incentives on response rates in surveys, little attention has been given to the effect of incentives on the response distribution, that is, whether the incentive changes the responses given in the treatment group compared to the control group. Incentives can affect the response distribution indirectly, by affecting the composition of demographic variables such as gender and age in the study sample, or directly, by affecting the attitude of the respondents. There may be a risk that respondents who receive an incentive are more likely to provide positive answers than the control group who do not receive an incentive.
Even though advances in communication technology continue to alter the way we perform surveys, postal questionnaires are still commonly used in surveys within health and social sciences. There may be several reasons for this. First, despite the increasing popularity of web or online surveys, postal questionnaire surveys still have a higher response rate than web surveys [9–12]. Having said that, a few studies have shown similar response rates for the two different modes [13–15]. Second, surveys based on postal questionnaires are relatively easy to administer, do not require online access and may have less bias than web surveys, because responses in web surveys may depend on the age and level of education of respondents [12,16,17]. This is why a postal questionnaire survey may be the preferred method within social sciences when dealing with vulnerable groups.
The purpose of the study was to investigate if an unconditional monetary incentive can increase the response rate for vulnerable children and youths in a postal questionnaire survey. We hypothesized that participants who were randomly allocated to receive a supermarket voucher (€15) together with the questionnaire would have higher response rates than participants in a control group, who only received the questionnaire. We also investigated if the monetary incentive influenced the response distribution in the postal survey.
The study was designed as a randomized controlled trial (RCT). To study the effect of incentives on survey response, participants were randomly allocated to receive a €15 supermarket voucher together with the questionnaire or to only receive the questionnaire. The trial was retrospectively registered at ClinicalTrials.gov Identifier: NCT01741675.
Participants were taken from an evaluation study of a social initiative aimed at creating networks for different groups of vulnerable children and youths aged 8–23 years. Both the social initiative and the evaluation study were initiated by the Danish National Board of Social Services and were performed in ten municipalities in Denmark. The main purpose of the social initiative was to strengthen the social skills of the participants as well as their ability to create social relations. The children and youths all came from families with severe social problems. Moreover, they were vulnerable and lonely because they had an inadequate or severely restricted social network or were excluded from their peer group. Participants came from three main groups: children in foster care or former foster care youths; children from vulnerable families; and lonely children and youths. The children from vulnerable families included children from families with a mentally ill or physically disabled parent or sibling, children of parents with alcohol abuse problems and children raised in violent families. The group characterized as lonely children and youths comprised children and youths who did not fit into the other two groups.
In the evaluation study, written informed consent was obtained for all participants. Approximately 70% of the children and youths who participated in the social initiative gave consent to participate in the evaluation study. For participants under 16 years old, written informed consent was obtained from a parent, and for all other participants, consent was given by the participants themselves. The RCT study was submitted to the National Committee on Health Research Ethics (reference number VEK H-1-2012-FDP-79), who declared that questionnaire surveys do not need approval by the ethics committee in Denmark. The notification exemption for studies that only include questionnaire surveys is described in part 4, section 14(2) of the Danish Act on Research Ethics Review of Health Research Projects.
Participants who gave informed consent to take part in the evaluation study of the social initiative, and thereby agreed to fill out questionnaires and report their unique personal identification number, were eligible to take part in the present RCT study. We included participants regardless of whether they had responded to the questionnaires in the original study. In total, 274 participants were eligible for the study. Based on the participants’ unique personal identification number (CPR number in the Danish Civil Registration System), the Central Office of Civil Registration provided us with the addresses of the participants. Participants who were not available in the Civil Register because they had died, had emigrated or had requested survey exemption were excluded from the study. Participants who reported an incomplete identification number were also excluded, as were participants who had changed address and had not yet reported their new address to the local authorities.
The questionnaire survey in the present study was a follow-up survey of the original study and was performed in four waves. The first wave was performed in November 2012 and included participants who had left the original study 1–2½ years before the follow-up. The second, third and fourth waves were performed in May 2013, November 2013 and May 2014, respectively, 1–1½ years after the participants had left the original study.
The randomization was performed within each wave of the survey. The participants were given a unique identification number in the original study. Within each wave of the study, participants were sorted according to their identification number and were assigned a random number between 0 and 1. Participants assigned with a number lower than 0.5 were allocated to the control group; participants assigned with a number greater than 0.5 were allocated to the intervention group. We used the Rand (‘uniform’) function in SAS to create the randomization sequence.
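The allocation scheme above can be sketched outside SAS as well. The following is a minimal Python illustration of the same simple-randomization procedure; the function and variable names are hypothetical, and Python's `random.random()` stands in for SAS's `Rand('uniform')`:

```python
import random

def allocate(participant_ids, seed=None):
    """Simple randomization: each participant draws a uniform number
    in [0, 1); draws below 0.5 go to the control group, all other
    draws go to the intervention group (the boundary case 0.5 has
    essentially zero probability and is sent to intervention here)."""
    rng = random.Random(seed)
    allocation = {}
    # Sort by identification number first, as in the trial, so the
    # draw sequence is reproducible for a fixed seed.
    for pid in sorted(participant_ids):
        allocation[pid] = "control" if rng.random() < 0.5 else "intervention"
    return allocation

groups = allocate(range(1, 268), seed=42)
n_intervention = sum(1 for g in groups.values() if g == "intervention")
print(n_intervention, len(groups) - n_intervention)
```

Note that simple randomization of this kind generally yields unequal group sizes, which is why the trial ended up with 146 participants in the intervention group and 121 in the control group.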
The intervention group received a postal questionnaire together with a voucher worth €15 for the largest supermarket chain in Denmark. The control group only received a questionnaire. The questionnaire was sent to both the intervention group and the control group together with a cover letter and a stamped return envelope. The addresses on the envelopes were handwritten. After three weeks, non-responders in both groups were sent a reminder together with a new copy of the questionnaire.
After 13 weeks, the study period was over and the participants in the control group received a voucher similar to the one given to the intervention group. This was done for ethical reasons to ensure both groups were treated equally. Responses received after the study period were not included in the analyses.
As the age of the study population ranged from 8 to 23 years at inclusion, the questionnaire was made in three versions covering three age groups: 8–12 year olds, 13–16 year olds and 17–23 year olds. The questionnaires for the 8–12 year olds and the 13–16 year olds included 91 items, and the questionnaire for the 17–23 year olds included 90 items. In total, 78 of the items were the same for all three age groups, 13 items were the same for two age groups and 12 items were unique to one age group.
The questionnaire covered the following areas regarding the participants’ life situation: family and housing; education and training; sport and leisure time; relation to friends; drug use; and strengths and difficulties. The questionnaire also included questions that evaluated the intervention in the original study. These questions were taken from three Danish longitudinal surveys of children and youths [20,21]. The Danish version of the Strengths and Difficulties Questionnaire (SDQ) was included in the questionnaires for all three age groups.
The primary outcome measure was the questionnaire response rate, defined as the proportion of questionnaires returned by participants. The secondary outcome was the scores on the five multi-item scales of the Strengths and Difficulties Questionnaire.
In the primary analysis we estimated the rate ratio (RR) to assess whether the questionnaire response rate differed between the two allocation groups. We used Poisson regression with robust error variance and adjusted for age and gender in the analysis. The analysis was performed using PROC GENMOD in SAS.
In the secondary analyses we tested whether mean scores on the five SDQ scales were different between group allocations. We adjusted for the covariates age and gender as in the primary analysis. We used GLM in SAS. The significance level was set at 5% in all analyses.
In post-hoc analyses we calculated the rate ratio for the response rate when stratified by gender. Furthermore, as a robustness test of the primary analysis, we calculated the rate ratio where we adjusted for response history. Response history was defined as whether the respondent had responded to the questionnaire in the original study or not. In a final robustness test, we calculated the rate ratio where we adjusted for the group that characterized the children and youths.
The protocol of the trial was registered at ClinicalTrials.gov Identifier: NCT01741675.
In total, 274 participants were eligible for the study (Fig 1). However, we excluded seven participants from the study before the randomization: three participants had emigrated at the time of the follow-up, two had an error in their personal identification number (CPR), and two had no known address. Of the 267 participants who were randomized, 146 were allocated to the intervention group and 121 were allocated to the control group. We sent questionnaires to the 267 participants in the study population. However, we had to exclude three participants in the intervention group and two participants in the control group because the letter could not be delivered to the address obtained from the CPR register. Consequently, a total of 262 participants were included in the primary analysis.
The characteristics of the study population and the respondents are shown in Table 1. The original study aimed at children and youths aged between 8 and 23 years, but participants older than 23 years of age were also included. At the follow-up that was performed 1–2½ years after participants were included in the original study, the age of the study population ranged between 11 and 28.
Table 1. Characteristics of the study population and the respondents.

Study population (N = 262):

| | Intervention (n = 143) | Control (n = 119) | Total |
|Age, mean (standard deviation)|16.3 (3.5)|17.1 (3.6)|16.7 (3.6)|
|Children in foster care or former foster care youths, n (%)|35 (52.2)|32 (47.8)|67|
|Children from vulnerable families, n (%)|43 (55.8)|34 (44.2)|77|
|Lonely children and youths, n (%)|34 (56.7)|26 (43.3)|60|
|A mixture of all 3 target groups, n (%)|31 (53.5)|27 (46.6)|58|

Respondents (N = 159):

| | Intervention (n = 108) | Control (n = 51) | Total |
|Age, mean (standard deviation)|16.2 (3.5)|16.5 (3.0)|16.3 (3.4)|
|Age range, years|11–26|12–23|11–26|
|Children in foster care or former foster care youths, n (%)|25 (65.8)|13 (34.2)|38|
|Children from vulnerable families, n (%)|34 (66.7)|17 (33.3)|51|
|Lonely children and youths, n (%)|28 (73.7)|10 (26.3)|38|
|A mixture of all 3 target groups, n (%)|21 (65.6)|11 (34.8)|32|
The original study was organised into twelve projects within the ten municipalities. The projects all worked at creating networks for the participating children and youths. The population consisted of 67 children in foster care or former foster care youths, 77 children and youths from vulnerable families and 60 participants characterized as lonely children and youths, Table 1. A total of 58 participants came from projects that included participants from all target groups, Table 1.
A total of 159 participants returned the questionnaire, resulting in an overall response rate of 60.7% for the study. The response rate in the intervention group was 75.5%, whereas the response rate in the control group was 42.9%, Table 2.
The response rate in the intervention group was significantly higher than in the control group when adjusting for age and gender (RR 1.73; 95% CI 1.38–2.17), Table 3.
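As an informal cross-check of the adjusted estimate, the crude rate ratio and its Wald confidence interval can be computed directly from the raw counts. The sketch below assumes responder counts of 108/143 (intervention) and 51/119 (control), which reproduce the reported response rates of 75.5% and 42.9%; the function is illustrative and not part of the paper's SAS analysis:

```python
import math

def rate_ratio(a, n1, c, n2, z=1.96):
    """Crude rate ratio for two binomial proportions (a responders
    of n1 vs c responders of n2) with a Wald confidence interval
    computed on the log scale."""
    rr = (a / n1) / (c / n2)
    # Standard error of log(RR) for two independent proportions
    se = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n2)
    lo, hi = (math.exp(math.log(rr) + s * z * se) for s in (-1, 1))
    return rr, lo, hi

rr, lo, hi = rate_ratio(108, 143, 51, 119)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # RR = 1.76 (95% CI 1.40-2.21)
```

The crude estimate of 1.76 is close to the age- and gender-adjusted RR of 1.73, as one would expect in a randomized design.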
In the secondary analyses we tested whether the scores for the five SDQ scales differed between the intervention group and the control group. The mean scores for the five scales of the SDQ are given in Table 4. When comparing the five scale scores, we found no significant difference between the two groups when adjusting for age and gender. We found similar results in analyses where we did not adjust for gender and age.
|Scale|Intervention (N = 105*), mean (SD)|Control (N = 51), mean (SD)|P-value|
|Peer relationship problems|2.6 (1.9)|3.3 (2.4)|0.0721|
In Table 1, the characteristics of the participants who answered the questionnaire are given. The percentage of females who answered the questionnaire was markedly higher in the control group than in the intervention group, and vice versa for males. We therefore performed a post hoc analysis on the total sample in which we analysed the response rate by allocation, stratified by gender.
In the stratified analyses, the response rate was significantly higher in the intervention group than in the control group for both males and females. For males, the rate ratio was 2.81 (95% CI 1.61–4.91; p = 0.0003). For females, the rate ratio was 1.43 (95% CI 1.12–1.84; p = 0.0048).
We tested whether the response history in the original study could alter the conclusion of the primary analysis. In total, 58% of the participants had answered at least one questionnaire in the original study; 42% were non-responders. Adjusting for response history did not change the rate ratio much (RR 1.71; 95% CI 1.36–2.14), and response history was not statistically significant in the analysis (p = 0.07).
In a final analysis, we adjusted the primary analysis for the group variable that characterized the children and youths. This did not change the rate ratio (RR 1.73; 95% CI 1.38–2.17). The group variable was not statistically significant in the analysis.
Our results show that monetary incentives can significantly increase the response rate for vulnerable children and youths in postal surveys. Despite the difference in response rate between the intervention group and the control group, we did not find any significant difference in SDQ scale scores between the two groups.
To our knowledge, this is one of the first studies on the effect of monetary incentives on survey response for vulnerable children and youths. The children and youths who unconditionally received the incentive together with the questionnaire were 1.73 times more likely to respond to the questionnaire compared to the children and youths who only received the questionnaire. In other words, the response rate in the intervention group was 73% higher than in the control group.
In our analysis we calculated the rate ratio because, in contrast to the odds ratio, it can be interpreted directly. To compare our results with the results presented in the Cochrane review, we estimated the odds ratio based on the rate ratio and the incidences in our study. The rate ratio of 1.73 corresponded to an odds ratio of 4.0. However, the results from the Cochrane review cannot be compared directly with the results of the present study, as we must combine two odds ratios into one. In the Cochrane review, the effect size for monetary incentive vs no incentive was OR = 1.87, and the effect size for unconditional vs conditional monetary incentive was OR = 1.61. In the studies that compared monetary incentive vs no incentive, the incentive was a combination of conditional and unconditional incentives, and the review made no direct comparison of unconditional monetary incentive vs no incentive. In accordance with Bucher et al., if we regard the meta-analysis of monetary incentive vs no incentive presented in the Cochrane review as a comparison of conditional monetary incentive vs no incentive, we can combine the two indirect comparisons into one direct comparison [25,26]. The effect size for unconditional incentive vs no incentive is then calculated from the two separate effect sizes as OR = 1.61 × 1.87 ≈ 3.0. The effect size of OR = 4.0 (RR = 1.73) in our study was thus larger than the effect size of OR = 3.0 derived from the Cochrane review. This emphasizes that the effect size in our study was remarkably large compared to previous studies. Furthermore, the present study comprised vulnerable children and youths, for whom a low or moderate response rate would be expected, whereas the studies in the Cochrane review comprised adults and less vulnerable groups.
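The two conversions in this paragraph are simple enough to verify by hand. The sketch below redoes them: the odds ratio implied by the raw response counts (assuming 108/143 and 51/119 responders, consistent with the reported rates) and the Bucher-style combination of the two Cochrane effect sizes. It is an illustrative check, not the paper's own computation:

```python
# Odds ratio implied by the observed response counts
a, n1 = 108, 143   # intervention: responders, group size
c, n2 = 51, 119    # control: responders, group size
odds_intervention = a / (n1 - a)
odds_control = c / (n2 - c)
or_observed = odds_intervention / odds_control
print(round(or_observed, 1))  # 4.1, in line with the ~4.0 reported in the text

# Bucher-style indirect comparison: unconditional vs no incentive,
# combined from (unconditional vs conditional) and (conditional vs none)
or_uncond_vs_cond = 1.61   # Cochrane: unconditional vs conditional
or_cond_vs_none = 1.87     # Cochrane: monetary incentive vs none
or_uncond_vs_none = or_uncond_vs_cond * or_cond_vs_none
print(round(or_uncond_vs_none, 1))  # 3.0
```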
The strength of the study was that it was designed as an RCT study. The random allocation ensured that there were no systematic differences between the intervention group and the control group, and therefore the association found between the intervention and the outcome has a high probability of being a causal relationship. Furthermore, before the study was performed, we registered a study protocol in which we outlined the planned analyses.
The study had some limitations. The study population was not homogenous, as it comprised different groups of vulnerable children and youths. However, despite this lack of homogeneity, all participants shared the same problem of having limited social relations and, consequently, they were lonely or at risk of becoming lonely. Furthermore, in the analysis where we adjusted the primary analysis for the group variable that characterized the children and youths, we did not see any change in the rate ratio. Reaching socially disadvantaged or vulnerable groups in surveys can be challenging. Nevertheless, to our knowledge, there is no evidence that different vulnerable groups differ with regard to their willingness to participate in surveys.
The study population had a quite large age span, from 11 to 28 years (mean age 16.7), and was not evenly distributed with respect to gender (67.9% of the participants were female). Because demographic variables such as gender and age may influence response rates in surveys, we decided to adjust for these variables in the analyses. The primary analysis showed no statistically significant effect of gender and age in the model (see Table 3), and the result was therefore not biased by age and gender. Similarly, in the secondary analyses of the effect of the incentive on the response distribution, the results were independent of whether we adjusted for gender and age.
We used simple randomization within each of the four waves of the study. As a result of this procedure, 146 participants were allocated to the intervention group and 121 participants were allocated to the control group. In a small sample like this, one could argue that we should have ensured equal allocation to the two groups by using block randomization. However, even with the simple randomization procedure we used, the difference in size between the two groups is the result of chance rather than bias.
In the secondary analyses, we did not find any significant difference in SDQ scale scores between the intervention group and the control group. However, the results should be interpreted with care, as we had limited power for these analyses. The results should be replicated in a larger sample.
Our stratified analysis showed that the effect of the incentive was larger for males (RR = 2.81) than for females (RR = 1.43). The gender distribution of the respondents in the intervention group (63.9% female) was close to the gender distribution of the total sample (67.9%). In comparison, the percentage of females who responded in the control group was 80.4%. The monetary incentive reduced or practically levelled out the gender bias that was seen in the control group. Thus, a monetary incentive may reduce any gender bias.
The present study was performed on a group of participants who were already participants in an ongoing study, and 58% of the participants had already answered at least one questionnaire in the original study. We do not know whether the effect of the incentive would have been smaller if the present survey had been the first contact to the participants. However, when we adjusted the primary analysis for response history, we did not see any change in the rate ratio.
We deviated from the pre-specified protocol on two points. First, in the primary analysis we only specified that we would adjust for age, whereas in the secondary analyses we specified that we would adjust for age and gender. In reality, we adjusted for both age and gender in the primary analysis and the secondary analyses. Second, in the protocol we specified that the follow-up period was 10 weeks. In reality, the follow-up period was 13 weeks, but only very few questionnaires were returned in the period from week 10 to week 13.
The Strengths and Difficulties Questionnaire was developed for children aged between 4 and 17 years. In the original intervention study, it was decided to use the SDQ for all participants even though the study aimed at children and youths aged between 8 and 23. Later it was decided to expand the intervention to also include a few participants who were older than 23 years. However, in our analyses we did not find a significant effect of age when looking at differences in scale scores between the intervention group and the control group. In a recently published study, Brann et al. examined the psychometric properties of the SDQ for young adults (aged 18–25 years) in comparison with the SDQ for adolescents (aged 12–17 years). They found similar psychometric properties for the two versions of the SDQ.
The present study was performed with a postal questionnaire. We do not know whether the results are also valid for web surveys among vulnerable children and youths. The Cochrane review found no effect of monetary incentives on the response rate in web surveys. More recent reviews have found an effect of monetary incentives on the response rate in web studies; however, the effect is still smaller than for postal-based studies [29–31].
9 Jan 2020
The effect of monetary incentive on survey response for vulnerable children and youths: A randomized controlled trial
Dear Dr Jan Hyld Pejtersen,
Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.
You will find the reviewers' comments in this letter. They have raised issues about the methods and the contribution to knowledge. Please address these concerns, itemised in a table providing the comment, your response and the page and line number of any changes.
We would appreciate receiving your revised manuscript by 9th March 2020. When you are ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter.
To enhance the reproducibility of your results, we recommend that if applicable you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols
Please include the following items when submitting your revised manuscript:
Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.
We look forward to receiving your revised manuscript.
Dr Leica S. Claydon-Mueller
When submitting your revision, we need you to address these additional requirements.
1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at http://www.journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and http://www.journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf
2. Please include the ClinicalTrials.gov Identifier in your manuscript (NCT01741675)
3. Please clarify in your manuscript that your ethics committee specifically waived the need for ethics approval.
4. You indicated in your ethics statement that you obtained written informed consent from a parent for participants under 16 years old. Please clarify whether parental consent was also obtained from participants aged 16 and 17. If not, please clarify whether the research ethics committee specifically waived the need for their consent. Please add this information to your methods section.
5. PLOS requires an ORCID iD for the corresponding author in Editorial Manager on papers submitted after December 6th, 2016. Please ensure that you have an ORCID iD and that it is validated in Editorial Manager. To do this, go to ‘Update my Information’ (in the upper left-hand corner of the main menu), and click on the Fetch/Validate link next to the ORCID field. This will take you to the ORCID site and allow you to create a new iD or authenticate a pre-existing iD in Editorial Manager. Please see the following video for instructions on linking an ORCID iD to your Editorial Manager account: https://www.youtube.com/watch?v=_xcclfuvtxQ
6. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information.
Reviewer's Responses to Questions
Comments to the Author
1. Is the manuscript technically sound, and do the data support the conclusions?
The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.
Reviewer #1: Yes
Reviewer #2: Partly
2. Has the statistical analysis been performed appropriately and rigorously?
Reviewer #1: Yes
Reviewer #2: I Don't Know
3. Have the authors made all data underlying the findings in their manuscript fully available?
The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.
Reviewer #1: Yes
Reviewer #2: No
4. Is the manuscript presented in an intelligible fashion and written in standard English?
PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.
Reviewer #1: Yes
Reviewer #2: No
5. Review Comments to the Author
Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)
Reviewer #1: This is an interesting and relevant topic; however, the use of postal questionnaires in health and social sciences is declining day by day, so I am not sure what this research will bring. The paper is generally well presented in a good literary style, although there are a few comments which need to be addressed, for example:
1. The authors mention that they studied vulnerable children and youths, but it is not clarified why these are considered vulnerable groups.
2. I think it would be good to explain why non-response among children and youths is a major problem.
3. In the introduction, I feel that reference number 1 is outdated; a more recent one could be found and utilized.
4. Why were children below 8 years old included? Isn't it obvious that they would not understand the importance of participating in research through a postal survey?
5. Lines 126 to 128: "Participants with a number lower than 0.5 were allocated to the control group, participants with a number greater than 0.5 were allocated to the intervention group". This is not clear.
6. Very minimal statistical analysis was performed. Tables 1 and 5 could be merged and the findings presented together.
I am still not clear on how this research will contribute to the existing body of knowledge.
Reviewer #2: Note: I did not see a link to the full data set within the PDF file I was provided. However, the authors may have submitted this separately. I responded with a "no" to question #3 above as I do not see such a link, but it may exist outside of this PDF.
Note: I answered "I don't know" to question 2, as it was unclear from the manuscript how the randomization was completed. It is possible it was done correctly, but I do not know. The authors need to provide more information in the methods section of their paper.
Overall, the authors seem to be filling an important gap in the literature (conducting an RCT on the effectiveness of monetary incentives in an under-studied population). They also measure the effects of the incentive on the survey responses themselves. The use of an RCT is a reasonable way to meet their research objectives. However, there are a number of serious issues with the manuscript, including the extent to which these findings can be generalized to other populations, methodological inconsistencies, a lack of clarity regarding the make-up of the study sample, and various grammatical issues.
This paper could benefit tremendously from professional editing, with attention to both grammar and diction. The language could also be more formal, and the conclusion in particular could be fleshed out further (it is only two sentences). Noting just a few issues on the first page or two:
(1) Noun/verb disagreements: different types of incentive(s); age group(s).
(2) In the sentence starting on line 47, "Different types of incentive" is repetitive with the second half of that sentence.
(3) The sentence “The participating children and youths were vulnerable and lonely; because of they had inadequate or severely restricted social network or were excluded from their peer group” has multiple issues.
It would be useful to have a better sense of the location of the study in order to understand to what degree we can make generalizations about this study. Does this study take place in Denmark? Where precisely? Can we find out more about the population?
The authors state that the present RCT is limited to those that gave consent. What percentage of participants in this program gave consent to participate? How representative is this group of the target population?
The authors state that there are three groups of participants: "lonely" children, children in foster care, and vulnerable children. How was this determination made? How were participants identified generally, particularly the "lonely" children? What does this mean practically? In what kind of family situation are these children living? Again, this information would provide context and offer information regarding the external validity of this study.
In terms of the RCT itself, how do we know that the subjects under study themselves responded to the incentive, and not the parents or caregivers of the children, especially for the younger participants (ages 8-12 or so)? Also, how long was the questionnaire, and roughly how long did it take to complete? This information could be provided on lines 144-155.
The authors state that they randomized the treatment using various procedures in SAS, but the treatment and control groups are quite different in size (146 versus 121). Why is this? Was the randomization successful? Further, in Table 1, it is customary to also provide statistical tests of means post-randomization to verify that the randomization worked correctly (for each control variable). Can this be added? Also, do the authors have any more demographics available to provide additional evidence?
The authors state that "incentives can affect the response rate distribution indirectly by affecting the demographic composition of the study sample or by directly affecting the attitude of the respondents." While the authors test for differences in survey responses across treatment and control group respondents, they do not offer an extensive look at the demographics of responders and non-responders in each group (apart from gender and age). Do the authors have any more demographics they could use to investigate to what extent the intervention could have affected the demographic composition of the study sample?
Were those that responded representative of the target population overall? (See notes on demographics above).
It would be helpful to provide not only the response rate and gender percentages but also the raw numbers of individuals (stratified by gender) that participated in the study and responded to the incentives. This way, the reader can clearly see that while these incentives may work better on men, they still increase the response rate for women as well (in addition to the RR results). For example, these are back-of-the-envelope calculations:
69 women (72% of women in the treatment group responded)
41 women (50% of women in the control group responded)
Overall, the paper would be stronger if the authors make a better case for why these findings can truly be generalized elsewhere. How similar is the participant population to populations elsewhere? And across the participants (lonely children, foster children, and vulnerable children), are there differences in response rates? As “lonely” is not defined, an analysis of foster and vulnerable children alone could be a useful addition.
6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.
If you choose “no”, your identity will remain anonymous but your review may still be made public.
Reviewer #1: No
Reviewer #2: No
[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files to be viewed.]
While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at firstname.lastname@example.org. Please note that Supporting Information files do not need this step.
The response to reviewer and editor comments is uploaded in the file "Response to reviewers".
28 Apr 2020
The effect of monetary incentive on survey response for vulnerable children and youths: A randomized controlled trial
Dear Dr. Pejtersen,
We are pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it complies with all outstanding technical requirements.
Within one week, you will receive an e-mail containing information on the amendments required prior to publication. When all required modifications have been addressed, you will receive a formal acceptance letter and your manuscript will proceed to our production department and be scheduled for publication.
Shortly after the formal acceptance letter is sent, an invoice for payment will follow. To ensure an efficient production and billing process, please log into Editorial Manager at https://www.editorialmanager.com/pone/, click the "Update My Information" link at the top of the page, and update your user information. If you have any billing related questions, please contact our Author Billing department directly at email@example.com.
If your institution or institutions have a press office, please notify them about your upcoming paper to enable them to help maximize its impact. If they will be preparing press materials for this manuscript, you must inform our press team as soon as possible and no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact firstname.lastname@example.org.
With kind regards,
Dr. Leica S. Claydon-Mueller
1 May 2020
The effect of monetary incentive on survey response for vulnerable children and youths: A randomized controlled trial
Dear Dr. Pejtersen:
I am pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.
If your institution or institutions have a press office, please notify them about your upcoming paper at this point, to enable them to help maximize its impact. If they will be preparing press materials for this manuscript, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact email@example.com.
For any other questions or concerns, please email firstname.lastname@example.org.
Thank you for submitting your work to PLOS ONE.
With kind regards,
PLOS ONE Editorial Office Staff
on behalf of
Dr. Leica S. Claydon-Mueller