|How can the [PROGRAM] Program ensure high quality surveys?||Should there be addition requirements in the design and analysis of surveys? Would the development of guidance or best practices for surveys be useful?||Survey fatigue is a problem. What steps could be taken to reduce survey fatigue across the consortium?||Besides noting at the top of emails soliciting responses to approved surveys, how could the [PROGRAM] Program combat the issuance of those surveys that are not reviewed by the appropriate groups or are not approved?||What corrective actions could take place for those groups that issue surveys that are not reviewed by the appropriate groups or are not approved?||There are plans to develop a tracking and archival system for surveys and their results. Are there any specific additions to this system that you would like included?||Open feedback appreciated:|
Limit the number of frivolous surveys.
Yes, most of us have survey fatigue, so please limit the number and scope of surveys to information that is truly necessary for [PROGRAM] success.
|Agree - see previous response|
We've had this discussion before, and at one point it seemed like we agreed all surveys must first be vetted and approved by the SC, but there's really no way of preventing individuals from soliciting the [PROGRAM] membership. Possibly we could only be required to respond to surveys coming from the new coordinating center (?)
No, but I like the idea of cataloging surveys as well as results for future reference - a good idea, as the same questions often pop up year after year.
Be very clear which surveys have been vetted and 'endorsed' by the coordinating center. Every question should be tied, as closely as possible, to an action by [IC], coordinating center, workgroup or hub. Avoid 'nice to know.'
The coordinating center should be able to handle this. Usual issues, such as consistency in wording, avoid double barreled questions etc.
An early task should be for the coordinating center to make sure that a survey is the best way to get the information, as opposed to some key informant interviews. If the survey has a clear purpose that will benefit the program, I'm happy to fill it out. Sometimes the PI is not the right person to be filling out the survey, but we can handle that internally.
Our contact information is public, we can't keep people from writing to us. Free speech and all that.
If they are outside the [PROGRAM] then I don't think we can do anything. If it's a [PROGRAM] hub or [IC] workgroup that just goes ahead and sends out surveys, they should get a wrist slap from the coordinating center, but make sure that the PI and the head of the steering committee get copied. That should fix it. But it'll still occur occasionally.
Just make sure we can find the results. Certainly an early task for any new survey, and for the coordinating center, is to see if the survey has already been conducted, and how long ago!
Work through the coordinating center. Decrease number of items. Provide clear rationale for why surveys are needed. Bundle whenever possible. Minimize number of surveys.
Yes -- but there is certainly MUCH knowledge about this already. We do not need to develop new best practices on survey techniques. There are textbooks on this, and questionnaire design and implementation is certainly a part of more population-oriented and social science research.
Limit number. Target specific groups with the most relevant feedback. For instance, can't target the PIs every time for every survey.
Not sure. What if someone has a funded study to look at [PROGRAM] function individually? Shouldn't hubs decide whether they want to participate voluntarily?
I think a tracking pathway - how were results used? What decisions did they influence, if any? Also, it would be great to be able to go back to these findings to avoid redundancies.
Proper review and oversight of surveys as described in the Policy and Procedure for Surveys. Send survey questions in the survey request email so that key information holders can be consulted before survey responses are filled in - some surveys require consulting multiple people for complete answers.
Best practices and survey templates would be useful.
A record or repository with all surveys to monitor the number of survey requests.
A site with links to all open approved surveys. Survey requests only originate from specific email addresses.
Limited access to email distribution lists.
Ability to generate templates from surveys as some surveys will contain sets of questions that will be useful in other situations/applications.
Vetting of surveys that have been tested and demonstrate the appropriate amount and type of questions to answer the purpose of the survey.
Something brief (as a paper or short video) or a consultation through the coordinating center might be helpful.
Joining two shorter surveys together so there is a perceived smaller number of them. I think clear identification of surveys supported by the consortium will help as well. Many surveys come out soliciting us without any clear or confirmed connection to the [PROGRAM] program. It is often difficult to determine if a survey is malicious or legitimate.
Unsure...reporting of 'rogue' surveys to the coordinating center to alert the consortium. Not sure what a punishment should be as it might be someone outside of the consortium and therefore out of our jurisdiction.
Not sure what a punishment should be as it might be someone outside of the consortium and therefore out of our jurisdiction.
All surveys must be rigorously developed through the involved [PROGRAM] group or program, as detailed in the policy. Careful vetting is essential so as to lessen the existing burden on [PROGRAM] hubs.
Yes, I think a best practice manual would help.
Restrict the number and length of surveys and ensure high quality. There must be a significant purpose and a high likelihood of an impactful outcome.
This may be difficult to police. One approach that might help is a system of advance notice from [IC] about approved surveys and dates of distribution. Then a PI or administrator who receives a survey that was not announced can delete it without concern.
Another difficult issue - I suppose a sternly worded message could be developed and sent out by [IC], or on behalf of the SC.
Very good idea. Easy online access and searchability would be essential.
This process ensures that each survey will be vetted by the SC. This will help produce quality surveys.
Development of best practices and templates would be useful for ensuring consistency and that specific requirements in design are met. Also, clarity about the goals of these questionnaires must be communicated. How will these data be useful to drive action?
Committing to sending only 1-2 surveys per quarter.
Make the templates, guidance and steps to getting questionnaires approved easy to find and follow.
A personal call from [NAME] to the person or people who disseminated the questionnaire.
It would be helpful if the archiving system could track and search each question in the questionnaires. I can imagine that we will want to re-use individual questions more than re-issue whole questionnaires.
Vetting with the SC and appropriate DTFs
The coordinating center could include this functionality.
Ensure that the surveys will be acted upon and produce useful outcomes. Fatigue comes when it is perceived that the surveys are exercises in futility.
Make the rules clear and deal with violators
I don't think sites will try to sneak in surveys if the rules and procedures are clear!
Vet all surveys before they are distributed to the hubs. Assure hubs that there will be no negative consequences for not completing unapproved surveys.
If such information is easily available, it could be supplied, but our efforts could be best used elsewhere.
Approve only necessary, useful surveys. Don't approve surveys for information that can be obtained by other means. (We've gotten several asking for our salaries.) Assure hubs that there will be no negative consequences for not completing unapproved surveys.
There probably is no way to do this. If some hubs want to respond to unapproved surveys, they can. Tell the rest of us which ones have been approved, so that we know which ones to complete.
Tell us when such surveys are identified, so that we know we don't have to complete them.
The system seems well vetted. Beta testing of surveys would help ensure quality and clarity of questions.
No, we can't make it too cumbersome to do; nor delay initiation too long.
Highlight the target audience in the headers of the surveys, and then permit hubs to opt out if the survey is largely not applicable to their programs.
|Don't see any good way.|
I'm not a policeman. But, as noted above, if there was an easy way to identify how to opt out, then corrective actions would not be needed.
Do more to incorporate social scientists and methodology centers.
Yes, perhaps consider making survey construction a funded component of future RFAs.
I have no suggestions; I think data fatigue (and survey fatigue) are here to stay.
|I do not have suggestions||I do not know||No specific suggestions|
Surveys should be managed ONLY by/via the new coordinating center
Multiple attempts in the past have failed, why waste time trying again?
Coordinating center should be tracking and prioritizing surveys.
If surveys don't come from the Coordinating Center, I for one will not respond!
It should be done by the coordinating center, not [IC]
Have the surveys reviewed by individuals who have survey experience -- check for readability, the ability to get reasonable data analyzed, etc. They are not well reviewed at this point.
I don't think we need more development of guidance or best practices when these already abundantly exist in the survey methodology field. Let our evaluation teams do their job here and coordinate better
Hard to track what the different survey requests are and who should be tuned in to them. So some common way of identifying and endorsing each survey would help.
Just send a reminder to the originator of the unapproved surveys to remove them
Remind them of policies for group; have them recall surveys
Searchable database perhaps or organized by year and topic
Surveys should be overseen by informatics faculty and staff to ensure their quality and efficiency.
Development of best practices would be helpful.
Coordination of general survey themes by the Steering committee.
If the survey emails were sent by [IC] this would ensure that they are 'official'.
I don't think that punitive action here would be effective.
Limit number/internal screening and review
Yes- although they likely exist in the literature.
Surveys are often difficult if one has to retrieve further information rather than providing direct answers--tailoring surveys to immediate response would be helpful. Also, consider in-person surveys at [PROGRAM] or ACTS meetings.
|Unclear how else to minimize this event.|
They could potentially be banned from access if violations are egregious.
Limited and simple. Stated purpose and anticipated outcome.
A guidance document might be useful, or a set template.
Minimize and have all participants share some credit if there is a resultant publication or white paper.
|Suspend the right to do surveys.|
As above and/or a call-out of some kind for those that repeatedly offend.
Any surveys should be vetted with experts in evaluation who know how to construct surveys and how to validate them before implementation.
A guidance document with specific requirements for development of an effective survey should be created.
At some level, perhaps SC or a subcommittee of SC, there should be a vetting of which surveys should be accepted for distribution to maximize the value of the surveys distributed.
Identify an [IC] staff member or [COORDINATING CENTER] staff member who is responsible for all surveys and who is given authority (with the assistance of the SC or an SC subcommittee) to approve or deny dissemination of surveys. A checklist could be developed that the staff completes to be sure that all reviews have been completed and approval obtained before allowing dissemination.
Not going to go there. I would prevent this from happening by developing a more organized system that has to be completed before a survey can be released for response.
Unaware of this plan, but it would be great if it were possible to somehow search for past surveys and results with keywords.
Less text, more quantitative answers
The best step is to avoid surveys like this one: too much material to read, too many text boxes. And why do I have to list my name? De-identified surveys would be better.
|I don't know||What group approved this survey?||no||none|
Implementation of the attached program policies would be beneficial.
Yes, best practices guidelines would be useful.
The steps in the attached document seem reasonable.
We can communicate the survey preferences to each site. If we see that surveys are not sanctioned this will help in our local triage process.
Perhaps a notification from [IC] leadership would be helpful.
Surveys are one of the banes of my existence, and often lead nowhere, but when they need to be done, I advocate using few questions. See below.
What are addition requirements? I assume you mean additional requirements. There are surveys for everything these days, but they don't yield much. For example, every time I fly, I am asked about my experience. When I think back on flying 20 years ago and today, the experience is not better. Yet today we have surveys.
Thus, limit the number. Surveys are non-scientific methods of collecting information.
I am not sure what 'reviewed by the appropriate groups' means. Who generated this survey? [DIVISION], SC, CC? etc? One thing would be to identify who is asking.
It would be easy to be facetious here. Are we really overwhelmed by too many 'unapproved' surveys that are hurting the program? If so, what is the evidence?
Our group felt that this document was pretty much a waste of time. If folks do not want to fill out a survey, they won't. You added a huge additional layer of complexity that will simply inhibit folks from even trying this route.
No, there are too many obstacles already.
People are NOT obligated to fill out surveys. The proof of the value of the survey will be whether or not it is filled out.
|Minimal review. A quick read. That's it.|
None - you are not the survey police. This is a bit of an overstretch.
No, there is not enough money to allocate to the hubs. You slashed many to four years, and now you want to spend more money on this trivial stuff? Really!!!
Ultimately, a screening and feedback process will be needed.
In the guidelines, items C and D do not seem to be the purview of this process. It would not seem that a governmental body should be asking to review outside communication of grantees.
This kind of process should be useful for this.
The process is reasonable aside from items C and D, and this kind of process should be useful.
|Discuss with initiator of survey.|
The idea is good, but items C and D, being outside of [IC]' purview, should not be archived in this location.
In the guidelines, items C and D do not seem to be the purview of this process. It would not seem that a governmental body should be asking to review outside communication of grantees.
Continue to provide guidance/oversight. Would be helpful to provide a centralized electronic platform for the surveys. Would also be good to reduce number of individual emails about surveys -- perhaps a weekly reminder of surveys to be completed?
Use a centralized platform for survey conduct -- use Coordinating Center to set this up.
They should not have access in the first place.
Index/search function for easy access to surveys and results.
More vetting by some oversight or SC prior to releasing
|Guidance document would be great.|
Route via Admin at each hub so they can distribute to appropriate folks
Have them all go through some review process
I seem to take a lot of surveys but rarely get the aggregate results back in a timely fashion. Should be a best practice for returning survey results; otherwise will stop doing them.
Limit the number of surveys and engage the PIs in deciding the issues to focus upon.
Limit the numbers of surveys and engage PIs so as to concentrate on the key issues.
For approved surveys, ask the PI and/or head administrator to be responsible for ensuring a response. It would be important, however, to limit the number of surveys. For non-approved surveys, limit access to the email lists.
Notify the leadership of the [PROGRAM] and ask the responsible PI and/or head administrator to address the issue.
The ability to search the questions to find the useful answers/responses.
For the approved surveys, it would be helpful to have a copy of the survey attached to the email in PDF and Word. Most [PROGRAM] PIs will involve other program leaders for complex surveys. The Word and PDF copies will make it easier to collectively develop answers and respond with a single response.
There are lots of guidelines out there for surveys; the best design depends on the purpose. I would not start with this.
I think the steps laid out in the draft survey policy are a reasonable approach - approval by the steering committee or designated body before dissemination to the hubs
For unapproved surveys coming from within the [PROGRAM] consortium, we can remind our colleagues of the appropriate procedures. Same for outlaw surveys coming from outside the consortium.
Hard to specify additions to a system that hasn't been described.
Process seems duplicative - Steering committee must approve and then [IC] must approve? Why the extra step, if the steering committee represents collaborative decisions between [IC] and [PROGRAM] hubs? Will surveys ever be sent directly to hub partner institutions without approval by the local hub PI?
Make sure they have appropriate design and rigor.
|Probably not necessary|
Provide summative feedback to the recipients on what was learned from the survey.
I don't think it productive to be a survey policeman.
|Make them take their own surveys.|
Provide summative feedback to the recipients on what was learned from the survey.
The system described appears for the most part sufficient. One area that is not discussed is how to assure that redundancy does not interfere with the survey system.
No additional requirements; development of guidance or best practices for survey would be very useful
Approval of surveys should include assessment of value/need and assurance that redundancy does not occur. Because of survey fatigue, attention to providing a hierarchy of importance for the surveys would be most helpful.
Create a unique network for distribution of approved surveys; surveys distributed in any other way can be ignored.
Rather than address the issue post facto, creating a solution such as that noted above might be better.
State upfront how data will be utilized
Clear directions, use and feedback would be useful
Add designations at the top of the emails and surveys indicating that they have been sanctioned.
|Letter from [NAME]|
Access to raw and aggregated data before archiving and in a timely fashion
The document as written provides good overall guidance for program-sponsored surveys. It should acknowledge ad-hoc surveys that will originate outside of program and note how these should be labeled and how program-sponsored surveys will be labeled.
Yes. Experts in measurement should be consulted and involved in the design of the surveys and analytic plans should be submitted prior to a program-sponsored survey being deployed. Surveys should be reviewed to ensure that they are not collecting duplicate information from prior surveys and that, if information is duplicative from other surveys, that information can be linked in to the new survey in order to reduce respondent burden.
Clearly identifying program-sponsored surveys is key. In addition, archiving data sets with appropriate meta data so that information can be retrieved as opposed to re-gathered would be helpful.
The [PROGRAM]s and the PIs are publicly available data. There is no way to prevent an investigator (from inside a [PROGRAM] institution or outside a [PROGRAM] institution) from sending a survey to the group. Providing guidance on how to appropriately label program-sponsored and ad hoc surveys would be helpful.
Corrective action can only be directed at [PROGRAM] program sanctioned groups that survey outside of the lines. Reminders of policy for first offense, restriction of privileges for subsequent.
This is a very important initiative to support open science principles and reduce respondent burden. The system should have appropriate metadata attached to the survey and provide a way for individuals to use the survey results for additional analyses.
Provide a mechanism that will display previous [PROGRAM] survey results transparently so we don't have to answer the same questions multiple times.
|Not particularly||See above|
Have all the approved surveys come from one mailbox so it is clear it is an approved [PROGRAM] survey
Review or development of the instrument by a survey expert or others with expertise in constructing surveys. The Steering Committee Chair and other representatives can review the survey before it is rolled out. An alternative is to assemble a 'study section' with expertise in survey design with input from the Steering Committee (SC).
Make the approval process for surveys extremely selective.
Because emails listing the consortium recipients are commonly sent out, and the listing of PIs can easily be copied from such a list, have all 'pan-PI' and 'pan-[PROGRAM]' emails sent using BCC so that the mailing list is opaque.
I am not sure there is an appropriate punishment for individuals who have copied the email address list from a prior email or otherwise sent out unapproved surveys.
If resources are available, it would be great to code surveys in terms of a few key characteristics: domains assessed, target respondents (researchers, research staff, leadership, trainees etc), sampling (random, convenience), response rate, survey objectives/research questions, etc. Following reproducibility and rigor guidelines, could we ask all those seeking to field surveys of the [PROGRAM] community to provide a de-identified, documented data set as part of the approval process?
Have a group expert in surveys pre-screen them.
Perhaps the expert group mentioned above can provide some guidance.
Fewer, and most essential, surveys. Better identification of specific people at each site for different surveys so they don't all go through the PIs.
There should be some penalty for unauthorized surveys.
A warning, followed by additional measures (reduction in funding?)
Develop a vetting process to use survey expertise across the [PROGRAM] to ensure high quality output.
Submit draft surveys to committee, see above. Development of best practices would be useful.
Use the expert survey committee to triage.
This is an unresolvable issue for the [PROGRAM].
This is an unresolvable issue for the [PROGRAM].
If there is any PHI involved in the survey, ensure that it's managed appropriately.
Having the SC and [IC] [DIVISION] leadership engaged with final review of each survey before release.
Guidance and/or best practices always helpful.
|Keep them brief, simplistic.||See below|
Make PI aware and require response to mitigate future occurrences.
|None identified||Monitor for survey fatigue|
Development of best practices guidance made available through the Coordinating Center would be useful. The engagement of the Evaluators Group in reviewing proposed surveys or in assisting individuals developing surveys may be useful.
Development of best practices available through the Coordinating Center would be useful.
Development of a schedule for known planned surveys of [PROGRAM] hubs would identify approved surveys and provide information for [PROGRAM] hubs to prepare for responding to the surveys.
|no specific comment/feedback||no specific comment/feedback|
Provide an opportunity for [PROGRAM]s or individuals to request notification when a survey is proposed/completed for topics selected by that [PROGRAM] or individual (e.g., request to be notified of survey involving [X] scholars)
Centralized review and approval by the [PROGRAM] SC will ensure high quality surveys.
Yes, the development of guidance or best practices for surveys would be useful so that when a survey is submitted to the SC for review, there is minimal touch by the SC
Ensuring surveys are not targeting an individual stakeholder more than 2 times per year will minimize survey fatigue.
The issuance of surveys should come from a centralized [PROGRAM] email address (e.g. [PROGRAM] Category X Survey) so they can be more easily identified by stakeholders.
If a group issues a survey that has not been appropriately reviewed and approved, a probationary period should be implemented that prohibits emails from being sent within that period of time (e.g. 6 months).
|Not at this time.|
Should be reviewed by people with appropriate expertise in both content and survey processes/logistics.
Every survey should be pilot tested with a small group with feedback used to improve the value and usability of the survey. Best practices would be very useful.
Vetting by the SC and [IC] should help - In addition, perhaps the Coordinating Center could coordinate/organize surveys and when a group makes a decision to conduct a survey, they should let this group know as early as possible in case there is an opportunity to consolidate across surveys or a timing issue that should be taken into consideration.
The officially approved surveys could always come from the coordinating center, [IC] or one specific e-mail address. Ask everyone to report any non-official survey that they receive
|Address individuals responsible|
There should be easy access to surveys and results. Each survey should have a designated POC and a FAQ page could be established for efficiency.
Hopefully, we will not try to regulate individual PIs' or administrators' ability to send out a group email with a specific query. We should not be over-inclusive in the definition of a survey.
Provide expertise in survey preparation, distribution and analysis
|Yes it would be helpful|
Not sure. Always a tension between collecting data for decision making and avoiding over surveying. Perhaps creating strategic priorities and limiting surveys to those areas.
|I don't know|
Hiatus during which they are not allowed to submit surveys.
I actually have not experienced over-surveying yet.
Careful review by the SC and by [DIVISION] is essential. Moreover, be absolutely certain that consideration is given to the questions -- Is this survey really needed? Is the survey too long and involved? Does it provide value?
Would likely be helpful if all surveys were carefully constructed following best practices.
Be certain that a high bar be met for the survey to be distributed: a) it must have the potential to contribute real value; b) it must have been adequately vetted by the SC and [DIVISION] staff to determine importance and quality of the survey; c) it must be no longer than absolutely necessary.
Ask the PIs to indicate when they have received unapproved surveys, and [IC] leadership should react if such surveys are distributed.
Perhaps discussing this on the SC or PI calls (with examples) to delineate the problem(s) will decrease their occurrence.
Make sure the results are broadly available to the PI community.
The red-typed comment 'must provide value' is certainly relevant for surveys. It has not always been clear that distributed surveys have provided value.
|This process should help.||Yes||The current process should help.||Good question.||Not sure||Inclusion of CTSC web page|
Review should request analytic plan for survey
Yes - some are just informational, but often the validity of our survey methodology has not been held to the highest scientific rigor, which is critical if results are being published or we recommend best practices from them.
Limit the length of surveys and avoid sending to multiple people at a single [PROGRAM] - might designate a single person (Administrator) to distribute to the right person. There should be some way to prioritize value.
|That should be sufficient||reminder|
Summary data from surveys linked to each survey.
The surveys should be reviewed by designated experts.
Best practices/ guidance by experts would be useful.
Reduce the number of surveys circulated
All email solicitations should include chair or designated approval by appropriate groups.
|They should be told to seek approval.||No specific additions||none|
Engage experts in survey design to design and conduct surveys.
This survey is an example of conducting a survey without sufficient input from PIs. Timing is poor and design is onerous.
Communicate policies to all and discourage participation in surveys not approved
Remind all of policies. Communicate policies to all.
As below, develop best practices.
Best practices for survey development and implementation would be useful.
Monthly calls with [IC] program and PIs could be a useful platform to highlight the surveys being currently circulated and what specific information is being sought in the survey.
A greater turnaround time would be helpful to ensure the appropriate responder(s) could respond in a timely manner.
Uncertain what this question is asking.
By using validated measures whenever possible, and otherwise by restricting surveys to those that have been designed by expert evaluators. If the team producing a survey did not include someone whose academic expertise is the design and conduct of surveys, it should not be distributed to the consortium.
The [PROGRAM] Steering Committee should form an Evaluation DTF, composed of the leading institutional experts in evaluation from across the consortium. This DTF should be asked to review all surveys and make recommendations to the Steering Committee as to whether or not they should be distributed. You might consider requiring IRB approval of all [PROGRAM] Consortium Surveys. This will ensure that respondents are given informed consent, and that survey results are publishable. It will also immediately slow down the number of surveys being issued, and will add a significant level of rigor to the research being conducted.
No survey should be distributed which takes more than 15 minutes to complete. This survey would not meet that criterion.
This shouldn't be a problem. Following the issuance of this guidance, if a survey is distributed that hasn't been authorized by the [PROGRAM] Steering Committee, people will largely ignore it.
No corrective action is needed. Groups that send out unauthorized surveys will largely see little to no response. This will curb the behavior over time.
ALL survey results should be published, at least on the consortium website. We should extend to the members of the consortium the same respect that we advocate for on behalf of our research subjects.
The vetting process is a good step as the tendency is to develop a preponderance of surveys. But unless there is going to be a required final review of all surveys, it would be difficult to ensure they are all high quality.
Most [PROGRAM]s have resources available internally to help with survey design. It's not necessary to provide this training.
Keep track of the survey requests, approvals, and surveys actually sent. Track over time how many and what type are being collected. Ask the administrators about survey frequency and what is acceptable. The administrators are the ones completing or ensuring the completion of most of them.
After making sure that all are aware of the new policy, addressing directly the rogue survey senders would be best.
Direct follow-up including phone call to site PI.
Could the survey results be available in something like ROCKET? (or whatever the new coordinating center will be using?) We answer a lot of surveys, sometimes the aggregate results are shared, but unless you keep the email, it's difficult to find them again.
There was a process like this instituted under NCRR at some point in the past for approval of surveys. Was that successful? What was tracked? Is there anyone at [IC] now who would have that information?
Require approval of surveys before they are sent to the hubs.
Guidelines for the information that can be requested in surveys would be helpful. We have gotten several surveys asking for our salaries and are reluctant to provide that information.
Reassure the hubs that they do not have to complete any surveys not approved by the SC.
Tell the hubs what surveys have been approved, so that we know that we do not have to complete any others.
There's probably nothing that can be done about them. If some hubs have the time or inclination to complete unapproved surveys, they could. It would be helpful for the hubs to be informed if a hub states their survey has been approved when it hasn't.
Draft plan, page 3, point 3, '... it is recommended that communications regarding survey requests should contain the following information... :' Make it required rather than recommended, so that the SC gets all the information it needs to assess each request to perform a survey.
The [PROGRAM] Program Evaluators and Program Evaluation Group would be an excellent resource with whom to consult regarding centrally-distributed surveys. Many evaluators teach and/or conduct research on survey methodology, and it would be a valuable way to engage their expertise that goes beyond their work on the Common Metrics Initiative.
Same as the previous answer. Many evaluators have high levels of expertise in this area, and they would be able to work with [IC] and/or the Coordinating Centers (including CD2H) to develop a best practices guide as well as create/maintain a repository of high-quality instruments for future use and/or analysis. This process would also allow for the potential of consortium-wide access to survey data for secondary analyses or psychometric studies.
Enabling access to not only survey instruments but also their data will not only reduce the number of surveys that are distributed redundantly, but also promote the value of open-access and collaborative science. A rigorous vetting procedure should be employed using the resources described above that critique not only the survey content but also its construction, mode of administration, and necessity.
Again, broader use of the Program Evaluators would help in this situation, but what is even more critical is actively engaging the hub Administrators. Too many surveys go directly (and only) to the PI when they are, frankly, not usually the individual who has the time or context to fully respond to these surveys. Sending the surveys to someone with more day-to-day involvement with the specific instructions to engage with the PI or other hub personnel would go a long way.
Unsure until we see what the new process would look like.
I've mentioned this in a previous answer. Enabling open-access to a repository of survey instruments and their raw data will allow for hub researchers to use high-quality instruments with previously tested success, and it will also let them engage in studies to assess the reliability and validity evidence of some of these measures. As for additional features to this system, having a unique hub identifier that can be used to link response data across multiple instruments would be phenomenal.
Perhaps this could be a focus for the Coordinating Center as we shouldn't assume all [PROGRAM]s know how to put together a high-quality survey.
Absolutely - echoes comment above. A guide or best practices tool would be extremely helpful.
With fatigue being such an issue, we're not sure why we should even offer option D and allow external groups not affiliated with the [PROGRAM] program to survey us. There are likely already too many surveys within the network that need to be completed. This would be an easy way to eliminate some of the fatigue.
Perhaps the only way to combat this is to have approved surveys being deployed by the Coordinating Center on behalf of whomever has designed the survey. This would be the only way to know for certain that the survey has been approved. The [PROGRAM] administrators often send surveys to just the administrators related to processes or administrative issues - would this apply to the administrators listserv as well?
That they not be allowed to issue surveys again, or for a certain period of time. In addition, I think perhaps having a public/consortium list of 'offenders' could be deterrent enough.
It mentions that results would have to be sent to [NAME], but as mentioned above, perhaps the Coordinating Center should have access to the survey once deployed, as this would likely be the only way to ensure the results are received, since the senders may forget to send them. Or guidelines should be created and deployed as part of question 1 above stating the format in which the results should be archived (we would suggest raw data in Excel format) so that people could analyze the data as they see fit.
The process for how to seek SC approval to conduct a survey isn't clear. Is this a formal application via REDCap or via email to [NAME]? There is also no mention of a timeline for how often the SC and/or [IC] will review survey requests submitted. Will this be on a rolling basis / weekly / bi-weekly? Also, it isn't mentioned whether reminders would be allowed to be sent or whether a survey can be sent out only one time.
The process as written is clear and appropriate. It may be useful to include a section regarding ad hoc surveys that are not sanctioned by the [PROGRAM] program or [IC], but are distributed by a hub, particularly with regard to how those should be labeled to ensure that they are not felt to be programmatic surveys.
Program-distributed surveys should be designed in consultation with, or reviewed by, an individual with expertise in measurement and an analysis plan should be in place prior to distribution.
Rigorous design and evaluation planning should reduce duplicate information gathering. Ensure that results are available for additional analyses on the [PROGRAM] CC site with an appropriate application process for those analyses.
Governance regarding how to distribute 'ad hoc' surveys should be provided in this document. Email lists will always be used. We should not only have notation for program-distributed surveys but also notation for these ad hoc surveys.
The [PROGRAM] members and investigators are publicly accessible, so individuals wishing to survey them, from within [PROGRAM] institutions and outside of them, are able to access that information. Clear guidance regarding approved programmatic and ad hoc surveys is critical. No DTF or other official group should issue a non-approved survey. In addition, providing the data for additional analyses to interested individuals (open science principles) would enhance the re-use of the survey and could cut back on data redundancy.
Please see comments above. This is an excellent step towards open science and data re-use. The metadata regarding the survey and its items should be included in these archives.
They should move up through DTFs (or appropriate WGs) and be approved by the SC, as proposed.
Yes, this would be useful. Please see comments below.
Limit the number and improve the quality. Use consistent headers and a single source for the emails so they are identifiable. Please see comments below.
Sanctions should be considered by the SC for those who circulate unofficial/unapproved surveys.
|The SC should discuss and decide.||Good idea.|
Having a consistent process is a good idea. Surveys approved by the Steering Committee and the Domain Task Forces should have a consistent header. At present, it is hard to figure out who is responsible for some surveys.