Lessons Learned from Implementing a Survey in Rural North Carolina

An exciting update from my last blog post – my team has completed the patient survey! This process has been much more difficult than my team and I imagined it would be, yet I know the results will provide invaluable insight into which elements of our proposed pilot program will work. The survey results will also be extremely beneficial to my community partner, Hope Clinic, as they plan ways to enhance their services and better serve their patient population.

The purpose of the survey was to gain insight from patients and gauge their interest in various elements of the pilot program we proposed for the Hope Clinic. Key lessons learned, from survey development to survey dissemination, are outlined below.

Survey Development

When developing the survey, question phrasing and question design were the main elements to think through. For question phrasing, my team and I went through multiple rounds of edits to ensure the wording was specific and unambiguous. Part of this process was making sure the patient population would understand the questions we were posing and that we were not using technical language that those outside our research team would struggle to follow. Additionally, we wanted to be mindful of the language used in questions gauging interest in expanded clinic services. While the clinic is interested in expanding its services, it is unsure whether it will have the capacity or funding to do so at this time. As such, we didn’t want the question wording to come across as a promise of scaled-up services in the future.

Refining the question design meant deciding among scale questions, free-response questions, and multiple-choice questions. The first draft of the survey consisted largely of scale questions, such as “On a scale of 1 to 10, how equipped do you feel to manage your chronic condition?” After talking through the questions with a survey expert at Duke and our course instructors, we decided this design was not well suited to our survey population. The clinic’s patients were likely unfamiliar with such a scale, so their responses would vary greatly and be difficult to compare across patients. The next draft included multiple-choice questions with free-response follow-ups based on each yes or no selection. When the revised survey was close to final, we tested it with classmates and community members and realized it was too burdensome, both in the number of questions and in how many free-response follow-ups we were requesting. To cut down on completion time, the final survey consisted mostly of multiple-choice, yes-or-no questions with an ‘other’ option for patients who wished to elaborate. We also made sure we were asking only the questions essential to the scope of this project.

Survey Sampling

Early in the process, my team and I knew we would not reach a 100 percent response rate, even if the survey was disseminated digitally. Once the final decision was made to conduct the patient survey via phone calls, our goal became surveying 10 to 15 percent of the patient population. To ensure that the patients we surveyed were representative of the full patient population, we sampled the patient list based on age, sex, race/ethnicity, health condition, and language. The original sample we randomly pulled matched the demographics of the full patient population in every category. While we had to sample additional patients along the way to account for non-responses, the final list of patients surveyed still closely matches the overall demographics of the patient population.
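The demographic-matched sampling described above can be sketched in a few lines of code. This is a minimal illustration, not the team's actual procedure: it assumes a patient list with demographic fields and samples a fixed fraction from each stratum of a single field, so the sample mirrors the population's make-up. All names here are hypothetical.

```python
import random
from collections import defaultdict

def stratified_sample(patients, key, fraction, seed=0):
    """Randomly sample roughly `fraction` of patients from each
    demographic stratum (grouped by `key`), so the sample's
    composition mirrors the full patient population's."""
    random.seed(seed)
    strata = defaultdict(list)
    for p in patients:
        strata[p[key]].append(p)
    sample = []
    for group in strata.values():
        n = max(1, round(len(group) * fraction))
        sample.extend(random.sample(group, n))
    return sample

# Hypothetical patient list with a single demographic field
patients = [{"id": i, "age_group": "65+" if i % 4 == 0 else "under 65"}
            for i in range(100)]
sample = stratified_sample(patients, "age_group", 0.15)
```

A real version would stratify jointly on age, sex, race/ethnicity, health condition, and language, and would draw replacement patients from the same strata to account for non-responses.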

Survey Dissemination

Because we conducted the survey via phone calls, each member of our team created a Google Voice phone number with an area code matching the Pamlico County region. This was a necessary step, as we hoped that calling from a familiar area code would improve our pick-up rate. Additionally, the clinic posted a notice on its social media accounts letting patients know they might receive a phone call asking survey questions on the clinic’s behalf.

Conducting the survey calls was a process of trial and error. After the initial calls, we began recognizing patterns in when patients tended to be available during the day and strategically called around those times. My team and I held weekly check-ins to assess progress and determine whether we were on track to wrap up the survey. In total, the survey took five weeks to reach our goal of sampling 15 percent of the patient population. Now that the survey is closed, we have begun cleaning the responses and analyzing the results.

Arianna Farmer is a second-year Master of Public Policy candidate. She studied global health and social policy as an undergraduate at Northwestern University. Arianna is interested in implementing policy changes to increase equitable health outcomes at the state and local levels, particularly in underserved communities. She hopes to better understand health disparities in rural communities and develop innovative models to address those disparities.
