The use of patient-reported outcomes (PROs) in health care has grown in recent years, and methods to improve the reliability and validity of PRO instruments are advancing. This paper discusses the cognitive interviewing procedures employed by the Patient Reported Outcomes Measurement Information System (PROMIS) pediatrics group to develop a dynamic, electronic item bank for field testing with children and adolescents using novel computer technology. The primary objective of this study was to conduct cognitive interviews with children and adolescents to gain feedback on items measuring physical functioning, emotional health, social health, fatigue, pain, and asthma-specific symptoms.
A total of 88 cognitive interviews were conducted with 77 children and adolescents across two sites on 318 items. From this initial item bank, 25 items were deleted and 35 were revised and underwent a second round of cognitive interviews. A total of 293 items were retained for field testing.
Children as young as 8 years of age were able to comprehend the majority of items, response options, directions, and the recall period, and to identify problems with language that was difficult for them to understand. Cognitive interviews revealed comprehension problems with several items, which led to alternative wording for those items.
Children ages 8–17 years were able to comprehend most item stems and response options in the present study. Field testing with the resulting items and response options is presently being conducted as part of the PROMIS Pediatric Item Bank development process.
The Patient Reported Outcomes Measurement Information System (PROMIS) project, a National Institutes of Health Roadmap for Medical Research initiative, was developed to advance the science and application of patient-reported outcomes (PROs) in chronic diseases. The process of developing item banks for PROMIS includes literature review, focus groups, and individual cognitive interviews [2-4]. Among these qualitative methods, cognitive interviewing allows direct input from respondents on item content, format, and understandability. This method has emerged as an essential component in the development of a number of standardized measures [5-7].
The cognitive interviewing methodology for PROMIS was designed to elicit input from respondents on all items under consideration for the PROMIS item bank. The pediatric cognitive interviewing methodology followed the general principles of the PROMIS Network, with the adaptations necessary for children as young as 8 years of age, relying in part on the cognitive interviewing methodology used in the development of the PedsQL™ instruments and the work of Willis.
The cognitive interviewing methodology is designed to assess the cognitive processes underlying respondents' comprehension and generation of answers to questionnaire items within an information processing conceptual model. The intent of cognitive interviewing is to determine what the respondent thinks a particular item is asking (what specific words and phrases in the item stem mean to the respondent); the processes the respondent uses to retrieve relevant information from autobiographical memory; the decision or judgment processes used to arrive at an answer; and the process of formulating a response to the item stem [10-13].
Although there are two major types of cognitive interviewing methods (think-aloud and respondent debriefing), the PROMIS cognitive interviews employed the respondent debriefing technique. In this technique, after a participant completes the questionnaire, an interviewer probes for specific information on the types of difficulties the respondent experienced while completing the items and the basis for the response to each item. Cognitive probes elicit information regarding the clarity and rationale of the directions, the meaning of the items, the appropriateness of the response choices, and overall comments on the relevance and complexity of the questionnaire [12,13].
The primary objective of this study was to conduct cognitive interviews with children and adolescents to gain feedback on items measuring physical functioning, emotional health, social health, fatigue, pain, and asthma-specific symptoms.
The PROMIS Pediatrics project focused on the development of PRO item banks across several health domains for youth ages 8–17 years. Initially, PROMIS focused on the measurement of generic health domains that are important across a variety of illnesses, including physical function, pain, fatigue, emotional distress, and social function. Since asthma is the most common chronic disease of childhood, and PRO measurement is an essential component of evaluating outcomes for children with asthma [14-16], asthma was well suited for the initial development of the PROMIS pediatrics disease-specific item bank.
The PROMIS item bank was developed using a strategic item generation methodology. A series of focus groups was conducted to generate themes and domains; a literature review was conducted to identify existing pediatric health questionnaires; and discussions with health care and research personnel (including physicians, psychologists, social workers, epidemiologists, and nurses) were used to identify an initial item pool of over 3,345 items. These items were "binned" (i.e., items were classified into domains according to their content) and "winnowed" (items were eliminated that either lacked face validity for the domain or were very similar to a more ideally worded item) [2,3] by the PROMIS pediatric project team. Items were rewritten or modified to adhere to a set of formatting requirements accepted by the PROMIS development team (e.g., use of past tense, a 7-day recall period, and standard response options; see Table 1). Cognitive interviews were conducted on the resulting 318 items across 6 domains, after which 35 items were revised and underwent a second round of cognitive interviews. The final item set contained 293 items across 6 domains (Physical Function = 70 items; Emotional Health = 49 items; Social Health = 74 items; Fatigue = 39 items; Pain = 27 items; Asthma = 34 items).
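The item-count bookkeeping above can be verified with a minimal sketch (illustrative only, not the study team's software; the variable names are ours). It records the final per-domain counts reported in the text and checks that they reconcile with the 318 items that entered cognitive interviews and the 25 items deleted:

```python
# Final per-domain counts from the text (retained after cognitive interviews).
domain_counts = {
    "Physical Function": 70,
    "Emotional Health": 49,
    "Social Health": 74,
    "Fatigue": 39,
    "Pain": 27,
    "Asthma": 34,
}

def total_items(counts):
    """Sum retained items across all domains."""
    return sum(counts.values())

initial_items = 318  # items entering the first round of interviews
deleted = 25         # items removed after review

# The retained domain totals should equal initial items minus deletions.
assert total_items(domain_counts) == initial_items - deleted == 293
```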
Table 1. Item response options
To participate in the cognitive interviews at The Children's Hospital at Scott and White (S&W) and the University of North Carolina (UNC), participants needed to meet the following criteria: between the ages of 8 and 17 years inclusive; speak and read English; provide informed assent prior to study entry; and provide parent or guardian informed consent. We also recruited children with asthma to review all domain items and asthma-specific items. Participants were not eligible for the study if they had any concurrent medical, psychiatric or cognitive conditions that, in the investigator's opinion, would interfere with participation in this study.
Purposive sampling was used to recruit 28 children and adolescents (6 with asthma; 22 without) from UNC hospital and community clinics and 37 children and adolescents (16 with asthma; 21 without) from the general pediatric clinic at S&W, who participated in the first round of cognitive interviews. For the second round of cognitive interviews, 18 children and adolescents from S&W and 5 children from UNC participated (11 of these 23 had participated in the first round). Table 2 lists the demographic characteristics of the first round cognitive interview participants from each site. For each domain questionnaire, the cognitive interview sample included at least 2 children 8 or 9 years of age, 1 adolescent between 13 and 18 years, 2 children of non-white ethnicity, and 1 child of white/Caucasian ethnicity. These categories were not exclusive; for example, a Latina girl age 8 would fulfill both the racial/ethnic requirement and the age requirement.
Table 2. Participant demographics and clinical characteristics for first round cognitive interviews
At both UNC and S&W, potential participants were identified through review of clinic appointment books. A research assistant then mailed an informational letter to the child's parent to inform them about the study. Those who were interested in participating contacted the study coordinator to schedule their interview time. If the child was deemed eligible for the cognitive interview and the parents agreed to allow their child to participate, an interview date was scheduled. At the time of the interview, a trained research assistant obtained parental informed consent and the children signed an assent document. All child participants received a $25 gift card in return for their time and effort. Children were allowed to take a break or end the interview at any time, although no children ended the interview prematurely. The study protocols were approved by the institutional review boards at UNC (#05-1431) and at S&W (#05-0077).
Cognitive interviewing process
The interviewers utilized for this study underwent an extensive training session (16 hours) that included general information on cognitive interview theory and procedures, as well as pediatric-specific procedures. Interviewers were graduate students in social work or research nurses, all of whom had experience working with children in pediatric research settings. All interviewers were trained by a pediatric psychologist with extensive experience in children's therapy and qualitative questionnaire development. Interviews were conducted in a comfortable environment and breaks were offered to the children.
We applied a sampling scheme that allowed each participant to be interviewed on approximately 30 items rather than all 318 items. Each child evaluated items from only one or two domains and only one response scale. By this method, all items in the bank were reviewed by at least 5 participants (59% of items were reviewed by 5 participants; 34% by 6 participants; 7% by 7 participants) meeting the target demographic characteristics outlined above (see Participants Section). During the cognitive interviews, participants were asked to provide verbal open-ended feedback on each item regarding response categories, time frame, item interpretation, and overall impression of domain content and coverage.
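The coverage implied by this sampling scheme can be checked with a short sketch (our illustration, using only the percentages reported above, not the study's assignment software): the review shares should sum to 100%, and their weighted mean gives the average number of reviews per item.

```python
# Share of the 318 items reviewed by 5, 6, or 7 participants (from the text).
review_shares = {5: 0.59, 6: 0.34, 7: 0.07}

# The reported percentages should account for every item in the bank.
assert abs(sum(review_shares.values()) - 1.0) < 1e-9

# Weighted mean number of cognitive-interview reviews per item.
mean_reviews = sum(k * share for k, share in review_shares.items())
print(round(mean_reviews, 2))  # prints 5.48
```

So on average each item received roughly five and a half independent reviews, while every item met the minimum of five.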
Parents were asked to complete a sociodemographic form which contained information regarding the child's age, gender, ethnicity, living situation, and chronic health condition(s) as well as the parent/guardian's employment and education. Parents of children with asthma also completed an asthma form, which contained information about the number of days and nights in the previous week the child had coughing, wheezing, or shortness of breath, the number of times in the previous week the child used rescue medication, and the types of medications the child was taking. These demographic characteristics are described in Table 2.
Other than the children with asthma, who underwent the cognitive interview on the asthma-specific item set, participants were randomly assigned to receive an item set (approximately 30 items) selected from one of the domains. Prior to the cognitive interview, participants completed an item set through paper-and-pencil administration. A research assistant trained in cognitive interviewing techniques then reviewed each item stem and item response with the child and began the interview using standardized questions (see Table 3) for each item. A subset of participants was asked about their preference of item tense (past vs. present). The participant's comprehension or interpretation of the item, along with their preferences on recall options and recall time period, was elicited. All participant answers were recorded on a computerized spreadsheet. At the end of the interview, participants completed the Wide Range Achievement Test-3 Reading Subtest (WRAT) as a gross measure of reading ability. Interviews were also audio-taped to ensure the accuracy of interviewer notes.
Table 3. Cognitive interview questions
Data analysis and item revision
After each interview, project personnel completed a summary statement for each item and the child's comments. After completing all initial cognitive interviews for an item, project personnel compiled reports that included all comments for an item. The item development team then reviewed all of the comments to determine issues with formatting, item comprehension, instructions, tense, and response options (see Table 4). Items deemed problematic by two or more children of any age were revised for clarity. Other items similar to those revised after the initial interview process were also changed by project personnel to maintain consistency across item stems or wording. In all, 35 items were revised as a result of the first round of cognitive interviews.
Table 4. Common issues identified by participants in first round of interviews
To ensure comprehension of the 35 revised items, a second set of cognitive interviews was conducted. Project personnel then reviewed the revised items and participants' responses from the second review. Items that continued to be problematic for research participants after the second round were eliminated from the item bank. Table 5 shows the 22 items that were retained in the final item bank and revised after the second round of cognitive interviews, along with the reasons for revising the items.
Table 5. PROMIS pediatric revised items and reasons for revision
Children who participated in the cognitive interviews spent approximately 1 hour with each interviewer, with some children (for example, younger children who took breaks) requiring additional time. In general, even children as young as 8 could understand the majority of the items (293/318 = 92%) and response options, indicating that they could think about and discuss their own health. Although younger children had a more difficult time with specific words, they understood the purpose of the items and response options and were able to provide alternatives using their own vocabulary. They also had no difficulty understanding that they needed to answer questions while thinking about specific recall periods. Older children seemed to clearly understand the majority of items and response options, and had fewer comprehension difficulties than younger children.
Tables 4 and 5 outline common issues identified by participants. Specific words (e.g., "irritable", "stressed") were difficult for some children to comprehend, and items were sometimes too vague or ambiguous to be clearly understood. The majority of items (92%) were retained in the item banks for further large scale testing.
There was no indication that children had difficulty with the response options, except that younger children seemed to misunderstand the word "difficulty". When questioned, children were able to distinguish between the different response options, indicating that they could clearly identify variable levels of functioning, so the word "difficulty" was changed to "trouble" in subsequent cognitive interviews. Additionally, 48/53 (91%) of the children reported that the 7 day recall period meant the previous 7 days, and they responded to items accordingly. A subset of children were probed on present and past tense preferences for the item stems; 8 preferred the present tense, 8 preferred the past tense, and 9 had no stated preference when referring to the past 7 days. Participants had an overall positive opinion of the items and did not provide any suggestions for additional content that was not included in the current item banks.
These results confirm that children ages 8–17 can talk about and respond to items asking them about their health and well-being. They can also offer unique insight into the understandability of the items. These findings are consistent with other studies [5,6]. The majority of the items were well comprehended by all age groups, but we also identified several terms that were not well understood by younger children. Items containing difficult words or vague concepts were readily identified by the children and led to important questionnaire changes.
We also received valuable feedback on the format of the questionnaire, including increasing the font size for ease of readability, shortening the instructions, and putting the recall period in bold type. Certain items were not applicable to some children; for example, one child did not have a computer at home, so he could not answer items related to computer use. Similarly, items that asked about walker or wheelchair use were not applicable to the majority of children interviewed, so feedback was limited for these items.
The sample included an almost equal distribution of children in different age groups, and represented a diverse population. One benefit of the sample is that it included a number of children with asthma, ensuring that comments from children with the most common chronic disease in the United States were included. The sample was well balanced for socioeconomic status and race/ethnicity, which is a strength of this study.
Our study is similar to other cognitive interview studies for children's PRO instrument development. For example, we found that younger children had more difficulty understanding specific item words than older children, particularly words such as "irritable", "nervous", and "worried". Children in our study also had difficulty understanding ambiguous terms or phrases such as "did things" and "activities". These findings are consistent with other studies of child-reported health outcomes [5,18,19]. Additionally, as in other studies, the children in our study reported few issues with response formats using up to 5 response options and were able to respond to items within the recall period. On occasion, the PROMIS pediatrics item development team had to decide what to do when a suitable synonym or content description was not available to substitute for a word that was not well understood by some children. For example, the idea of "worry" is important content for the anxiety domain, and it was kept in the item bank even though some children noted problems. These items will be reviewed again after large scale testing is completed, and final decisions for these items will be made at that time.
Our study has several limitations. First, each item received a minimum of 5 cognitive interviews. Although we felt this was sufficient, some authors suggest that 10–15 interviews are preferable. Because of experience on previous scale development projects [5,18,19] with very similar items, we felt comfortable performing fewer overall interviews on these items. Because only a minimum number of children ages 8 or 9 was required to review each item set, some important findings for this age group could have been missed. Second, as with any qualitative study, the item development team had to make judgments as to the importance of an item problem and whether revisions were necessary. We tried to adhere to the operationalization of two negative comments leading to revision, but all such judgments are inherently qualitative. Our team, however, was interested in identifying the clearest and most important items for inclusion and carefully responded to all of the feedback from the children. Lastly, the interview questions about content validity were phrased very broadly and did not add information beyond our previous studies utilizing focus groups.
Overall, the findings of the cognitive interviews suggest that children as young as 8 years could respond to items and talk about all aspects of their health and well-being in meaningful ways. They are able to comprehend varying response options on a categorical scale, and can accurately respond to items using a 7-day recall period. Feedback from the children who participated was valuable in creating a set of items to be administered to a wide age range of children. The final item set generated as a result of the cognitive interview process is currently undergoing large scale testing as part of the PROMIS Pediatric Item Bank development process.
(PROMIS): Patient Reported Outcomes Measurement Information System; (PROs): Patient-reported outcomes; (S&W): Scott and White; (UNC): University of North Carolina; (WRAT): Wide Range Achievement Test-3 Reading Subtest; (PedsQL™): Pediatric Quality of Life Inventory™.
The authors declare that they have no competing interests.
All authors have made substantial contributions to conception and design, or acquisition of data, or analysis and interpretation of data, been involved in drafting the manuscript or revising it critically for important intellectual content; and have given final approval of the version to be published.
We would like to acknowledge the contributions of Jin-Shei Lai PhD, Esi DeWitt MD, Kelli Scanlon, Kelly Williams and Tasha Burwinkle PhD for their contributions to reviewing items and cognitive interview data. We would like to acknowledge the contribution of Harry A. Guess, MD, PhD to the conceptualization and operationalization of this research prior to his death.
This work was funded by the National Institutes of Health through the NIH Roadmap for Medical Research, Grant 1U01AR052181-01. Information on the Patient-Reported Outcomes Measurement Information System (PROMIS) can be found at http://nihroadmap.nih.gov/ and http://www.nihpromis.org.
Medical Care 2007, 45(Suppl 1):S1-S2.
Cella D, Yount S, Rothrock N, Gershon R, Cook K, Reeve B, Ader DN, Fries JF, Bruce B, Rose M: The Patient-Reported Outcomes Measurement Information System (PROMIS): Progress of an NIH Roadmap Cooperative Group during its first two years. Medical Care 2007, 45(Suppl 1):S3-S11.
Medical Care 2007, 45(Suppl 1):S12-S21.
Qual Life Res 2008, 17:725-735.
Educational and Psychological Measurement 2006, 66:687-700.
Qual Life Res 1996, 5(1):35-46.
Research on Social Work Practice 2004, 14:191-200.
Expert Review of Pharmacoeconomics and Outcomes Research 2005, 5:353-364.