
Open Access Research

A pilot Internet "Value of Health" Panel: recruitment, participation and compliance

Ken Stein1*, Matthew Dyer1, Tania Crabb1, Ruairidh Milne2, Alison Round1, Julie Ratcliffe3 and John Brazier3

Author Affiliations

1 Peninsula Technology Assessment Group, Peninsula Medical School, University of Exeter, Noy Scott House, Barrack Road, Exeter, EX2 5DW, UK

2 National Coordinating Centre for Health Technology Assessment, University of Southampton, Boldrewood, Bassett Crescent East, Southampton, SO16 7PX, UK

3 Sheffield Health Economics Group, School of Health and Related Research (ScHARR), University of Sheffield, 30 Regent Court, Sheffield, S1 4DA, UK


Health and Quality of Life Outcomes 2006, 4:90  doi:10.1186/1477-7525-4-90


The electronic version of this article is the complete one and can be found online at: http://www.hqlo.com/content/4/1/90


Received: 4 September 2006
Accepted: 27 November 2006
Published: 27 November 2006

© 2006 Stein et al; licensee BioMed Central Ltd.

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Objectives

To pilot using a panel of members of the public to provide preference data via the Internet

Methods

A stratified random sample of members of the general public was recruited and familiarised with the standard gamble procedure using an Internet based tool. Health states were periodically presented in "sets" corresponding to different conditions during the study. The following were described: Recruitment (the proportion of people approached who were trained); Participation ((a) the proportion of people trained who provided any preferences and (b) the proportion of panel members who contributed to each "set" of values); and Compliance (the proportion, per participant, of preference tasks which were completed). The influence of covariates on these outcomes was investigated using univariate and multivariate analyses.

Results

A panel of 112 people was recruited. Of those approached (n = 5,320), 23% responded to the invitation, and 24% of respondents (n = 1,215) were willing to participate (net = 5.5%). However, eventual recruitment rates, following training, were low (2.1% of those approached). Recruitment from areas of high socioeconomic deprivation and among ethnic minority communities was low. Eighteen sets of health state descriptions were considered over 14 months. Seventy-four percent of panel members carried out at least one valuation task. People from areas of higher socioeconomic deprivation and unmarried people were less likely to participate. An average of 41% of panel members expressed preferences on each set of descriptions. Compliance ranged from 3% to 100%.

Conclusion

It is feasible to establish a panel of members of the general public to express preferences on a wide range of health state descriptions using the Internet, although differential recruitment and attrition are important challenges. Particular attention to recruitment and retention in areas of high socioeconomic deprivation and among ethnic minority communities is necessary. Nevertheless, the panel approach to preference measurement using the Internet offers the potential to provide specific utility data in a responsive manner for use in economic evaluations and to address some of the outstanding methodological uncertainties in this field.

Background

Although concerns have been expressed about the use of cost utility analyses (CUA)[1,2], the number of such analyses has increased in the past ten years[3]. Guidelines in the UK and Canada, and those proposed by the Washington Panel on cost effectiveness in the USA, promote CUA where the purpose of the analysis is to inform public resource allocation[4-6]. The UK's National Institute for Health and Clinical Excellence (NICE) has made cost utility an explicit aspect of policy making[6]. The UK and Washington Panel reference cases suggest that the perspective for the valuation of benefits in CUA should be that of the general public[5,6]. The arguments around adopting this perspective are beyond the scope of this article, but are described elsewhere[5,7-14].

A wide range of approaches has been taken to obtain utility data for economic evaluations[15]. Although the widespread use of standard measures such as the EQ5D and SF6D[16] may address some of this inconsistency, this approach will not be appropriate in all situations and there remains a case for developing alternative methods for obtaining health state-specific utility data. We have piloted one approach, using the Internet to obtain preferences on written health state descriptions from a "standing panel" of members of the public.

Computer-based preference elicitation tools have been available for more than 15 years [17-23], with later use of the Internet [24-28]. Many preference elicitation tools, and studies employing them, are concerned with the psychology of preference elicitation[29,30] and are therefore less concerned with selection bias than Internet-based epidemiological[31,32], behavioural[33,34] or therapeutic studies[35,36]. While Internet based research faces many of the same challenges encountered in more traditional approaches, additional concerns are legitimate, in particular: sampling and sample representativeness, competition for the attention of respondents, and barriers to participation related to literacy or disability[37]. Reported experience varies, with some studies reporting very disappointing results for recruitment and retention[38], and others showing rates which are comparable to traditional methods[39,40]. However, despite possible exceptions[31], it seems reasonably consistent that research participants in Internet-based studies are likely to be different from those recruited by other means [41-44]. Whether these differences matter in the context of preference elicitation studies remains uncertain.

In this paper we describe recruitment and participation in the pilot panel study and discuss the potential for extension of this approach to fulfil the need for eliciting utilities from the general public for research purposes and to support the need for these values to inform allocation policy decisions.

Methods

Recruitment and training

We recruited panel members from a convenience sample of four UK cities: Exeter, Sheffield, Glasgow and Aberdeen. A random sample was chosen from the electoral rolls for these cities in January 2004, stratified for socio-economic status using tertiles of the Index for Material Deprivation (IMD2000)[45]. We assumed a 15–20% response rate to the invitation to attend panel training based on the authors' previous experience with preference elicitation studies using face to face interviews and aimed for an arbitrary target sample size for the panel of 100.

Participants were invited by letter to express interest in joining the panel, accompanied with a short questionnaire seeking reasons for non-participation. Positive respondents were then invited to a three hour training session in each of the cities involved. Panel members were recruited and trained in two phases during summer and autumn 2004, involving eight training sessions.

Training sessions covered the following areas as background: research and policy making; role of modelling in estimating cost effectiveness; limitations of existing methods for utility assessment. Participants were familiarised with the standard gamble, using formats appropriate to whether the health states were considered better or worse than death, with one-to-one support from facilitators.

Health state descriptions were placed on the website for at least three weeks. Descriptions were posted on the website in sets containing different health states within the same condition (e.g. levels of severity or treatment side effects). States within a set were presented in random order. Sets included health states depicting the following diseases: congestive heart failure; eczema; hip osteoarthritis; Crohn's disease; colorectal cancer; depression; glioma; prostate cancer; insomnia; ovarian cancer; opiate abuse; and chronic obstructive pulmonary disease. Descriptions were developed using reports of quality of life using patient based disease-specific outcome measures and clinical expert opinion and presented in bullet point rather than narrative format[46].

We encouraged participants by email to provide preference values in this period and issued email reminders. Panel members who valued at least one description within the three week period were entered into a lottery for £50 Internet gift vouchers, held after each set of descriptions were taken off the Internet site. A regular newsletter was sent to participants reporting participation, results, website developments and other news regarding the project.

Preference elicitation

Panel members were asked to imagine themselves in the described health state, for at least twenty years, or, if they felt their life expectancy was likely to be less than that, for the rest of their life[47]. The standard gamble method was used, based on the axiomatic advantage that it reflects choices made under conditions of uncertainty[48]. This was carried out using bottom-up "titration", in which respondents work through choices with increasing probability of good outcome in the gamble option. We used this approach rather than an iterative approach where responses "ping-pong" between options with high and low probabilities of worst outcome in the gamble[49] in order to overcome reported difficulties with completion of the iterative approach[46].
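The bottom-up titration logic can be sketched in code. The ladder of probabilities, the simulated respondent and the switching rule below are illustrative assumptions for exposition, not the project's actual implementation:

```python
# Sketch of bottom-up "titration" standard gamble scoring.
# Assumption (not from the paper): a respondent prefers the gamble once
# its probability of full health exceeds their true utility for the state.

def titrate(prefers_gamble, probs):
    """Walk the probability ladder upwards; the first probability at which
    the respondent switches to the gamble approximates the utility of the
    health state (the indifference point)."""
    for p in probs:
        if prefers_gamble(p):
            return p
    return 1.0  # never switched: state valued at (or near) full health

ladder = [i / 100 for i in range(5, 100, 5)]  # 0.05, 0.10, ..., 0.95
true_utility = 0.62
estimate = titrate(lambda p: p > true_utility, ladder)
print(estimate)  # first ladder step above 0.62 -> 0.65
```

A finer ladder near 1.0 (as the project used) narrows the gap between the true indifference point and the first switch.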

Internet site development

The website was created in 2004 and piloted by the project team and panel members from the first phase of recruitment. It includes the standard gamble interface, information on the project, and a bulletin board for sharing questions and information on the project.

The standard gamble interface (Figure 1) has several features of interest:

Figure 1. Standard gamble interface.

- It is not possible for participants to enter responses which are fundamentally illogical, e.g. preferring the gamble at a given probability of restoration of full health, but then preferring the health state of interest when this probability increases.

- Participants who indicate that they would take the gamble where the probability of death is 1.0 must confirm that they consider the health state description worse than being dead. They are then automatically taken to an interface which presents the options appropriately for the elicitation of negative utility values.

- As the probabilities in the risky choice change, they are represented graphically as a bag of different coloured balls, each representing the potential outcomes of full health and death.

- Participants had three possible responses to each choice in the standard gamble: choose to remain in the described health state; choose the risky option (with varying chance of death or full health); or "uncertain". Illogical chains of response (e.g. "remain in health state", followed by "uncertain", followed by "remain in health state") were not permitted and participants were required to repeat the choice which produced the illogical sequence. Choices at all levels of risk had to be completed before the response was accepted.

- The increments for changing probability in the gamble are set at 1% between probabilities of full health of 0.95 and 1.0 in the gamble option and 5% otherwise.

Responses were downloaded into a database with automatic calculation of each respondent's utility for each health state description.
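The interface rules above can be sketched as follows; the ladder construction and the monotonicity check are illustrative assumptions rather than the project's actual code:

```python
# Sketch of the probability increments and the illogical-chain check
# described above; names and the exact validation rule are illustrative.

def gamble_ladder():
    """5% steps up to 0.95, then 1% steps from 0.96 to 1.0."""
    coarse = [i / 100 for i in range(5, 96, 5)]   # 0.05 .. 0.95
    fine = [i / 100 for i in range(96, 101)]      # 0.96 .. 1.00
    return coarse + fine

def is_logical(responses):
    """Responses are listed in order of increasing probability of full
    health in the gamble. Once a respondent chooses the gamble they should
    not revert to the health state, and 'uncertain' may not be sandwiched
    between two identical choices; both reduce to requiring a monotone
    sequence."""
    order = {"state": 0, "uncertain": 1, "gamble": 2}
    ranks = [order[r] for r in responses]
    return all(a <= b for a, b in zip(ranks, ranks[1:]))

print(len(gamble_ladder()))                        # 24 choice points
print(is_logical(["state", "state", "gamble"]))    # True
print(is_logical(["state", "uncertain", "state"])) # False
```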

Analyses

Recruitment was described and the demographic characteristics of the pilot panel compared to data from the UK National Census carried out in 2001.

Completion of preference elicitation tasks was described in three ways. First, participation by panel member, defined as the proportion of panel members who carried out at least one valuation task during the study period. Second, for each set of health state descriptions, the proportion of panel members who responded was calculated – participation by health state description set. Third, for each panel member who carried out at least one valuation task (participant), we calculated compliance, defined as the proportion of health states valued by each participant.
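The three measures defined above can be illustrated with a minimal sketch; the data structure and counts below are made up for exposition:

```python
# Illustrative computation of the three completion measures:
# participation by member, participation by set, and compliance.
# Made-up data: member -> set of description-set ids they responded to.

responses = {"A": {1, 2, 3}, "B": {1}, "C": set(), "D": {2, 3}}
n_sets = 3

# Participation by panel member: at least one valuation during the study.
participants = [m for m, sets in responses.items() if sets]
participation_by_member = len(participants) / len(responses)

# Participation by description set: share of members responding per set.
participation_by_set = {
    s: sum(1 for sets in responses.values() if s in sets) / len(responses)
    for s in range(1, n_sets + 1)
}

# Compliance: per participant, proportion of eligible sets completed.
compliance = {m: len(responses[m]) / n_sets for m in participants}

print(participation_by_member)  # 0.75
print(participation_by_set)     # {1: 0.5, 2: 0.5, 3: 0.5}
print(compliance)               # {'A': 1.0, 'B': ~0.33, 'D': ~0.67}
```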

Potential determinants of participation by panel member and compliance were explored through univariate and multivariate analyses using SPSS for Windows version 11. Age, marital status, occupation and ethnicity were collected from panel members at recruitment. Socioeconomic status was attributed according to place of residence, using the Scottish Index of Material Deprivation (SIMD) for Aberdeen and Glasgow[50], calculated at postcode sector level, and the 2004 version of the Index of Material Deprivation for Exeter and Sheffield at Lower Super Output Area (LSOA) level[51]. LSOAs contain populations of 1000–1500 people. For the purposes of the analysis, SIMD and IMD were treated as a single scale. Other variables considered were city of residence, nationality (Scottish or English) and training session.
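A univariate analysis of the kind described here can be sketched as a Pearson chi-square test of association between a binary covariate and participation; the counts below are hypothetical, not the study's data:

```python
# Illustrative univariate check: chi-square test of association between a
# binary covariate (here, married vs unmarried) and participation.
# Counts are made up for exposition.

def chi_square_2x2(table):
    """table = [[a, b], [c, d]]; returns the Pearson chi-square statistic
    computed from observed and expected cell counts."""
    (a, b), (c, d) = table
    total = a + b + c + d
    rows, cols = (a + b, c + d), (a + c, b + d)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = rows[i] * cols[j] / total  # expected under independence
            chi2 += (obs - exp) ** 2 / exp
    return chi2

# Hypothetical counts: [participated, did not] for married vs unmarried.
observed = [[60, 10], [21, 21]]
print(round(chi_square_2x2(observed), 2))  # -> 16.73
```

In practice the statistic would be referred to a chi-square distribution with 1 degree of freedom to obtain a P value.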

Results

Recruitment and retention

Recruitment was carried out in two rounds. Initially, people in Exeter, Sheffield and Aberdeen were recruited and trained. It became clear that the target panel size would not be met from this sample and a further round of recruitment took place in Exeter and Glasgow to increase panel size. Overall, recruitment and training took about seven months. The panel carried out valuation tasks from August 2004 through March 2006, and we met our membership (n = 112) goal in November 2004. In Autumn 2004, therefore, we were recruiting new panel members while existing members were participating in valuation tasks.

Overall, 5,320 people were contacted through the electoral roll. Only 1,215 (23%) of those approached responded to the initial invitation letter. Of this group, 286 (23.5%) expressed willingness to participate in the project and 112 (39% of those who agreed) attended a training session. Only people who attended a training session were considered part of the panel. Thus, the net final recruitment was 2.1% of those initially approached.

Residents from Exeter were more willing to participate (see Table 1: χ2 = 41.18, P < 0.001) and were more willing to give reasons for declining (see Table 2: χ2 = 12.86, P < 0.001) compared to residents from the other cities. Lack of Internet access was more frequently reported among respondents in cities other than Exeter (χ2 = 62.0, P < 0.001). Lack of time and Internet access were the most common reasons given for declining the initial invitation. Other reasons included illness or disability, impending travel, and letters not reaching the intended recipient because of incorrect address details or because the recipient had died.

Table 1. Recruitment by City

Table 2. Reasons for declining initial invitation

Panel member characteristics

The age range of panel members was 18 to 79 years with mean 48 years. The panel included a higher proportion of people in middle age than the UK population as a whole, and fewer younger and older people (see Figure 2).

Figure 2. Value of Health Panel age structure vs UK population.

Table 3 shows the demographic characteristics of the panel members. There were more women (51.8%) than men (48.2%) (P = NS). Men were, on average, slightly older than women, though the difference was not significant. The panel had a higher proportion of married and retired people, with correspondingly lower proportions of unmarried people and those in employment, than the national population. However, only the proportions of married and single people and those from ethnic minorities differed significantly from national data.

Table 3. Panel member personal characteristics

Table 4 shows the proportion of panel members from each city whose area of residence fell into tertiles of IMD or SIMD scores ranked at a national level for Scotland or England. The distribution differed significantly from the national distribution (χ2 = 16.8, P < 0.025). If the panel reflected the national distribution of socioeconomic status as measured by the IMD/SIMD, the samples from each city would contain 33% of people in each national tertile. People from areas of high deprivation are under-represented in the panel, particularly in Exeter and Sheffield. The numbers of people recruited from Scotland were low, making this comparison imprecise.

Table 4. Panel compared to national distribution of socioeconomic status

Participation and compliance

During the first year of the project (October 2004 to October 2005), 25 members (22%) of the panel formally withdrew. Most of these panellists had completed some valuations before withdrawal. Having insufficient time, moving house, losing Internet access and personal or family illness were the main reasons cited. There was no statistical association between age, sex or socioeconomic status and this explicit withdrawal from the project.

Overall, 83 panel members (74.1%) participated in the project, i.e. carried out at least one valuation. In almost all cases (94.5%), panellists who completed one health state in a set went on to complete the entire set. Most valuation tasks were carried out in one sitting: in only 13 cases (2.3%) were responses from a set received on more than one day. In these cases, respondents carried out valuations in no more than two sittings separated by 1 to 28 days (mean 6.9 days, median 6 days).

Panel members were asked to complete the valuation tasks within an arbitrary three week period, although in some cases descriptions were posted for longer. Figure 3 shows the cumulative probability of obtaining a set of values within 21 days. Where respondents carried out valuation on more than one day, the date of completion (i.e. the second date) was used in this calculation.

Figure 3. Probability of participation within 21 days of a set of health state descriptions being posted.

Taking variations in panel membership into account, overall average participation by health state description set was 41% (range 24%–65%). This is the proportion of available panel members who completed each set of health state descriptions (see Figure 4). The drop in participation around presentation of the third set of descriptions results from a combination of (a) increased panel membership following the second round of recruitment and (b) initial access problems experienced by new panel members, mostly related to incorrect email addresses and incorrect or forgotten logins and passwords. Resolution of these problems resulted in an increase in participation, although this was followed by a gradual decline.

Figure 4. Participation over time.

Univariate analysis showed no significant association between participation and age, sex, nationality, city, retirement status or training session. Data on ethnicity were incomplete and were excluded from further analysis.

People with lower socioeconomic status were less likely to participate (t test, t = 3.713, P = 0.013) and those who were married were more likely to participate; 86% of married people participated versus 52.5% of unmarried people (χ2 = 13.90, P < 0.001).

Logistic regression confirmed the independent effects of socioeconomic status and marital status on participation. The odds ratio (95% confidence interval) for marital status was 0.57 (0.36 to 0.91), although odds ratios for specific categories were not significant. This analysis is therefore akin to a χ2 test for trend. In the same model, the odds ratio for participation according to IMD score was 0.94 (0.91 to 0.98), i.e. the odds of participation fell slightly as IMD (socioeconomic deprivation) increased. Pseudo-R2 for the model was low, at 0.12.

Compliance, defined as the proportion of health state valuations provided by each member as a percentage of the total which they were eligible to complete, ranged from 3% to 100% (see Figure 5). A quarter of the panel carried out less than 20% of the elicitation tasks. There was no association between compliance and age (Spearman correlation, P = 0.92); sex (t test, P = 0.422); nationality (ANOVA, P = 0.23); city (ANOVA, P = 0.631); marital status (t test, P = 0.568); occupation (ANOVA, P = 0.19) or IMD/SIMD score (Spearman correlation, P = 0.40).

Figure 5. Distribution of compliance.

Discussion

This is the first attempt, of which we are aware, to collect new utility data repeatedly from members of the public for the specific purpose of informing ongoing cost utility analyses. Although we have demonstrated basic feasibility, in so far as the panel was established as planned and utility data obtained within the required period, recruitment was very low and retention limited. This was, in part, driven by the need for attendance at a training session. Initial positive response to the invitation to participate was similar to that shown in studies previously carried out by one of the authors (JB) aiming to recruit for a single episode of health state valuation using face to face interviews.

Across health state description sets, participation was around 40%, giving a sample size range for each health state description of 28 to 62. Participation by health state description set declined during the study period, demonstrating the need for ongoing recruitment and training. However, around 30% of the panel continued to participate at one year, and appeared to stabilise, consistent with other accounts of Internet research[52]. It is perhaps not surprising that recruitment and retention were limited given the burden placed on respondents: to attend face to face training and respond to 18 sets of preference measurement with limited rewards (a small cash lottery).

Reips identified 25 advantages and disadvantages (and proposed solutions) of the Internet for psychological experiments[52]. Our study avoided the problem of multiple submissions by requiring logging into the standard gamble and checking the timing of submissions, and the potential for misunderstanding through lack of interaction was addressed by initial training sessions. However, drop out remained high despite the use of financial incentives, reminders, some personalisation and limited feedback. Feedback from the panel suggested that compliance might have been improved by more detailed and personalised feedback on their utility data and the uses to which they were put, and by a guaranteed payment rather than a lottery.

The three week period chosen for valuation tasks was arbitrary but appears appropriate. The probability of completion by that time was very high, even where health state descriptions were available on the website for longer. This issue has not been addressed in previous studies. The shape of the curve for completion was surprising. We expected there would be an initial surge of responses after descriptions were posted which would quickly tail off, with smaller responses following reminders. Reminders were sent at varying points while each health state description set was posted on the Internet and this may account for the overall pattern shown, i.e. that panellists carried out the valuation tasks fairly evenly throughout the three week period.

The demographic make up of this pilot panel does not reflect Scotland and England as a whole. This was not unexpected: one of the purposes of the pilot study was to understand better the determinants of recruitment, participation and compliance so as to inform the establishment of a larger, more representative panel. Representation of people from more deprived areas, and from ethnic minority groups, was particularly low, demonstrating the challenge for engagement which is shown in other types of study[53]. This was despite stratification by socioeconomic status.

In addition to the low initial recruitment from areas of higher socioeconomic deprivation, lack of participation amongst people recruited to the panel was also associated with lower socioeconomic status. The association between marital status and participation is not explained by covariance with the other limited independent variables. Surprisingly, compliance was not associated with socioeconomic status, suggesting either that the number of participants was insufficiently large to demonstrate an effect, or that the principal impact of socioeconomic status is on participation. Lack of adequate access to the Internet or lack of effectiveness in training sessions would be consistent with the latter hypothesis. The association between participation and marital status was not shown for compliance, which showed no association with any of the other covariates measured.

The importance of the panel's lack of representativeness depends on the influence of demographic factors on utilities for hypothetical conditions, which is an area of limited previous study. Age [54-56], sex[54,56], marital status[54], nationality[57], educational level[58] and ethnicity[59] have been demonstrated as being significant predictors of utility. Experience of illness appears to be a particularly important determinant of variation in preferences for hypothetical states [60-62].

The underlying reasons for variation in utilities for hypothetical states are not clear but may relate to risk attitude[63], the distribution of which in the general population is unclear, or to numeracy[64].

The extent to which the panel's utilities represent what would be obtained from a demographically representative panel is therefore unclear and may not, relative to other concerns, be of paramount importance. Firstly, most research to date has focussed on the effect of demographic factors on the absolute utilities for health states, rather than on the impact of these factors on the effect of moving between states. It is not clear, therefore, with the possible exception of current illness[62], whether demographic imbalance would result in different estimates of incremental effectiveness between health technologies competing for scarce health care resources. Secondly, variation in utilities arising from methodological factors (e.g. choice of rating task, perspective of rater) appears to be more influential. This suggests that, while analysts might be cautious about using utilities from a source which is not demographically balanced, they should be more averse to combining utilities from sources which use different methods in the same evaluation.

The use of computer-based preference elicitation is not new[17]. Sumner et al developed the Utiter programme in 1991[18]. This was followed by U-Maker[19], Gambler[20], iMPACT[21,22] and, more recently, ProSPEQT[23]. In addition, "one-off" computer based utility assessment has been used in a wide range of studies [65-67] and as a teaching tool[68]. Computer based utility measurement has potential advantages over interviewer-based methods: lower cost once software has been developed; elimination of interviewer variation; avoidance of transcription errors in data entry; potential to address logical errors automatically[22]; and increased flexibility over the time required to complete the task. Acceptability among members of the general public is reasonable, although the standard gamble has been rated as less acceptable than visual analogue scaling or time trade off in one study[69].

The use of the Internet is a logical extension to the development of computer-based utility measurement tools. The most technically sophisticated approach is iMPACT3, developed by Lenert and colleagues. This uses an object-oriented approach to facilitate the depiction of health states using written descriptions or multi-media presentations[24] and includes automatic error correction[70]. Ubel and colleagues have also developed a series of Internet-based tools, including the person trade off[71], for use in a range of experiments [25-28].

Lenert[72] suggests web based preference elicitation may reduce interviewer bias, although we are not aware of studies which have addressed this using the standard gamble. However, Damschroder et al[71] have compared computer based preference measurement using PTO to face to face interview and found no significant differences in: values obtained; occurrence of non-trading; or measures of logical consistency between the two modes.

Although the Value of Health Panel project shares many of the features of other Internet based preference measurement systems, it is unique in having recruited and maintained a group of members of the public who have expressed preferences on a wide range of health state descriptions. Recruitment was, however, not Internet-based. There are no published accounts of recruitment to preference studies using the Internet, although Ubel et al have reported obtaining a large representative sample of US citizens for one study[26]. Validation of the data obtained from such panels remains important, and logical consistency and procedural invariance are methods which may be applied[73]. Although some work has been carried out in this project[74], the area remains relatively under-studied in general.

The establishment of Internet panels for market research has increased dramatically in the past five years. Harris Interactive advertises a global panel of 1 million members, with 600,000 in the USA[75]. In the UK, YouGov has recruited a panel of 89,000 people through Internet advertising and floated on the Stock Exchange in 2005[76]. However, Internet penetration in the UK is only around 52%, and people who are likely to join Internet panels are more likely to be politically interested and knowledgeable than those less likely to participate[77].

Nevertheless, it seems likely that the upward trend in Internet access will continue, as will access to broadband technology. This presents important opportunities for preference measurement and research with, potentially, advantages over one-to-one interviews. For example, large numbers of people can be involved; alternatives to written descriptions can be used; costs are likely to be less than one to one interviews; automatic checks for illogical responses can be integrated; and various approaches to representing risk (or time) in preference measurement can be explored. In short, the potential for using the Internet in this field, to improve the application of cost utility analyses and address some of the important methodological challenges that exist in preference measurement, is only beginning to be exploited.

Authors' contributions

KS, RM, JB, JR and AR conceived the study and participated in its design.

KS coordinated the study, participated in statistical analyses and drafted the manuscript.

MD and TC participated in recruitment, health state development and statistical analyses.

All authors read and approved the final manuscript.

Funding

NHS R&D Programme; National Institute for Health and Clinical Excellence (NICE); NHS Quality Improvement Scotland (NHSQIS).

Acknowledgements

We are extremely grateful to the following for their help:

The members of the Value of Health Panel for their hard work and patience;

The patients and clinicians who provided help in the development of health state descriptions;

Joanne Perry for her role as project administrator throughout the project;

Dan Fall (University of Sheffield) and Stephen Elliott (Llama Digital) for website development;

Sam Ballani and Pam Royle for providing IMD and SIMD data.

References

  1. Hutton J, Brown R: Use of economic evaluation in decision-making: What needs to change? Value Health 2002, 5:65-66.

  2. Neumann PJ: Why don't Americans use cost-effectiveness analysis? American Journal of Managed Care 2005, 10:308-312.

  3. Sonnad S, Greenberg D, Rosen A, Neumann P: Diffusion of published cost-utility analyses in the field of health policy and practice. International Journal of Technology Assessment in Health Care 2005, 21:399-402.

  4. Glennie J, Torrance GW, Baladi J, Berka C, Hubbard E, Menon D, Otten N, Riviera M: The revised Canadian Guidelines for the Economic Evaluation of Pharmaceuticals. Pharmacoeconomics 1999, 15:459-468.

  5. Weinstein MC, Siegel JE, Gold MR, Kamlet MS, Russell LB: Recommendations of the Panel on Cost-Effectiveness in Health and Medicine: consensus statement. JAMA 1996, 276:1253-1258.

  6. National Institute for Clinical Excellence: Guide to the Methods of Technology Appraisal. London: National Institute for Clinical Excellence; 2003.

  7. Dolan P: Whose Preferences Count? Med Decis Making 1999, 19:482-486.

  8. Brazier J, Akehurst R, Brennan A, Dolan P, Claxton K, McCabe C, O'Hagan T, Sculpher M, Tsuchyia A: Should patients have a greater role in valuing health states: whose well-being is it anyway? [04/3]. Sheffield: School of Health and Related Research, University of Sheffield, Discussion Paper Series; 2004.

  9. Torrance G, Feeny D, Furlong W, Barr R, Zhang Y, Wang Q: Multiattribute utility functions for a comprehensive health status classification. Medical Care 1996, 34:702-722.

  10. Gafni A: Willingness to pay as a measure of benefits: relevant questions in the context of public decision making about health care programmes. Medical Care 1991, 29:1246-1252.

  11. De Wit GA, Busschbach JJ, De Charro FT: Sensitivity and perspective in the valuation of health status: whose values count?

    Health Econ 2000, 9:109-126. PubMed Abstract | Publisher Full Text OpenURL

  12. Buckingham K: A note on HYE (healthy years equivalent).

    Health Economics 1993, 12:301-309. Publisher Full Text OpenURL

  13. Ubel P, Richardson J, Menzel P: Societal value, the person trade-off, and the dilemma of whose values to measure for cost-effectiveness analysis.

    Health Economics 2000, 9:127-136. PubMed Abstract | Publisher Full Text OpenURL

  14. Furlong W, Oldridge N, Perkins A, Feeny D, Torrance GW: Community or Patient Preferences for Cost-Utility Analyses: Does it Matter?

    International Society for Pharmacoeconomics, ISPOR Conference, Arlington, Virginia 2003. OpenURL

  15. Stein K, Fry A, Round A, Milne R, Brazier J: What value health? A review of health state values used in early technology assessments for NICE.

    Applied Health Economics and Policy 2006. OpenURL

  16. Dolan P: The measurement of health related quality of life for use in resource allocation in health care. In Handbook of Health Economics. Edited by Culyer A, Newhouse J. London: Elsevier Science; 2002. OpenURL

  17. Brennan P, Strombom I: Improving health care by understanding patient preferences.

    Journal of the American Medical Informatics Association 1998, 5:257-262. PubMed Abstract | Publisher Full Text | PubMed Central Full Text OpenURL

  18. Sumner W, Nease R, Littenberg B: U-titer: a utility assessment tool.

    Proceedings of the Annual Symposium on Computer Application in Medical Care 1991, 701-705. OpenURL

  19. Sonnenberg FA: UMaker. New Jersey, Clinical Informatics Research Group, University of Medicine and Dentistry; 1993.

  20. Gonzalez B, Eckman G, et al.: Gambler: a computer workstation for patient utility assessment.

    Medical Decision Making 1992, 12:350. OpenURL

  21. Lenert L, Michelson D, Flowers C, Bergen M: IMPACT: an object-orientated graphical environment for construction of multimedia patient interviewing software.

    Proceedings of the Annual Symposium on Computer Application in Medical Care 1995, 319-323. OpenURL

  22. Lenert LA, Sturley A, Watson ME: iMPACT3: Internet-Based Development and Administration of Utility Elicitation Protocols.

    Med Decis Making 2002, 22:464-474. PubMed Abstract | Publisher Full Text OpenURL

  23. McFarlane P, Bayoumi A, Pierratos A, Redelmeier D: The quality of life and cost utility of home nocturnal and conventional in-center haemodialysis.

    Kidney International 2003, 64:1004-1011. PubMed Abstract | Publisher Full Text OpenURL

  24. Goldstein MK, Clarke AE, Michelson D, Garber AM, Bergen MR, Lenert LA: Developing and Testing a Multimedia Presentation of a Health-state Description.

    Med Decis Making 1994, 14:336-344. PubMed Abstract OpenURL

  25. Damschroder L, Zikmund-Fisher B, Kulpa J, Ubel P: Considering adaptation in preference elicitations.

    Society for Medical Decision Making Annual Conference; San Francisco 2005. OpenURL

  26. Damschroder L, Muroff J, Smith D, Ubel P: A reversal in the public/patient discrepancy: utility ratings for pain from pain patients are lower than from non-patients.

    Society for Medical Decision Making Annual Conference; San Francisco 2005. OpenURL

  27. Damschroder L, Ubel P, Zikmund-Fisher B, Kim S, Johri M: A randomized trial of a web-based deliberation exercise: improving the quality of healthcare allocation preference surveys.

    Society for Medical Decision Making Annual Conference; San Francisco 2005. OpenURL

  28. Baron J, Ubel P: Types of inconsistency in health-state utility judgements.

    Organizational Behaviour and Human Decision Processes 2002, 89:1100-1118. Publisher Full Text OpenURL

  29. Lenert LA, Goldstein MK, Bergen MR, Garber AM: The Effects of the Content of Health State Descriptions on the Between-Subject Variability in Preferences. California, USA, Stanford University; 2005:1-21.

  30. Lenert LA, Ziegler J, Lee T, Unfred C, Mahmoud R: The Risks of Multimedia Methods: Effects of Actor's Race and Gender on Preferences for Health States.

    J of the American Informatics Assn 2000, 7:177-185. OpenURL

  31. Marquet RL, Bartelds AI, van Noort SP, Koppeschaar CE, Paget J, Schellevis FG, van der ZJ: Internet-based monitoring of influenza-like illness (ILI) in the general population of the Netherlands during the 2003-2004 influenza season.

    BMC Public Health 2006, 6:242. PubMed Abstract | BioMed Central Full Text | PubMed Central Full Text OpenURL

  32. Hubbard PA, Broome ME, Antia LA: Pain, coping, and disability in adolescents and young adults with cystic fibrosis: a Web-based study.

    Pediatr Nurs 2005, 31:82-86. PubMed Abstract OpenURL

  33. Bowen A, Williams M, Horvath K: Using the internet to recruit rural MSM for HIV risk assessment: sampling issues.

    AIDS Behav 2004, 8:311-319. PubMed Abstract | Publisher Full Text OpenURL

  34. Fernandez MI, Varga LM, Perrino T, Collazo JB, Subiaul F, Rehbein A, Torres H, Castro M, Bowen GS: The Internet as recruitment tool for HIV studies: viable strategy for reaching at-risk Hispanic MSM in Miami?

    AIDS Care 2004, 16:953-963. PubMed Abstract | Publisher Full Text OpenURL

  35. Clarke G, Reid E, Eubanks D, O'Connor E, DeBar LL, Kelleher C, Lynch F, Nunley S: Overcoming depression on the Internet (ODIN): a randomized controlled trial of an Internet depression skills intervention program.

    J Med Internet Res 2002, 4:E14. PubMed Abstract | Publisher Full Text OpenURL

  36. Formica M, Kabbara K, Clark R, McAlindon T: Can clinical trials requiring frequent participant contact be conducted over the Internet? Results from an online randomized controlled trial evaluating a topical ointment for herpes labialis.

    J Med Internet Res 2004, 6:e6. PubMed Abstract | Publisher Full Text | PubMed Central Full Text OpenURL

  37. Rhodes SD, Bowie DA, Hergenrather KC: Collecting behavioural data using the world wide web: considerations for researchers.

    J Epidemiol Community Health 2003, 57:68-73. PubMed Abstract | Publisher Full Text OpenURL

  38. Koo M, Skinner H: Challenges of internet recruitment: a case study with disappointing results.

    J Med Internet Res 2005, 7:e6. PubMed Abstract | Publisher Full Text | PubMed Central Full Text OpenURL

  39. Formica M, Kabbara K, Clark R, McAlindon T: Can clinical trials requiring frequent participant contact be conducted over the Internet? Results from an online randomized controlled trial evaluating a topical ointment for herpes labialis.

    J Med Internet Res 2004, 6:e6. PubMed Abstract | Publisher Full Text | PubMed Central Full Text OpenURL

  40. Clarke G, Reid E, Eubanks D, O'Connor E, DeBar LL, Kelleher C, Lynch F, Nunley S: Overcoming depression on the Internet (ODIN): a randomized controlled trial of an Internet depression skills intervention program.

    J Med Internet Res 2002, 4:E14. PubMed Abstract | Publisher Full Text OpenURL

  41. Scholle SH, Peele PB, Kelleher KJ, Frank E, Jansen-McWilliams L, Kupfer D: Effect of different recruitment sources on the composition of a bipolar disorder case registry.

    Soc Psychiatry Psychiatr Epidemiol 2000, 35:220-227. PubMed Abstract | Publisher Full Text OpenURL

  42. Etter JF, Perneger TV: A comparison of cigarette smokers recruited through the Internet or by mail.

    Int J Epidemiol 2001, 30:521-525. PubMed Abstract | Publisher Full Text OpenURL

  43. Im EO, Chee W: Methodological issues in the recruitment of ethnic minority subjects to research via the Internet: a discussion paper.

    Int J Nurs Stud 2005, 42:923-929. PubMed Abstract | Publisher Full Text OpenURL

  44. Ross MW, Mansson SA, Daneback K, Cooper A, Tikkanen R: Biases in internet sexual health samples: comparison of an internet sexuality survey and a national sexual health survey in Sweden.

    Soc Sci Med 2005, 61:245-252. PubMed Abstract | Publisher Full Text OpenURL

  45. Index Multiple Deprivation 2000 [http://www.go-wm.gov.uk/regionalintelligence/deprivation] webcite

  46. Schunemann H, Stahl H, Austin P, Akl E, Armstrong D, Guyatt G: A comparison of narrative and table formats for presenting hypothetical health states to patients with gastrointestinal or pulmonary disease.

    Medical Decision Making 2004, 24:53-60. PubMed Abstract | Publisher Full Text OpenURL

  47. Dolan P, Gudex C: Time Preference, Duration and Health State Valuations.

    Health Economics 1995, 4:289-299. PubMed Abstract OpenURL

  48. von Neumann J, Morganstern O: THeory of Games and Economic Behaviour. 2nd edition. Princeton: Princeton University Press; 1947. OpenURL

  49. Lenert LA, Cher DJ, Goldstein MK, Bergen MR, Garber A: The Effect of Search Procedures on Utility Elicitations.

    Med Decis Making 1998, 18:76-83. PubMed Abstract | Publisher Full Text OpenURL

  50. Scottish Index of Multiple Deprivation: Summary Technical Report.

    Edinburgh, Scottish Executive 2004. OpenURL

  51. Noble M, Wright G, Dibben C, Smith GAN, McLennan D, Anttila C, Barnes H, Mokhtar C, Noble S, Avenell D, Gardner J, Covizzi I, Lloyd M: Indices of Deprivation 2004: Report to the Office of the Deputy Prime Minister. London, Neighbourhood Renewal Unit; 2004.

  52. Reips UD: Standards for Internet-based experimenting.

    Exp Psychol 2002, 49:243-256. PubMed Abstract | Publisher Full Text OpenURL

  53. Bartlett C, Doyal L, Ebrahim S, DAvey P, Bachmann M, Egger M, Dieppe P: The causes and effects of socio-demographic exclusions from clinical trials.

    Health Technology Assessment 2005., 9 PubMed Abstract | Publisher Full Text OpenURL

  54. Dolan P, Roberts J: To what extent can we explain time trade-off values from other information about respondents?

    Soc Sci Med 2002, 54:919-929. PubMed Abstract | Publisher Full Text OpenURL

  55. Dolan P: Effect of age on health state valuations.

    J Health Serv Res Policy 2000, 5:17-21. PubMed Abstract OpenURL

  56. Ashby J, Hanlon M, Buxton MJ: The time trade-off technique: how do the valuations of breast cancer patients compare to those of other groups?

    Quality of Life Research 1994, 3:257-265. PubMed Abstract | Publisher Full Text OpenURL

  57. Badia X, Roset M, Herdman M, Kind P: A comparison of United Kingdom and Spanish general population time trade-off values for EQ-5D health states.

    Medical Decision Making 2001, 21:7-16. PubMed Abstract | Publisher Full Text OpenURL

  58. Sims T, Garber A, Goldstein M: Does education really matter? Examining the role of education in health preferences among older adults.

    Society for Medical Decision Making, Annual Meeting; San Francisco 2005. OpenURL

  59. Cykert S, Joines JD, Kissling G, Hansen CJ: Racial differences in patients' perceptions of debilitated health states.

    J Gen Intern Med 1999, 14:217-222. PubMed Abstract | Publisher Full Text | PubMed Central Full Text OpenURL

  60. Dolan P: The Effect of Experience of Illness on Health State Valuations.

    J Clin Epidemiol 1996, 49:551-564. PubMed Abstract | Publisher Full Text OpenURL

  61. King JT, Tsevat J, Roberts MS: Positive Association between Current Health and Health Values for Hypothetical Disease States.

    Medical Decision Making 2004, 24:367-378. PubMed Abstract | Publisher Full Text OpenURL

  62. Lenert L, Treadwell JR, Schwartz C: Associations Between Health Status and Utilities Implications for Policy – Impact of Illness.

    Med Care 1999, 37:479-489. PubMed Abstract | Publisher Full Text OpenURL

  63. Rosen A, Tsai J, DOwns S: Variations in risk attitude across race, gender and education.

    Medical Decision Making 2003, 23:511-517. PubMed Abstract | Publisher Full Text OpenURL

  64. Woloshin S, Schwartz L, Moncur M, Gabriel S, Tosteson A: Assessing values for health: Numeracy matters.

    Med Decis Making 2001, 21:382-390. PubMed Abstract | Publisher Full Text OpenURL

  65. Gerson L, Ullah N, Hastie T, Triadafilopoulos G, Goldstein M: Patient derived health state utilities for gastroesophageal reflux disease.

    American Journal of Gastroenterology 2005, 100:524-533. PubMed Abstract | Publisher Full Text OpenURL

  66. Munakata J, Woolcott J, Anis A, Sculpher M, Yu W, Sanders G, et al.: Design of a prospective economic evaluation for a tri-national clinical trial in HIV patients (OPTIMA).

    Society for Medical Decision Making Annual Conference; San Francisco 2005. OpenURL

  67. Tosteson A, Kneeland T, Nease R, Sumner W: Automated Current Health Time-Trade-Off Assessments in Women's Health.

    Value in Health 2002, 5:98-105. PubMed Abstract | Publisher Full Text OpenURL

  68. Utility Assessment [http://araw.mede.uic.edu/cgi-bin/utility.cgi] webcite

  69. Lenert LA, Sturley AE: Acceptability of Computerized Visual Analog Scale, Time Trade-off and Standard Gamble Rating Methods in Patients and the Public.

    AMI Association Proceedings 2001. OpenURL

  70. Lenert L, Sturley A, Rupnow M: Toward improved methods for measurement of utility: automated repair of errors in elicitation.

    Medical Decision Making 2003, 23:67-75. PubMed Abstract | Publisher Full Text OpenURL

  71. Damschroder L, Baron J, Hershey J, Asch D, Jepson C, Ubel P: The validity of person tradeoff measurements: randomized trial of computer elicitation versus face-to-face interview.

    Medical Decision Making 2004, 24:170-180. PubMed Abstract | Publisher Full Text OpenURL

  72. Lenert L: Web-based Assessment of Patients' Preferences. California, USA, University of California, San Diego; 2006.

  73. Lenert L: Validity and interpretation of preference-based measures of quality of life.

    2006.

  74. Stein K, Ratcliffe J, Milne R, Round A, Brazier J: Construct validity of utility data obtained from an internet panel of members of the public.

    Society for Medical Decision Making Annual Meeting Annual Meeting; Boston 2006. OpenURL

  75. Harris Interactive [http://www.harrisinteractive.com] webcite

    accessed December 2005

  76. YouGov: Polling for a Profit [http://www.yougov.com/corporate/pdf/analystYGFloat_1.pdf] webcite

  77. Baker K, Curtice J, Sparrow N: Internet Poll Trial: Research Report. London, ICM Research; 2003.