C.2. Results of the re-weighting models

C.2.1. Case study 1

The results of the WECL models (WTA estimates, SEs and 95% CIs) from case study 1 are presented in Table C1. As expected, the SLL increases in WECL2. The D-error and the SEs around two WTA estimates (namely, travel time and number of screening tests) are higher in WECL2 than in WECL1, which is consistent with our theoretical predictions. In WECL3 and WECL4, the SEs of all WTA estimates are lower, as is the D-error, indicating higher statistical precision. The model with the greatest impact on statistical efficiency is WECL4, in which the efficiency improvements range from 23% to 69%. These improvements are particularly important for two WTA measures, false-positive results and screening tests: the SE of the WTA for false-positive results decreases by 41% and the SE of the WTA for screening tests decreases by 69%.

In WECL3 and WECL4, there is also a decrease in the WTA estimates for false-positive results and screening tests, the latter falling to 89 additional screening tests accepted to save one (statistical) life from breast cancer.
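To make the efficiency comparison concrete, the sketch below states the standard definitions we take these measures to follow; the notation ($\Omega$, $K$, $\mathrm{SE}_{\mathrm{WECL}m}$) is ours and the formulas are the conventional ones rather than expressions quoted from the estimation output.

\[
\text{D-error} \;=\; \det\!\big(\Omega(\hat{\beta})\big)^{1/K},
\]

where $\Omega(\hat{\beta})$ is the asymptotic variance-covariance matrix of the $K$ estimated parameters; a lower D-error corresponds to a more statistically efficient model. The reported efficiency gains are relative reductions in standard errors,

\[
\Delta_{\mathrm{SE}}(m) \;=\; 100 \times \frac{\mathrm{SE}_{\mathrm{WECL1}} - \mathrm{SE}_{\mathrm{WECL}m}}{\mathrm{SE}_{\mathrm{WECL1}}},
\]

so that, for instance, the 41% reduction reported for the WTA for false-positive results in WECL4 means $\mathrm{SE}_{\mathrm{WECL4}} = 0.59 \times \mathrm{SE}_{\mathrm{WECL1}}$.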

C.2.2. Case study 2

The results of the WECL models (WTP estimates, SEs and 95% CIs) from case study 2 are presented in Table C2. They are in line with a priori expectations and consistent with those from case study 1. The SLL increases in WECL2 compared with WECL1, and decreases in WECL3 and WECL4. The SEs around the WTP estimates are all higher (by 7% to 9%) in WECL2 than in WECL1, and the D-error is higher. Conversely, in WECL3 and WECL4, all the SEs around the WTP estimates, as well as the D-error, are lower. WECL3 provides the greatest improvement in statistical efficiency. Compared with case study 1, the re-weighting function has a smaller impact on the precision of the welfare estimates, with reductions in SEs ranging from 2% to 4%.
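The WTP measures compared here are marginal rates of substitution between each attribute and the cost attribute. Assuming the conventional ratio-of-coefficients form and a delta-method approximation for the precision of the ratio (both assumptions on our part, not statements about the exact estimation code), each welfare estimate and its standard error can be written as

\[
\mathrm{WTP}_k \;=\; -\,\frac{\hat{\beta}_k}{\hat{\beta}_c},
\qquad
\mathrm{Var}(\mathrm{WTP}_k) \;\approx\; \nabla g(\hat{\beta})^{\top}\,\Omega(\hat{\beta})\,\nabla g(\hat{\beta}),
\quad g(\beta) = -\frac{\beta_k}{\beta_c},
\]

where $\hat{\beta}_c$ is the cost coefficient and $\Omega(\hat{\beta})$ the variance-covariance matrix of the estimates; the 95% CI is then $\mathrm{WTP}_k \pm 1.96\,\sqrt{\mathrm{Var}(\mathrm{WTP}_k)}$. Under this reading, the 7% to 9% increases in SEs under WECL2 translate directly into proportionally wider confidence intervals around each WTP estimate.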