The Uber users were recruited through Amazon’s Mechanical Turk (MTurk), an online crowdsourcing system. To address known issues with MTurk respondents, we followed the four categories of screening techniques from Goodman, Cryder, and Cheema (2013). First, we imposed a minimum survey completion time of 100 seconds. Since the survey contained a total of 57 questions, a response submitted in less than 100 seconds was considered inadequate and was excluded. Second, we included a pre-screening question, which required respondents to have used the Uber mobile application in the past 12 months. At the beginning of the survey, we asked a screening question (i.e., “How many times have you used the Uber mobile application for requesting transportation in the last 12 months?”) to verify that respondents had used the Uber mobile application. Respondents who indicated that they had never used the Uber mobile application were eliminated from the sample. Third, each respondent was compensated US$0.50 for successfully completing the survey, which encouraged a better response rate and could enhance the validity and generalizability of the data. Lastly, three attention check questions were included to identify and remove “poor” participants from the data.
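To illustrate how these screening rules translate into data cleaning, the following is a minimal sketch in Python/pandas. The column names (completion_seconds, uber_uses_12mo, attention_checks_passed) and the sample rows are hypothetical placeholders, not taken from the study; note that the third technique (compensation) is an incentive for participation rather than a data filter, so it does not appear as a condition in the code.

```python
import pandas as pd

# Hypothetical raw MTurk responses; column names and values are illustrative only.
responses = pd.DataFrame({
    "worker_id": ["A1", "A2", "A3", "A4"],
    "completion_seconds": [412, 85, 230, 515],   # total time spent on the survey
    "uber_uses_12mo": [3, 10, 0, 5],             # answer to the pre-screening question
    "attention_checks_passed": [3, 3, 3, 2],     # out of 3 attention check items
})

MIN_SECONDS = 100        # minimum plausible time for a 57-question survey
N_ATTENTION_CHECKS = 3   # all three attention checks must be passed

screened = responses[
    (responses["completion_seconds"] >= MIN_SECONDS)                  # rule 1: drop speeders
    & (responses["uber_uses_12mo"] > 0)                               # rule 2: must have used the Uber app
    & (responses["attention_checks_passed"] == N_ATTENTION_CHECKS)   # rule 4: drop inattentive respondents
]

print(screened["worker_id"].tolist())  # -> ['A1'] (A2 too fast, A3 never used Uber, A4 failed a check)
```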