{"id":101299,"date":"2020-02-18T16:40:18","date_gmt":"2020-02-18T21:40:18","guid":{"rendered":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/2020\/02\/18\/two-common-checks-fail-to-catch-most-bogus-cases\/"},"modified":"2024-07-26T16:41:54","modified_gmt":"2024-07-26T20:41:54","slug":"two-common-checks-fail-to-catch-most-bogus-cases","status":"publish","type":"post","link":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/2020\/02\/18\/two-common-checks-fail-to-catch-most-bogus-cases\/","title":{"rendered":"4. Two common checks fail to catch most bogus cases"},"content":{"rendered":"<p class=\"wp-block-paragraph\">A number of data quality checks have been developed for online surveys. Examples include flagging respondents who fail an attention check (or trap) question, complete the survey too quickly (speeders), give rounded numeric answers, or give the same or nearly the same answer to each question in a battery of questions (straight-lining). Perhaps the two most common of these are the flags for failing an attention check and for speeding.[18. numoffset=\"18\" Some have <a href=\"https:\/\/www.qualtrics.com\/blog\/using-attention-checks-in-your-surveys-may-harm-data-quality\/\">recommended<\/a> against attention check questions, as they have been found to harm data quality in questions asked later in the survey. 
That said, attention checks are still fairly common practice among researchers using opt-in sources.]<\/p>\n\n<figure class=\"wp-block-image alignright\"><a href=\"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/?attachment_id=819\" rel=\"attachment wp-att-819\"><img decoding=\"async\" class=\"wp-image-819\" src=\"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/wp-content\/uploads\/sites\/10\/2020\/02\/PM_02.18.20_dataquality-04-03.png\" alt=\"Most bogus respondents pass checks for speeding and attention\"><\/a><\/figure>\n\n<p class=\"wp-block-paragraph\">A key question is whether these common checks are sufficient for helping pollsters identify and remove bogus respondents before they bias public poll results. This analysis defines a bogus respondent as someone who did any of four things: reported living outside the country, gave multiple non sequitur answers, took the survey multiple times, or always said they approve\/favor regardless of what was asked.[19. This definition was selected because the behaviors are fairly egregious. Other behaviors (such as claiming to follow a very obscure news story) could conceivably be considered bogus. But to the extent that less egregious behaviors are included in the definition, the risk of mischaracterizing mostly genuine interviews increases.] The rate of bogus respondents was 7% in the crowdsourced poll, 5% on average in the three opt-in panel polls, and 1% on average in the two address-recruited panel polls.<\/p>\n\n<p class=\"wp-block-paragraph\">The attention check question in this study read, \u201cPaying attention and reading the instructions carefully is critical. If you are paying attention, please choose Silver below.\u201d Overall, 1.4% of the 62,639 respondents in the study failed the attention check by selecting an answer other than \u201cSilver.\u201d Among the bogus cases, most of them passed the attention check (84%). 
In other words, a standard attention check fails to detect the large majority of cases giving the kind of low-quality, biasing data that bogus respondents produce. This result suggests that respondents giving bogus data are not answering at random without reading the question \u2013 the behavior attention checks are designed to catch. Instead, it corroborates the finding from the open-ended data that some bogus respondents, especially those from the crowdsourcing platform, try very hard to give answers they think will be acceptable.<\/p>\n\n<figure class=\"wp-block-image alignright\"><a href=\"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/?attachment_id=818\" rel=\"attachment wp-att-818\"><img data-dominant-color=\"6e5d53\" data-has-transparency=\"false\" style=\"--dominant-color: #6e5d53;\" decoding=\"async\" sizes=\"(max-width: 1024px) 100vw, 1024px\" class=\"wp-image-818 not-transparent\" src=\"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/wp-content\/uploads\/sites\/10\/2020\/02\/PM_02.18.20_dataquality-04-02.png\" alt=\"In crowdsourced poll, bogus respondents took over 3 min. longer to complete the survey than others\"><\/a><\/figure>\n\n<p class=\"wp-block-paragraph\">Results for speeding were similar.[20. For five of the six samples, speeding was defined using screen-level response time data. For the crowdsourced sample, however, time spent on each screen was not available, and so speeding was defined using the time it took to complete the entire survey, which includes time spent on the introduction and closing screens, as well as questions that were not administered to all samples (see <a href=\"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/wp-content\/uploads\/sites\/10\/2020\/02\/PM.02.18.20_dataquality_AppendixE.pdf\">Appendix E<\/a>). 
The proportion of crowdsourced respondents flagged as speeding is thus lower than it would have been had screen-level timings been available.] Overall, 1.5% of the 62,639 study respondents were flagged for speeding, defined as completing the survey in under three minutes (the median response time was seven minutes). Among the bogus cases, about nine-in-ten (87%) were not speeders.[21. Sensitivity analysis shows that if speeding is instead defined as answering in under four minutes, the share of all study respondents coded as speeding would increase from 1.5% to 5.6%. Under this more expansive definition, 75% of bogus respondents would still pass (i.e., not be flagged for speeding).]<\/p>\n\n<p class=\"wp-block-paragraph\">This suggests that a check for too-fast interviews is largely ineffective for detecting cases that are either giving bogus answers or should not be in the survey at all. In the crowdsourced sample, the bogus respondents had a longer median completion time than other respondents (701 versus 489 seconds, respectively).<\/p>\n\n<p class=\"wp-block-paragraph\">These results are consistent with the findings from other research teams. Both Ahler and colleagues (2019) and TurkPrime (2018) found that fraudulent crowdsourced respondents were unlikely to speed through the questionnaire. Ahler and colleagues found that \u201cpotential trolls and potentially fraudulent IP addresses take significantly longer on the survey on average.\u201d The TurkPrime study found that crowdsourced workers operating through server farms to hide their true location took nearly twice as long to complete the questionnaire as those not using a server farm. 
They note that their result is consistent with the idea that respondents using server farms \u201ca) have a hard time reading and understanding English and so they spend longer on questions\u201d and \u201cb) are taking multiple HITs at once.\u201d<\/p>\n\n<figure class=\"wp-block-image alignright\"><a href=\"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/?attachment_id=817\" rel=\"attachment wp-att-817\"><img decoding=\"async\" class=\"wp-image-817\" src=\"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/wp-content\/uploads\/sites\/10\/2020\/02\/PM_02.18.20_dataquality-04-01.png\" alt=\"After removing speeders and attention check failures, most bogus cases remain\"><\/a><\/figure>\n\n<p class=\"wp-block-paragraph\">Using the union of the two flags is also only partially effective as a means of identifying bogus respondents. About three-quarters (76%) of bogus cases pass both the attention check and the fast response check. Purging based on speeding and a trap question appears to be somewhat more effective for opt-in and address-recruited panels than the crowdsourced sample. On average, those flags removed 29% of the cases identified as bogus in the opt-in and address-recruited panels but just 7% of the bogus cases in the crowdsourced sample. In sum, these two common data quality checks seem to help but appear to be far from sufficient in terms of removing most bogus interviews.<\/p>\n\n<h4 id=\"respondents-taking-the-survey-multiple-times-was-rare-and-limited-to-opt-in-sources\" class=\"wp-block-heading\">Respondents taking the survey multiple times was rare and limited to opt-in sources<\/h4>\n\n<p class=\"wp-block-paragraph\">Another possible quality check is to look for instances where two or more respondents have highly similar answers across the board. 
As with duplicate IP addresses, highly similar sets of answers can indicate that the same person took the survey more than once.<\/p>\n\n<p class=\"wp-block-paragraph\">Whether a pair of interviews having the same answers on a large proportion of closed-ended questions indicates duplication is <a href=\"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/2016\/02\/23\/evaluating-a-new-proposal-for-detecting-data-falsification-in-surveys\/\">exceedingly tricky<\/a> to figure out, because various survey features such as the number of questions, the number of response options, the number of respondents, and the homogeneity within the surveyed population affect how natural it is for any two respondents to have very similar answers. However, because the questionnaire in this study also included six open-ended questions, it was possible to identify potential duplicate respondents with much higher confidence.<\/p>\n\n<p class=\"wp-block-paragraph\">For each open-ended question, researchers compared each respondent\u2019s answer to all the other respondents\u2019 answers using a metric for measuring the similarity between two strings of text.[22. It is also possible that the same respondent might end up in more than one sample and thus take the survey more than once that way, but the computational cost of comparing open-ended responses between samples was judged to be too high.] This was done separately for each of the six samples. If, for a particular pair of respondents, three or more of their answers to the six open-ended questions exceeded a certain threshold, that pair was flagged for manual review. A researcher then reviewed each pair to assess whether they were a probable duplicate based on word choice and phrasing across multiple open-ended questions. 
When similar answers consisted entirely of short, common words (e.g., \u201cgood\u201d or \u201cnot sure\u201d), researchers did not consider that sufficiently strong evidence of a duplicate, as there is not enough lexical content to make a confident determination.<\/p>\n\n<p class=\"wp-block-paragraph\">At the end of this process, researchers found that duplicates represented 0.3% of all interviews. The incidence of duplicates was highest in the crowdsourced sample (1.1%), while in the opt-in samples, the incidence ranged from 0.1% to 0.3%. No duplicate interviews were identified in the address-recruited samples.<\/p>\n\n<p class=\"wp-block-paragraph\">Researchers examined whether having an IP address flagged as a duplicate (as described in Chapter 3) was related to the interview being flagged as a duplicate based on this analysis of open-ended answers. While there was a relationship, relying on IP addresses alone to detect people answering the survey multiple times is insufficient. Out of the 172 respondents flagged as duplicates based on their open-ended answers, there were 150 unique IP addresses.<\/p>\n\n<figure class=\"wp-block-image aligncenter\"><a href=\"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/?attachment_id=816\" rel=\"attachment wp-att-816\"><img decoding=\"async\" class=\"wp-image-816\" src=\"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/wp-content\/uploads\/sites\/10\/2020\/02\/PM_02.18.20_dataquality-04-00.png\" alt=\"Open-ended questions helped to identify instances of people taking the survey multiple times\"><\/a><\/figure>","protected":false},"excerpt":{"rendered":"<p>A number of data quality checks have been developed for online surveys. Examples include flagging respondents who fail an attention check (or trap) question, complete the survey too quickly (speeders), give rounded numeric answers, or give the same or nearly the same answer to each question in a battery of questions (straight-lining). 
Perhaps the two [&hellip;]<\/p>\n","protected":false},"author":367,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_crdt_document":"","sub_headline":"","sub_title":"","_prc_public_revisions":[],"_ppp_expiration_hours":0,"_ppp_enabled":false,"ai_generated_summary":"","prc_watchers":[],"relatedPosts":[],"reportMaterials":[],"multiSectionReport":[],"package_parts__enabled":false,"package_parts":[],"_prc_fork_parent":0,"_prc_fork_status":"","_prc_active_fork":0,"datacite_doi":"","datacite_doi_citation":"","_prc_seo_qr_attachment_id":0,"spoken_article_player_enabled":true,"bylines":[],"acknowledgements":[],"displayBylines":true,"footnotes":""},"categories":[36,359],"tags":[],"bylines":[968,719,631,2198,697,779,967],"collection":[],"datasets":[2007],"level_of_effort":[],"primary_audience":[],"information_type":[],"_post_visibility":[],"formats":[458],"_fund_pool":[],"languages":[],"regions-countries":[],"research-teams":[528],"workflow-status":[],"class_list":["post-101299","post","type-post","status-publish","format-standard","hentry","category-methodological-research","category-nonprobability-surveys","bylines-andrew-mercer","bylines-arnold-lau","bylines-courtney-kennedy","bylines-dorene-asare-marfo","bylines-joshua-ferno","bylines-nick-hatley","bylines-scott-keeter","datasets-assessing-risk-to-online-polls-dataset","formats-report","research-teams-methods"],"label":false,"post_parent":101287,"word_count":1291,"canonical_url":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/2020\/02\/18\/two-common-checks-fail-to-catch-most-bogus-cases\/","art_direction":{"A1":{"id":121059,"rawUrl":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-content\/uploads\/sites\/20\/2020\/02\/PM_20.02.18_Panel-Data-Quality_promo_fetaured_crop.png","url":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-content\/uploads\/sites\/20\/2020\/02\/PM_20.02.18_Panel-Data-Quality_promo_fetaured_crop.png?w=564&h
=317&crop=1","width":564,"height":317,"chartArt":false},"A2":{"id":121059,"rawUrl":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-content\/uploads\/sites\/20\/2020\/02\/PM_20.02.18_Panel-Data-Quality_promo_fetaured_crop.png","url":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-content\/uploads\/sites\/20\/2020\/02\/PM_20.02.18_Panel-Data-Quality_promo_fetaured_crop.png?w=268&h=151&crop=1","width":268,"height":151,"chartArt":false},"A3":{"id":121059,"rawUrl":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-content\/uploads\/sites\/20\/2020\/02\/PM_20.02.18_Panel-Data-Quality_promo_fetaured_crop.png","url":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-content\/uploads\/sites\/20\/2020\/02\/PM_20.02.18_Panel-Data-Quality_promo_fetaured_crop.png?w=194&h=110&crop=1","width":194,"height":110,"chartArt":false},"A4":{"id":121059,"rawUrl":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-content\/uploads\/sites\/20\/2020\/02\/PM_20.02.18_Panel-Data-Quality_promo_fetaured_crop.png","url":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-content\/uploads\/sites\/20\/2020\/02\/PM_20.02.18_Panel-Data-Quality_promo_fetaured_crop.png?w=268&h=151&crop=1","width":268,"height":151,"chartArt":false},"XL":{"id":121059,"rawUrl":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-content\/uploads\/sites\/20\/2020\/02\/PM_20.02.18_Panel-Data-Quality_promo_fetaured_crop.png","url":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-content\/uploads\/sites\/20\/2020\/02\/PM_20.02.18_Panel-Data-Quality_promo_fetaured_crop.png?w=720&h=405&crop=1","width":720,"height":405,"chartArt":false},"social":{"id":121056,"rawUrl":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-content\/uploads\/sites\/20\/2020\/02\/PM_20.02.18_Panel-Data-Quality_promo_Social-media-image640px.png","url":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-content\/uploads\/sites\/20\/2020\/02\/PM_20.02.18_Panel-Data-Quality_promo_Social-media-image640px.png?w=1200&h=628&crop=
1","width":1200,"height":628,"chartArt":false}},"_embeds":[],"watchers":[],"table_of_contents":[{"id":101287,"title":"Assessing the Risks to Online Polls From Bogus Respondents","slug":"assessing-the-risks-to-online-polls-from-bogus-respondents","link":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/2020\/02\/18\/assessing-the-risks-to-online-polls-from-bogus-respondents\/","is_active":false},{"id":101292,"title":"1. Answers that did not match the question were concentrated in opt-in polls","slug":"answers-that-did-not-match-the-question-were-concentrated-in-opt-in-polls","link":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/2020\/02\/18\/answers-that-did-not-match-the-question-were-concentrated-in-opt-in-polls\/","is_active":false},{"id":101295,"title":"2. Respondents who approve of everything","slug":"respondents-who-approve-of-everything","link":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/2020\/02\/18\/respondents-who-approve-of-everything\/","is_active":false},{"id":101296,"title":"3. Imperfect metrics of whether respondents live in the U.S.","slug":"imperfect-metrics-of-whether-respondents-live-in-the-u-s","link":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/2020\/02\/18\/imperfect-metrics-of-whether-respondents-live-in-the-u-s\/","is_active":false},{"id":101299,"title":"4. Two common checks fail to catch most bogus cases","slug":"two-common-checks-fail-to-catch-most-bogus-cases","link":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/2020\/02\/18\/two-common-checks-fail-to-catch-most-bogus-cases\/","is_active":true},{"id":101302,"title":"5. Bogus respondents bias poll results, not merely add noise","slug":"bogus-respondents-bias-poll-results-not-merely-add-noise","link":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/2020\/02\/18\/bogus-respondents-bias-poll-results-not-merely-add-noise\/","is_active":false},{"id":101308,"title":"6. 
Cases tripping flags for bogus data disproportionately say they are Hispanic","slug":"cases-tripping-flags-for-bogus-data-disproportionately-say-they-are-hispanic","link":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/2020\/02\/18\/cases-tripping-flags-for-bogus-data-disproportionately-say-they-are-hispanic\/","is_active":false},{"id":101315,"title":"7. Other tests for attentiveness show mixed results","slug":"other-tests-for-attentiveness-show-mixed-results","link":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/2020\/02\/18\/other-tests-for-attentiveness-show-mixed-results\/","is_active":false},{"id":101323,"title":"8. Results from a follow-up data collection","slug":"results-from-a-follow-up-data-collection","link":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/2020\/02\/18\/results-from-a-follow-up-data-collection\/","is_active":false},{"id":101328,"title":"9. Conclusions","slug":"conclusions","link":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/2020\/02\/18\/conclusions\/","is_active":false},{"id":101335,"title":"Acknowledgements","slug":"acknowledgements-13-2","link":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/2020\/02\/18\/acknowledgements-13-2\/","is_active":false},{"id":101341,"title":"Appendix A: Survey methodology","slug":"appendix-a-survey-methodology-2-4","link":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/2020\/02\/18\/appendix-a-survey-methodology-2-4\/","is_active":false}],"report_materials":[{"key":"3ec84beb-92a5-4222-bc0d-e7f9dbc2ca9b","type":"report","url":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/wp-content\/uploads\/sites\/10\/2020\/02\/PM_02.18.20_dataquality_FULL.REPORT.pdf","label":"","icon":"","attachmentId":""},{"key":"bee99ed7-c1b7-4760-93e4-7766985a5601","type":"link","url":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/wp-content\/uploads\/sites\/10\/2020\/02\/PM_02.18.20_dataquality_Appendix-B.pdf","label":"Appendix B: 
Protocol for coding open-ended answers","icon":"supplemental","attachmentId":""},{"key":"6275fa1a-e0bd-492c-bc9e-0b7ccb459ed7","type":"link","url":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/wp-content\/uploads\/sites\/10\/2020\/02\/PM_02.18.20_dataquality_AppendixC.pdf","label":"Appendix C: Reliability analysis for open-ended codes","icon":"supplemental","attachmentId":""},{"key":"293dc043-aa67-48cd-9a01-0bf0599340b7","type":"link","url":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/wp-content\/uploads\/sites\/10\/2020\/02\/PM_02.18.20.dataquality_APPENDIX-D-.xlsx","label":"Appendix D: Plagiarized websites","icon":"report","attachmentId":""},{"key":"6ad19242-cdbc-43fa-bd7e-18b2082dc6fb","type":"link","url":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/wp-content\/uploads\/sites\/10\/2020\/02\/PM.02.18.20_dataquality_AppendixE.pdf","label":"Appendix E: Questionnaire","icon":"topline","attachmentId":""},{"key":"420d41e1-ebf1-433e-ab6a-a041f08a73c6","type":"link","url":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/dataset\/assessing-the-risks-to-online-polls-follow-up-study-dataset\/","label":"Dataset: Follow-up study","icon":"detailed-tables","attachmentId":""},{"type":"dataset","id":2007,"label":"Assessing Risk to Online Polls Dataset","url":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/dataset\/assessing-risk-to-online-polls-dataset\/"}],"report_pagination":{"current_post":{"id":101299,"title":"4. Two common checks fail to catch most bogus cases","slug":"two-common-checks-fail-to-catch-most-bogus-cases","link":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/2020\/02\/18\/two-common-checks-fail-to-catch-most-bogus-cases\/","is_active":true,"page_num":5},"next_post":{"id":101302,"title":"5. 
Bogus respondents bias poll results, not merely add noise","slug":"bogus-respondents-bias-poll-results-not-merely-add-noise","link":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/2020\/02\/18\/bogus-respondents-bias-poll-results-not-merely-add-noise\/","is_active":false,"page_num":6},"previous_post":{"id":101296,"title":"3. Imperfect metrics of whether respondents live in the U.S.","slug":"imperfect-metrics-of-whether-respondents-live-in-the-u-s","link":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/2020\/02\/18\/imperfect-metrics-of-whether-respondents-live-in-the-u-s\/","is_active":false,"page_num":4},"pagination_items":[{"id":101287,"title":"Assessing the Risks to Online Polls From Bogus Respondents","slug":"assessing-the-risks-to-online-polls-from-bogus-respondents","link":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/2020\/02\/18\/assessing-the-risks-to-online-polls-from-bogus-respondents\/","is_active":false,"page_num":1},{"id":101292,"title":"1. Answers that did not match the question were concentrated in opt-in polls","slug":"answers-that-did-not-match-the-question-were-concentrated-in-opt-in-polls","link":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/2020\/02\/18\/answers-that-did-not-match-the-question-were-concentrated-in-opt-in-polls\/","is_active":false,"page_num":2},{"id":101295,"title":"2. Respondents who approve of everything","slug":"respondents-who-approve-of-everything","link":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/2020\/02\/18\/respondents-who-approve-of-everything\/","is_active":false,"page_num":3},{"id":101296,"title":"3. Imperfect metrics of whether respondents live in the U.S.","slug":"imperfect-metrics-of-whether-respondents-live-in-the-u-s","link":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/2020\/02\/18\/imperfect-metrics-of-whether-respondents-live-in-the-u-s\/","is_active":false,"page_num":4},{"id":101299,"title":"4. 
Two common checks fail to catch most bogus cases","slug":"two-common-checks-fail-to-catch-most-bogus-cases","link":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/2020\/02\/18\/two-common-checks-fail-to-catch-most-bogus-cases\/","is_active":true,"page_num":5},{"id":101302,"title":"5. Bogus respondents bias poll results, not merely add noise","slug":"bogus-respondents-bias-poll-results-not-merely-add-noise","link":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/2020\/02\/18\/bogus-respondents-bias-poll-results-not-merely-add-noise\/","is_active":false,"page_num":6},{"id":101308,"title":"6. Cases tripping flags for bogus data disproportionately say they are Hispanic","slug":"cases-tripping-flags-for-bogus-data-disproportionately-say-they-are-hispanic","link":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/2020\/02\/18\/cases-tripping-flags-for-bogus-data-disproportionately-say-they-are-hispanic\/","is_active":false,"page_num":7},{"id":101315,"title":"7. Other tests for attentiveness show mixed results","slug":"other-tests-for-attentiveness-show-mixed-results","link":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/2020\/02\/18\/other-tests-for-attentiveness-show-mixed-results\/","is_active":false,"page_num":8},{"id":101323,"title":"8. Results from a follow-up data collection","slug":"results-from-a-follow-up-data-collection","link":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/2020\/02\/18\/results-from-a-follow-up-data-collection\/","is_active":false,"page_num":9},{"id":101328,"title":"9. 
Conclusions","slug":"conclusions","link":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/2020\/02\/18\/conclusions\/","is_active":false,"page_num":10},{"id":101335,"title":"Acknowledgements","slug":"acknowledgements-13-2","link":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/2020\/02\/18\/acknowledgements-13-2\/","is_active":false,"page_num":11},{"id":101341,"title":"Appendix A: Survey methodology","slug":"appendix-a-survey-methodology-2-4","link":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/2020\/02\/18\/appendix-a-survey-methodology-2-4\/","is_active":false,"page_num":12}]},"parent_info":{"parent_title":"Assessing the Risks to Online Polls From Bogus Respondents","parent_id":101287},"materialsOrdered":[],"chaptersOrdered":[],"partsOrdered":[],"partsEnabled":false,"datacite_doi":"","prc_seo_data":{"title":"4. Two common checks fail to catch most bogus cases","description":"A number of data quality checks have been developed for online surveys. Examples include flagging respondents who fail an attention check (or trap) question, complete the survey too quickly (speeders),&hellip;","og_title":"4. Two common checks fail to catch most bogus cases","og_description":"A number of data quality checks have been developed for online surveys. 
Examples include flagging respondents who fail an attention check (or trap) question, complete the survey too quickly (speeders),&hellip;","schema_type":"Article","noindex":false,"canonical_url":"","primary_terms":{"category":36,"research-teams":528},"custom_schema":[],"og_image":121056,"indexnow_submitted_at":null,"gsc_index_status":null},"prepublish_checks":{"prc-image-alt-text":{"status":"incomplete","message":"4 images are missing alt text.","data":{"count":4}},"prc-about-this-research":{"status":"incomplete","message":"Add an \"About this research\" details block.","data":null},"prc-paragraph-count":{"status":"complete","message":"Found 12 paragraphs.","data":{"count":12}},"prc-internal-link":{"status":"complete","message":"Found 6 internal links.","data":{"count":6}}},"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"relatedPostsOrdered":[],"bylinesOrdered":[],"acknowledgementsOrdered":[],"_links":{"self":[{"href":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-json\/wp\/v2\/posts\/101299","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-json\/wp\/v2\/users\/367"}],"replies":[{"embeddable":true,"href":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-json\/wp\/v2\/comments?post=101299"}],"version-history":[{"count":3,"href":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-json\/wp\/v2\/posts\/101299\/revisions"}],"predecessor-version":[{"id":183204,"href":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-json\/wp\/v2\/posts\/101299\/revisions\/183204"}],"wp:attachment":[{"href":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-json\/wp\/v2\/media?parent=101299"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/alpha.pewresearch.org\/
pewresearch-org\/wp-json\/wp\/v2\/categories?post=101299"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-json\/wp\/v2\/tags?post=101299"},{"taxonomy":"bylines","embeddable":true,"href":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-json\/wp\/v2\/bylines?post=101299"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-json\/wp\/v2\/collection?post=101299"},{"taxonomy":"datasets","embeddable":true,"href":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-json\/wp\/v2\/datasets?post=101299"},{"taxonomy":"level_of_effort","embeddable":true,"href":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-json\/wp\/v2\/level_of_effort?post=101299"},{"taxonomy":"primary_audience","embeddable":true,"href":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-json\/wp\/v2\/primary_audience?post=101299"},{"taxonomy":"information_type","embeddable":true,"href":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-json\/wp\/v2\/information_type?post=101299"},{"taxonomy":"_post_visibility","embeddable":true,"href":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-json\/wp\/v2\/_post_visibility?post=101299"},{"taxonomy":"formats","embeddable":true,"href":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-json\/wp\/v2\/formats?post=101299"},{"taxonomy":"_fund_pool","embeddable":true,"href":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-json\/wp\/v2\/_fund_pool?post=101299"},{"taxonomy":"languages","embeddable":true,"href":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-json\/wp\/v2\/languages?post=101299"},{"taxonomy":"regions-countries","embeddable":true,"href":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-json\/wp\/v2\/regions-countries?post=101299"},{"taxonomy":"research-teams","embeddable":true,"href":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-json\/wp\/v2\/research-teams?post=101299"},{"taxonomy":"workflow-status","embeddabl
e":true,"href":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-json\/wp\/v2\/workflow-status?post=101299"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}