{"id":101323,"date":"2020-02-18T16:40:18","date_gmt":"2020-02-18T21:40:18","guid":{"rendered":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/2020\/02\/18\/results-from-a-follow-up-data-collection\/"},"modified":"2024-07-26T16:41:55","modified_gmt":"2024-07-26T20:41:55","slug":"results-from-a-follow-up-data-collection","status":"publish","type":"post","link":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/2020\/02\/18\/results-from-a-follow-up-data-collection\/","title":{"rendered":"8. Results from a follow-up data collection"},"content":{"rendered":"<p class=\"wp-block-paragraph\">In analyzing the data, researchers identified two issues that had the potential to affect the study\u2019s conclusions. First, the survey was designed to be administered the same way for each of the six online sources. But after interviewing was completed, researchers discovered that there was a discrepancy with respect to whether respondents were allowed to skip questions. Respondents in the two address-recruited and the one crowdsourced sample were not required to answer each question, but those in the opt-in samples were. This presented a potential problem, as forcing respondents to answer each question could conceivably affect their behavior and, in particular, their likelihood of giving answers that flagged them as a bogus respondent. Researchers needed to know if the higher incidence of bogus respondents in the opt-in samples was attributable to this difference. To find the answer, it was necessary to field the survey again on the opt-in sources, this time without forcing respondents to answer each question.<\/p>\n\n<p class=\"wp-block-paragraph\">The second issue concerned the approve-of-everything response pattern. 
As discussed in <a href=\"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/2020\/02\/18\/respondents-who-approve-of-everything\">Chapter 2<\/a>, a small share of respondents answered with \u201capprove\u201d or \u201cfavorable\u201d each time such a question was asked. This behavior was concentrated in the opt-in samples. As discussed earlier in this report, the most likely explanation is that opt-in polls are primarily used for market research, so offering rote \u201capprove\u201d answers is a logical strategy on the assumption that such answers will please the sponsor. This is a key finding because it demonstrates that bogus respondents, rather than just adding noise, stand to bias certain estimates.<\/p>\n\n<p class=\"wp-block-paragraph\">An alternative explanation for the approve-of-everything response style is what is known in polling as a primacy effect. A primacy effect is the tendency for some respondents to select answers shown at or near the top of the answer list. For example, in the question asking about the President\u2019s job performance, the first answer choice was \u201cStrongly approve\u201d and the last was \u201cStrongly disapprove.\u201d Conceivably, the approve-of-everything respondents could have simply been selecting answers near the top, which in this study happened to be positively valenced. To test this, it was necessary to field the survey again, this time presenting the negative answer choices first. If the approve-of-everything behavior was observed even when approving answers were shown near the bottom, this would show that the behavior is purposeful and that rotating the answer choices does not help.<\/p>\n\n<p class=\"wp-block-paragraph\">Researchers addressed both potential concerns by fielding a follow-up data collection. The survey was fielded again from Dec. 2\u20137, 2019, with 10,122 interviews from opt-in panel 1 and 10,165 interviews from opt-in panel 3. Respondents to the first survey were ineligible for the follow-up study. 
Opt-in panel 2 was not used because it was not needed to answer the two questions raised above. In the main study, the rates of bogus responding and the approve-of-everything response style were similar across the three opt-in sources, and all three panels generally performed about the same. If permitting respondents to skip questions or rotating the approve\/disapprove options improved data quality in panels 1 and 3, it is reasonable to assume that the same would hold for panel 2.<\/p>\n\n<p class=\"wp-block-paragraph\">The important difference between the main study and the follow-up study was twofold. First, respondents were allowed to skip questions. Second, a split-form experiment was administered. A random half of respondents received the same response ordering as the main study, with positive (approve\/favorable) answers shown first; the other random half received the reverse ordering, with negative (disapprove\/unfavorable) answers shown first. The follow-up study asked the same questions as the main study, with two minor exceptions. Because a new British Prime Minister took office between the first and second data collection, the name was updated in the question (Theresa May to Boris Johnson). 
Also, a language preference question was added to better assign respondents to the English or Spanish version of the survey.<\/p>\n\n<figure class=\"wp-block-image alignright\"><a href=\"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/?attachment_id=833\" rel=\"attachment wp-att-833\"><img decoding=\"async\" class=\"wp-image-833\" src=\"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/wp-content\/uploads\/sites\/10\/2020\/02\/PM_02.18.20_dataquality-08-01.png\" alt=\"Approve-of-everything responding is not simply a primacy effect\"><\/a><\/figure>\n\n<p class=\"wp-block-paragraph\">If the approve-of-everything behavior was merely a primacy effect (not purposeful), the follow-up study would have found a lower rate of the behavior when negative answers were shown first rather than second. But that did not happen.<\/p>\n\n<p class=\"wp-block-paragraph\">The incidence of respondents giving uniformly \u201capprove\u201d\/\u201cfavorable\u201d answers was essentially the same regardless of the ordering of the answer choices. In opt-in panel 3, 3% of respondents approved of everything when positive answers were shown first, and the same share did so when negative answers were shown first. The pattern was the same for opt-in panel 1, though both rates were lower.<\/p>\n\n<p class=\"wp-block-paragraph\">This result indicates that the small but measurable share of opt-in respondents who approve of seemingly everything asked about do so intentionally. They sought out the positive answers even when they had to look for them. They were not lazily selecting the first answer shown. This suggests that randomizing the response options would not eliminate this source of apparent bias. Interestingly, the overall incidence of this behavior was the same in the follow-up study as it was in the main study. 
This bolsters confidence in the generalizability of the main study findings.<\/p>\n\n<p class=\"wp-block-paragraph\">Several other data points also discredit the notion that the approve-of-everything pattern is merely a primacy effect. One might expect that those answering approve\/favor regardless of the question are always selecting the <em>first<\/em> answer choice. For example, on a four-point scale (e.g., \u201cvery favorable,\u201d \u201cmostly favorable,\u201d \u201cmostly unfavorable\u201d and \u201cvery unfavorable\u201d), perhaps the always-approving cases consistently select the most positive answer available (\u201cvery favorable\u201d). That is not the case. For example, when the main study asked for an overall opinion of Vladimir Putin, 45% of the always-approving respondents said \u201cvery favorable\u201d while 55% said \u201cmostly favorable.\u201d Most approve-of-everything respondents selected the second choice, not the first. The same pattern was observed for the questions asking about Merkel, Macron and May.<\/p>\n\n<p class=\"wp-block-paragraph\">In addition, if approve-of-everything respondents were simply picking answers near the top of every question, most would have answered the attention check (or trap question) incorrectly. In fact, 93% of the always-approving cases answered this attention check correctly in the main study, and a nearly identical 94% did so in the follow-up. In sum, a good deal of randomized and non-randomized data indicated that the approve-of-everything behavior is largely purposeful. It may be exacerbated when positive choices are offered first, but the follow-up study showed that even when positive choices are not offered first, this small segment of opt-in respondents will seek them out.<\/p>\n\n<p class=\"wp-block-paragraph\">The follow-up study also tested whether allowing opt-in respondents to skip questions would reduce the incidence of bogus cases. 
Researchers created a flag for bogus cases in the follow-up study using the same definition as the main study. In one opt-in panel, the bogus rate was <em>higher<\/em> when respondents could skip, while in the other panel it was lower. For opt-in panel 3, the incidence of bogus cases rose from 6% in the main study, which prohibited opt-in respondents from skipping, to 8% in the follow-up study, which allowed skipping. For opt-in panel 1, the incidence fell from 6% in the main study to 3% in the follow-up. In neither case was the rate of bogus respondents as low as it was for the address-recruited panels (1%).<\/p>\n\n<figure class=\"wp-block-image aligncenter\"><a href=\"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/?attachment_id=832\" rel=\"attachment wp-att-832\"><img decoding=\"async\" class=\"wp-image-832\" src=\"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/wp-content\/uploads\/sites\/10\/2020\/02\/PM_02.18.20_dataquality-08-00.png\" alt=\"Opt-in polls still have higher rates of bogus data when respondents can skip items\"><\/a><\/figure>\n\n<p class=\"wp-block-paragraph\">In general, the follow-up study sample from opt-in panel 1 showed better data quality than the main study sample. The rates of non sequitur open-ends and of self-reports of living outside the U.S. were lower in the follow-up. In opt-in panel 3, by contrast, the follow-up sample had poorer data quality than the main study sample. The rates of non sequitur answers and of self-reports of living outside the U.S. were both higher in the follow-up. Interestingly, while none of the opt-in panel 3 respondents plagiarized an open-ended answer in the main study, 15 respondents from that panel did so in the follow-up study (see Appendix D). 
They pulled from several of the sources tracked in the main study, including websites for Mount Vernon and the Washington State Legislature, as well as a website helping non-English speakers answer \u201cHow are you feeling today?\u201d<\/p>\n\n<p class=\"wp-block-paragraph\">If allowing opt-in respondents to skip questions was the key to achieving good data quality, then we would have seen the bogus rates in both opt-in panels decline in the follow-up study, perhaps to the low level observed for the address-recruited samples. But that is not what happened. Opt-in panel 1 did perform better when answering was not required, but the incidence of bogus cases was still significantly higher than the levels observed in the address-recruited samples. Meanwhile, opt-in panel 3 got worse, with the incidence of bogus cases climbing to a striking 8% in the follow-up.<\/p>\n\n<p class=\"wp-block-paragraph\">Given that one opt-in panel did worse when skipping was allowed but another panel did better, it is not clear that requiring respondents to answer questions has a strong, systematic effect on the incidence of bogus cases. It is worth noting that opt-in panels 1 and 3 source respondents from many of the same third-party companies. In this study alone, sources used by both panels 1 and 3 include CashCrate, A&amp;K International, DISQO, Market Cube, MySoapBox, Persona.ly, Tellwut and TheoremReach. The variance in data quality may have more to do with the relative shares of respondents coming from such sources than with the forced-response setting. This is a topic worthy of future investigation.<\/p>\n\n<p class=\"wp-block-paragraph\">Notably, all of the key findings from the main study were replicated in the follow-up. For example, most bogus respondents (76%) in the main study passed both an attention check and a check for speeding. The share of bogus cases passing those same two checks in the follow-up was similar (70%). 
Similarly, a suspiciously high share of bogus cases in the main study reported being Hispanic (30%). In the follow-up, this rate was 31%. The follow-up study also replicated the finding that bogus respondents can have a small systematic effect on approval-type questions. For example, the estimated share expressing a favorable view of Vladimir Putin dropped four percentage points (from 20% to 16%) in the follow-up study when bogus respondents were removed from the opt-in panel 3 sample, and this estimate dropped one percentage point when bogus respondents were removed from the opt-in panel 1 sample (from 14% to 13%).<\/p>","protected":false},"excerpt":{"rendered":"<p>In analyzing the data, researchers identified two issues that had the potential to affect the study\u2019s conclusions. First, the survey was designed to be administered the same way for each of the six online sources. But after interviewing was completed, researchers discovered that there was a discrepancy with respect to whether respondents were allowed to 
[&hellip;]<\/p>\n","protected":false},"author":367,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_crdt_document":"","sub_headline":"","sub_title":"","_prc_public_revisions":[],"_ppp_expiration_hours":0,"_ppp_enabled":false,"ai_generated_summary":"","bylines":[],"acknowledgements":[],"displayBylines":true,"prc_watchers":[],"relatedPosts":[],"reportMaterials":[],"multiSectionReport":[],"package_parts__enabled":false,"package_parts":[],"_prc_fork_parent":0,"_prc_fork_status":"","_prc_active_fork":0,"datacite_doi":"","datacite_doi_citation":"","_prc_seo_qr_attachment_id":0,"spoken_article_player_enabled":true,"footnotes":""},"categories":[36,359],"tags":[],"bylines":[968,719,631,2198,697,779,967],"collection":[],"datasets":[2007],"level_of_effort":[],"primary_audience":[],"information_type":[],"_post_visibility":[],"formats":[458],"_fund_pool":[],"languages":[],"regions-countries":[],"research-teams":[528],"workflow-status":[],"class_list":["post-101323","post","type-post","status-publish","format-standard","hentry","category-methodological-research","category-nonprobability-surveys","bylines-andrew-mercer","bylines-arnold-lau","bylines-courtney-kennedy","bylines-dorene-asare-marfo","bylines-joshua-ferno","bylines-nick-hatley","bylines-scott-keeter","datasets-assessing-risk-to-online-polls-dataset","formats-report","research-teams-methods"],"label":false,"post_parent":101287,"word_count":1680,"canonical_url":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/2020\/02\/18\/results-from-a-follow-up-data-collection\/","art_direction":{"A1":{"id":121059,"rawUrl":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-content\/uploads\/sites\/20\/2020\/02\/PM_20.02.18_Panel-Data-Quality_promo_fetaured_crop.png","url":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-content\/uploads\/sites\/20\/2020\/02\/PM_20.02.18_Panel-Data-Quality_promo_fetaured_crop.png?w=564&h=317&crop=1","width":564
,"height":317,"chartArt":false},"A2":{"id":121059,"rawUrl":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-content\/uploads\/sites\/20\/2020\/02\/PM_20.02.18_Panel-Data-Quality_promo_fetaured_crop.png","url":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-content\/uploads\/sites\/20\/2020\/02\/PM_20.02.18_Panel-Data-Quality_promo_fetaured_crop.png?w=268&h=151&crop=1","width":268,"height":151,"chartArt":false},"A3":{"id":121059,"rawUrl":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-content\/uploads\/sites\/20\/2020\/02\/PM_20.02.18_Panel-Data-Quality_promo_fetaured_crop.png","url":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-content\/uploads\/sites\/20\/2020\/02\/PM_20.02.18_Panel-Data-Quality_promo_fetaured_crop.png?w=194&h=110&crop=1","width":194,"height":110,"chartArt":false},"A4":{"id":121059,"rawUrl":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-content\/uploads\/sites\/20\/2020\/02\/PM_20.02.18_Panel-Data-Quality_promo_fetaured_crop.png","url":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-content\/uploads\/sites\/20\/2020\/02\/PM_20.02.18_Panel-Data-Quality_promo_fetaured_crop.png?w=268&h=151&crop=1","width":268,"height":151,"chartArt":false},"XL":{"id":121059,"rawUrl":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-content\/uploads\/sites\/20\/2020\/02\/PM_20.02.18_Panel-Data-Quality_promo_fetaured_crop.png","url":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-content\/uploads\/sites\/20\/2020\/02\/PM_20.02.18_Panel-Data-Quality_promo_fetaured_crop.png?w=720&h=405&crop=1","width":720,"height":405,"chartArt":false},"social":{"id":121056,"rawUrl":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-content\/uploads\/sites\/20\/2020\/02\/PM_20.02.18_Panel-Data-Quality_promo_Social-media-image640px.png","url":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-content\/uploads\/sites\/20\/2020\/02\/PM_20.02.18_Panel-Data-Quality_promo_Social-media-image640px.png?w=1200&h=628&crop=1","width":1200,"height"
:628,"chartArt":false}},"_embeds":[],"watchers":[],"table_of_contents":[{"id":101287,"title":"Assessing the Risks to Online Polls From Bogus Respondents","slug":"assessing-the-risks-to-online-polls-from-bogus-respondents","link":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/2020\/02\/18\/assessing-the-risks-to-online-polls-from-bogus-respondents\/","is_active":false},{"id":101292,"title":"1. Answers that did not match the question were concentrated in opt-in polls","slug":"answers-that-did-not-match-the-question-were-concentrated-in-opt-in-polls","link":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/2020\/02\/18\/answers-that-did-not-match-the-question-were-concentrated-in-opt-in-polls\/","is_active":false},{"id":101295,"title":"2. Respondents who approve of everything","slug":"respondents-who-approve-of-everything","link":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/2020\/02\/18\/respondents-who-approve-of-everything\/","is_active":false},{"id":101296,"title":"3. Imperfect metrics of whether respondents live in the U.S.","slug":"imperfect-metrics-of-whether-respondents-live-in-the-u-s","link":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/2020\/02\/18\/imperfect-metrics-of-whether-respondents-live-in-the-u-s\/","is_active":false},{"id":101299,"title":"4. Two common checks fail to catch most bogus cases","slug":"two-common-checks-fail-to-catch-most-bogus-cases","link":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/2020\/02\/18\/two-common-checks-fail-to-catch-most-bogus-cases\/","is_active":false},{"id":101302,"title":"5. Bogus respondents bias poll results, not merely add noise","slug":"bogus-respondents-bias-poll-results-not-merely-add-noise","link":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/2020\/02\/18\/bogus-respondents-bias-poll-results-not-merely-add-noise\/","is_active":false},{"id":101308,"title":"6. 
Cases tripping flags for bogus data disproportionately say they are Hispanic","slug":"cases-tripping-flags-for-bogus-data-disproportionately-say-they-are-hispanic","link":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/2020\/02\/18\/cases-tripping-flags-for-bogus-data-disproportionately-say-they-are-hispanic\/","is_active":false},{"id":101315,"title":"7. Other tests for attentiveness show mixed results","slug":"other-tests-for-attentiveness-show-mixed-results","link":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/2020\/02\/18\/other-tests-for-attentiveness-show-mixed-results\/","is_active":false},{"id":101323,"title":"8. Results from a follow-up data collection","slug":"results-from-a-follow-up-data-collection","link":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/2020\/02\/18\/results-from-a-follow-up-data-collection\/","is_active":true},{"id":101328,"title":"9. Conclusions","slug":"conclusions","link":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/2020\/02\/18\/conclusions\/","is_active":false},{"id":101335,"title":"Acknowledgements","slug":"acknowledgements-13-2","link":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/2020\/02\/18\/acknowledgements-13-2\/","is_active":false},{"id":101341,"title":"Appendix A: Survey methodology","slug":"appendix-a-survey-methodology-2-4","link":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/2020\/02\/18\/appendix-a-survey-methodology-2-4\/","is_active":false}],"report_materials":[{"key":"3ec84beb-92a5-4222-bc0d-e7f9dbc2ca9b","type":"report","url":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/wp-content\/uploads\/sites\/10\/2020\/02\/PM_02.18.20_dataquality_FULL.REPORT.pdf","label":"","icon":"","attachmentId":""},{"key":"bee99ed7-c1b7-4760-93e4-7766985a5601","type":"link","url":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/wp-content\/uploads\/sites\/10\/2020\/02\/PM_02.18.20_dataquality_Appendix-B.pdf","label":"Appendix B: 
Protocol for coding open-ended answers","icon":"supplemental","attachmentId":""},{"key":"6275fa1a-e0bd-492c-bc9e-0b7ccb459ed7","type":"link","url":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/wp-content\/uploads\/sites\/10\/2020\/02\/PM_02.18.20_dataquality_AppendixC.pdf","label":"Appendix C: Reliability analysis for open-ended codes","icon":"supplemental","attachmentId":""},{"key":"293dc043-aa67-48cd-9a01-0bf0599340b7","type":"link","url":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/wp-content\/uploads\/sites\/10\/2020\/02\/PM_02.18.20.dataquality_APPENDIX-D-.xlsx","label":"Appendix D: Plagiarized websites","icon":"report","attachmentId":""},{"key":"6ad19242-cdbc-43fa-bd7e-18b2082dc6fb","type":"link","url":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/wp-content\/uploads\/sites\/10\/2020\/02\/PM.02.18.20_dataquality_AppendixE.pdf","label":"Appendix E: Questionnaire","icon":"topline","attachmentId":""},{"key":"420d41e1-ebf1-433e-ab6a-a041f08a73c6","type":"link","url":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/dataset\/assessing-the-risks-to-online-polls-follow-up-study-dataset\/","label":"Dataset: Follow-up study","icon":"detailed-tables","attachmentId":""},{"type":"dataset","id":2007,"label":"Assessing Risk to Online Polls Dataset","url":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/dataset\/assessing-risk-to-online-polls-dataset\/"}],"report_pagination":{"current_post":{"id":101323,"title":"8. Results from a follow-up data collection","slug":"results-from-a-follow-up-data-collection","link":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/2020\/02\/18\/results-from-a-follow-up-data-collection\/","is_active":true,"page_num":9},"next_post":{"id":101328,"title":"9. Conclusions","slug":"conclusions","link":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/2020\/02\/18\/conclusions\/","is_active":false,"page_num":10},"previous_post":{"id":101315,"title":"7. 
Other tests for attentiveness show mixed results","slug":"other-tests-for-attentiveness-show-mixed-results","link":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/2020\/02\/18\/other-tests-for-attentiveness-show-mixed-results\/","is_active":false,"page_num":8},"pagination_items":[{"id":101287,"title":"Assessing the Risks to Online Polls From Bogus Respondents","slug":"assessing-the-risks-to-online-polls-from-bogus-respondents","link":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/2020\/02\/18\/assessing-the-risks-to-online-polls-from-bogus-respondents\/","is_active":false,"page_num":1},{"id":101292,"title":"1. Answers that did not match the question were concentrated in opt-in polls","slug":"answers-that-did-not-match-the-question-were-concentrated-in-opt-in-polls","link":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/2020\/02\/18\/answers-that-did-not-match-the-question-were-concentrated-in-opt-in-polls\/","is_active":false,"page_num":2},{"id":101295,"title":"2. Respondents who approve of everything","slug":"respondents-who-approve-of-everything","link":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/2020\/02\/18\/respondents-who-approve-of-everything\/","is_active":false,"page_num":3},{"id":101296,"title":"3. Imperfect metrics of whether respondents live in the U.S.","slug":"imperfect-metrics-of-whether-respondents-live-in-the-u-s","link":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/2020\/02\/18\/imperfect-metrics-of-whether-respondents-live-in-the-u-s\/","is_active":false,"page_num":4},{"id":101299,"title":"4. Two common checks fail to catch most bogus cases","slug":"two-common-checks-fail-to-catch-most-bogus-cases","link":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/2020\/02\/18\/two-common-checks-fail-to-catch-most-bogus-cases\/","is_active":false,"page_num":5},{"id":101302,"title":"5. 
Bogus respondents bias poll results, not merely add noise","slug":"bogus-respondents-bias-poll-results-not-merely-add-noise","link":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/2020\/02\/18\/bogus-respondents-bias-poll-results-not-merely-add-noise\/","is_active":false,"page_num":6},{"id":101308,"title":"6. Cases tripping flags for bogus data disproportionately say they are Hispanic","slug":"cases-tripping-flags-for-bogus-data-disproportionately-say-they-are-hispanic","link":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/2020\/02\/18\/cases-tripping-flags-for-bogus-data-disproportionately-say-they-are-hispanic\/","is_active":false,"page_num":7},{"id":101315,"title":"7. Other tests for attentiveness show mixed results","slug":"other-tests-for-attentiveness-show-mixed-results","link":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/2020\/02\/18\/other-tests-for-attentiveness-show-mixed-results\/","is_active":false,"page_num":8},{"id":101323,"title":"8. Results from a follow-up data collection","slug":"results-from-a-follow-up-data-collection","link":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/2020\/02\/18\/results-from-a-follow-up-data-collection\/","is_active":true,"page_num":9},{"id":101328,"title":"9. 
Conclusions","slug":"conclusions","link":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/2020\/02\/18\/conclusions\/","is_active":false,"page_num":10},{"id":101335,"title":"Acknowledgements","slug":"acknowledgements-13-2","link":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/2020\/02\/18\/acknowledgements-13-2\/","is_active":false,"page_num":11},{"id":101341,"title":"Appendix A: Survey methodology","slug":"appendix-a-survey-methodology-2-4","link":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/methods\/2020\/02\/18\/appendix-a-survey-methodology-2-4\/","is_active":false,"page_num":12}]},"parent_info":{"parent_title":"Assessing the Risks to Online Polls From Bogus Respondents","parent_id":101287},"materialsOrdered":[],"chaptersOrdered":[],"partsOrdered":[],"partsEnabled":false,"datacite_doi":"","prc_seo_data":{"title":"Results from a follow-up data collection of online opt-in respondents %page%","description":"In analyzing the data, researchers identified two issues that had the potential to affect the study\u2019s conclusions. First, the survey was designed to be administered the same way for each&hellip;","og_title":"Results from a follow-up data collection of online opt-in respondents %page%","og_description":"In analyzing the data, researchers identified two issues that had the potential to affect the study\u2019s conclusions. 
First, the survey was designed to be administered the same way for each&hellip;","schema_type":"Article","noindex":false,"canonical_url":"","primary_terms":{"category":36,"research-teams":528},"custom_schema":[],"og_image":121056,"indexnow_submitted_at":null,"gsc_index_status":null},"prepublish_checks":{"prc-image-alt-text":{"status":"incomplete","message":"2 images are missing alt text.","data":{"count":2}},"prc-about-this-research":{"status":"incomplete","message":"Add an \"About this research\" details block.","data":null},"prc-paragraph-count":{"status":"complete","message":"Found 15 paragraphs.","data":{"count":15}},"prc-internal-link":{"status":"complete","message":"Found 3 internal links.","data":{"count":3}}},"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"relatedPostsOrdered":[],"bylinesOrdered":[],"acknowledgementsOrdered":[],"_links":{"self":[{"href":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-json\/wp\/v2\/posts\/101323","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-json\/wp\/v2\/users\/367"}],"replies":[{"embeddable":true,"href":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-json\/wp\/v2\/comments?post=101323"}],"version-history":[{"count":3,"href":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-json\/wp\/v2\/posts\/101323\/revisions"}],"predecessor-version":[{"id":183208,"href":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-json\/wp\/v2\/posts\/101323\/revisions\/183208"}],"wp:attachment":[{"href":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-json\/wp\/v2\/media?parent=101323"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-json\/wp\/v2\/categories?post=101323"}
,{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-json\/wp\/v2\/tags?post=101323"},{"taxonomy":"bylines","embeddable":true,"href":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-json\/wp\/v2\/bylines?post=101323"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-json\/wp\/v2\/collection?post=101323"},{"taxonomy":"datasets","embeddable":true,"href":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-json\/wp\/v2\/datasets?post=101323"},{"taxonomy":"level_of_effort","embeddable":true,"href":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-json\/wp\/v2\/level_of_effort?post=101323"},{"taxonomy":"primary_audience","embeddable":true,"href":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-json\/wp\/v2\/primary_audience?post=101323"},{"taxonomy":"information_type","embeddable":true,"href":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-json\/wp\/v2\/information_type?post=101323"},{"taxonomy":"_post_visibility","embeddable":true,"href":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-json\/wp\/v2\/_post_visibility?post=101323"},{"taxonomy":"formats","embeddable":true,"href":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-json\/wp\/v2\/formats?post=101323"},{"taxonomy":"_fund_pool","embeddable":true,"href":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-json\/wp\/v2\/_fund_pool?post=101323"},{"taxonomy":"languages","embeddable":true,"href":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-json\/wp\/v2\/languages?post=101323"},{"taxonomy":"regions-countries","embeddable":true,"href":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-json\/wp\/v2\/regions-countries?post=101323"},{"taxonomy":"research-teams","embeddable":true,"href":"https:\/\/alpha.pewresearch.org\/pewresearch-org\/wp-json\/wp\/v2\/research-teams?post=101323"},{"taxonomy":"workflow-status","embeddable":true,"href":"https:\/\/alpha.pewresearch.org\/pewresear
ch-org\/wp-json\/wp\/v2\/workflow-status?post=101323"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}