Publications

Forthcoming

Rodriguez, Pedro, Arthur Spirling, and Brandon M. Stewart. “Embedding Regression: Models for Context-Specific Description and Inference.” American Political Science Review n. pag. Print.

2023

Mathur, Arunesh, Angelina Wang, Carsten Schwemmer, Maia Hamin, Brandon M. Stewart, and Arvind Narayanan. “Manipulative Tactics Are the Norm in Political Emails: Evidence from 300K Emails from the 2020 U.S. Election Cycle.” Big Data & Society 10.1 (2023): n. pag.

We collect and analyze a corpus of more than 300,000 political emails sent during the 2020 US election cycle. These emails were sent by over 3000 political campaigns and organizations including federal and state level candidates as well as Political Action Committees. We find that in this corpus, manipulative tactics—techniques using some level of deception or clickbait—are the norm, not the exception. We measure six specific tactics senders use to nudge recipients to open emails. Three of these tactics—“dark patterns”—actively deceive recipients through the email user interface, for example, by formatting “from:” fields so that they create the false impression the message is a continuation of an ongoing conversation. The median active sender uses such tactics 5% of the time. The other three tactics, like sensationalistic clickbait—used by the median active sender 37% of the time—are not directly deceptive, but instead, exploit recipients’ curiosity gap and impose pressure to open emails. This can further expose recipients to deception in the email body, such as misleading claims of matching donations. Furthermore, by collecting emails from different locations in the US, we show that senders refine these tactics through A/B testing. Finally, we document disclosures of email addresses between senders in violation of privacy policies and recipients’ expectations. Cumulatively, these tactics undermine voters’ autonomy and welfare, exacting a particularly acute cost for those with low digital literacy. We offer the complete corpus of emails at https://electionemails2020.org for journalists and academics, which we hope will support future work.

2022

Feder, Amir, Katherine A. Keith, Emaad Manzoor, Reid Pryzant, Dhanya Sridhar, Zach Wood-Doughty, Jacob Eisenstein, Justin Grimmer, Roi Reichart, Margaret E. Roberts, Brandon M. Stewart, Victor Veitch, and Diyi Yang. “Causal Inference in Natural Language Processing: Estimation, Prediction, Interpretation and Beyond.” Transactions of the Association for Computational Linguistics 10 (2022): n. pag.
Grimmer, Justin, Margaret E. Roberts, and Brandon M. Stewart. Text As Data: A New Framework for Machine Learning and the Social Sciences. Princeton University Press, 2022.
Ying, Luwei, Jacob Montgomery, and Brandon M. Stewart. “Topics, Concepts, and Measurement: A Crowdsourced Procedure for Validating Topics As Measures.” Political Analysis 30.4 (2022): n. pag.
Egami, Naoki, Christian J. Fong, Justin Grimmer, Margaret E. Roberts, and Brandon M. Stewart. “How to Make Causal Inferences Using Texts.” Science Advances 8.42 (2022): n. pag. Print.

2021

Grimmer, Justin, Margaret E. Roberts, and Brandon M. Stewart. “Machine Learning for Social Science: An Agnostic Approach.” Annual Review of Political Science 24 (2021): n. pag.
Lundberg, Ian, Rebecca Johnson, and Brandon M. Stewart. “What Is Your Estimand? Defining the Target Quantity Connects Statistical Evidence to Theory.” American Sociological Review 86.3 (2021): 532–565. Print.

2020

Salganik, Matthew J., et al. “Measuring the Predictability of Life Outcomes With a Scientific Mass Collaboration.” Proceedings of the National Academy of Sciences 117.15 (2020): n. pag.
Lundberg, Ian, and Brandon M. Stewart. “Comment: Summarizing Income Mobility With Multiple Smooth Quantiles Instead of Parameterized Means.” Sociological Methodology 50 (2020): 96–111.
de Marchi, Scott, and Brandon M. Stewart. “Computational and Machine Learning Models: The Necessity of Connecting Theory and Empirics.” SAGE Handbook of Research Methods in Political Science and International Relations. N.p., 2020. Print.
Roberts, Margaret E., Brandon M. Stewart, and Richard Nielsen. “Adjusting for Confounding With Text Matching.” American Journal of Political Science 64.4 (2020): 887–903.

2019

Horowitz, Michael, Brandon M. Stewart, Dustin Tingley, Michael Bishop, Laura Resnick Samotin, Margaret Roberts, Welton Chang, Barbara Mellers, and Philip Tetlock. “What Makes Foreign Policy Teams Tick: Explaining Variation in Group Performance at Geopolitical Forecasting.” The Journal of Politics 81.4 (2019): 1388–1404.
When do groups—be they countries, administrations, or other organizations—more or less accurately understand the world around them and assess political choices? Some argue that group decision-making processes often fail due to biases induced by groupthink. Others argue that groups, by aggregating knowledge, are better at analyzing the foreign policy world. To advance knowledge about the intersection of politics and group decision making, this paper draws on evidence from a multiyear geopolitical forecasting tournament with thousands of participants sponsored by the US government. We find that teams outperformed individuals in making accurate geopolitical predictions, with regression discontinuity analysis demonstrating specific teamwork effects. Moreover, structural topic models show that more cooperative teams outperformed less cooperative teams. These results demonstrate that information sharing through groups, cultivating reasoning to hedge against cognitive biases, and ensuring all perspectives are heard can lead to greater success for groups at forecasting and understanding politics.
Roberts, Margaret, Brandon Stewart, and Dustin Tingley. “Stm: An R Package for Structural Topic Models.” Journal of Statistical Software 91.2 (2019): 1–40.
This paper demonstrates how to use the R package stm for structural topic modeling. The structural topic model allows researchers to flexibly estimate a topic model that includes document-level metadata. Estimation is accomplished through a fast variational approximation. The stm package provides many useful features, including rich ways to explore topics, estimate uncertainty, and visualize quantities of interest.

2018

Khodak, Mikhail, Nikunj Saunshi, Yingyu Liang, Tengyu Ma, Brandon Stewart, and Sanjeev Arora. “A La Carte Embedding: Cheap But Effective Induction of Semantic Feature Vectors.” Proceedings of the Association for Computational Linguistics 2018: n. pag. Print.
Yeomans, Michael, Brandon M. Stewart, Kimia Mavon, Alex Kindel, Dustin Tingley, and Justin Reich. “The Civic Mission of MOOCs: Computational Measures of Engagement Across Differences in Online Courses.” International Journal of Artificial Intelligence in Education 28.4 (2018): 553–589.
Chaney, Allison J.B., Brandon M. Stewart, and Barbara E. Engelhardt. “How Algorithmic Confounding in Recommendation Systems Increases Homogeneity and Decreases Utility.” Twelfth ACM Conference on Recommender Systems (RecSys ’18) 2018.
Simmons, Beth A., Paulette Lloyd, and Brandon M. Stewart. “The Global Diffusion of Law: Transnational Crime and the Case of Human Trafficking.” International Organization 72.2 (2018): 249–281.

2017

Kindel, Alexander, Michael Yeomans, Justin Reich, Brandon Stewart, and Dustin Tingley. “Discourse: MOOC Discussion Forum Analysis at Scale.” Proceedings of the Fourth (2017) ACM Conference on Learning @ Scale. New York, NY, USA: ACM, 2017. 141–142.

2016

Reich, Justin, Brandon Stewart, Kimia Mavon, and Dustin Tingley. “The Civic Mission of MOOCs: Measuring Engagement across Political Differences in Forums.” Proceedings of the Third (2016) ACM Conference on Learning @ Scale 2016: 1–10.

In this study, we develop methods for computationally measuring the degree to which students engage in MOOC forums with other students holding different political beliefs. We examine a case study of a single MOOC about education policy, Saving Schools, where we obtain measures of student education policy preferences that correlate with political ideology. Contrary to assertions that online spaces often become echo chambers or ideological silos, we find that students in this case hold diverse political beliefs, participate equitably in forum discussions, directly engage (through replies and upvotes) with students holding opposing beliefs, and converge on a shared language rather than talking past one another. Research that focuses on the civic mission of MOOCs helps ensure that open online learning engages the same breadth of purposes that higher education aspires to serve.

Roberts, Margaret E., Brandon M. Stewart, and Edoardo Airoldi. “A Model of Text for Experimentation in the Social Sciences.” Journal of the American Statistical Association 111.515 (2016): 988–1003.

Statistical models of text have become increasingly popular in statistics and computer science as a method of exploring large document collections. Social scientists often want to move beyond exploration, to measurement and experimentation, and make inference about social and political processes that drive discourse and content. In this paper, we develop a model of text data that supports this type of substantive research.
Our approach is to posit a hierarchical mixed membership model for analyzing topical content of documents, in which mixing weights are parameterized by observed covariates. In this model, topical prevalence and topical content are specified as a simple generalized linear model on an arbitrary number of document-level covariates, such as news source and time of release, enabling researchers to introduce elements of the experimental design that informed document collection into the model, within a generally applicable framework. We demonstrate the proposed methodology by analyzing a collection of news reports about China, where we allow the prevalence of topics to evolve over time and vary across newswire services. Our methods quantify the effect of news wire source on both the frequency and nature of topic coverage.
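
A compact sketch of the generative process described above, in simplified notation of ours (see the paper for the full specification and the variational estimation strategy): for document d with prevalence covariates X_d and a content covariate y_d,

  \theta_d \sim \mathrm{LogisticNormal}(X_d \gamma, \Sigma)              (topical prevalence)
  z_{d,n} \mid \theta_d \sim \mathrm{Multinomial}(\theta_d)              (topic of token n)
  w_{d,n} \mid z_{d,n} = k \sim \mathrm{Multinomial}(\beta_{d,k})        (observed word)
  \beta_{d,k} \propto \exp(m + \kappa_k + \kappa_{y_d} + \kappa_{k,y_d}) (topical content)

where m is a baseline vector of log word rates and the \kappa terms are topic, covariate, and topic-by-covariate deviations from that baseline.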

Roberts, Margaret, Brandon Stewart, and Dustin Tingley. “Navigating the Local Modes of Big Data: The Case of Topic Models.” Computational Social Science: Discovery and Prediction. New York: Cambridge University Press, 2016.

2015

Chuang, Jason, Margaret Roberts, Brandon Stewart, Rebecca Weiss, Dustin Tingley, Justin Grimmer, and Jeffrey Heer. “TopicCheck: Interactive Alignment for Assessing Topic Model Stability.” North American Chapter of the Association for Computational Linguistics Human Language Technologies (NAACL HLT) 2015: n. pag. Print.

Content analysis, a widely-applied social science research method, is increasingly being supplemented by topic modeling. However, while the discourse on content analysis centers heavily on reproducibility, computer scientists often focus more on scalability and less on coding reliability, leading to growing skepticism on the usefulness of topic models for automated content analysis. In response, we introduce TopicCheck, an interactive tool for assessing topic model stability. Our contributions are threefold. First, from established guidelines on reproducible content analysis, we distill a set of design requirements on how to computationally assess the stability of an automated coding process. Second, we devise an interactive alignment algorithm for matching latent topics from multiple models, and enable sensitivity evaluation across a large number of models. Finally, we demonstrate that our tool enables social scientists to gain novel insights into three active research questions.
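
To make the idea of aligning topics across model runs concrete, here is a deliberately simple sketch in Python. This greedy cosine-similarity matching is a generic illustration only, not the interactive alignment algorithm that TopicCheck itself implements:

import numpy as np

def greedy_topic_match(beta_a, beta_b):
    """Greedily pair topics from two runs by cosine similarity.

    beta_a, beta_b: (topics x vocabulary) arrays of topic-word weights.
    Returns (i, j, similarity) triples, best-matched pairs first.
    """
    sims = (beta_a @ beta_b.T) / (
        np.linalg.norm(beta_a, axis=1, keepdims=True)
        * np.linalg.norm(beta_b, axis=1)
    )
    matched, used_a, used_b = [], set(), set()
    for flat in np.argsort(-sims, axis=None):  # candidate pairs, most similar first
        i, j = np.unravel_index(flat, sims.shape)
        if i not in used_a and j not in used_b:
            matched.append((int(i), int(j), float(sims[i, j])))
            used_a.add(int(i)); used_b.add(int(j))
    return matched

Topics that remain unmatched, or match only weakly, across many runs are one rough signal of the instability the paper is concerned with.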

Romney, David, Brandon Stewart, and Dustin Tingley. “Plain Text: Transparency in the Acquisition, Analysis, and Access Stages of the Computer-Assisted Analysis of Texts.” Qualitative and Multi-Method Research 13.1 (2015): 32–37. Print.
Reich, Justin, Dustin Tingley, Jetson Leder-Luis, Margaret Roberts, and Brandon Stewart. “Computer Assisted Reading and Discovery for Student Generated Text in Massive Open Online Courses.” Journal of Learning Analytics 2.1 (2015): 156–184. Print.

Dealing with the vast quantities of text that students generate in a Massive Open Online Course (MOOC) is a daunting challenge. Computational tools are needed to help instructional teams uncover themes and patterns as MOOC students write in forums, assignments, and surveys. This paper introduces to the learning analytics community the Structural Topic Model, an approach to language processing that can (1) find syntactic patterns with semantic meaning in unstructured text, (2) identify variation in those patterns across covariates, and (3) uncover archetypal texts that exemplify the documents within a topical pattern. We show examples of computationally-aided discovery and reading in three MOOC settings: mapping students’ self-reported motivations, identifying themes in discussion forums, and uncovering patterns of feedback in course evaluations.

Lucas, Christopher, Richard Nielsen, Margaret E. Roberts, Brandon M. Stewart, Alex Storer, and Dustin Tingley. “Computer Assisted Text Analysis for Comparative Politics.” Political Analysis 23.2 (2015): 254–277. Print.

Recent advances in research tools for the systematic analysis of textual data are enabling exciting new research throughout the social sciences. For comparative politics scholars who are often interested in non-English and possibly multilingual textual datasets, these advances may be difficult to access. This paper discusses practical issues that arise in the processing, management, translation and analysis of textual data with a particular focus on how procedures differ across languages. These procedures are combined in two applied examples of automated text analysis using the recently introduced Structural Topic Model. We also show how the model can be used to analyze data that has been translated into a single language via machine translation tools. All the methods we describe here are implemented in open-source software packages available from the authors.

2014

Chuang, Jason, John Wilkerson, Rebecca Weiss, Dustin Tingley, Brandon Stewart, Margaret Roberts, Forough Poursabzi-Sangdeh, Justin Grimmer, Leah Findlater, Jordan Boyd-Graber, and Jeff Heer. “Computer-Assisted Content Analysis: Topic Models for Exploring Multiple Subjective Interpretations.” Advances in Neural Information Processing Systems Workshop on Human-Propelled Machine Learning 2014: n. pag. Print.

Content analysis, a labor-intensive but widely-applied research method, is increasingly being supplemented by computational techniques such as statistical topic modeling. However, while the discourse on content analysis centers heavily on reproducibility, computer scientists often focus more on increasing the scale of analysis and less on establishing the reliability of analysis results. The gap between user needs and available tools leads to justified skepticism, and limits the adoption and effective use of computational approaches. We argue that enabling human-in-the-loop machine learning requires establishing users’ trust in computer-assisted analysis. To this aim, we introduce our ongoing work on analysis tools for interactively exploring the space of available topic models. To aid tool development, we propose two studies to examine how a computer-aided workflow affects the uncovered codes, and how machine-generated codes impact analysis outcome. We present our prototypes and findings currently under submission.

Coppola, Antonio, and Brandon Stewart. Lbfgs: Efficient L-BFGS and OWL-QN Optimization in R. 2014. Cambridge.

This vignette introduces the lbfgs package for R, which consists of a wrapper built around the libLBFGS optimization library written by Naoaki Okazaki. The lbfgs package implements both the Limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) and the Orthant-Wise Limited-memory Quasi-Newton (OWL-QN) optimization algorithms. The L-BFGS algorithm solves the problem of minimizing an objective, given its gradient, by iteratively computing approximations of the inverse Hessian matrix. The OWL-QN algorithm finds the optimum of an objective plus the L1 norm of the problem’s parameters. The package offers a fast and memory-efficient implementation of these optimization routines, which is particularly suited for high-dimensional problems. The lbfgs package compares favorably with other optimization packages for R in microbenchmark tests.
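
For readers who have not used these optimizers, a minimal Python/SciPy sketch of the objective-plus-gradient interface the abstract describes (this uses SciPy’s L-BFGS-B rather than the R package itself, the ridge-penalized least squares objective is only a toy example, and the OWL-QN/L1-penalized case is not shown):

import numpy as np
from scipy.optimize import minimize

# Toy smooth objective: ridge-penalized least squares on simulated data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 25))
y = X @ rng.normal(size=25) + rng.normal(size=200)

def objective(beta):
    resid = y - X @ beta
    return 0.5 * resid @ resid + 0.5 * beta @ beta

def gradient(beta):
    return -X.T @ (y - X @ beta) + beta

fit = minimize(objective, x0=np.zeros(25), jac=gradient, method="L-BFGS-B")
print(fit.success, objective(fit.x))

The R package described above exposes the same basic pattern, an objective, its gradient, and a starting value, and additionally handles the objective-plus-L1-norm case via OWL-QN.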

Stewart, Brandon M. Latent Factor Regressions for the Social Sciences. N.p., 2014. Print.

In this paper I present a general framework for regression in the presence of complex dependence structures between units such as in time-series cross-sectional data, relational/network data, and spatial data. These types of data are challenging for standard multilevel models because they involve multiple types of structure (e.g. temporal effects and cross-sectional effects) which are interactive. I show that interactive latent factor models provide a powerful modeling alternative that can address a wide range of data types. Although related models have previously been proposed in several different fields, inference is typically cumbersome and slow. I introduce a class of fast variational inference algorithms that allow for models to be fit quickly and accurately.
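
One concrete member of this class of interactive latent factor models, written here purely as an illustration (the notation and this particular form are ours, not a quotation of the paper), is a regression for unit i in period t with a multiplicative factor term:

  y_{it} = x_{it}^{\top} \beta + u_i^{\top} v_t + \varepsilon_{it}, \qquad \varepsilon_{it} \sim \mathcal{N}(0, \sigma^2),

where the low-dimensional loadings u_i and v_t are estimated from the data, and their inner product captures interactive unit-by-time dependence that additive effects of the form u_i + v_t cannot.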

Roberts, Margaret, Brandon Stewart, Dustin Tingley, Christopher Lucas, Jetson Leder-Luis, Shana Gadarian, Bethany Albertson, and David Rand. “Structural Topic Models for Open-Ended Survey Responses.” American Journal of Political Science 58 (2014): 1064–1082. Print.

Collection and especially analysis of open-ended survey responses are relatively rare in the discipline and when conducted are almost exclusively done through human coding. We present an alternative, semi-automated approach, the structural topic model (STM) (Roberts, Stewart, and Airoldi 2013; Roberts et al. 2013), that draws on recent developments in machine learning based analysis of textual data. A crucial contribution of the method is that it incorporates information about the document, such as the author’s gender, political affiliation, and treatment assignment (if an experimental study). This article focuses on how the STM is helpful for survey researchers and experimentalists. The STM makes analyzing open-ended responses easier, more revealing, and capable of being used to estimate treatment effects. We illustrate these innovations with analysis of text from surveys and experiments.

2013

Roberts, Margaret, Brandon Stewart, Dustin Tingley, and Edoardo Airoldi. “The Structural Topic Model and Applied Social Science.” Advances in Neural Information Processing Systems Workshop on Topic Models: Computation, Application, and Evaluation 2013: n. pag. Print.
Andersen, Judith Pizarro, Roxane Cohen Silver, Brandon Stewart, Billie Koperwas, and Clemens Kirschbaum. “Psychological and Physiological Responses Following Repeated Peer Death.” PLOS ONE 8 (2013): 1–9. Print.
O’Connor, Brendan, Brandon Stewart, and Noah Smith. “Learning to Extract International Relations from Political Context.” Association for Computational Linguistics 2013: n. pag. Print.
Grimmer, Justin, and Brandon Stewart. “Text As Data: The Promise and Pitfalls of Automatic Content Analysis Methods for Political Texts.” Political Analysis 21 (2013): 267–297. Print.

Politics and political conflict often occur in the written and spoken word. Scholars have long recognized this, but the massive costs of analyzing even moderately sized collections of texts have prevented political scientists from using texts in their research. Here lies the promise of automated text analysis: it substantially reduces the costs of analyzing large collections of text. We provide a guide to this exciting new area of research and show how, in many instances, the methods have already obtained part of their promise. But there are pitfalls to using automated methods. Automated text methods are useful, but incorrect, models of language: they are no substitute for careful thought and close reading. Rather, automated text methods augment and amplify human reading abilities. Using the methods requires extensive validation in any one application. With these guiding principles to using automated methods, we clarify misconceptions and errors in the literature and identify open questions in the application of automated text analysis in political science. For scholars to avoid the pitfalls of automated methods, methodologists need to develop new methods specifically for how social scientists use quantitative text methods.

Zhukov, Yuri M., and Brandon M. Stewart. “Choosing Your Neighbors: Networks of Diffusion in International Relations.” International Studies Quarterly 57 (2013): 271–287. Print.

In examining the diffusion of social and political phenomena like regime transition, conflict, and policy change, scholars routinely make choices about how proximity is defined and which neighbors should be considered more important than others. Since each specification offers an alternative view of the networks through which diffusion can take place, one’s decision can exert a significant influence on the magnitude and scope of estimated diffusion effects. This problem is widely recognized, but is rarely the subject of direct analysis. In international relations research, connectivity choices are usually ad hoc, driven more by data availability than by theoretically informed decision criteria. We take a closer look at the assumptions behind these choices, and propose a more systematic method to assess the structural similarity of two or more alternative networks, and select one that most plausibly relates theory to empirics. We apply this method to the spread of democratic regime change, and offer an illustrative example of how neighbor choices might impact predictions and inferences in the case of the 2011 Arab Spring.

2012

Lloyd, Paulette, Beth Simmons, and Brandon Stewart. “Combating Transnational Crime: The Role of Learning and Norm Diffusion in the Current Rule of Law Wave.” Rule of Law Dynamics: In an Era of International and Transnational Governance. Ed. Michael Zürn, André Nollkaemper, and Randall Peerenboom. N.p., 2012. Print.

2009

Stewart, Brandon M., and Yuri M. Zhukov. “Use of Force and Civil–Military Relations in Russia: An Automated Content Analysis.” Small Wars & Insurgencies 20 (2009): 319–343. Print.

Russia’s intervention in the Georgian–South Ossetian conflict has highlighted the need to rigorously examine trends in the public debate over the use of force in Russia. Approaching this debate through the prism of civil–military relations, we take advantage of recent methodological advances in automated content analysis and generate a new dataset of 8000 public statements made by Russia’s political and military leaders during the Putin period. The data show little evidence that military elites exert a restraining influence on Russian foreign and defence policy. Although more hesitant than their political counterparts to embrace an interventionist foreign policy agenda, Russian military elites are considerably more activist in considering the use of force as an instrument of foreign policy.

2007

Shellman, Stephen, and Brandon Stewart. “Predicting Risk Factors Associated With Forced Migration: An Early Warning Model of Haitian Flight.” Civil Wars 9 (2007): 174–199. Print.

This study predicts forced migration events by predicting the civil violence, poor economic conditions, and foreign interventions known to cause individuals to flee their homes in search of refuge. If we can predict forced migration, policy-makers can better plan for humanitarian crises. While the study is limited to predicting Haitian flight to the United States, its strength is its ability to predict weekly flows as opposed to annual flows, providing a greater level of predictive detail than its ‘country-year’ counterparts. We focus on Haiti given that it exhibits most, if not all, of the independent variables included in theories and models of forced migration. Within our temporal domain (1994–2004), Haiti experienced economic instability, low intensity civil conflict, state repression, rebel dissent, and foreign intervention and influence. Given the model’s performance, the study calls for the collection of disaggregated data in additional countries to provide more precise and useful early-warning models of forced migrant events.

Shellman, Stephen, and Brandon Stewart. “Political Persecution or Economic Deprivation? A Time-Series Analysis of Haitian Exodus, 1990-2004.” Conflict Management and Peace Science 24 (2007): 121–137. Print.

This study addresses the factors that lead individuals to flee their homes in search of refuge. Many argue that individuals abandon their homes in favor of an uncertain life elsewhere because of economic hardship, while others argue that threats to their lives, physical person, and liberty cause them to flee. This study engages the debate by analyzing flight patterns over time from Haiti to the United States as a function of economic and security factors. Which factors have the largest influence on Haitian-U.S. migratory patterns? Our results show that both economics and security play a role. However, our analyses are able to distinguish between the effects of different individual economic and security indicators on Haitian-U.S. migration.

2006

Reeves, Andrew, Stephen Shellman, and Brandon Stewart. Fair & Balanced or Fit to Print? The Effects of Media Sources on Statistical Inferences. 2006. Athens, GA.

This paper examines the effects of source bias on statistical inferences drawn from event data analyses. Most event data projects use a single source to code events. For example, most of the early Kansas Event Data System (KEDS) datasets code only Reuters and Agence France Presse (AFP) reports. One of the goals of Project Civil Strife (PCS), a new internal conflict-cooperation event data project, is to code event data from several news sources to garner the most extensive coverage of events and control for bias often found in a single source. Herein, we examine the effects that source bias has on the inferences we draw from statistical time-series models. In this study, we examine domestic political conflict in Indonesia and Cambodia from 1980-2004 using automated content-analyzed datasets collected from multiple sources (i.e. Associated Press, British Broadcasting Corporation, Japan Economic Newswire, United Press International, and Xinhua). The analyses show that we draw different inferences across sources, especially when we disaggregate domestic political groups. We then combine our sources together and eliminate duplicate events to create a multi-source dataset and compare the results to the single-source models. We conclude that there are important differences in the inferences drawn dependent upon source use. Therefore, researchers should (1) check their results across multiple sources and/or (2) analyze multi-source data to test hypotheses when possible.