
IDEAS home Printed from https://ideas.repec.org/p/ecl/stabus/4146.html

Machine Learning Who to Nudge: Causal vs Predictive Targeting in a Field Experiment on Student Financial Aid Renewal

Authors

  • Athey, Susan (Stanford U)
  • Keleher, Niall (Innovations for Poverty Action, New Haven)
  • Spiess, Jann (Stanford U)

Abstract
In many settings, interventions may be more effective for some individuals than others, so that targeting interventions may be beneficial. We analyze the value of targeting in the context of a large-scale field experiment with over 53,000 college students, where the goal was to use "nudges" to encourage students to renew their financial-aid applications before a non-binding deadline. We begin with baseline approaches to targeting. First, we target based on a causal forest that estimates heterogeneous treatment effects and then assigns students to treatment according to those estimated to have the highest treatment effects. Next, we evaluate two alternative targeting policies, one targeting students with low predicted probability of renewing financial aid in the absence of the treatment, the other targeting those with high probability. The predicted baseline outcome is not the ideal criterion for targeting, nor is it a priori clear whether to prioritize low, high, or intermediate predicted probability. Nonetheless, targeting on low baseline outcomes is common in practice, for example because the relationship between individual characteristics and treatment effects is often difficult or impossible to estimate with historical data. We propose hybrid approaches that incorporate the strengths of both predictive approaches (accurate estimation) and causal approaches (correct criterion); we show that targeting intermediate baseline outcomes is most effective, while targeting based on low baseline outcomes is detrimental. In one year of the experiment, nudging all students improved early filing by an average of 6.4 percentage points over a baseline average of 37% filing, and we estimate that targeting half of the students using our preferred policy attains around 75% of this benefit.
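The targeting policies compared in the abstract can be sketched in a few lines of code. This is an illustrative toy, not the authors' implementation: `tau_hat` (estimated treatment effects, e.g. from a causal forest) and `p_hat` (predicted baseline probability of filing without a nudge) are hypothetical model outputs, and the numbers below are made up.

```python
# Three rules for choosing which half of a student population to nudge,
# given per-student model estimates. Illustrative sketch only.

def target_half(scores):
    """Return the indices of the half of the population with the highest scores."""
    n = len(scores)
    order = sorted(range(n), key=lambda i: scores[i], reverse=True)
    return set(order[: n // 2])

def causal_targeting(tau_hat, p_hat):
    # Treat students with the largest estimated treatment effects.
    return target_half(tau_hat)

def low_baseline_targeting(tau_hat, p_hat):
    # Treat students least likely to file on their own -- common in
    # practice, but the paper finds this criterion can be detrimental.
    return target_half([-p for p in p_hat])

def intermediate_baseline_targeting(tau_hat, p_hat):
    # Treat students with intermediate baseline probabilities (closest to
    # 50% here), in the spirit of the paper's finding that targeting
    # intermediate baseline outcomes is most effective.
    return target_half([-abs(p - 0.5) for p in p_hat])

# Toy example with four students (values chosen for illustration):
tau_hat = [0.02, 0.10, 0.08, 0.01]
p_hat = [0.95, 0.50, 0.40, 0.05]

print(causal_targeting(tau_hat, p_hat))                 # {1, 2}
print(low_baseline_targeting(tau_hat, p_hat))           # {2, 3}
print(intermediate_baseline_targeting(tau_hat, p_hat))  # {1, 2}
```

In this toy example the causal and intermediate-baseline rules happen to select the same students, while the low-baseline rule picks a different group; the paper's contribution is to measure how such rules compare at scale.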

Suggested Citation

  • Athey, Susan & Keleher, Niall & Spiess, Jann, 2023. "Machine Learning Who to Nudge: Causal vs Predictive Targeting in a Field Experiment on Student Financial Aid Renewal," Research Papers 4146, Stanford University, Graduate School of Business.
  • Handle: RePEc:ecl:stabus:4146

    Download full text from publisher

    File URL: https://www.gsb.stanford.edu/faculty-research/working-papers/machine-learning-who-nudge-causal-vs-predictive-targeting-field
    Download Restriction: no


    References listed on IDEAS

    1. Jon Kleinberg & Jens Ludwig & Sendhil Mullainathan & Ziad Obermeyer, 2015. "Prediction Policy Problems," American Economic Review, American Economic Association, vol. 105(5), pages 491-495, May.
    2. Toru Kitagawa & Aleksey Tetenov, 2018. "Who Should Be Treated? Empirical Welfare Maximization Methods for Treatment Choice," Econometrica, Econometric Society, vol. 86(2), pages 591-616, March.
    3. Xinkun Nie & Emma Brunskill & Stefan Wager, 2021. "Learning When-to-Treat Policies," Journal of the American Statistical Association, Taylor & Francis Journals, vol. 116(533), pages 392-409, January.
    4. Susan Athey & Peter J. Bickel & Aiyou Chen & Guido W. Imbens & Michael Pollmann, 2021. "Semiparametric Estimation of Treatment Effects in Randomized Experiments," Papers 2109.02603, arXiv.org, revised Aug 2023.
    5. Zhengyuan Zhou & Susan Athey & Stefan Wager, 2023. "Offline Multi-Action Policy Learning: Generalization and Optimization," Operations Research, INFORMS, vol. 71(1), pages 148-183, January.
    6. Stefan Wager & Susan Athey, 2018. "Estimation and Inference of Heterogeneous Treatment Effects using Random Forests," Journal of the American Statistical Association, Taylor & Francis Journals, vol. 113(523), pages 1228-1242, July.
    7. Susan Athey & Guido W. Imbens, 2019. "Machine Learning Methods That Economists Should Know About," Annual Review of Economics, Annual Reviews, vol. 11(1), pages 685-725, August.
    8. Matthew S. Johnson & David I. Levine & Michael W. Toffel, 2023. "Improving Regulatory Effectiveness through Better Targeting: Evidence from OSHA," American Economic Journal: Applied Economics, American Economic Association, vol. 15(4), pages 30-67, October.
    9. Susan Athey & Stefan Wager, 2021. "Policy Learning With Observational Data," Econometrica, Econometric Society, vol. 89(1), pages 133-161, January.
    10. Johannes Haushofer & Paul Niehaus & Carlos Paramo & Edward Miguel & Michael Walker, 2025. "Targeting Impact versus Deprivation," American Economic Review, American Economic Association, vol. 115(6), pages 1936-1974, June.
    11. Michael C. Knaus & Michael Lechner & Anthony Strittmatter, 2022. "Heterogeneous Employment Effects of Job Search Programs: A Machine Learning Approach," Journal of Human Resources, University of Wisconsin Press, vol. 57(2), pages 597-636.
    12. Yingqi Zhao & Donglin Zeng & A. John Rush & Michael R. Kosorok, 2012. "Estimating Individualized Treatment Rules Using Outcome Weighted Learning," Journal of the American Statistical Association, Taylor & Francis Journals, vol. 107(499), pages 1106-1118, September.
    13. Sendhil Mullainathan & Jann Spiess, 2017. "Machine Learning: An Applied Econometric Approach," Journal of Economic Perspectives, American Economic Association, vol. 31(2), pages 87-106, Spring.
    14. Benjamin L. Castleman & Lindsay C. Page, 2016. "Freshman Year Financial Aid Nudges: An Experiment to Increase FAFSA Renewal and College Persistence," Journal of Human Resources, University of Wisconsin Press, vol. 51(2), pages 389-415.
    15. Athey, Susan & Imbens, Guido W., 2019. "Machine Learning Methods Economists Should Know About," Research Papers 3776, Stanford University, Graduate School of Business.
    16. Glynn, Adam N. & Quinn, Kevin M., 2010. "An Introduction to the Augmented Inverse Propensity Weighted Estimator," Political Analysis, Cambridge University Press, vol. 18(1), pages 36-56, January.
    17. Jeremy Yang & Dean Eckles & Paramveer Dhillon & Sinan Aral, 2024. "Targeting for Long-Term Outcomes," Management Science, INFORMS, vol. 70(6), pages 3841-3855, June.
    18. Lihui Zhao & Lu Tian & Tianxi Cai & Brian Claggett & L. J. Wei, 2013. "Effectively Selecting a Target Population for a Future Comparative Study," Journal of the American Statistical Association, Taylor & Francis Journals, vol. 108(502), pages 527-539, June.

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.
    Cited by:

    1. Anya Shchetkina & Ron Berman, 2024. "When Is Heterogeneity Actionable for Personalization?," Papers 2411.16552, arXiv.org.
    2. Athey, Susan & Palikot, Emil, 2024. "The Value of Non-traditional Credentials in the Labor Market," Research Papers 4189, Stanford University, Graduate School of Business.
    3. Kirill Ponomarev & Vira Semenova, 2024. "On the Lower Confidence Band for the Optimal Welfare in Policy Learning," Papers 2410.07443, arXiv.org, revised Sep 2025.
    4. Chowdhury, Shyamal & Hasan, Syed & Sharma, Uttam, 2024. "The Role of Trainee Selection in the Effectiveness of Vocational Training: Evidence from a Randomized Controlled Trial in Nepal," IZA Discussion Papers 16705, Institute of Labor Economics (IZA).
    5. Bruno Fava, 2024. "Predicting the Distribution of Treatment Effects via Covariate-Adjustment, with an Application to Microcredit," Papers 2407.14635, arXiv.org, revised Jul 2025.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Tobias Cagala & Ulrich Glogowsky & Johannes Rincke & Anthony Strittmatter, 2021. "Optimal Targeting in Fundraising: A Machine-Learning Approach," Economics working papers 2021-08, Department of Economics, Johannes Kepler University Linz, Austria.
    2. Goller, Daniel & Lechner, Michael & Pongratz, Tamara & Wolff, Joachim, 2025. "Active labor market policies for the long-term unemployed: New evidence from causal machine learning," Labour Economics, Elsevier, vol. 94(C).
    3. Tobias Cagala & Ulrich Glogowsky & Johannes Rincke & Anthony Strittmatter, 2021. "Optimal Targeting in Fundraising: A Causal Machine-Learning Approach," Papers 2103.10251, arXiv.org, revised Sep 2021.
    4. Gabriel Okasa, 2022. "Meta-Learners for Estimation of Causal Effects: Finite Sample Cross-Fit Performance," Papers 2201.12692, arXiv.org.
    5. Michael C Knaus, 2022. "Double machine learning-based programme evaluation under unconfoundedness [Econometric methods for program evaluation]," The Econometrics Journal, Royal Economic Society, vol. 25(3), pages 602-627.
    6. Cockx, Bart & Lechner, Michael & Bollens, Joost, 2023. "Priority to unemployed immigrants? A causal machine learning evaluation of training in Belgium," Labour Economics, Elsevier, vol. 80(C).
    7. Augustine Denteh & Helge Liebert, 2022. "Who Increases Emergency Department Use? New Insights from the Oregon Health Insurance Experiment," Papers 2201.07072, arXiv.org, revised Apr 2023.
    8. Combes, Pierre-Philippe & Gobillon, Laurent & Zylberberg, Yanos, 2022. "Urban economics in a historical perspective: Recovering data with machine learning," Regional Science and Urban Economics, Elsevier, vol. 94(C).
    9. Michael Lechner, 2023. "Causal Machine Learning and its use for public policy," Swiss Journal of Economics and Statistics, Springer;Swiss Society of Economics and Statistics, vol. 159(1), pages 1-15, December.
    10. Bas Bosma & Arjen Witteloostuijn, 2024. "Machine learning in international business," Journal of International Business Studies, Palgrave Macmillan;Academy of International Business, vol. 55(6), pages 676-702, August.
    11. Falco J. Bargagli Stoffi & Kenneth De Beckker & Joana E. Maldonado & Kristof De Witte, 2021. "Assessing Sensitivity of Machine Learning Predictions.A Novel Toolbox with an Application to Financial Literacy," Papers 2102.04382, arXiv.org.
    12. Carlos Fernández-Loría & Foster Provost & Jesse Anderton & Benjamin Carterette & Praveen Chandar, 2023. "A Comparison of Methods for Treatment Assignment with an Application to Playlist Generation," Information Systems Research, INFORMS, vol. 34(2), pages 786-803, June.
    13. Lundberg, Ian & Brand, Jennie E. & Jeon, Nanum, 2022. "Researcher reasoning meets computational capacity: Machine learning for social science," SocArXiv s5zc8, Center for Open Science.
    14. Akash Malhotra, 2021. "A hybrid econometric–machine learning approach for relative importance analysis: prioritizing food policy," Eurasian Economic Review, Springer;Eurasia Business and Economics Society, vol. 11(3), pages 549-581, September.
    15. Christensen, Peter & Francisco, Paul & Myers, Erica & Shao, Hansen & Souza, Mateus, 2024. "Energy efficiency can deliver for climate policy: Evidence from machine learning-based targeting," Journal of Public Economics, Elsevier, vol. 234(C).
    16. Sophie-Charlotte Klose & Johannes Lederer, 2020. "A Pipeline for Variable Selection and False Discovery Rate Control With an Application in Labor Economics," Papers 2006.12296, arXiv.org, revised Jun 2020.
    17. Kyle Colangelo & Ying-Ying Lee, 2019. "Double debiased machine learning nonparametric inference with continuous treatments," CeMMAP working papers CWP72/19, Centre for Microdata Methods and Practice, Institute for Fiscal Studies.
    18. Kyle Colangelo & Ying-Ying Lee, 2019. "Double debiased machine learning nonparametric inference with continuous treatments," CeMMAP working papers CWP54/19, Centre for Microdata Methods and Practice, Institute for Fiscal Studies.
    19. Garbero, Alessandra & Sakos, Grayson & Cerulli, Giovanni, 2023. "Towards data-driven project design: Providing optimal treatment rules for development projects," Socio-Economic Planning Sciences, Elsevier, vol. 89(C).
    20. Rama K. Malladi, 2024. "Benchmark Analysis of Machine Learning Methods to Forecast the U.S. Annual Inflation Rate During a High-Decile Inflation Period," Computational Economics, Springer;Society for Computational Economics, vol. 64(1), pages 335-375, July.



    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.