

Contents
185 found
1 — 50 / 185
Material to categorize
  1. Reasons, rationality, and opaque sweetening: Hare's “No Reason” argument for taking the sugar.Ryan Doody - forthcoming - Noûs.
    Caspar Hare presents a compelling argument for “taking the sugar” in cases of opaque sweetening: you have no reason to take the unsweetened option, and you have some reason to take the sweetened one. I argue that this argument fails—there is a perfectly good sense in which you do have a reason to take the unsweetened option. I suggest a way to amend Hare's argument to overcome this objection. I then argue that, although the improved version fares better, there is (...)
  2. Theorie und Heuristik der individuellen Risikoanalyse.Sebastian Simmert - 2021 - Baden-Baden: Tectum.
  3. Suspension of Judgment, Non-additivity, and Additivity of Possibilities.Aldo Filomeno - 2025 - Acta Analytica 40 (1):21-42.
    In situations where we ignore everything but the space of possibilities, we ought to suspend judgment—that is, remain agnostic—about which of these possibilities is the case. This means that we cannot sum our degrees of belief in different possibilities, something that has been formalised as an axiom of non-additivity. Consistent with this way of representing our ignorance, I defend a doxastic norm that recommends that we should nevertheless follow a certain additivity of possibilities: even if we cannot sum degrees of (...)
  4. On the Offense against Fanaticism.Christopher Bottomley & Timothy Luke Williamson - 2024 - Ethics 135 (2):320-332.
    Fanatics claim that we must give up guaranteed goods in pursuit of extremely improbable Utopia. Recently, Wilkinson has defended Fanaticism by arguing that nonfanatics must violate at least one plausible rational requirement. We reject Fanaticism. We show that by taking stakes-sensitive risk attitudes seriously, we can resist the core premises in Wilkinson’s argument.
  5. Making Transformative Decisions.Petronella Randell - 2024 - Dissertation, University of St. Andrews
    This thesis investigates the question of whether we can make transformative decisions rationally. The first chapter introduces and explores the nature of transformative experiences: what are they, and how do they bring about such drastic change? I argue that there is a tension between Paul’s (2014) characterisation of transformative experience and arguments that transformative experiences are imaginable. I propose a broader characterisation of transformative experiences on which transformation isn’t driven by experiential acquaintance. From Chapter 2 onwards, the thesis focuses on (...)
  6. Probability, Normalcy, and the Right against Risk Imposition.Martin Smith - 2024 - Journal of Ethics and Social Philosophy 27 (3).
    Many philosophers accept that, as well as having a right that others not harm us, we also have a right that others not subject us to a risk of harm. And yet, when we attempt to spell out precisely what this ‘right against risk imposition’ involves, we encounter a series of notorious puzzles. Existing attempts to deal with these puzzles have tended to focus on the nature of rights – but I propose an approach that focusses instead on the nature (...)
    1 citation
  7. How a pure risk of harm can itself be a harm: A reply to Rowe.H. Orri Stefánsson - 2024 - Analysis 84 (1):112-116.
    Rowe has recently argued that pure risk of harm cannot itself be a harm. I respond to Rowe and argue that given an appropriate understanding of objective probabilities, pure objective risk of harm can itself be a harm.
    3 citations
  8. Climate Change and Decision Theory.Andrea S. Asker & H. Orri Stefánsson - 2023 - In Gianfranco Pellegrino & Marcello Di Paola, Handbook of the Philosophy of Climate Change. Cham: Springer. pp. 267-286.
    Many people are worried about the harmful effects of climate change but nevertheless enjoy some activities that contribute to the emission of greenhouse gas (driving, flying, eating meat, etc.), the main cause of climate change. How should such people make choices between engaging in and refraining from enjoyable greenhouse-gas-emitting activities? In this chapter, we look at the answer provided by decision theory. Some scholars think that the right answer is given by interactive decision theory, or game theory; and moreover think (...)
  9. In defence of Pigou-Dalton for chances.H. Orri Stefánsson - 2023 - Utilitas 35 (4):292-311.
    I defend a weak version of the Pigou-Dalton principle for chances. The principle says that it is better to increase the survival chance of a person who is more likely to die rather than a person who is less likely to die, assuming that the two people do not differ in any other morally relevant respect. The principle justifies plausible moral judgements that standard ex post views, such as prioritarianism and rank-dependent egalitarianism, cannot accommodate. However, the principle can be justified (...)
    2 citations
  10. An objection to the modal account of risk.Martin Smith - 2023 - Synthese 201 (5):1-9.
    In a recent paper in this journal Duncan Pritchard responds to an objection to the modal account of risk pressed by Ebert, Smith and Durbach ( 2020 ). In this paper, I expand upon the objection and argue that it still stands. I go on to consider a more general question raised by this exchange – whether risk is ‘objective’, or whether it is something that varies from one perspective to another.
    3 citations
  11. Ignore risk; Maximize expected moral value.Michael Zhao - 2021 - Noûs 57 (1):144-161.
    Many philosophers assume that, when making moral decisions under uncertainty, we should choose the option that has the greatest expected moral value, regardless of how risky it is. But their arguments for maximizing expected moral value do not support it over rival, risk-averse approaches. In this paper, I present a novel argument for maximizing expected value: when we think about larger series of decisions that each decision is a part of, all but the most risk-averse agents would prefer that we (...)
    9 citations
  12. Risk, Overdiagnosis and Ethical Justifications.Wendy A. Rogers, Vikki A. Entwistle & Stacy M. Carter - 2019 - Health Care Analysis 27 (4):231-248.
    Many healthcare practices expose people to risks of harmful outcomes. However, the major theories of moral philosophy struggle to assess whether, when and why it is ethically justifiable to expose individuals to risks, as opposed to actually harming them. Sven Ove Hansson has proposed an approach to the ethical assessment of risk imposition that encourages attention to factors including questions of justice in the distribution of advantage and risk, people’s acceptance or otherwise of risks, and the scope individuals have to (...)
    3 citations
  13. Varieties of Risk.Philip A. Ebert, Martin Smith & Ian Durbach - 2020 - Philosophy and Phenomenological Research 101 (2):432-455.
    The notion of risk plays a central role in economics, finance, health, psychology, law and elsewhere, and is prevalent in managing challenges and resources in day-to-day life. In recent work, Duncan Pritchard (2015, 2016) has argued against the orthodox probabilistic conception of risk on which the risk of a hypothetical scenario is determined by how probable it is, and in favour of a modal conception on which the risk of a hypothetical scenario is determined by how modally close it is. (...)
    21 citations
Existential Risk
  1. (1 other version) Existentialist risk and value misalignment.Ariela Tubert & Justin Tiehen - 2025 - Philosophical Studies 182 (7).
    We argue that two long-term goals of AI research stand in tension with one another. The first involves creating AI that is safe, where this is understood as solving the problem of value alignment. The second involves creating artificial general intelligence, meaning AI that operates at or beyond human capacity across all or many intellectual domains. Our argument focuses on the human capacity to make what we call “existential choices”, choices that transform who we are as persons, including transforming what (...)
    6 citations
  2. Review of Mulgan's Philosophy for an Ending World. [REVIEW]Felipe Pereira - forthcoming - Journal of Moral Philosophy.
  3. Against the Manhattan project framing of AI alignment.Simon Friederich & Leonard Dung - forthcoming - Mind and Language.
    In response to the worry that autonomous generally intelligent artificial agents may at some point take over control of human affairs, a common suggestion is that we should “solve the alignment problem” for such agents. We show that current discourse around this suggestion often uses a particular framing of artificial intelligence (AI) alignment as binary, a natural kind, mainly a technical‐scientific problem, realistically achievable, or clearly operationalizable. Each of these assumptions may not actually be true. We further argue that this (...)
    1 citation
  4. Fanaticism and Knowledge.Frank Hong - 2025 - Synthese 206 (1):1-30.
    It is estimated that five hundred billion dollars are spent on philanthropy every year. How should we spend those resources to do the most good? One possible answer, based on expected-value reasoning, is that we should spend those resources “fanatically” on interventions that can possibly produce enormous benefit, but with minuscule chance of success. This paper develops a new kind of knowledge-first decision theory that implies that we should not spend those resources fanatically. As such, this paper would be of (...)
  5. Time Machine as Existential Risk.Alexey Turchin - manuscript
    Does the potential creation of a time machine present an existential risk to our current timeline? Time travel is theoretically possible under general relativity, and there is steady progress (similar to Moore's Law) in developing ideas about how to create time machines with decreasing effort. While time travel may seem like a remote possibility due to its dependence on space travel to black holes, there is a concept of a quantum time machine (suggested by Deutsch in 1991 and further developed (...)
  6. A Timing Problem for Instrumental Convergence.Rhys Southan, Helena Ward & Jen Semler - forthcoming - Philosophical Studies:1-24.
    Those who worry about a superintelligent AI destroying humanity often appeal to the instrumental convergence thesis—the claim that even if we don’t know what a superintelligence’s ultimate goals will be, we can expect it to pursue various instrumental goals which are useful for achieving most ends. In this paper, we argue that one of these proposed goals is mistaken. We argue that instrumental goal preservation—the claim that a rational agent will tend to preserve its goals—is false on the basis of (...)
  7. Extension and replacement.Michal Masny - 2025 - Philosophical Studies 182 (5):1115-1132.
    Many people believe that it is better to extend the length of a happy life than to create a new happy life, even if the total welfare is the same in both cases. Despite the popularity of this view, one would be hard-pressed to find a fully compelling justification for it in the literature. This paper develops a novel account of why and when extension is better than replacement that applies not just to persons but also to non-human animals and (...)
    1 citation
  8. Misalignment or misuse? The AGI alignment tradeoff.Max Hellrigel-Holderbaum & Leonard Dung - forthcoming - Philosophical Studies.
    Creating systems that are aligned with our goals is seen as a leading approach to creating safe and beneficial AI, both in leading AI companies and in the academic field of AI safety. We defend the view that misaligned AGI – future, generally intelligent (robotic) AI agents – poses catastrophic risks. At the same time, we support the view that aligned AGI creates a substantial risk of catastrophic misuse by humans. While both risks are severe and stand in tension with one (...)
  9. Will artificial agents pursue power by default?Christian Tarsney - manuscript
    Researchers worried about catastrophic risks from advanced AI have argued that we should expect sufficiently capable AI agents to pursue power over humanity because power is a convergent instrumental goal, something that is useful for a wide range of final goals. Others have recently expressed skepticism of these claims. This paper aims to formalize the concepts of instrumental convergence and power-seeking in an abstract, decision-theoretic framework, and to assess the claim that power is a convergent instrumental goal. I conclude that (...)
  10. Simulations and Catastrophic Risks.Bradford Saad - 2023 - Sentience Institute Report.
  11. The argument for near-term human disempowerment through AI.Leonard Dung - 2025 - AI and Society 40 (3):1195-1208.
    Many researchers and intellectuals warn about extreme risks from artificial intelligence. However, these warnings typically come without systematic arguments in their support. This paper provides an argument that AI will lead to the permanent disempowerment of humanity, e.g. human extinction, by 2100. It rests on four substantive premises which it motivates and defends: first, the speed of advances in AI capability, as well as the capability level current systems have already reached, suggest that it is practically possible to build AI systems (...)
    7 citations
  12. Capitalism and the Very Long Term.Nikhil Venkatesh - 2025 - Moral Philosophy and Politics 12 (1):33-58.
    Capitalism is defined as the economic structure in which decisions over production are largely made by or on behalf of individuals in virtue of their private property ownership, subject to the incentives and constraints of market competition. In this paper, I will argue that considerations of long-term welfare, such as those developed by Greaves and MacAskill (2021), support anticapitalism in a weak sense (reducing the extent to which the economy is capitalistic) and perhaps support anticapitalism in a stronger sense (establishing (...)
  13. Meaningful Lives and Meaningful Futures.Michal Masny - 2025 - Journal of Ethics and Social Philosophy 30 (1).
    What moral reasons, if any, do we have to prevent the extinction of humanity? In “Unfinished Business,” Jonathan Knutzen argues that certain further developments in culture would make our history more “collectively meaningful” and that premature extinction would be bad because it would close off that possibility. Here, I critically examine this proposal. I argue that if collective meaningfulness is analogous to individual meaningfulness, then our meaning-based reasons to prevent the extinction of humanity are substantially different from the reasons discussed (...)
  14. Expected value, to a point: Moral decision‐making under background uncertainty.Christian Tarsney - forthcoming - Noûs.
    Expected value maximization gives plausible guidance for moral decision‐making under uncertainty in many situations. But it has unappetizing implications in ‘Pascalian’ situations involving tiny probabilities of extreme outcomes. This paper shows, first, that under realistic levels of ‘background uncertainty’ about sources of value independent of one's present choice, a widely accepted and apparently innocuous principle—stochastic dominance—requires that prospects be ranked by the expected value of their consequences in most ordinary choice situations. But second, this implication does not hold when differences (...)
    1 citation
  15. Reducing Existential Risk By Reducing The Allure Of Unwarranted Antibiotics: Two low-cost interventions.Nick Byrd & Olivia Parlow - manuscript
    Over one million annual deaths have been attributed to bacterial antimicrobial resistance. Although antibiotics have saved countless other lives, overuse and misuse of antibiotics increases this global threat. Developing new antibiotics and retraining clinicians can be undermined by patients who pressure clinicians to prescribe unnecessary antibiotics. So we validated two low-cost, scalable interventions for improving antibiotic decisions in an online randomized control trial and a pre-registered replication (N = 985). Both first-person vignette experiments found that an infographic and text message (...)
  16. Effective Altruism, Disaster Prevention, and the Possibility of Hell: A Dilemma for Secular Longtermists (12th edition).Eric Sampson - forthcoming - Oxford Studies in Philosophy of Religion.
    Longtermist Effective Altruists (EAs) aim to mitigate the risk of existential catastrophes. In this paper, I have three goals. First, I identify a catastrophic risk that EAs have completely ignored. I call it religious catastrophe: the threat that (as Christians and Muslims have warned for centuries) billions of people stand in danger of going to hell for all eternity. Second, I argue that, even by secular EA lights, religious catastrophe is at least as bad and at least as probable (...)
  17. Existential risk and equal political liberty.J. Joseph Porter & Adam F. Gibbons - 2024 - Asian Journal of Philosophy 3 (2):1-26.
    Rawls famously argues that the parties in the original position would agree upon the two principles of justice. Among other things, these principles guarantee equal political liberty—that is, democracy—as a requirement of justice. We argue on the contrary that the parties have reason to reject this requirement. As we show, by Rawls’ own lights, the parties would be greatly concerned to mitigate existential risk. But it is doubtful whether democracy always minimizes such risk. Indeed, no one currently knows which political (...)
    2 citations
  18. Artificial intelligence, existential risk and equity: the need for multigenerational bioethics.Kyle Fiore Law, Stylianos Syropoulos & Brian D. Earp - 2024 - Journal of Medical Ethics 50 (12):799-801.
    “Future people count. There could be a lot of them. We can make their lives better.” (William MacAskill, What We Owe The Future) “[Longtermism is] quite possibly the most dangerous secular belief system in the world today.” (Émile P. Torres, Against Longtermism) Philosophers, psychologists, politicians and even some tech billionaires have sounded the alarm about artificial intelligence (AI) and the dangers it may pose to the long-term future of humanity. (...)
  19. Existential risk and the justice turn in bioethics.Paolo Corsico - 2024 - Journal of Medical Ethics 50 (12):824-824.
    ‘Who argues what’ bears a certain relevance in relation to what is being argued. We are entitled to know those personal circumstances which play a significant role in relation to the argument one supports, so that we can take those circumstances into consideration when evaluating their argument. This is why journals have conflict of interest declarations, and why we value reflexivity in the social sciences. We also often perform double-blind peer review. We recognise that the evaluation of certain statements of (...)
    2 citations
  20. “Emergent Abilities,” AI, and Biosecurity: Conceptual Ambiguity, Stability, and Policy.Alex John London - 2024 - Disincentivizing Bioweapons: Theory and Policy Approaches.
    Recent claims that artificial intelligence (AI) systems demonstrate “emergent abilities” have fueled excitement but also fear grounded in the prospect that such systems may enable a wider range of parties to make unprecedented advances in areas that include the development of chemical or biological weapons. Ambiguity surrounding the term “emergent abilities” has added avoidable uncertainty to a topic that has the potential to destabilize the strategic landscape, including the perception of key parties about the viability of nonproliferation efforts. To avert (...)
    1 citation
  21. Artificial Intelligence 2024 - 2034: What to expect in the next ten years.Demetrius Floudas - 2024 - 'Agi Talks' Series at Daniweb.
    In this public communication, AI policy theorist Demetrius Floudas introduces a novel era classification for the AI epoch and reveals the hidden dangers of AGI, predicting the potential obsolescence of humanity. In response, he proposes a provocative International Control Treaty. According to this scheme, the age of AI will unfold in three distinct phases, introduced here for the first time. An AGI Control & non-Proliferation Treaty may be humanity’s only safeguard. This piece aims to provide a publicly accessible exposé (...)
  22. 'Everything you always wanted to know about Atomic Warfare but were afraid to ask': Nuclear Strategy in the Ukraine War era.Demetrius Floudas - forthcoming - Cambridge Existential Risk Initiative Termly Lectures; Emmanuel College, University of Cambridge.
    The ongoing conflict in Ukraine constitutes a poignant reminder of the enduring relevance and potential devastation associated with nuclear weapons. For decades, the possibility of such catastrophic conflict has not seemed as imminent as it does in current world affairs. This contribution presents a comprehensive analysis of nuclear strategy for the 21st century. By examining the evolving geostrategic landscape, the talk illuminates key concepts such as nuclear posture, credible deterrence, first & second strike capabilities, flexible response, EMP, variable yield, (...)
  23. AI Survival Stories: a Taxonomic Analysis of AI Existential Risk.Herman Cappelen, Simon Goldstein & John Hawthorne - forthcoming - Philosophy of AI.
    Since the release of ChatGPT, there has been a lot of debate about whether AI systems pose an existential risk to humanity. This paper develops a general framework for thinking about the existential risk of AI systems. We analyze a two-premise argument that AI systems pose a threat to humanity. Premise one: AI systems will become extremely powerful. Premise two: if AI systems become extremely powerful, they will destroy humanity. We use these two premises to construct a taxonomy of ‘survival (...)
    3 citations
  24. Why prevent human extinction?James Fanciullo - 2024 - Philosophy and Phenomenological Research 109 (2):650-662.
    Many of us think human extinction would be a very bad thing, and that we have moral reasons to prevent it. But there is disagreement over what would make extinction so bad, and thus over what grounds these moral reasons. Recently, several theorists have argued that our reasons to prevent extinction stem not just from the value of the welfare of future lives, but also from certain additional values relating to the existence of humanity itself (for example, humanity’s “final” value, (...)
    1 citation
  25. Deep Uncertainty and Incommensurability: General Cautions about Precaution.Rush T. Stewart - forthcoming - Philosophy of Science.
    The precautionary principle is invoked in a number of important personal and policy decision contexts. Peterson shows that certain ways of making the principle precise are inconsistent with other criteria of decision-making. Some object that the results do not apply to cases of deep uncertainty or value incommensurability which are alleged to be in the principle’s wheelhouse. First, I show that Peterson’s impossibility results can be generalized considerably to cover cases of both deep uncertainty and incommensurability. Second, I contrast an (...)
  26. Mistakes in the moral mathematics of existential risk.David Thorstad - 2024 - Ethics 135 (1):122-150.
    Longtermists have recently argued that it is overwhelmingly important to do what we can to mitigate existential risks to humanity. I consider three mistakes that are often made in calculating the value of existential risk mitigation. I show how correcting these mistakes pushes the value of existential risk mitigation substantially below leading estimates, potentially low enough to threaten the normative case for existential risk mitigation. I use this discussion to draw four positive lessons for the study of existential risk. (...)
    5 citations
  27. Life-Suspending Technologies, Cryonics, and Catastrophic Risks.Andrea Sauchelli - 2024 - Science and Engineering Ethics 30 (37):1-16.
    I defend the claim that life-suspending technologies can constitute a catastrophic and existential security factor for risks structurally similar to those related to climate change. The gist of the argument is that, under certain conditions, life-suspending technologies such as cryonics can provide self-interested actors with incentives to efficiently tackle such risks—in particular, they provide reasons to overcome certain manifestations of generational egoism, a risk factor of several catastrophic and existential risks. Provided we have reasons to decrease catastrophic and existential risks (...)
  28. Rethinking the Redlines Against AI Existential Risks.Yi Zeng, Xin Guan, Enmeng Lu & Jinyu Fan - manuscript
    The ongoing evolution of advanced AI systems will have profound, enduring, and significant impacts on human existence that must not be overlooked. These impacts range from empowering humanity to achieve unprecedented transcendence to potentially causing catastrophic threats to our existence. To proactively and preventively mitigate these potential threats, it is crucial to establish clear redlines to prevent AI-induced existential risks by constraining and regulating advanced AI and their related AI actors. This paper explores different concepts of AI existential risk, connects (...)
  29. What We Owe the Future by William MacAskill. [REVIEW]Daniel John Sportiello - 2024 - American Catholic Philosophical Quarterly 98 (1):121-124.
  30. Nuclear Fine-Tuning and the Illusion of Teleology.Ember Reed - 2022 - Sound Ideas.
    Recent existential-risk thinkers have noted that the analysis of the fine-tuning argument for God’s existence and the analysis of certain forms of existential risk employ similar types of reasoning. This paper argues that insofar as the “many worlds objection” undermines the inference to God’s existence from universal fine-tuning, a similar many worlds objection undermines the inference that the historic risk of global nuclear catastrophe has been low from the fact that no such catastrophe has occurred in our world. A (...)
  31. (1 other version) Non-Additive Axiologies in Large Worlds.Christian Tarsney & Teruji Thomas - 2024 - Ergo: An Open Access Journal of Philosophy 11.
    Is the overall value of a world just the sum of values contributed by each value-bearing entity in that world? Additively separable axiologies (like total utilitarianism, prioritarianism, and critical level views) say 'yes', but non-additive axiologies (like average utilitarianism, rank-discounted utilitarianism, and variable value views) say 'no'. This distinction appears to be practically important: among other things, additive axiologies generally assign great importance to large changes in population size, and therefore tend to strongly prioritize the long-term survival of humanity over (...)
    5 citations
  32. Existential Risk, Climate Change, and Nonideal Justice.Alex McLaughlin - 2024 - The Monist 107 (2):190-206.
    Climate change is often described as an existential risk to the human species, but this terminology has generally been avoided in the climate-justice literature in analytic philosophy. I investigate the source of this disconnect and explore the prospects for incorporating the idea of climate change as an existential risk into debates about climate justice. The concept of existential risk does not feature prominently in these discussions, I suggest, because assumptions that structure ‘ideal’ accounts of climate justice ensure that the prospect (...)
    3 citations
  33. Concepts of Existential Catastrophe.Hilary Greaves - 2024 - The Monist 107 (2):109-129.
    The notion of existential catastrophe is increasingly appealed to in discussion of risk management around emerging technologies, but it is not completely clear what this notion amounts to. Here, I provide an opinionated survey of the space of plausibly useful definitions of existential catastrophe. Inter alia, I discuss: whether to define existential catastrophe in ex post or ex ante terms, whether an ex ante definition should be in terms of loss of expected value or loss of potential, and what kind (...)
    1 citation
  34. Should longtermists recommend hastening extinction rather than delaying it?Richard Pettigrew - 2024 - The Monist 107 (2):130-145.
    Longtermism is the view that the most urgent global priorities, and those to which we should devote the largest portion of our resources, are those that focus on (i) ensuring a long future for humanity, and perhaps sentient or intelligent life more generally, and (ii) improving the quality of the lives that inhabit that long future. While it is by no means the only one, the argument most commonly given for this conclusion is that these interventions have greater expected goodness (...)
    3 citations
  35. Existential Risk, Astronomical Waste, and the Reasonableness of a Pure Time Preference for Well-Being.S. J. Beard & Patrick Kaczmarek - 2024 - The Monist 107 (2):157-175.
    In this paper, we argue that our moral concern for future well-being should reduce over time due to important practical considerations about how humans interact with spacetime. After surveying several of these considerations (around equality, special duties, existential contingency, and overlapping moral concern) we develop a set of core principles that can both explain their moral significance and highlight why this is inherently bound up with our relationship with spacetime. These relate to the equitable distribution of (1) moral concern in (...)
  36. Risk, Non-Identity, and Extinction.Kacper Kowalczyk & Nikhil Venkatesh - 2024 - The Monist 107 (2):146–156.
    This paper examines a recent argument in favour of strong precautionary action—possibly including working to hasten human extinction—on the basis of a decision-theoretic view that accommodates the risk-attitudes of all affected while giving more weight to the more risk-averse attitudes. First, we dispute the need to take into account other people’s attitudes towards risk at all. Second, we argue that a version of the non-identity problem undermines the case for doing so in the context of future people. Lastly, we suggest (...)
    1 citation
  37. Extinction Risks from AI: Invisible to Science?Vojtech Kovarik, Christiaan van Merwijk & Ida Mattsson - manuscript
    In an effort to inform the discussion surrounding existential risks from AI, we formulate Extinction-level Goodhart’s Law as “Virtually any goal specification, pursued to the extreme, will result in the extinction of humanity”, and we aim to understand which formal models are suitable for investigating this hypothesis. Note that we remain agnostic as to whether Extinction-level Goodhart’s Law holds or not. As our key contribution, we identify a set of conditions that are necessary for a model that aims to be (...)