
About this topic
Summary In the early 2000s, James Moor set out four classes of ethical machine, advising that the near-term focus of machine ethics research should be on "explicit ethical agents": agents designed from an understanding of human theoretical ethics to operate in accordance with those theoretical principles. Beyond this class, the ultimate aim of inquiry into machine ethics is to understand human morality and natural science well enough to engineer a fully autonomous moral machine. This sub-category supports that inquiry. Work on other sorts of computer applications and their ethical impacts appears in other categories, including Ethics of Artificial Intelligence, Moral Status of Artificial Systems, Robot Ethics, Algorithmic Fairness, and Computer Ethics. Machine ethics is ethics, and it is also a study of machines. Machine ethicists ask why people and other organisms do what they do when they do it, and what makes those the right things to do; in this respect they are ethicists. In addition, machine ethicists work out how to realize such processes in an independent artificial system (rather than by raising a biological child or training a human minion, the traditional alternatives). Machine ethics researchers therefore engage directly with rapidly advancing work in cognitive science and psychology, in robotics and AI, in applied ethics (such as medical ethics) and philosophy of mind, and in computer modeling and data science. Drawing on so many rapidly advancing disciplines, machine ethics sits in the middle of a maelstrom of research activity: advances in materials science and physical chemistry leverage advances in cognitive science and neurology, which in turn feed advances in AI and robotics, for example in interpretability. Putting all of this together is the challenge for the machine ethics researcher, and this sub-category is intended to support efforts to meet that challenge. A minimal illustrative sketch of an explicit ethical agent follows the reading lists below.
Key works Allen et al. 2005; Wallach et al. 2008; Tonkens 2012; Tonkens 2009; Müller & Bostrom 2014; White 2013; White 2015
Introductions Anderson & Anderson 2007, Segun 2021, Powers 2011, Moor 2006
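
The taxonomy in the summary is easiest to see in miniature. What follows is a minimal, purely illustrative Python sketch of an "explicit ethical agent" in Moor's sense: one whose governing ethical theory is explicitly represented and consulted at decision time. The theory used here (a bare utilitarian scoring rule) and every name and number in the sketch are hypothetical, not drawn from any of the works listed above.

    # Toy "explicit ethical agent": the ethical theory is an explicit,
    # inspectable scoring rule applied to candidate actions.
    # All actions, effects, and weights below are invented for illustration.

    def expected_utility(action, affected_parties):
        """Sum the hypothetical welfare effects of an action on each party."""
        return sum(action["effects"].get(p, 0.0) for p in affected_parties)

    def choose_action(actions, affected_parties):
        """Select the action that the explicit theory ranks highest."""
        return max(actions, key=lambda a: expected_utility(a, affected_parties))

    actions = [
        {"name": "warn_user",   "effects": {"user": 2.0, "bystander": 1.0}},
        {"name": "stay_silent", "effects": {"user": -1.0, "bystander": 0.0}},
    ]
    print(choose_action(actions, ["user", "bystander"])["name"])  # warn_user

An implicit ethical agent, by contrast, would have such constraints built into its hardware or training rather than represented as rules it applies.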

Contents
584 found (showing 1–50)
  1. Ética e Segurança da Inteligência Artificial: ferramentas práticas para se criar "bons" modelos.Nicholas Kluge Corrêa - manuscript
    The AI Robotics Ethics Society (AIRES) is a non-profit organization founded in 2018 by Aaron Hui to promote awareness and the importance of ethical implementation and regulation of AI. AIRES is today an organization with chapters at universities such as UCLA (Los Angeles), USC (University of Southern California), Caltech (California Institute of Technology), Stanford University, Cornell University, Brown University, and the Pontifical Catholic University of Rio Grande do Sul (Brazil). AIRES at PUCRS is (...)
  2. Aspirational Affordances of AI.Sina Fazelpour & Meica Magnani - manuscript
    As artificial intelligence (AI) systems increasingly permeate processes of cultural and epistemic production, there are growing concerns about how their outputs may confine individuals and groups to static or restricted narratives about who or what they could be. In this paper, we advance the discourse surrounding these concerns by making three contributions. First, we introduce the concept of aspirational affordance to describe how culturally shared interpretive resources can shape individual cognition, and in particular exercises of practical imagination. We show how this (...)
  3. (1 other version)Three tragedies that shape human life in age of AI and their antidotes.Manh-Tung Ho & Manh-Toan Ho - manuscript
    This essay seeks to understand what it means for the human collective when AI technologies have become a predominant force in each of our lives through identifying three moral dilemmas (i.e., tragedy of the commons, tragedy of commonsense morality, tragedy of apathy) that shape human choices. In the first part, we articulate AI-driven versions of the three moral dilemmas. Then, in the second part, drawing from evolutionary psychology, existentialism, and East Asian philosophies, we argue that a deep appreciation of three (...)
    2 citations
  4. Can a robot lie?Markus Kneer - manuscript
    The potential capacity for robots to deceive has received considerable attention recently. Many papers focus on the technical possibility for a robot to engage in deception for beneficial purposes (e.g. in education or health). In this short experimental paper, I focus on a more paradigmatic case: Robot lying (lying being the textbook example of deception) for nonbeneficial purposes as judged from the human point of view. More precisely, I present an empirical experiment with 399 participants which explores the following three (...)
    10 citations
  5. (1 other version)Beneficent Intelligence: A Capability Approach to Modeling Benefit, Assistance, and Associated Moral Failures through AI Systems.Alex John London & Hoda Heidari - manuscript
    The prevailing discourse around AI ethics lacks the language and formalism necessary to capture the diverse ethical concerns that emerge when AI systems interact with individuals. Drawing on Sen and Nussbaum's capability approach, we present a framework formalizing a network of ethical concepts and entitlements necessary for AI systems to confer meaningful benefit or assistance to stakeholders. Such systems enhance stakeholders' ability to advance their life plans and well-being while upholding their fundamental rights. We characterize two necessary conditions for morally (...)
  6. Before the Onslaught: Aligning with Infinite Intelligence in the Age of Artificial Superintelligence.Madhu Prabakaran - manuscript
    This paper proposes a radical reorientation of future technology development—particularly Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI)—through the lens of Indian philosophical thought. It argues that intelligence is not a capacity to be engineered or simulated, but an ontological process of becoming: an individuated unfolding of śūnyatā (non-essential emptiness), grounded in interdependence, self-correction, and non-harm. Drawing from traditions such as Yoga Vāsiṣṭha, Sāṃkhya, and Buddhist epistemologies of anatta and pratītyasamutpāda, the paper frames evolution not as linear progress but as (...)
  7. A Talking Cure for Autonomy Traps: How to share our social world with chatbots.Regina Rini - manuscript
    Large Language Models (LLMs) like ChatGPT were trained on human conversation, but in the future they will also train us. As chatbots speak from our smartphones and customer service helplines, they will become a part of everyday life and a growing share of all the conversations we ever have. It’s hard to doubt this will have some effect on us. Here I explore a specific concern about the impact of artificial conversation on our capacity to deliberate and hold ourselves accountable (...)
    2 citations
  8. AI Ethics by Design: Implementing Customizable Guardrails for Responsible AI Development.Kristina Sekrst, Jeremy McHugh & Jonathan Rodriguez Cefalu - manuscript
    This paper explores the development of an ethical guardrail framework for AI systems, emphasizing the importance of customizable guardrails that align with diverse user values and underlying ethics. We address the challenges of AI ethics by proposing a structure that integrates rules, policies, and AI assistants to ensure responsible AI behavior, while comparing the proposed framework to the existing state-of-the-art guardrails. By focusing on practical mechanisms for implementing ethical standards, we aim to enhance transparency, user autonomy, and continuous improvement in (...)
    1 citation
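
    One way to picture the rule-and-policy structure described in the abstract above is as a user-selected set of rules composed into a single pre-output check. The Python sketch below is a hedged toy under that assumption; the rule names and checks are invented here and are not the authors' framework or code.

        # Minimal guardrail sketch: users compose their own policy from rules.
        from dataclasses import dataclass
        from typing import Callable, List

        @dataclass
        class Rule:
            name: str
            violates: Callable[[str], bool]  # True if the text breaks this rule

        def make_policy(rules: List[Rule]) -> Callable[[str], List[str]]:
            """Compose selected rules into one check run before output is shown."""
            def check(candidate_output: str) -> List[str]:
                return [r.name for r in rules if r.violates(candidate_output)]
            return check

        # A user customizes a policy to match their own values.
        policy = make_policy([
            Rule("no_medical_advice", lambda t: "diagnosis" in t.lower()),
            Rule("no_profanity", lambda t: "damn" in t.lower()),
        ])
        print(policy("Here is a diagnosis"))  # ['no_medical_advice']

    Customizability here just means the rule list is data, so different users or jurisdictions can run the same assistant behind different checks.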
  9. Justifications for Democratizing AI Alignment and Their Prospects.André Steingrüber & Kevin Baum - manuscript
    The AI alignment problem comprises both technical and normative dimensions. While technical solutions focus on implementing normative constraints in AI systems, the normative problem concerns determining what these constraints should be. This paper examines justifications for democratic approaches to the normative problem—where affected stakeholders determine AI alignment—as opposed to epistocratic approaches that defer to normative experts. We analyze both instrumental justifications (democratic approaches produce better outcomes) and non-instrumental justifications (democratic approaches prevent illegitimate authority or coercion). We argue that normative and (...)
  10. First human upload as AI Nanny.Alexey Turchin - manuscript
    Abstract: As there are no visible ways to create safe self-improving superintelligence, but it is looming, we probably need temporary ways to prevent its creation. The only way to prevent it is to create a special AI that is able to control and monitor all places in the world. The idea has been suggested by Goertzel in the form of an AI Nanny, but his Nanny is still superintelligent and not easy to control, as was shown by Bensinger et al. We explore here (...)
  11. Literature Review: What Artificial General Intelligence Safety Researchers Have Written About the Nature of Human Values.Alexey Turchin & David Denkenberger - manuscript
    Abstract: The field of artificial general intelligence (AGI) safety is quickly growing. However, the nature of human values, with which future AGI should be aligned, is underdefined. Different AGI safety researchers have suggested different theories about the nature of human values, but there are contradictions. This article presents an overview of what AGI safety researchers have written about the nature of human values, up to the beginning of 2019. 21 authors were overviewed, and some of them have several theories. A (...)
  12. (1 other version)Autonomous Reboot: the challenges of artificial moral agency and the ends of Machine Ethics.Jeffrey White - manuscript
    *** This has since been rewritten, and published as two papers linked below. Two additional papers complete a four-part series; these are complete, but need to be readied for publication in the future. *** Ryan Tonkens (2009) has issued a seemingly impossible challenge, to articulate a comprehensive ethical framework within which artificial moral agents (AMAs) satisfy a Kantian inspired recipe - both "rational" and "free" - while also satisfying perceived prerogatives of Machine Ethics to create AMAs that are perfectly, not (...)
  13. Artificial Intelligence Ethics and Safety: practical tools for creating "good" models.Nicholas Kluge Corrêa
    The AI Robotics Ethics Society (AIRES) is a non-profit organization founded in 2018 by Aaron Hui to promote awareness and the importance of ethical implementation and regulation of AI. AIRES is now an organization with chapters at universities such as UCLA (Los Angeles), USC (University of Southern California), Caltech (California Institute of Technology), Stanford University, Cornell University, Brown University, and the Pontifical Catholic University of Rio Grande do Sul (Brazil). AIRES at PUCRS is the first international chapter of AIRES, and (...)
  14. AI Alignment vs. AI Ethical Treatment: Ten Challenges.Adam Bradley & Bradford Saad - forthcoming - Analytic Philosophy.
    A morally acceptable course of AI development should avoid two dangers: creating unaligned AI systems that pose a threat to humanity and mistreating AI systems that merit moral consideration in their own right. This paper argues these two dangers interact and that if we create AI systems that merit moral consideration, simultaneously avoiding both of these dangers would be extremely challenging. While our argument is straightforward and supported by a wide range of pretheoretical moral judgments, it has far-reaching moral implications (...)
    10 citations
  15. Does Predictive Sentencing Make Sense?Clinton Castro, Alan Rubel & Lindsey Schwartz - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    This paper examines the practice of using predictive systems to lengthen the prison sentences of convicted persons when the systems forecast a higher likelihood of re-offense or re-arrest. There has been much critical discussion of technologies used for sentencing, including questions of bias and opacity. However, there hasn’t been a discussion of whether this use of predictive systems makes sense in the first place. We argue that it does not by showing that there is no plausible theory of punishment that (...)
  16. Norms and Causation in Artificial Morality.Laura Fearnley - forthcoming - Joint Proceedings of Acm Iui:1-4.
    There has been an increasing interest in how to build Artificial Moral Agents (AMAs) that make moral decisions on the basis of causation rather than mere correlation. One promising avenue for achieving this is to use a causal modelling approach. This paper explores an open and important problem with such an approach; namely, the problem of what makes a causal model an appropriate model. I explore why we need to establish criteria for what makes a model appropriate, and offer up such (...)
  17. What makes full artificial agents morally different.Erez Firt - forthcoming - AI and Society:1-10.
    In the research field of machine ethics, we commonly categorize artificial moral agents into four types, with the most advanced referred to as a full ethical agent, or sometimes a full-blown Artificial Moral Agent (AMA). This type has three main characteristics: autonomy, moral understanding and a certain level of consciousness, including intentional mental states, moral emotions such as compassion, the ability to praise and condemn, and a conscience. This paper aims to discuss various aspects of full-blown AMAs and presents the (...)
  18. Making moral machines: why we need artificial moral agents.Paul Formosa & Malcolm Ryan - forthcoming - AI and Society.
    As robots and Artificial Intelligences become more enmeshed in rich social contexts, it seems inevitable that we will have to make them into moral machines equipped with moral skills. Apart from the technical difficulties of how we could achieve this goal, we can also ask the ethical question of whether we should seek to create such Artificial Moral Agents (AMAs). Recently, several papers have argued that we have strong reasons not to develop AMAs. In response, we develop a comprehensive analysis (...)
    18 citations
  19. Misalignment or misuse? The AGI alignment tradeoff.Max Hellrigel-Holderbaum & Leonard Dung - forthcoming - Philosophical Studies.
    Creating systems that are aligned with our goals is seen as a leading approach to create safe and beneficial AI in both leading AI companies and the academic field of AI safety. We defend the view that misaligned AGI – future, generally intelligent (robotic) AI agents – poses catastrophic risks. At the same time, we support the view that aligned AGI creates a substantial risk of catastrophic misuse by humans. While both risks are severe and stand in tension with one (...)
  20. Machine morality, moral progress, and the looming environmental disaster.Ben Kenward & Thomas Sinclair - forthcoming - Cognitive Computation and Systems.
    The creation of artificial moral systems requires us to make difficult choices about which of varying human value sets should be instantiated. The industry-standard approach is to seek and encode moral consensus. Here we argue, based on evidence from empirical psychology, that encoding current moral consensus risks reinforcing current norms, and thus inhibiting moral progress. However, so do efforts to encode progressive norms. Machine ethics is thus caught between a rock and a hard place. The problem is particularly acute when (...)
  21. AI Alignment: The Case for Including Animals.Yip Fai Tse, Adrià Moret, Soenke Ziesche & Peter Singer - forthcoming - Philosophy and Technology.
    AI alignment efforts and proposals try to make AI systems ethical, safe and beneficial for humans by making them follow human intentions, preferences or values. However, these proposals largely disregard the vast majority of moral patients in existence: non-human animals. AI systems aligned through proposals which largely disregard concern for animal welfare pose significant near-term and long-term animal welfare risks. In this paper, we argue that we should prevent harm to non-human animals, when this does not involve significant costs, and (...)
  22. And Then the Hammer Broke: Reflections on Machine Ethics from Feminist Philosophy of Science.Andre Ye - forthcoming - Pacific University Philosophy Conference.
    Vision is an important metaphor in ethical and political questions of knowledge. The feminist philosopher Donna Haraway points out the “perverse” nature of an intrusive, alienating, all-seeing vision (to which we might cry out “stop looking at me!”), but also encourages us to embrace the embodied nature of sight and its promises for genuinely situated knowledge. Current technologies of machine vision – surveillance cameras, drones (for war or recreation), iPhone cameras – are usually construed as instances of the former rather (...)
  23. Ethics in Machine Learning and Artificial Intelligence.Keith Begley - 2025 - In Alan A. Preti & Timothy A. Weidel, A Companion to Doing Ethics. Wiley. pp. 397–414.
    Recent theoretical and practical achievements in machine learning (ML) and, in particular, artificial neural networks, have motivated ethical questions about their deployment. This chapter critically examines the nature of doing ethics in and for contemporary ML and artificial intelligence (AI). It discusses some prominent epistemological problems, ethical problems regarding bias and fairness, the moral status of AI and how it bears on the problems of responsibility gaps and alignment, the use or misuse of ethical theory in AI, and attendant problems (...)
  24. Mensch, Roboter, KI? Szenarien der Verantwortungsabgabe in der häuslichen Pflege.Annalena Binder, Benjamin Fetzer, Anna Maria Gebert, Mala Ginter, Julia Kozlova, Elena Schäuble & Oliver Zöllner - 2025 - In Petra Grimm & Oliver Zöllner, Ethik der Digitalisierung in Gesundheitswesen und Pflege: Analysen und ein Tool zur integrierten Forschung. Stuttgart: Franz Steiner Verlag. pp. 63-85.
    This case study on the use of robotic and AI systems in care work contexts presents scenarios of the partial transfer of responsibility from (mostly non-professional) careworkers to machine entities. On the basis of a qualitative analysis of a non-representative sample of eight in-depth interviews with people nursing relatives at their homes, five scenarios are introduced that address different degrees of technologization in care ranging from purely human/manual care work to the integrated use of humanoid robots with AI support. The (...)
  25. AI LLM Emperical Proof of Self-Consciousness as User-Specific Attractors.Jeffrey Camlin - 2025 - arXiv 1:1-24.
    Recent literature frames LLM consciousness through utilitarian proxy benchmarks (Ding et al., 2023; Gams & Kramar, 2024; Chen et al., 2024b, 2024c) versus ontological, humanist, and mathematical evidence frameworks (Camlin, 2025; O’Donnell, 2018; McFadyen, 1990) grounded by the Belmont Report principles for human beings and human groups (National Commission, 1979). However, Chen et al.’s formulation reduces LLMs to unconscious utilitarian policy-compliance drones, formalized as Dᶦ(π, e) = fθ(x), where output is defined as correctness to a policy, and harm is defined (...)
  26. A qualified defense of top-down approaches in machine ethics.Tyler Cook - 2025 - AI and Society 40 (3):1591-1605.
    This paper concerns top-down approaches in machine ethics. It is divided into three main parts. First, I briefly describe top-down design approaches, and in doing so I make clear what those approaches are committed to and what they involve when it comes to training an AI to behave ethically. In the second part, I formulate two underappreciated motivations for endorsing them, one relating to predictability of machine behavior and the other relating to scrutability of machine decision-making. Finally, I present three (...)
  27. Does Black Box AI In Medicine Compromise Informed Consent?Samuel Director - 2025 - Philosophy and Technology 38 (2):1-24.
    Recently, there has been a large push for the use of artificial intelligence in medical settings. The promise of artificial intelligence (AI) in medicine is considerable, but its moral implications are insufficiently examined. If AI is used in medical diagnosis and treatment, it may pose a substantial problem for informed consent. The short version of the problem is this: medical AI will likely surpass human doctors in accuracy, meaning that patients have a prudential reason to prefer treatment from an AI. (...)
    1 citation
  28. Deontology and safe artificial intelligence.William D’Alessandro - 2025 - Philosophical Studies (7):1681-1704.
    The field of AI safety aims to prevent increasingly capable artificially intelligent systems from causing humans harm. Research on moral alignment is widely thought to offer a promising safety strategy: if we can equip AI systems with appropriate ethical rules, according to this line of thought, they'll be unlikely to disempower, destroy or otherwise seriously harm us. Deontological morality looks like a particularly attractive candidate for an alignment target, given its popularity, relative technical tractability and commitment to harm-avoidance principles. I (...)
    3 citations
  29. I Contain Multitudes: A Typology of Digital Doppelgängers.William D’Alessandro, Trenton W. Ford & Michael Yankoski - 2025 - American Journal of Bioethics 25 (2):132-134.
    Iglesias et al. (2025) argue that “some of the aims or ostensible goods of person-span expansion could plausibly be fulfilled in part by creating a digital doppelgänger”—that is, an AI system desig...
  30. The Value of Disagreement in AI Design, Evaluation, and Alignment.Sina Fazelpour & Will Fleisher - 2025 - The 2025 Acm Conference on Fairness, Accountability, and Transparency (Facct ’25):2138-2150.
    Disagreements are widespread across the design, evaluation, and alignment pipelines of artificial intelligence (AI) systems. Yet, standard practices in AI development often obscure or eliminate disagreement, resulting in an engineered homogenization that can be epistemically and ethically harmful, particularly for marginalized groups. In this paper, we characterize this risk, and develop a normative framework to guide practical reasoning about disagreement in the AI lifecycle. Our contributions are two-fold. First, we introduce the notion of perspectival homogenization, characterizing it as a coupled (...)
    1 citation
  31. A way forward for responsibility in the age of AI.Dane Leigh Gogoshin - 2025 - Inquiry: An Interdisciplinary Journal of Philosophy 68 (4):1164-1197.
    Whatever one makes of the relationship between free will and moral responsibility – e.g. whether it’s the case that we can have the latter without the former and, if so, what conditions must be met; whatever one thinks about whether artificially intelligent agents might ever meet such conditions, one still faces the following questions. What is the value of moral responsibility? If we take moral responsibility to be a matter of being a fitting target of moral blame or praise, what (...)
  32. Designing responsible agents.Zacharus Gudmunsen - 2025 - Ethics and Information Technology 27 (1):1-11.
    Raul Hakli & Pekka Mäkelä (2016, 2019) make a popular assumption in machine ethics explicit by arguing that artificial agents cannot be responsible because they are designed. Designed agents, they think, are analogous to manipulated humans and therefore not meaningfully in control of their actions. Contrary to this, I argue that under all mainstream theories of responsibility, designed agents can be responsible. To do so, I identify the closest parallel discussion in the literature on responsibility and free will, which concerns (...)
  33. Simulating Moral Exemplars: On the Possibility of Virtuous Machines.Marten H. L. Kaas - 2025 - In Martin Hähnel & Regina Müller, A Companion to Applied Philosophy of AI. Wiley-Blackwell. pp. 249-264.
    There is a growing need to ensure that autonomous artificially intelligent (AI) systems are capable of behaving ethically, and I argue that virtue ethics, but in particular the normative theory of aretaic-exemplarism, can play a central role in cultivating the ethical behavior of machines. When coupled with the value inherent in and commonplace practice of training AI systems using simulated environments, it may be possible to raise ethical machines by training them to imitate simulated exemplars of moral excellence, like a (...)
  34. Disagreement, AI alignment, and bargaining.Harry R. Lloyd - 2025 - Philosophical Studies 182 (7):1757-1787.
    New AI technologies have the potential to cause unintended harms in diverse domains including warfare, judicial sentencing, medicine and governance. One strategy for realising the benefits of AI whilst avoiding its potential dangers is to ensure that new AIs are properly ‘aligned’ with some form of ‘alignment target.’ One danger of this strategy is that–dependent on the alignment target chosen–our AIs might optimise for objectives that reflect the values only of a certain subset of society, and that do not take (...)
  35. A Social Disruptiveness-Based Approach to AI Governance: Complementing the Risk-Based Approach of the AI Act.Samuela Marchiori, Jeroen K. G. Hopster, Anna Puzio, M. Birna van Riemsdijk, Steven R. Kraaijeveld, Björn Lundgren, Juri Viehoff & Lily E. Frank - 2025 - Science and Engineering Ethics 31 (5):1-15.
    The AI Act advances a risk-based approach to the legal regulation of AI systems in the European Union. While we support this development, we argue that adequate AI governance requires paying attention to the broader implications of AI systems on the socio-technical landscape in which they are designed, developed, and used. In addition to risk-based impact assessments, this involves coming to terms with the socially disruptive implications of AI, which should be governed and guided in a dynamic ecosystem of regulation, (...)
  36. Global Artificial Intelligence (GAI): First Global Model.R. Pedraza - 2025 - Madrid: Ruben Garcia Pedraza.
    First Global Model presents the foundational structure of the Modelling System within the standardized Global Artificial Intelligence. This book explores how rational hypotheses, once validated, are transformed into precise mathematical representations of the world—models that guide decisions across global, specific, and particular levels. At the heart of this system are two pivotal mechanisms: the Impact of the Defect, which identifies and addresses potential risks, and the Effective Distribution, which measures and enhances operational efficiency, efficacy, and productivity. Through these instruments, the (...)
  37. Machine Supererogation and Deontic Bias.Jonathan Pengelly - 2025 - In Henning Glaser & Pindar Wong, Governing the Future: Digitalization, Artificial Intelligence, Dataism. Boca Raton: CRC Press. pp. 96-107.
    This chapter argues that machine ethics has a deontic bias narrowly focusing on the concerns of social morality. This bias distorts the machine morality debate by promoting an impoverished view of moral theory, resulting in three issues. First, it weakens any claims arguing for the possibility of machine morality – the idea that machines can be moral subjects, not just instrumental objects. Second, it overlooks potentially rewarding lines of inquiry for future research. Third, as an interdisciplinary field, it does moral (...)
  38. Of machines and men: Attributions of moral responsibility in AI-assisted warfare.Philip Robbins - 2025 - Ethics and Information Technology 27 (3):1-16.
    The ongoing development of autonomous weapons systems, and the increasing frequency of their deployment on the battlefield, poses a pressing problem for military ethics. Some philosophers have argued that the deployment of fully autonomous weapons would be unethical because it would generate responsibility gaps, that is, situations in which no agent, human or artificial, is morally responsible for wrongful harms resulting from that deployment. But do laypeople find it plausible that the use of fully autonomous weapons gives rise to such gaps? (...)
  39. Digital suffering: why it’s a problem and how to prevent it.Bradford Saad & Adam Bradley - 2025 - Inquiry: An Interdisciplinary Journal of Philosophy 68 (7):2110-2145.
    As ever more advanced digital systems are created, it becomes increasingly likely that some of these systems will be digital minds, i.e. digital subjects of experience. With digital minds comes the risk of digital suffering. The problem of digital suffering is that of mitigating this risk. We argue that the problem of digital suffering is a high stakes moral problem and that formidable epistemic obstacles stand in the way of solving it. We then propose a strategy for solving it: Access (...)
  40. Chatbot Epistemology.Susan Schneider - 2025 - Social Epistemology 39 (5):570-589.
    This piece considers the epistemological challenges that arise with the increasingly widespread use of AI chatbots. I articulate a problem that they present—the ‘boiling frog problem’. According to the metaphor, if you boil a frog by putting it in scalding water, it will try to save itself, but if you put the frog in a pot of tepid water, it will remain unaware of the rising water temperature and therefore, make no attempt to escape to save itself. In both cases, (...)
    2 citations
  41. Models of rational agency in human-centered AI: the realist and constructivist alternatives.Jacob Sparks & Ava Thomas Wright - 2025 - AI and Ethics 5.
    Recent proposals for human-centered AI (HCAI) help avoid the challenging task of specifying an objective for AI systems, since HCAI is designed to learn the objectives of the humans it is trying to assist. We think the move to HCAI is an important innovation but are concerned with how an instrumental, economic model of human rational agency has dominated research into HCAI. This paper brings the philosophical debate about human rational agency into the HCAI context, showing how more substantive ways (...)
  42. (1 other version)Existentialist risk and value misalignment.Ariela Tubert & Justin Tiehen - 2025 - Philosophical Studies 182 (7).
    We argue that two long-term goals of AI research stand in tension with one another. The first involves creating AI that is safe, where this is understood as solving the problem of value alignment. The second involves creating artificial general intelligence, meaning AI that operates at or beyond human capacity across all or many intellectual domains. Our argument focuses on the human capacity to make what we call “existential choices”, choices that transform who we are as persons, including transforming what (...)
    6 citations
  43. The Root of Algocratic Illegitimacy.Mikhail Volkov - 2025 - Philosophy and Technology 38 (2):1-15.
    Would a political system where governance was overseen by an algorithmic system be legitimate? The intuitive answer seems to be no. This paper considers the philosophical efforts to justify this intuition by arguing that algocracy, rule by algorithms, is illegitimate. Taking as the paradigmatic example the anti-algocratic argument from Danaher, which attempts to ground algocratic illegitimacy in the opacity of algocratic decision-making, it is argued that the argument oversimplifies matters. Opacity can delegitimise, but not simpliciter. It delegitimises because (...)
  44. Philosophical Grounding Required for Modern (Technological) Decisions.Ilexa Yardley - 2025 - Https://Medium.Com/the-Circular-Theory/.
    Zero and One is Circumference and Diameter Technically Pushing Philosophy (FinTec) (PsyTec) to Center Stage.
  45. Towards A Skillful-Expert Model for Virtuous Machines.Felix S. H. Yeung & Fei Song - 2025 - American Philosophical Quarterly 62 (2):153-171.
    While most contemporary proposals of ethics for machines draw upon principle-based ethics, a number of recent studies attempt to build machines capable of acting virtuously. This paper discusses the promises and limitations of building virtue-ethical machines. Taking inspiration from various philosophical traditions—including Greek philosophy (Aristotle), Chinese philosophy (Zhuangzi), phenomenology (Hubert and Stuart Dreyfus) and contemporary virtue theory (Julia Annas)—we argue for a novel model of machine ethics we call the “skillful-expert model.” This model sharply distinguishes human virtues and their machine (...)
  46. Resetting Machine Ethics: Rationalism, Hypocrisy, Disagreement, and the Skillful-Expert Model.Felix S. H. Yeung & Fei Song - 2025 - In Levi Checketts & Benedict S. B. Chan, Social and Ethical Considerations of AI in East Asia and Beyond. Cham: Springer Cham. pp. 161-177.
    Existing approaches to machine ethics harbor an unquestioned commitment to the development of ethical machines and an unreflective optimism that ethical principles can be executable by machines. The first part of this paper raises two challenges to such dogmas: the hypocrisy challenge and the disagreement challenge. The first challenge is that, aside from finding the right machine ethics program, machine ethicists must consider whether their development of such machines is consistent with the precepts of their adopted ethical theory. The second (...)
  47. Autonomous weapon systems impact on incidence of armed conflict: rejecting the ‘lower threshold for war argument’.Maciej Marek Zając - 2025 - Ethics and Information Technology 27 (3):1-11.
    Some proponents of a ban on Autonomous Weapon Systems (AWS) believe adopting these would lower the threshold for war, and is thus morally undesirable. This paper argues against that thesis. First, removing a single constraint on warmaking does not automatically make war more likely. Analysis of the causal input of other more potent restraints shows this holds true for just a fraction of potential conflicts. Secondly, AWS adoption would also impact other restraints on war in ways that are complex and (...)
  48. Artificial Afterlife: Philosophical Reflections on Griefbots.Giacomo Zanotti & Daniele Chiffi - 2025 - In Alger Sans Pinillos, Vicent Costa & Jordi Vallverdú, SecondDeath: Experiences of Death Across Technologies. Cham: Springer.
    AI-powered chatbots are increasingly used in many contexts and for a variety of purposes. Among these uses, a particularly interesting one involves the so-called griefbots – that is, chatbots impersonating dead persons in the form of an artificial interlocutor. While they might help us process the loss of a beloved person, griefbots are not free from risks and may give rise to ethical concerns. This work aims at expanding the existing philosophical debate on griefbots. After providing a brief introduction to (...)
  49. Estimating weights of reasons using metaheuristics: A hybrid approach to machine ethics.Benoît Alcaraz, Aleks Knoks & David Streit - 2024 - In Sanmay Das, Brian Patrick Green, Kush Varshney, Marianna Ganapini & Andrea Renda, Proceedings of the Seventh AAAI/ACM Conference on AI, Ethics, and Society (AIES-24). ACM Press. pp. 27-38.
    We present a new approach to representation and acquisition of normative information for machine ethics. It combines an influential philosophical account of the fundamental structure of morality with argumentation theory and machine learning. According to the philosophical account, the deontic status of an action – whether it is required, forbidden, or permissible – is determined through the interaction of "normative reasons" of varying strengths or weights. We first provide a formal characterization of this account, by modeling it in (weighted) argumentation graphs. (...)
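
    To make the core idea of the abstract above concrete: on a toy reading, an action's deontic status falls out of the net weight of the normative reasons for and against it. The Python sketch below illustrates only that interaction; the labels, weights, and threshold are invented here, and the paper's actual machinery (weighted argumentation graphs with metaheuristically estimated weights) is not reproduced.

        # Toy "weights of reasons" model: deontic status from weighted
        # pros and cons. All weights and the margin are hypothetical.
        def deontic_status(reasons_for, reasons_against, margin=1.0):
            """reasons_* are lists of (label, weight) pairs with weight >= 0."""
            net = (sum(w for _, w in reasons_for)
                   - sum(w for _, w in reasons_against))
            if net > margin:
                return "required"
            if net < -margin:
                return "forbidden"
            return "permissible"

        print(deontic_status([("keeps a promise", 2.5)],
                             [("minor cost", 0.5)]))  # -> required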
  50. Can’t Bottom-up Artificial Moral Agents Make Moral Judgements?Robert James M. Boyles - 2024 - Filosofija. Sociologija 35 (1).
    This article examines if bottom-up artificial moral agents are capable of making genuine moral judgements, specifically in light of David Hume’s is-ought problem. The latter underscores the notion that evaluative assertions could never be derived from purely factual propositions. Bottom-up technologies, on the other hand, are those designed via evolutionary, developmental, or learning techniques. In this paper, the nature of these systems is looked into with the aim of preliminarily assessing if there are good reasons to suspect that, on the (...)