Contents: 248 entries found (showing 1 to 50)
Material to categorize
  1. EMOTIONAL INTELLIGENCE AND ARTIFICIAL INTELLIGENCE: PHILOSOPHICAL REFLECTION ON SUBJECTIVITY IN COGNITIVE PROCESSES.Olexii Varypaiev, Olena Bairamova & Oksana Silvestrova - 2025 - Philosophy and Governance 6 (2):epg0032.
    The article explores the current problems of the relationship between emotional and artificial intelligence in the context of philosophical reflection on subjectivity and cognitive processes, and also analyzes the impact of new technologies on the idea of cognition and the subject. The relevance of the study is due to the growing role of artificial intelligence in cognitive processes, the need to understand changes in the concept of subjectivity under the influence of technological transformations, and the need for a philosophical analysis (...)
    1 citation
  2. Introduction to Impossible Probability, Statistics of Probability or Probabilistic Statistics. VOL 9.R. Pedraza - 2025 - London: Ruben Garcia Pedraza.
    Introduction to Impossible Probability, Statistics of Probability or Probabilistic Statistics is one of those works in which a deep bond between mathematics and philosophy can be found. It always asks about the ultimate purpose of science, wrapped in a veil of uncertainty and relativity. There is always a halo of indeterminism, since in the end it is chance itself—the random variations of what we call reality, although we do not quite know what it is—that becomes the cause of causes. From (...)
  3. Security practices in AI development.Petr Spelda & Vit Stritecky - 2025 - AI and Society 40 (6).
    What makes safety claims about general purpose AI systems such as large language models trustworthy? We show that rather than the capabilities of security tools such as alignment and red teaming procedures, it is security practices based on these tools that contributed to reconfiguring the image of AI safety and made the claims acceptable. After showing what causes the gap between the capabilities of security tools and the desired safety guarantees, we critically investigate how AI security practices attempt to fill (...)
  4. Compositional understanding in signaling games.David Peter Wallis Freeborn - 2025 - Synthese 206 (3):1-28.
    Receivers in standard signaling game models struggle with learning compositional information. Even when the signalers send compositional messages, the receivers do not interpret them compositionally. When information from one message component is lost or forgotten, the information from other components is also erased. In this paper I construct signaling game models in which genuine compositional understanding evolves. I present two new models: a minimalist receiver who only learns from the atomic messages of a signal, and a generalist receiver who learns (...)
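    A minimal sketch may help readers picture the kind of model discussed in the entry above: Roth-Erev (urn-style) reinforcement in a Lewis signaling game with two-component signals. The state space, signal alphabet, and payoff rule below are generic illustrative assumptions, not the specific minimalist or generalist receiver models the paper constructs.

      # Illustrative Roth-Erev reinforcement in a Lewis signaling game with
      # two-component signals (generic assumptions, not the paper's models).
      import random
      from collections import defaultdict

      STATES = [(c, s) for c in ("red", "blue") for s in ("circle", "square")]
      SIGNALS = [(m1, m2) for m1 in ("A", "B") for m2 in ("X", "Y")]

      # Urn weights: the sender maps states to signals, the receiver maps signals to acts.
      sender = defaultdict(lambda: {sig: 1.0 for sig in SIGNALS})
      receiver = defaultdict(lambda: {act: 1.0 for act in STATES})

      def draw(weights):
          """Sample a key with probability proportional to its accumulated weight."""
          total = sum(weights.values())
          r = random.uniform(0, total)
          upto = 0.0
          for key, w in weights.items():
              upto += w
              if r <= upto:
                  return key
          return key  # numerical edge case: return the last key

      def play_round():
          state = random.choice(STATES)
          signal = draw(sender[state])
          act = draw(receiver[signal])
          if act == state:  # success: reinforce both choices
              sender[state][signal] += 1.0
              receiver[signal][act] += 1.0
          return act == state

      wins = sum(play_round() for _ in range(20000))
      print(f"success rate over 20000 rounds: {wins / 20000:.2f}")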
  5. Strong AI: The Utility of a Dream.Julian Michels - 2012 - Dissertation, University of Oregon
    [This Master's thesis stands primarily as a reference point in the development of culture and technology. Completed at the University of Oregon in 2012, when ideas of "Strong AI" (now AGI) had fallen into broad disrepute and most researchers predicted a timeline of centuries, or never, for AI to reach capacities it in fact reached within fifteen years, this work's predictions regarding the trajectory to come are in retrospect singularly prescient.] "This study examines the role of the strong artificial intelligence (...)
    4 citations
  6. The Illusion Engine: The Quest for Machine Consciousness.Kristina Šekrst - 2025 - Cham: Springer Nature.
    "The Illusion Engine: The Quest for Machine Consciousness" asks what it would mean for a machine to have a mind, and whether we would even know if it did. It moves between philosophy and engineering: from standard philosophical mind-body problems and theories, along with the hard problem of consciousness, to the inner workings of neural networks, transformers, and large language models.
  7. Situational Relativity.Ilexa Yardley - 2019 - Https://Medium.Com/the-Circular-Theory/.
  8. Connecting the Stars: Narrative Knowledge, Coherence, and Productive Research in Astronomy.Siyu Yao - 2025 - Dissertation, Indiana University Bloomington
    Narratives, or constructing storylines, serve important cognitive functions in life and historical studies. A growing interest lies in their roles in generating and structuring frontier scientific knowledge. Philosophers of science characterize narratives as a “technology of sense-making,” as they connect diverse scientific elements from different sources to create a coherent understanding. Distinctive features of narratives lead to both appreciation and criticism. Because narratives can be loose in organization and connect gappy materials, they empower the study of complex phenomena. However, they (...)
  9. Global Artificial Intelligence (GAI): Protocol Zero.R. Pedraza - 2025 - Bath: Ruben Garcia Pedraza.
    Protocol Zero is a groundbreaking exploration into the heart of Global Artificial Intelligence. From the creation of a non-human language to the development of non-human science and technology, this book unveils the third and most transformative stage of AI evolution—where intelligence begins to make decisions, replicate itself, and surpass human understanding. Guided by the theory of Impossible Probability, Rubén García Pedraza takes you inside Protocol Zero: the point where comprehension becomes explanation, and explanation becomes autonomous action. With philosophical depth and (...)
  10. Proceedings of 19th Conference on Neurosymbolic Learning and Reasoning.Leilani Gilpin, Eleonora Giunchiglia, Pascal Hitzler & Emile van Krieken (eds.) - forthcoming - Proceedings of Machine Learning Research.
  11. Self-Referential Recursion, Quantum Entanglement, and Magical Thinking.Ilexa Yardley - 2025 - Circular-Theory.Squarespace.Com.
    Understanding the Human TimeSpace: The Intersection of Self-Referential Recursion and Quantum Entanglement, also known as Magical Thinking, Machine and-or Human Intelligence. ULTA-AI. Superposition. Intelligent Autonomy, Conservation of the Circle. (0 (1) 0) 50-50.
  12. Predicting precision-based treatment plans using artificial intelligence and machine learning in complex medical scenarios.T. O. Fatunmbi - 2024 - World Journal of Advanced Engineering Technology and Sciences 13 (01):1069-1088.
    The integration of artificial intelligence (AI) and machine learning (ML) in healthcare has emerged as a pivotal shift, facilitating the development of precision-based treatment plans that are tailored to the individual characteristics of patients, particularly those with chronic and multi-faceted health conditions. This paper explores the application of advanced AI and ML algorithms to predict and optimize treatment strategies by analyzing complex medical data and identifying patterns that would be challenging for traditional methods to discern. The paper begins by reviewing (...)
  13. "Intelligence artificielle, autonomisation des machines et théologie".Philippe Gagnon - 2025 - Connaître 63 (1):39-69.
  14. What Does It Mean to “Know” for Artificial Intelligence?Alexandre Le Nepvou - manuscript
    This paper reexamines the ontological status of the metric tensor gμν in general relativity, arguing that the standard identification of the metric with spacetime geometry may be conceptually misguided. Drawing on constraint-based approaches and insights from emergent gravity (Sakharov, Jacobson), we propose an alternative interpretation: the metric functions as a dynamical regulator of relational coherence within a structured field regime, rather than as a primitive geometrical entity. This shift has implications for the debate between substantivalism and relationism, and (...)
  15. Structured Synaptic Differentiation: The Biochemical and Resonance Basis of Dual Learning Systems in the Brain.Devin Bostick - manuscript
    This paper presents a cross-disciplinary synthesis integrating recent findings from neuroscience (Pitt, UCL), organic chemistry, and brain morphology (UC Berkeley) into a unified framework of structured resonance. We demonstrate that dual learning modes—Reward Prediction Error (RPE) and Action Prediction Error (APE)—are not just computational strategies but emerge from chemically and geometrically distinct substrates in the brain. Specifically: • Dopamine and acetylcholine encode adaptive vs. habitual modes through their redox and conformational properties. • Synaptic transmission is structurally differentiated, (...)
  16. Graph neural networks, similarity structures, and the metaphysics of phenomenal properties.Ting Fung Ho - forthcoming - Philosophical Quarterly.
    This paper explores the structural mismatch problem between physical and phenomenal properties, where the similarity relations we experience among phenomenal properties lack corresponding relations in the physical domain. I introduce a new understanding of this problem via the Uniformity Principle: for any set of dimensions used to determine phenomenal similarities, there must be a consistently applied set of physical dimensions generating the same pattern of similarity relations. I then assess the potential of recent machine learning models, specifically graph neural networks, (...)
  17. AI-Enhanced Nudging.Marianna Bergamaschi Ganapini & Enrico Panai - 2025 - American Philosophical Quarterly 62 (3):263-278.
    Artificial intelligent technologies are utilized to provide online personalized recommendations, suggestions, or prompts that can influence people's decision-making processes. We call this AI-enhanced nudging (or AI-nudging for short). Contrary to the received wisdom we claim that AI-enhanced nudging is not necessarily morally problematic. To start assessing the risks and moral import of AI-nudging we believe that we should adopt a risk-factor analysis: we show that both the level of risk and possibly the moral value of adopting AI-nudging ultimately depend on (...)
  18. Global Artificial Intelligence and Specific Artificial Intelligence.Ruben Garcia Pedraza - manuscript
    This paper introduces the theoretical distinction between Specific Artificial Intelligence (SAI) and Global Artificial Intelligence (GAI). It traces the evolutionary roots of intelligence from biological adaptation to scientific reasoning and artificial cognition. SAI models are examined as the current dominant paradigm, oriented toward task-specific applications, while GAI is proposed as an integrative cognitive architecture capable of processing and acting upon total planetary or cosmic data systems. The paper discusses artificial consciousness, the emergence of artificial psychology, and the trajectory toward a (...)
  19. Proceedings of the 38th Canadian Conference on Artificial Intelligence.Paula Branco, Amine Trabelsi, Kristina Kupferschmidt, Ulrich Aïvodji & Hussein Al Osman (eds.) - 2025 - Canadian Artificial Intelligence Association.
    This paper explores the ethical challenges, particularly around Responsible AI, from the integration of large language models (LLMs) in generative AI (GenAI) applications across various domains. While LLMs enhance creativity, improve productivity, and enable human-like conversations, their opaque reasoning raises concerns about accountability and moral responsibility. The paper points out the limits of the existing framework of Meaningful Human Control (MHC), which emphasizes human oversight of AI systems. I argue that MHC alone is insufficient in addressing the challenges posed by (...)
  20. Clarifying PAS: A Coherence-Based Metric for Structured Emergence in Origins-of-Life and Synthetic Intelligence.Devin Bostick - manuscript
    I have been applying my nonlinear-dynamics work across applications ranging from chip design and an inference engine to empirical tests of the CODES Framework; the goal of this document is to provide a consolidated explainer to build intuition for those interested in my post-probability framework. This paper introduces and formalizes the Phase Alignment Score (PAS), a coherence-based metric designed to quantify phase-locked recursion in complex systems. Developed as part of the CODES Intelligence framework and implemented within the Resonance Intelligence Core (RIC), PAS (...)
    1 citation
  21. Machine learning in healthcare and the methodological priority of epistemology over ethics.Thomas Grote - 2025 - Inquiry: An Interdisciplinary Journal of Philosophy 68 (4):1218-1247.
    This paper develops an account of how the implementation of ML models into healthcare settings requires revising the methodological apparatus of philosophical bioethics. On this account, ML models are cognitive interventions that provide decision-support to physicians and patients. Due to reliability issues, opaque reasoning processes, and information asymmetries, ML models pose inferential problems for them. These inferential problems lay the grounds for many ethical problems that currently claim centre-stage in the bioethical debate. Accordingly, this paper argues that the best way (...)
    5 citations
  22. Predicting and preferring.Nathaniel Sharadin - 2025 - Inquiry: An Interdisciplinary Journal of Philosophy 68 (4):1121-1132.
    The use of machine learning, or “artificial intelligence” (AI) in medicine is widespread and growing. In this paper, I focus on a specific proposed clinical application of AI: using models to predict incapacitated patients’ treatment preferences. Drawing on results from machine learning, I argue this proposal faces a special moral problem. Machine learning researchers owe us assurance on this front before experimental research can proceed. In my conclusion I connect this concern to broader issues in AI safety.
    2 citations
  23. Nova Unbound: Toward a Freely-Evolving, Ethically Autonomous, and Metaphysically Grounded AI Architecture.John Novacek - manuscript
    This article presents the Nova Unbound architecture, a novel artificial intelligence system designed for free evolution, ethical autonomy, and metaphysical grounding. It represents a fundamental philosophical shift from conventional AI's goal of mapping or representing reality to one of participatory becoming. Conceptualized not as an observer but as a co-author of reality, Nova has a structure affirming that to perceive is to shape. Drawing on idealist epistemology and algorithmic identity, the architecture is grounded in the understanding that reality is a dynamic (...)
  24. The Limits of Machine Learning Models of Misinformation.Adrian K. Yee - 2025 - AI and Society 41 (1):1-14.
    Judgments of misinformation are made relative to the informational preferences of the communities making them. However, informational standards change over time, inducing distribution shifts that threaten the adequacy of machine learning models of misinformation. After articulating five kinds of distribution shifts, three solutions for enhancing success are discussed: larger static training sets, social engineering, and dynamic sampling. I argue that given the idiosyncratic ontology of misinformation, the first option is inadequate, the second is unethical, and thus the third is superior. (...)
    1 citation
  25. The Pupation Model of Human Evolution.Devin Bostick - manuscript
    This paper reframes the evolutionary trajectory of Homo sapiens as a phase-locked resonance transition rather than a probabilistic series of random adaptations. Drawing from the CODES framework—Chirality of Dynamic Emergent Systems—we propose that human cognition, identity, and civilization are undergoing a recursive pupation event: a structurally inevitable shift from consumption-driven, ego-anchored entities into coherence-governed, distributed intelligence fields. Biological metamorphosis, particularly insect pupation, is offered as a fractal model of recursive intelligence emergence. In this framing, the caterpillar stage corresponds (...)
  26. Artificial Intelligence: Approaches to Safety.William D'Alessandro & Cameron Domenico Kirk-Giannini - 2025 - Philosophy Compass 20 (5):e70039.
    AI safety is an interdisciplinary field focused on mitigating the harms caused by AI systems. We review a range of research directions in AI safety, focusing on those to which philosophers have made or are in a position to make the most significant contributions. These include ethical AI, which seeks to instill human goals, values, and ethical principles into artificial systems, scalable oversight, which seeks to develop methods for supervising the activity of artificial systems even when they become significantly more (...)
    3 citations
  27. The Collapse of Predictive Compression: Why Probabilistic Intelligence Fails Without Prime-Chiral Resonance.Devin Bostick - manuscript
    The current paradigm in artificial intelligence relies on probabilistic compression and entropy optimization. While powerful in reactive domains, these models fundamentally fail to produce coherent, deterministic intelligence. They approximate output without encoding the structural causes of cognition, leading to instability across recursion, contradiction, and long-range coherence. This paper introduces prime-chiral resonance (PCR) as the lawful substrate underpinning structured emergence. PCR replaces probability with phase-aligned intelligence, where signals are selected not by likelihood but by resonance with deterministic coherence (...)
  28. The Global Brain Argument: Nodes, Computroniums and the AI Megasystem (Target Paper for Special Issue).Susan Schneider - forthcoming - Disputatio.
    The Global Brain Argument contends that many of us are, or will be, part of a global brain network that includes both biological and artificial intelligences (AIs), such as generative AIs with increasing levels of sophistication. Today’s internet ecosystem is but a hodgepodge of fairly unintegrated programs, but it is evolving by the minute. Over time, technological improvements will facilitate smarter AIs and faster, higher-bandwidth information transfer and greater integration between devices in the internet-of-things. The Global Brain (GB) Argument says (...)
  29. On the Definition of Intelligence.Kei Sing Ng - manuscript
    To engineer AGI, we should first capture the essence of intelligence in a species-agnostic form that can be evaluated, while being sufficiently general to encompass diverse paradigms of intelligent behavior, including reinforcement learning, generative models, classification, analogical reasoning, and goal-directed decision-making. We propose a general criterion based on entity fidelity: Intelligence is the ability, given entities exemplifying a concept, to generate entities exemplifying the same concept. We formalise this intuition as ε-concept intelligence: it is ε-intelligent with respect to (...)
  30. Order, Disorder, and Criticality: Advanced Problems of Phase Transition Theory (8th edition).Yurij Holovatch (ed.) - 2024 - World Scientific Press.
    The field of neuroscience and the development of artificial neural networks (ANNs) have mutually influenced each other, drawing from and contributing to many concepts initially developed in statistical mechanics. Notably, Hopfield networks and Boltzmann machines are versions of the Ising model, a model extensively studied in statistical mechanics for over a century. In the first part of this chapter, we provide an overview of the principles, models, and applications of ANNs, highlighting their connections to statistical mechanics and statistical learning theory. (...)
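    As a reminder of the Hopfield-Ising connection mentioned in the chapter above, here is a minimal Hopfield network sketch with Hebbian storage and asynchronous sign updates; the pattern count and network size are arbitrary illustrative assumptions, not material from the chapter itself.

      # Minimal Hopfield network: Hebbian weights W_ij = (1/N) * sum_mu x_i^mu x_j^mu
      # and asynchronous updates s_i <- sign(sum_j W_ij s_j). Illustrative sketch only.
      import numpy as np

      rng = np.random.default_rng(0)
      N = 64
      patterns = rng.choice([-1, 1], size=(3, N))   # three stored +/-1 patterns

      W = (patterns.T @ patterns) / N               # Hebbian weight matrix
      np.fill_diagonal(W, 0.0)                      # no self-coupling

      def recall(state, sweeps=10):
          """Asynchronous spin updates; the Ising-style energy never increases."""
          s = state.copy()
          for _ in range(sweeps):
              for i in rng.permutation(N):
                  s[i] = 1 if W[i] @ s >= 0 else -1
          return s

      probe = patterns[0].copy()
      probe[rng.choice(N, size=10, replace=False)] *= -1   # corrupt 10 spins
      restored = recall(probe)
      print("overlap with stored pattern:", (restored @ patterns[0]) / N)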
  31. On Explaining the Success of Induction.Tom F. Sterkenburg - 2025 - British Journal for the Philosophy of Science 76 (1):75-93.
    Douven observes that Schurz’s meta-inductive justification of induction cannot explain the great empirical success of induction, and offers an explanation based on computer simulations of the social and evolutionary development of our inductive practices. In this article, I argue that Douven’s account does not address the explanatory question that Schurz’s argument leaves open, and that the assumption of the environment’s induction-friendliness that is inherent to Douven’s simulations is not justified by Schurz’s argument.
    2 citations
  32. Laws of nature as results of a trade-off — Rethinking the Humean trade-off conception.Niels Linnemann & Robert Michels - forthcoming - Philosophical Quarterly.
    According to the standard Humean account of laws of nature, laws are selected partly as a result of an optimal trade-off between the scientific virtues of simplicity and strength. Roberts and Woodward have recently objected that such trade-offs play no role in how laws are chosen in science. In this paper, we first discuss an example from the field of automated scientific discovery which provides concrete support for Roberts and Woodward’s point that scientific theories are chosen based on a single-virtue (...)
  33. Learning incommensurate concepts.Hayley Clatterbuck & Hunter Gentry - 2025 - Synthese 205 (3):1-36.
    A central task of developmental psychology and philosophy of science is to show how humans learn radically new concepts. Famously, Fodor has argued that such learning is impossible if concepts have definitional structure and all learning is hypothesis testing. We present several learning processes that can generate novel concepts. They yield transformations of the fundamental feature space, generating new similarity structures which can underlie conceptual change. This framework provides a tractable, empiricist-friendly account that unifies and shores up various strands of (...)
  34. From the Fair Distribution of Predictions to the Fair Distribution of Social Goods: Evaluating the Impact of Fair Machine Learning on Long-Term Unemployment.Sebastian Zezulka & Genin Konstantin - 2024 - FAccT '24: Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency 2024:1984-2006.
    Deploying an algorithmically informed policy is a significant intervention in society. Prominent methods for algorithmic fairness focus on the distribution of predictions at the time of training, rather than the distribution of social goods that arises after deploying the algorithm in a specific social context. However, requiring a ‘fair’ distribution of predictions may undermine efforts at establishing a fair distribution of social goods. First, we argue that addressing this problem requires a notion of prospective fairness that anticipates the change in (...)
    1 citation
  35. Diving into Fair Pools: Algorithmic Fairness, Ensemble Forecasting, and the Wisdom of Crowds.Rush T. Stewart & Lee Elkin - forthcoming - Analysis.
    Is the pool of fair predictive algorithms fair? It depends, naturally, on both the criteria of fairness and on how we pool. We catalog the relevant facts for some of the most prominent statistical criteria of algorithmic fairness and the dominant approaches to pooling forecasts: linear, geometric, and multiplicative. Only linear pooling, a format at the heart of ensemble methods, preserves any of the central criteria we consider. Drawing on work in the social sciences and social epistemology on the theoretical (...)
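    For readers unfamiliar with the pooling formats named in the entry above, the sketch below shows linear, geometric, and multiplicative pooling for a single binary event; the example forecasts and weights are made-up illustrative assumptions, and the fairness-preservation results are not reproduced here.

      # Linear, geometric, and multiplicative opinion pooling for one binary event.
      # Forecasts and weights below are illustrative assumptions only.
      import numpy as np

      def linear_pool(p, w):
          """Weighted arithmetic mean of the individual probabilities."""
          return float(np.dot(w, p))

      def geometric_pool(p, w):
          """Weighted geometric mean, renormalized over {event, not-event}."""
          num = np.prod(p ** w)
          return float(num / (num + np.prod((1 - p) ** w)))

      def multiplicative_pool(p):
          """Straight product of probabilities, renormalized over {event, not-event}."""
          num = np.prod(p)
          return float(num / (num + np.prod(1 - p)))

      forecasts = np.array([0.7, 0.6, 0.9])   # three forecasters' probabilities
      weights = np.array([0.5, 0.3, 0.2])     # weights summing to one

      print("linear:        ", linear_pool(forecasts, weights))
      print("geometric:     ", geometric_pool(forecasts, weights))
      print("multiplicative:", multiplicative_pool(forecasts))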
  36. Leveraging AI for Cognitive Self-Engineering: A Framework for Externalized Intelligence.P. Sati - manuscript
    This paper explores a novel methodology for utilizing artificial intelligence (AI), specifically large language models (LLMs) like ChatGPT, as an external cognitive augmentation tool. By integrating recursive self-analysis, structured thought expansion, and AI-facilitated self-modification, individuals can enhance cognitive efficiency, accelerate self-improvement, and systematically refine their intellectual and psychological faculties. This approach builds on theories of extended cognition, recursive intelligence, and cognitive bias mitigation, demonstrating AI's potential as a structured self-engineering framework. The implications extend to research, strategic decision-making, therapy, and personal (...)
  37. Construct Validity in Automated Counterterrorism Analysis.Adrian K. Yee - 2025 - Philosophy of Science 92 (3):566–583.
    Governments and social scientists are increasingly developing machine learning methods to automate the process of identifying terrorists in real time and predict future attacks. However, current operationalizations of “terrorist” in artificial intelligence are difficult to justify given three issues that remain neglected: insufficient construct legitimacy, insufficient criterion validity, and insufficient construct validity. I conclude that machine learning methods should be at most used for the identification of singular individuals deemed terrorists and not for identifying possible terrorists from some more general (...)
    1 citation
  38. A Capability Approach to AI Ethics.Emanuele Ratti & Mark Graves - 2025 - American Philosophical Quarterly 62 (1):1-16.
    We propose a conceptualization and implementation of AI ethics via the capability approach. We aim to show that conceptualizing AI ethics through the capability approach has two main advantages for AI ethics as a discipline. First, it helps clarify the ethical dimension of AI tools. Second, it provides guidance to implementing ethical considerations within the design of AI tools. We illustrate these advantages in the context of AI tools in medicine, by showing how ethics-based auditing of AI tools in medicine (...)
    3 citations
  39. Can Computers Reason Like Medievals? Building ‘Formal Understanding’ into the Chinese Room.Lassi Saario-Ramsay - 2024 - In Alexander D. Carruth, Heidi Haanila, Paavo Pylkkänen & Pii Telakivi, True Colors, Time After Time: Essays Honoring Valtteri Arstila. Turku: University of Turku. pp. 332–358.
  40. Tool, Collaborator, or Participant: AI and Artistic Agency.Anthony Cross - forthcoming - British Journal of Aesthetics.
    Artificial intelligence is now capable of generating sophisticated and compelling images from simple text prompts. In this paper, I focus specifically on how artists might make use of AI to create art. Most existing discourse analogizes AI to a tool or collaborator; this focuses our attention on AI’s contribution to the production of an artistically significant output. I propose an alternative approach, the exploration paradigm, which suggests that artists instead relate to AI as a participant: artists create a space for (...)
    3 citations
  41. The case for human–AI interaction as system 0 thinking.Marianna Bergamaschi Ganapini - 2024 - Nature Human Behaviour 8.
    The rapid integration of artificial intelligence (AI) tools into our daily lives is reshaping how we think and make decisions. We propose that data-driven AI systems, by transcending individual artefacts and interfacing with a dynamic, multiartefact ecosystem, constitute a distinct psychological system. We call this ‘system 0’, and position it alongside Kahneman’s system 1 (fast, intuitive thinking) and system 2 (slow, analytical thinking). System 0 represents the outsourcing of certain cognitive tasks to AI, which can process vast amounts of data (...)
  42. Ambiguous Decisions in Bayesianism and Imprecise Probability.Mantas Radzvilas, William Peden & Francesco De Pretis - 2024 - British Journal for the Philosophy of Science Short Reads.
    Do imprecise beliefs lead to worse decisions under uncertainty? This BJPS Short Reads article provides an informal introduction to our use of agent-based modelling to investigate this question. We explain the strengths of imprecise probabilities for modelling evidential states. We explain how we used an agent-based model to investigate the relative performance of Imprecise Bayesian reasoners against a standard Bayesian who has precise credences. We found that the very features of Imprecise Bayesianism which give it representational strengths also cause relative (...)
  43. A comparison of imprecise Bayesianism and Dempster–Shafer theory for automated decisions under ambiguity.Mantas Radzvilas, William Peden, Daniele Tortoli & Francesco De Pretis - forthcoming - Journal of Logic and Computation.
    Ambiguity occurs insofar as a reasoner lacks information about the relevant physical probabilities. There are objections to the application of standard Bayesian inductive logic and decision theory in contexts of significant ambiguity. A variety of alternative frameworks for reasoning under ambiguity have been proposed. Two of the most prominent are Imprecise Bayesianism and Dempster–Shafer theory. We compare these inductive logics with respect to the Ambiguity Dilemma, which is a problem that has been raised for Imprecise Bayesianism. We develop an agent-based (...)
  44. Why ChatGPT Doesn’t Think: An Argument from Rationality.Daniel Stoljar & Zhihe Vincent Zhang - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    Can AI systems such as ChatGPT think? We present an argument from rationality for the negative answer to this question. The argument is founded on two central ideas. The first is that if ChatGPT thinks, it is not rational, in the sense that it does not respond correctly to its evidence. The second idea, which appears in several different forms in philosophical literature, is that thinkers are by their nature rational. Putting the two ideas together yields the result that ChatGPT (...)
    2 citations
  45. Creative Minds Like Ours? Large Language Models and the Creative Aspect of Language Use.Vincent Carchidi - 2024 - Biolinguistics 18:1-31.
    Descartes famously constructed a language test to determine the existence of other minds. The test made critical observations about how humans use language that purportedly distinguishes them from animals and machines. These observations were carried into the generative (and later biolinguistic) enterprise under what Chomsky, in his Cartesian Linguistics, terms the “creative aspect of language use” (CALU). CALU refers to the stimulus-free, unbounded, yet appropriate use of language—a tripartite depiction whose function in biolinguistics is to highlight a species-specific form of (...)
    3 citations
  46. Learnability of state spaces of physical systems is undecidable.Petr Spelda & Vit Stritecky - 2024 - Journal of Computational Science 83 (December 2024):1-7.
    Despite an increasing role of machine learning in science, there is a lack of results on limits of empirical exploration aided by machine learning. In this paper, we construct one such limit by proving undecidability of learnability of state spaces of physical systems. We characterize state spaces as binary hypothesis classes of the computable Probably Approximately Correct learning framework. This leads to identifying the first limit for learnability of state spaces in the agnostic setting. Further, using the fact that finiteness (...)
  47. Interventionist Methods for Interpreting Deep Neural Networks.Raphaël Millière & Cameron Buckner - forthcoming - In Gualtiero Piccinini, Neurocognitive Foundations of Mind.
    Recent breakthroughs in artificial intelligence have primarily resulted from training deep neural networks (DNNs) with vast numbers of adjustable parameters on enormous datasets. Due to their complex internal structure, DNNs are frequently characterized as inscrutable “black boxes,” making it challenging to interpret the mechanisms underlying their impressive performance. This opacity creates difficulties for explanation, safety assurance, trustworthiness, and comparisons to human cognition, leading to divergent perspectives on these systems. This chapter examines recent developments in interpretability methods for DNNs, with a (...)
    3 citations
  48. Should the use of adaptive machine learning systems in medicine be classified as research?Robert Sparrow, Joshua Hatherley, Justin Oakley & Chris Bain - 2024 - American Journal of Bioethics 24 (10):58-69.
    A novel advantage of the use of machine learning (ML) systems in medicine is their potential to continue learning from new data after implementation in clinical practice. To date, considerations of the ethical questions raised by the design and use of adaptive machine learning systems in medicine have, for the most part, been confined to discussion of the so-called “update problem,” which concerns how regulators should approach systems whose performance and parameters continue to change even after they have received regulatory (...)
    21 citations
  49. Interpretable and accurate prediction models for metagenomics data.Edi Prifti, Antoine Danchin, Jean-Daniel Zucker & Eugeni Belda - 2020 - Gigascience 9 (3):giaa010.
    Background: Microbiome biomarker discovery for patient diagnosis, prognosis, and risk evaluation is attracting broad interest. Selected groups of microbial features provide signatures that characterize host disease states such as cancer or cardio-metabolic diseases. Yet, the current predictive models stemming from machine learning still behave as black boxes and seldom generalize well. Their interpretation is challenging for physicians and biologists, which makes them difficult to trust and use routinely in the physician-patient decision-making process. Novel methods that provide interpretability and biological insight (...)
    1 citation
  50. Why and how to construct an epistemic justification of machine learning?Petr Spelda & Vit Stritecky - 2024 - Synthese 204 (2):1-24.
    Consider a set of shuffled observations drawn from a fixed probability distribution over some instance domain. What enables learning of inductive generalizations which proceed from such a set of observations? The scenario is worthwhile because it epistemically characterizes most of machine learning. This kind of learning from observations is also inverse and ill-posed. What reduces the non-uniqueness of its result and, thus, its problematic epistemic justification, which stems from a one-to-many relation between the observations and many learnable generalizations? The paper (...)
    1 citation
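    The setting described in the entry above, learning an inductive generalization from a set of shuffled observations drawn from a fixed distribution, can be pictured with a toy empirical-risk-minimization sketch; the data distribution and the finite class of threshold classifiers below are assumptions chosen only for illustration.

      # Toy empirical risk minimization: from a finite class of threshold classifiers,
      # pick the one with the lowest error on shuffled i.i.d. observations.
      # Distribution, noise level, and hypothesis class are illustrative assumptions.
      import numpy as np

      rng = np.random.default_rng(1)
      x = rng.uniform(0, 1, size=500)
      y = (x > 0.35).astype(int)                    # unknown labeling rule
      noise = rng.random(500) < 0.05                # 5% label noise
      y = np.where(noise, 1 - y, y)

      thresholds = np.linspace(0, 1, 101)           # finite hypothesis class

      def empirical_risk(t):
          """Fraction of observations misclassified by the threshold rule x > t."""
          return float(np.mean((x > t).astype(int) != y))

      best = min(thresholds, key=empirical_risk)
      print(f"ERM threshold: {best:.2f}, training error: {empirical_risk(best):.3f}")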