Quantitative Finance
Showing new listings for Tuesday, 29 July 2025
- [1] arXiv:2507.19693 [pdf, html, other]
Title: The Impact of Shared Telecom Infrastructure on Digital Connectivity and Inclusion
Subjects: General Economics (econ.GN)
Nearly half the world remains offline, and capital scarcity stalls new network buildouts. Sharing existing mobile towers could accelerate connectivity. We assemble data on 107 tower-sharing deals in 28 low-income countries (2008-20) and estimate staggered difference-in-differences effects. Two years after a transaction covering over 1,000 towers, the PPP-adjusted mobile-price index falls USD 1.60 (s.e. 1.10) from a baseline of USD 3.16, while data prices drop USD 1.00 (0.29) from a baseline of USD 3.41 per GB. The number of mobile connections increases. Rural internet access increases by 4.7 pp, and access among female-headed households rises by 3.6 pp. Tower-sharing agreements also increase product market competition as measured by the Herfindahl-Hirschman Index.
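For readers who want to see the estimation idea in code, below is a minimal two-way fixed-effects sketch of the staggered difference-in-differences setup described above; the country identifiers, column names, and simulated effect size are hypothetical, and the paper's actual estimator may differ.

```python
# Minimal two-way fixed-effects sketch of the staggered DiD idea above.
# Illustrative only: country labels, column names, and the simulated data are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
countries = [f"c{i}" for i in range(28)]
treat_year = {c: rng.choice(list(range(2010, 2019)) + [9999]) for c in countries}

rows = []
for c in countries:
    for y in range(2008, 2021):
        treated = int(y >= treat_year[c])
        # assumed effect: a tower-sharing deal lowers the mobile price index
        price = 3.16 - 1.6 * treated + rng.normal(0, 0.5)
        rows.append({"country": c, "year": y, "post_deal": treated, "price_index": price})
df = pd.DataFrame(rows)

# price_index ~ post_deal + country fixed effects + year fixed effects, clustered by country
fit = smf.ols("price_index ~ post_deal + C(country) + C(year)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["country"]})
print(fit.params["post_deal"])  # recovers roughly -1.6 on this toy data
```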
- [2] arXiv:2507.19775 [pdf, other]
Title: AI's Structural Impact on India's Knowledge-Intensive Startup Ecosystem: A Natural Experiment in Firm Efficiency and Design
Subjects: General Economics (econ.GN)
This study explores the structural and performance impacts of artificial intelligence (AI) adoption on India's knowledge-intensive startups, spanning information technology, financial technology, health technology, and educational technology, founded between 2016 and 2025. Using a natural experiment framework with the founding year as an exogenous treatment proxy, it examines firm size, revenue productivity, valuation efficiency, and capital utilization across pre-AI and AI-era cohorts. Findings reveal larger structures and lower efficiency in AI-era firms, supported by a dataset of 914 cleaned firms. The study offers insights into AI's transformative role, suggesting that while AI-era firms attract higher funding and achieve higher absolute valuations, their per-employee productivity and efficiency ratios are lower, potentially indicating early-stage investments in technology that have yet to yield proportional returns. This informs global entrepreneurial strategies while highlighting the need for longitudinal research on sustainability.
- [3] arXiv:2507.19824 [pdf, html, other]
Title: Optimal mean-variance portfolio selection under regime-switching-induced stock price shocks
Comments: to appear in Systems and Control Letters
Subjects: Portfolio Management (q-fin.PM); Optimization and Control (math.OC); Probability (math.PR); Mathematical Finance (q-fin.MF)
In this paper, we investigate mean-variance (MV) portfolio selection problems with jumps in a regime-switching financial model. The novelty of our approach lies in allowing not only the market parameters -- such as the interest rate, appreciation rate, volatility, and jump intensity -- to depend on the market regime, but also in permitting stock prices to experience jumps when the market regime switches, in addition to the usual micro-level jumps. This modeling choice is motivated by empirical observations that stock prices often exhibit sharp declines when the market shifts from a ``bullish'' to a ``bearish'' regime, and vice versa. By employing the completion-of-squares technique, we derive the optimal portfolio strategy and the efficient frontier, both of which are characterized by three systems of multi-dimensional ordinary differential equations (ODEs). Among these, two systems are linear, while the first one is an $\ell$-dimensional, fully coupled, and highly nonlinear Riccati equation. In the absence of regime-switching-induced stock price shocks, these systems reduce to simple linear ODEs. Thus, the introduction of regime-switching-induced stock price shocks adds significant complexity and challenges to our model. Additionally, we explore the MV problem under a no-shorting constraint. In this case, the corresponding Riccati equation becomes a $2\ell$-dimensional, fully coupled, nonlinear ODE, for which we establish solvability. The solution is then used to explicitly express the optimal portfolio and the efficient frontier.
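As a purely illustrative companion to the ODE characterization above, the following sketch numerically integrates a small, fully coupled system of Riccati-type ODEs with scipy; the coefficients, regime generator, and terminal condition are toy assumptions, not the paper's equations.

```python
# Generic numerical sketch: backward integration of a small, fully coupled system of
# Riccati-type ODEs of the kind that characterizes regime-switching mean-variance problems.
# Coefficients a_i, b_i, the generator Q, and the terminal condition are toy assumptions.
import numpy as np
from scipy.integrate import solve_ivp

ell = 3                                  # number of market regimes
a = np.array([0.05, 0.03, 0.08])         # assumed linear coefficients per regime
b = np.array([0.40, 0.25, 0.60])         # assumed quadratic (risk) coefficients
Q = np.array([[-0.5, 0.3, 0.2],          # assumed generator of the regime chain
              [0.2, -0.4, 0.2],
              [0.1, 0.4, -0.5]])
T = 1.0

def riccati_rhs(s, P):
    # integrate in reversed time s = T - t so that solve_ivp runs forward from s = 0
    return a * P - b * P**2 + Q @ P

sol = solve_ivp(riccati_rhs, (0.0, T), y0=np.ones(ell), dense_output=True)
print(sol.y[:, -1])   # P_i at t = 0 for each regime i
```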
- [4] arXiv:2507.19911 [pdf, html, other]
Title: AI-Driven Spatial Distribution Dynamics: A Comprehensive Theoretical and Empirical Framework for Analyzing Productivity Agglomeration Effects in Japan's Aging Society
Comments: 33 pages, 9 figures
Subjects: General Economics (econ.GN)
This paper develops the first comprehensive theoretical and empirical framework for analyzing AI-driven spatial distribution dynamics in metropolitan areas undergoing demographic transition. We extend New Economic Geography by formalizing five novel AI-specific mechanisms: algorithmic learning spillovers, digital infrastructure returns, virtual agglomeration effects, AI-human complementarity, and network externalities. Using Tokyo as our empirical laboratory, we implement rigorous causal identification through five complementary econometric strategies and develop machine learning predictions across 27 future scenarios spanning 2024-2050. Our theoretical framework generates six testable hypotheses, all receiving strong empirical support. The causal analysis reveals that AI implementation increases agglomeration concentration by 4.2-5.2 percentage points, with heterogeneous effects across industries: high AI-readiness sectors experience 8.4 percentage point increases, while low AI-readiness sectors show 1.2 percentage point gains. Machine learning predictions demonstrate that aggressive AI adoption can offset 60-80% of aging-related productivity declines. We provide a strategic three-phase policy framework for managing AI-driven spatial transformation while promoting inclusive development. The integrated approach establishes a new paradigm for analyzing technology-driven spatial change with global applications for aging societies.
- [5] arXiv:2507.19989 [pdf, other]
Title: Assessing the Sensitivities of Input-Output Methods for Natural Hazard-Induced Power Outage Macroeconomic Impacts
Subjects: General Economics (econ.GN)
It is estimated that over one-fourth of US households experienced a power outage in 2023, costing on average US $150 Bn annually, with 87% of outages caused by natural hazards. Indeed, numerous studies have examined the macroeconomic impact of power network interruptions, employing a wide variety of modeling methods and data parameterization techniques, which warrants further investigation. In this paper, we quantify the macroeconomic effects of three significant natural hazard-induced US power outages: Hurricane Ian (2022), the 2021 Texas Blackouts, and Tropical Storm Isaias (2020). Our analysis evaluates the sensitivity of three commonly used data parameterization techniques (household interruptions, kWh lost, and satellite luminosity), along with three static models (Leontief and Ghosh, critical input, and inoperability Input-Output). We find the mean domestic loss estimates to be US $3.13 Bn, US $4.18 Bn, and US $2.93 Bn, respectively. Additionally, data parameterization techniques can alter estimated losses by up to 23.1% and 50.5%. Consistent with the wide range of outputs, we find that GDP losses are highly sensitive to model architecture, data parameterization, and analyst assumptions. This sensitivity is not uniform across models and arises from important a priori analyst decisions, demonstrated by data parameterization techniques yielding 11% and 45% differences within a model. We find that the numerical value of the output is more sensitive than intersectoral linkages and other macroeconomic insights. To our knowledge, we contribute to the literature the first systematic comparison of multiple IO models and parameterizations across several natural hazard-induced long-duration power outages, providing guidance and insights for analysts.
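The core accounting step shared by these static IO models can be illustrated with a minimal Leontief calculation; the three-sector coefficient matrix and demand shock below are hypothetical and unrelated to the paper's parameterizations.

```python
# Minimal Leontief input-output sketch: given a technical-coefficient matrix A and a
# final-demand shock d, total output change is (I - A)^{-1} @ d.
# The 3-sector matrix and the shock are made-up numbers for illustration only.
import numpy as np

A = np.array([[0.10, 0.20, 0.05],    # assumed technical coefficients (3 sectors)
              [0.15, 0.10, 0.10],
              [0.05, 0.25, 0.05]])
demand_shock = np.array([-1.0, -0.5, -0.2])   # assumed final-demand loss (USD Bn)

leontief_inverse = np.linalg.inv(np.eye(3) - A)
output_change = leontief_inverse @ demand_shock
print(output_change, output_change.sum())     # sector-level and total output loss
```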
- [6] arXiv:2507.20039 [pdf, html, other]
Title: Dependency Network-Based Portfolio Design with Forecasting and VaR Constraints
Subjects: Portfolio Management (q-fin.PM); Econometrics (econ.EM); Statistical Finance (q-fin.ST); Machine Learning (stat.ML)
This study proposes a novel portfolio optimization framework that integrates statistical social network analysis with time series forecasting and risk management. Using daily stock data from the S&P 500 (2020-2024), we construct dependency networks via Vector Autoregression (VAR) and Forecast Error Variance Decomposition (FEVD), transforming influence relationships into a cost-based network. Specifically, FEVD breaks down the VAR's forecast error variance to quantify how much each stock's shocks contribute to another's uncertainty; we invert this information to form influence-based edge weights in our network. By applying the Minimum Spanning Tree (MST) algorithm, we extract the core inter-stock structure and identify central stocks through degree centrality. A dynamic portfolio is constructed using the top-ranked stocks, with capital allocated based on Value at Risk (VaR). To refine stock selection, we incorporate forecasts from ARIMA and Neural Network Autoregressive (NNAR) models. Trading simulations over a one-year period demonstrate that the MST-based strategies outperform a buy-and-hold benchmark, with the tuned NNAR-enhanced strategy achieving a 63.74% return versus 18.00% for the benchmark. Our results highlight the potential of combining network structures, predictive modeling, and risk metrics to improve adaptive financial decision-making.
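A compact sketch of the network-construction step (influence matrix to cost-weighted graph, MST, degree centrality) is shown below; the tickers and the simulated influence matrix are placeholders for the FEVD-based quantities the paper actually uses.

```python
# Sketch: turn a (here simulated) influence matrix into cost-weighted edges, extract the
# minimum spanning tree, and rank stocks by degree centrality. Tickers and influence
# values are placeholders for the VAR/FEVD quantities described in the abstract.
import numpy as np
import networkx as nx

tickers = ["AAA", "BBB", "CCC", "DDD", "EEE"]
rng = np.random.default_rng(1)
influence = rng.uniform(0.05, 1.0, size=(5, 5))   # stand-in for FEVD contribution shares
np.fill_diagonal(influence, 0.0)

G = nx.Graph()
for i in range(5):
    for j in range(i + 1, 5):
        strength = influence[i, j] + influence[j, i]
        # invert influence so that strong dependence becomes a low-cost edge
        G.add_edge(tickers[i], tickers[j], weight=1.0 / strength)

mst = nx.minimum_spanning_tree(G, weight="weight")
centrality = nx.degree_centrality(mst)
print(sorted(centrality, key=centrality.get, reverse=True))  # candidate stocks for the portfolio
```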
- [7] arXiv:2507.20338 [pdf, html, other]
Title: Lévy-Driven Option Pricing without a Riskless Asset
Subjects: Mathematical Finance (q-fin.MF)
We extend the Lindquist-Rachev (LR) option-pricing framework--which values derivatives in markets lacking a traded risk-free bond--by introducing common Lévy jump dynamics across two risky assets. The resulting endogenous "shadow" short rate replaces the usual risk-free yield and governs discounting and risk-neutral drifts. We focus on two widely used pure-jump specifications: the Normal Inverse Gaussian (NIG) process and the Carr-Geman-Madan-Yor (CGMY) tempered-stable process. Using Itô-Lévy calculus we derive an LR partial integro-differential equation (LR-PIDE) and obtain European option values through characteristic-function methods implemented with the Fast Fourier Transform (FFT) and Fourier-cosine (COS) algorithms. Calibrations to S&P 500 index options show that both jump models materially reduce pricing errors and fit the observed volatility smile far better than the Black-Scholes benchmark; CGMY delivers the largest improvement. We also extract time-varying shadow short rates from paired asset data and show that sharp declines coincide with liquidity-stress episodes, highlighting risk signals not visible in Treasury yields. The framework links jump risk, relative asset pricing, and funding conditions in a tractable form for practitioners.
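For orientation, the sketch below prices a European call under an NIG log-return using its characteristic function and Gil-Pelaez inversion; it uses a standard flat discount rate rather than the paper's endogenous shadow short rate, and all parameters are assumed.

```python
# Standalone sketch of European call pricing under Normal Inverse Gaussian (NIG) log-returns
# via the characteristic function and Gil-Pelaez inversion. This is the textbook risk-neutral
# setup with an assumed flat rate r; it does NOT implement the LR shadow-rate construction,
# and the NIG parameters are illustrative.
import numpy as np

S0, K, T, r = 100.0, 100.0, 1.0, 0.03
alpha, beta, delta = 15.0, -5.0, 0.5           # assumed NIG parameters

def nig_cf(u, mu):
    # characteristic function of the NIG log-return over horizon T
    g = np.sqrt(alpha**2 - beta**2) - np.sqrt(alpha**2 - (beta + 1j * u) ** 2)
    return np.exp(1j * u * mu * T + delta * T * g)

# mean correction so that E[S_T] = S0 * exp(rT)
mu_rn = r - delta * (np.sqrt(alpha**2 - beta**2) - np.sqrt(alpha**2 - (beta + 1.0) ** 2))
cf_lnS = lambda u: np.exp(1j * u * np.log(S0)) * nig_cf(u, mu_rn)

u = np.linspace(1e-6, 200.0, 20001)
du, k = u[1] - u[0], np.log(K)
P2 = 0.5 + np.sum(np.real(np.exp(-1j * u * k) * cf_lnS(u) / (1j * u))) * du / np.pi
P1 = 0.5 + np.sum(np.real(np.exp(-1j * u * k) * cf_lnS(u - 1j)
                          / (1j * u * cf_lnS(-1j)))) * du / np.pi
print(S0 * P1 - K * np.exp(-r * T) * P2)       # call price
```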
- [8] arXiv:2507.20340 [pdf, other]
Title: Measuring the Macroeconomic and Financial Stability of Bangladesh
Subjects: General Economics (econ.GN)
This study constructs an Aggregate Financial Stability Index (AFSI) for Bangladesh to evaluate the systemic health and resilience of the country's financial system during the period from 2016 to 2024. The index incorporates 19 macrofinancial indicators across four key sectors: the Real Sector, the Financial and Monetary Sector, the Fiscal Sector, and the External Sector. Using a normalized scoring approach and an equal weighting scheme, sub-indices were aggregated to form a comprehensive measure of financial stability. The findings indicate that while the Real and Fiscal sectors demonstrated modest improvements in FY2024, overall financial stability deteriorated, largely due to poor performance in the Financial and Monetary Sector and continued weakness in the External Sector. Key stress indicators include rising non-performing loans, declining capital adequacy ratios, weak capital market performance, growing external debt, and shrinking foreign exchange reserves. The study highlights the interconnectedness of macro-financial sectors and the urgent need for structural reforms, stronger regulatory oversight, and enhanced macroprudential policy coordination. The AFSI framework developed in this paper offers an early warning tool for policymakers and contributes to the literature on financial stability measurement in emerging economies.
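A minimal sketch of the normalize-and-average construction is given below; the indicators, values, and directionality flags are invented placeholders rather than the 19 indicators used for Bangladesh.

```python
# Sketch of the index construction: min-max normalize each indicator (flipping the sign for
# "bad" indicators such as non-performing loans), then aggregate with equal weights.
# Indicator names and values are made-up placeholders, not Bangladesh data.
import pandas as pd

raw = pd.DataFrame({
    "gdp_growth":        [7.9, 6.9, 3.4, 6.9, 7.1, 5.8],   # Real sector (higher is better)
    "npl_ratio":         [9.3, 9.9, 8.1, 8.0, 9.0, 12.6],  # Financial sector (lower is better)
    "fiscal_deficit":    [4.8, 5.4, 5.1, 4.6, 5.1, 4.6],   # Fiscal sector (lower is better)
    "fx_reserves_month": [6.5, 7.0, 8.5, 7.2, 5.0, 4.3],   # External sector (higher is better)
}, index=[2019, 2020, 2021, 2022, 2023, 2024])
higher_is_better = {"gdp_growth": True, "npl_ratio": False,
                    "fiscal_deficit": False, "fx_reserves_month": True}

def normalize(col, good_high):
    z = (col - col.min()) / (col.max() - col.min())
    return z if good_high else 1.0 - z

scores = pd.DataFrame({c: normalize(raw[c], higher_is_better[c]) for c in raw})
afsi = scores.mean(axis=1)   # equal weights across the (here single-indicator) sub-indices
print(afsi)
```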
- [9] arXiv:2507.20410 [pdf, other]
Title: Beyond pay: AI skills reward more job benefits
Comments: 41 pages, 10 figures, 5 tables
Subjects: General Economics (econ.GN)
This study investigates the non-monetary rewards associated with artificial intelligence (AI) skills in the U.S. labour market. Using a dataset of approximately ten million online job vacancies from 2018 to 2024, we identify AI roles (positions requiring at least one AI-related skill) and examine the extent to which these roles offer non-monetary benefits such as tuition assistance, paid leave, health and well-being perks, parental leave, workplace culture enhancements, and remote work options. While previous research has documented substantial wage premiums for AI-related roles due to growing demand and limited talent supply, our study asks whether this demand also translates into enhanced non-monetary compensation. We find that AI roles are significantly more likely to offer such perks, even after controlling for education requirements, industry, and occupation type: an AI role is twice as likely to offer parental leave and almost three times as likely to provide remote working options. Moreover, the highest-paying AI roles tend to bundle these benefits, suggesting a compound premium where salary increases coincide with expanded non-monetary rewards. AI roles offering parental leave or health benefits show salaries that are, on average, 12% to 20% higher than AI roles without this benefit. This pattern is particularly pronounced in years and occupations experiencing the highest AI-related demand, pointing to a demand-driven dynamic. Our findings underscore the strong pull of AI talent in the labor market and challenge narratives of technological displacement, highlighting instead how employers compete for scarce talent through both financial and non-financial incentives.
- [10] arXiv:2507.20468 [pdf, other]
Title: Building crypto portfolios with agentic AI
Comments: 12 pages, 2 figures
Subjects: Portfolio Management (q-fin.PM); Machine Learning (cs.LG)
The rapid growth of crypto markets has opened new opportunities for investors, but at the same time exposed them to high volatility. To address the challenge of managing dynamic portfolios in such an environment, this paper presents a practical application of a multi-agent system designed to autonomously construct and evaluate crypto-asset allocations. Using daily data on the ten most capitalized cryptocurrencies from 2020 to 2025, we compare two automated investment strategies: a static equal-weighting strategy and a rolling-window optimization strategy, both implemented to maximize Modern Portfolio Theory (MPT) evaluation metrics such as expected return and the Sharpe and Sortino ratios while minimizing volatility. Each step of the process is handled by dedicated agents, integrated through a collaborative architecture in Crew AI. The results show that the dynamic optimization strategy achieves significantly better performance in terms of risk-adjusted returns, both in-sample and out-of-sample. This highlights the benefits of adaptive techniques in portfolio management, particularly in volatile markets such as cryptocurrency markets. The proposed methodology also demonstrates how multi-agent systems can provide scalable, auditable, and flexible solutions in financial automation.
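The rolling-window optimization step can be sketched as follows; the simulated returns, window length, rebalance frequency, and long-only constraint are assumptions, and the agentic Crew AI orchestration around this step is omitted.

```python
# Sketch of the rolling-window step: at each rebalance date, estimate mean and covariance
# over a trailing window and solve for long-only weights maximizing the Sharpe ratio.
# Returns are simulated stand-ins for the ten cryptocurrencies mentioned in the abstract.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
returns = rng.normal(0.001, 0.03, size=(500, 10))   # stand-in daily returns
window, n_assets = 90, returns.shape[1]

def max_sharpe_weights(window_returns):
    mu, cov = window_returns.mean(axis=0), np.cov(window_returns.T)
    neg_sharpe = lambda w: -(w @ mu) / np.sqrt(w @ cov @ w)
    cons = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]     # fully invested
    res = minimize(neg_sharpe, np.full(n_assets, 1.0 / n_assets),
                   bounds=[(0.0, 1.0)] * n_assets, constraints=cons)
    return res.x

weights_path = [max_sharpe_weights(returns[t - window:t])
                for t in range(window, len(returns), 21)]        # roughly monthly rebalance
print(np.round(weights_path[-1], 3))
```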
- [11] arXiv:2507.20474 [pdf, html, other]
Title: MountainLion: A Multi-Modal LLM-Based Agent System for Interpretable and Adaptive Financial Trading
Authors: Siyi Wu, Zhaoyang Guan, Leyi Zhao, Xinyuan Song, Xinyu Ying, Hanlin Zhang, Michele Pak, Yangfan He, Yi Xin, Jianhui Wang, Tianyu Shi
Subjects: Trading and Market Microstructure (q-fin.TR); Computation and Language (cs.CL); Machine Learning (cs.LG)
Cryptocurrency trading is a challenging task requiring the integration of heterogeneous data from multiple modalities. Traditional deep learning and reinforcement learning approaches typically demand large training datasets and encode diverse inputs into numerical representations, often at the cost of interpretability. Recent progress in large language model (LLM)-based agents has demonstrated the capacity to process multi-modal data and support complex investment decision-making. Building on these advances, we present \textbf{MountainLion}, a multi-modal, multi-agent system for financial trading that coordinates specialized LLM-based agents to interpret financial data and generate investment strategies. MountainLion processes textual news, candlestick charts, and trading signal charts to produce high-quality financial reports, while also enabling modification of reports and investment recommendations through data-driven user interaction and question answering. A central reflection module analyzes historical trading signals and outcomes to continuously refine decision processes, and the system is capable of real-time report analysis, summarization, and dynamic adjustment of investment strategies. Empirical results confirm that MountainLion systematically enriches technical price triggers with contextual macroeconomic and capital flow signals, providing a more interpretable, robust, and actionable investment framework that improves returns and strengthens investor confidence.
- [12] arXiv:2507.20494 [pdf, html, other]
Title: Deep Reputation Scoring in DeFi: zScore-Based Wallet Ranking from Liquidity and Trading Signals
Comments: 10 pages, 5 figures. Independently developed system by Zeru Finance for decentralized user scoring. Not submitted to any conference or journal
Subjects: General Finance (q-fin.GN); Machine Learning (cs.LG)
As decentralized finance (DeFi) evolves, distinguishing between user behaviors - liquidity provision versus active trading - has become vital for risk modeling and on-chain reputation. We propose a behavioral scoring framework for Uniswap that assigns two complementary scores: a Liquidity Provision Score that assesses strategic liquidity contributions, and a Swap Behavior Score that reflects trading intent, volatility exposure, and discipline. The scores are constructed using rule-based blueprints that decompose behavior into volume, frequency, holding time, and withdrawal patterns. To handle edge cases and learn feature interactions, we introduce a deep residual neural network with densely connected skip blocks inspired by the U-Net architecture. We also incorporate pool-level context such as total value locked (TVL), fee tiers, and pool size, allowing the system to differentiate similar user behaviors across pools with varying characteristics. Our framework enables context-aware and scalable DeFi user scoring, supporting improved risk assessment and incentive design. Experiments on Uniswap v3 data show its usefulness for user segmentation and protocol-aligned reputation systems. Although we refer to our metric as zScore, it is independently developed and methodologically different from the cross-protocol system proposed by Udupi et al. Our focus is on role-specific behavioral modeling within Uniswap using blueprint logic and supervised learning.
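As a rough illustration of a residual scoring network with skip connections, consider the PyTorch sketch below; the feature set, dimensions, and output scaling are assumptions, not the paper's exact U-Net-inspired architecture.

```python
# Generic sketch of a small residual scoring network with skip connections, in the spirit
# of the architecture described above. Feature names, dimensions, and the sigmoid output
# in (0, 1) are illustrative assumptions.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
    def forward(self, x):
        return torch.relu(x + self.net(x))          # skip connection around the block

class WalletScorer(nn.Module):
    def __init__(self, n_features: int = 12, hidden: int = 64, n_blocks: int = 3):
        super().__init__()
        self.stem = nn.Linear(n_features, hidden)
        self.blocks = nn.Sequential(*[ResidualBlock(hidden) for _ in range(n_blocks)])
        self.head = nn.Linear(hidden, 1)
    def forward(self, x):
        return torch.sigmoid(self.head(self.blocks(torch.relu(self.stem(x)))))

# hypothetical features: volume, frequency, holding time, withdrawal patterns, pool TVL, ...
wallet_features = torch.randn(8, 12)                 # batch of 8 hypothetical wallets
print(WalletScorer()(wallet_features).squeeze())     # behavioral scores in (0, 1)
```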
- [13] arXiv:2507.20796 [pdf, other]
Title: Aligning Large Language Model Agents with Rational and Moral Preferences: A Supervised Fine-Tuning Approach
Subjects: General Economics (econ.GN); Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
Understanding how large language model (LLM) agents behave in strategic interactions is essential as these systems increasingly participate autonomously in economically and morally consequential decisions. We evaluate LLM preferences using canonical economic games, finding substantial deviations from human behavior. Models like GPT-4o show excessive cooperation and limited incentive sensitivity, while reasoning models, such as o3-mini, align more consistently with payoff-maximizing strategies. We propose a supervised fine-tuning pipeline that uses synthetic datasets derived from economic reasoning to align LLM agents with economic preferences, focusing on two stylized preference structures. In the first, utility depends only on individual payoffs (homo economicus), while utility also depends on a notion of Kantian universalizability in the second preference structure (homo moralis). We find that fine-tuning based on small datasets shifts LLM agent behavior toward the corresponding economic agent. We further assess the fine-tuned agents' behavior in two applications: moral dilemmas involving autonomous vehicles and algorithmic pricing in competitive markets. These examples illustrate how different normative objectives, embedded via realizations from the stylized preference structures, can influence market and moral outcomes. This work contributes a replicable, cost-efficient, and economically grounded pipeline to align AI preferences using moral-economic principles.
- [14] arXiv:2507.20957 [pdf, html, other]
Title: Your AI, Not Your View: The Bias of LLMs in Investment Analysis
Authors: Hoyoung Lee, Junhyuk Seo, Suhwan Park, Junhyeong Lee, Wonbin Ahn, Chanyeol Choi, Alejandro Lopez-Lira, Yongjae Lee
Subjects: Portfolio Management (q-fin.PM); Artificial Intelligence (cs.AI); Computation and Language (cs.CL)
In finance, Large Language Models (LLMs) face frequent knowledge conflicts due to discrepancies between pre-trained parametric knowledge and real-time market data. These conflicts become particularly problematic when LLMs are deployed in real-world investment services, where misalignment between a model's embedded preferences and those of the financial institution can lead to unreliable recommendations. Yet little research has examined what investment views LLMs actually hold. We propose an experimental framework to investigate such conflicts, offering the first quantitative analysis of confirmation bias in LLM-based investment analysis. Using hypothetical scenarios with balanced and imbalanced arguments, we extract models' latent preferences and measure their persistence. Focusing on sector, size, and momentum, our analysis reveals distinct, model-specific tendencies. In particular, we observe a consistent preference for large-cap stocks and contrarian strategies across most models. These preferences often harden into confirmation bias, with models clinging to initial judgments despite counter-evidence.
New submissions (showing 14 of 14 entries)
- [15] arXiv:2507.19487 (cross-list from cs.CY) [pdf, other]
Title: Does AI and Human Advice Mitigate Punishment for Selfish Behavior? An Experiment on AI Ethics from a Psychological Perspective
Subjects: Computers and Society (cs.CY); Artificial Intelligence (cs.AI); Computation and Language (cs.CL); Human-Computer Interaction (cs.HC); General Economics (econ.GN)
People increasingly rely on AI-advice when making decisions. At times, such advice can promote selfish behavior. When individuals abide by selfishness-promoting AI advice, how are they perceived and punished? To study this question, we build on theories from social psychology and combine machine-behavior and behavioral economic approaches. In a pre-registered, financially-incentivized experiment, evaluators could punish real decision-makers who (i) received AI, human, or no advice. The advice (ii) encouraged selfish or prosocial behavior, and decision-makers (iii) behaved selfishly or, in a control condition, behaved prosocially. Evaluators further assigned responsibility to decision-makers and their advisors. Results revealed that (i) prosocial behavior was punished very little, whereas selfish behavior was punished much more. Focusing on selfish behavior, (ii) compared to receiving no advice, selfish behavior was penalized more harshly after prosocial advice and more leniently after selfish advice. Lastly, (iii) whereas selfish decision-makers were seen as more responsible when they followed AI compared to human advice, punishment between the two advice sources did not vary. Overall, behavior and advice content shape punishment, whereas the advice source does not.
- [16] arXiv:2507.20202 (cross-list from cs.LG) [pdf, html, other]
Title: Technical Indicator Networks (TINs): An Interpretable Neural Architecture Modernizing Classical Technical Analysis for Adaptive Algorithmic Trading
Comments: Patent Application No. DE10202502351 filed on July 8, 2025 with DPMA
Subjects: Machine Learning (cs.LG); Portfolio Management (q-fin.PM)
This work proposes that a vast majority of classical technical indicators in financial analysis are, in essence, special cases of neural networks with fixed and interpretable weights. It is shown that nearly all such indicators, such as moving averages, momentum-based oscillators, volatility bands, and other commonly used technical constructs, can be reconstructed topologically as modular neural network components. Technical Indicator Networks (TINs) are introduced as a general neural architecture that replicates and structurally upgrades traditional indicators by supporting n-dimensional inputs such as price, volume, sentiment, and order book data. By encoding domain-specific knowledge into neural structures, TINs modernize the foundational logic of technical analysis and propel algorithmic trading into a new era, bridging the legacy of proven indicators with the potential of contemporary AI systems.
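The paper's central observation is easy to verify for the simplest indicator: a simple moving average is a linear unit with fixed, equal weights. The short check below uses arbitrary prices.

```python
# Worked illustration for the simplest case: an n-period simple moving average is a
# 1-D convolution (a linear layer) with fixed, equal weights 1/n and no bias.
# The price series is arbitrary.
import numpy as np

prices = np.array([101.0, 102.5, 101.8, 103.2, 104.0, 103.5, 105.1, 106.0])
n = 4
fixed_weights = np.full(n, 1.0 / n)          # the "neural" weights of the SMA

# classic indicator: rolling mean over the last n prices
sma_classic = np.array([prices[i - n + 1:i + 1].mean() for i in range(n - 1, len(prices))])
# the same indicator expressed as a convolution with the fixed weight vector
sma_as_network = np.convolve(prices, fixed_weights, mode="valid")

print(np.allclose(sma_classic, sma_as_network))   # True: identical outputs
```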
- [17] arXiv:2507.20263 (cross-list from cs.LG) [pdf, html, other]
Title: Learning from Expert Factors: Trajectory-level Reward Shaping for Formulaic Alpha Mining
Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Portfolio Management (q-fin.PM)
Reinforcement learning (RL) has successfully automated the complex process of mining formulaic alpha factors for creating interpretable and profitable investment strategies. However, existing methods are hampered by the sparse rewards of the underlying Markov Decision Process. This inefficiency limits the exploration of the vast symbolic search space and destabilizes the training process. To address this, Trajectory-level Reward Shaping (TLRS), a novel reward shaping method, is proposed. TLRS provides dense, intermediate rewards by measuring the subsequence-level similarity between partially generated expressions and a set of expert-designed formulas. Furthermore, a reward centering mechanism is introduced to reduce training variance. Extensive experiments on six major Chinese and U.S. stock indices show that TLRS significantly improves the predictive power of mined factors, boosting the Rank Information Coefficient by 9.29% over existing potential-based shaping algorithms. Notably, TLRS achieves a major leap in computational efficiency by reducing its time complexity with respect to the feature dimension from linear to constant, which is a significant improvement over distance-based baselines.
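A simplified stand-in for the dense reward idea is sketched below, scoring a partially generated formula by its best subsequence overlap with an expert library; the expert formulas, tokenization, and similarity measure are illustrative assumptions, not the exact TLRS definition.

```python
# Simplified sketch of a dense intermediate reward: score a partially generated formula by
# its best matching-block ratio against a library of expert formulas. The expert expressions,
# tokenization, and similarity measure are illustrative assumptions, not the TLRS definition.
from difflib import SequenceMatcher

EXPERT_FORMULAS = [
    "rank ( ts_corr ( close , volume , 10 ) )",
    "ts_mean ( close , 5 ) / ts_mean ( close , 20 )",
]

def shaped_reward(partial_expression: str) -> float:
    """Dense reward for a partial token sequence: best similarity to any expert formula."""
    tokens = partial_expression.split()
    return max(SequenceMatcher(None, tokens, expert.split()).ratio()
               for expert in EXPERT_FORMULAS)

# the reward grows as the partially built expression comes to resemble an expert formula
for step in ["rank (", "rank ( ts_corr (", "rank ( ts_corr ( close , volume , 10 ) )"]:
    print(step, "->", round(shaped_reward(step), 3))
```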
Cross submissions (showing 3 of 3 entries)
- [18] arXiv:2401.13159 (replaced) [pdf, other]
Title: Retail prices, environmental footprints, and nutritional profiles of commonly sold retail food items in 181 countries
Subjects: General Economics (econ.GN)
Background: Transitions towards healthier, more environmentally sustainable diets would require large shifts in consumption patterns. Cost and affordability can be barriers to consuming healthy, sustainable diets.
Objective: This study provides the first worldwide test of how retail food prices relate to empirically estimated environmental footprints and nutritional profile scores between and within food groups.
Methods: We use 48,316 prices for 860 retail food items commonly sold in 181 countries during 2011 and 2017, matched to estimated carbon and water footprints and nutritional profiles, to test whether healthier and more sustainable foods are more expensive between and within food groups.
Results: Prices, environmental footprints, and nutritional profiles differ between food groups. Within almost all groups, more expensive items have significantly larger carbon and water footprints. Associations are strongest for animal source foods, where each 10% increment in price is associated with 21 grams higher carbon footprint and 5 liters higher water footprint per 100kcal of food. There is no such gradient for price and nutritional profile, as more expensive items are sometimes healthier and sometimes less healthy depending on the food group, price range, and nutritional attribute of interest.
Conclusions: Our finding that higher-priced items have larger environmental footprints is contrary to expectations that a more sustainable diet would be more expensive. Instead, we find that within each food group, meeting dietary needs with lower environmental footprints is possible by choosing items with a lower unit price. These findings are consistent with prior observations that higher-priced items typically use more resources, including energy and water, but may or may not be healthful as measured by nutrient profile scores.
- [19] arXiv:2403.13138 (replaced) [pdf, html, other]
Title: Max- and min-stability under first-order stochastic dominance
Subjects: Mathematical Finance (q-fin.MF); Probability (math.PR)
Max-stability is the property that taking a maximum between two inputs results in a maximum between two outputs. We study max-stability with respect to first-order stochastic dominance, the most fundamental notion of stochastic dominance in decision theory. Under two additional standard axioms of nondegeneracy and lower semicontinuity, we establish a representation theorem for functionals satisfying max-stability, which turns out to be represented by the supremum of a bivariate function. A parallel characterization result for min-stability, that is, with the maximum replaced by the minimum in max-stability, is also established. By combining both max-stability and min-stability, we obtain a new characterization for a class of functionals, called the Lambda-quantiles, that appear in finance and political science.
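Schematically, and with notation assumed rather than taken from the paper, the two properties for a functional $\rho$ read as below, where $\vee$ denotes the maximum of the two inputs with respect to first-order stochastic dominance and the usual maximum on the outputs (and $\wedge$ the corresponding minimum):

```latex
\rho(X \vee Y) = \rho(X) \vee \rho(Y) \qquad \text{(max-stability)}
\rho(X \wedge Y) = \rho(X) \wedge \rho(Y) \qquad \text{(min-stability)}
```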
- [20] arXiv:2410.14173 (replaced) [pdf, other]
Title: Decentralized Finance (Literacy) today and in 2034: Initial Insights from Singapore and beyond
Comments: working paper
Subjects: General Finance (q-fin.GN)
How will Decentralized Finance transform financial services? Using New Institutional Economics and Dynamic Capabilities Theory, I analyse survey data from 109 experts using non-parametric methods. Experts span traditional finance, the DeFi industry, and academia. Four insights emerge: adoption expectations rise from negligible to 43% expecting at least high adoption by 2034; experts expect convergence scenarios over disruption, with traditional finance embracing DeFi the most likely path; back-office functions transform before customer-facing functions; and strategic competencies eclipse DeFi-sector-specific and technical skills. This challenges technology-centric adoption models. DeFi represents emerging market entry requiring organizational transformation, not just technological implementation. SEC developments validate these predictions. Financial institutions should prioritize developing strategic capabilities over mere technical training.
- [21] arXiv:2411.08864 (replaced) [pdf, html, other]
Title: Isotropic Correlation Models for the Cross-Section of Equity Returns
Comments: 24 pages, 5 figures, code is available on the author's personal GitHub repository, code executes in Google's Colab system and generates figures from live data downloaded from Yahoo! Finance
Subjects: Portfolio Management (q-fin.PM)
This note discusses some of the aspects of a model for the covariance of equity returns based on a simple "isotropic" structure in which all pairwise correlations are taken to be the same value. The effect of the structure on feasible values for the common correlation of returns and on the "effective degrees of freedom" within the equity cross-section are discussed, as well as the impact of this constraint on the asymptotic Normality of portfolio returns. An eigendecomposition of the covariance matrix is presented and used to partition variance into that from a common "market" factor and "non-diversifiable" idiosyncratic risk. An empirical analysis of the recent history of the returns of S&P 500 Index members is presented and compared to the expectations from both this model and linear factor models. This analysis supports the isotropic covariance model and does not seem to provide evidence in support of linear factor models. Analysis of portfolio selection under isotropic correlation is presented using mean-variance optimization for both heteroskedastic and homoskedastic cases. Portfolio selection for negative exponential utility maximizers is also discussed for the general case of distributions of returns with elliptical symmetry. The fact that idiosyncratic risk may not be removed by diversification in a model that the data supports undermines the basic premises of structures such as the C.A.P.M. and A.P.T. If the cross-section of equity returns is more accurately described by this structure, then an inevitable consequence is that picking stocks is not a "pointless" activity, as the returns to residual risk would be non-zero.
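The eigenstructure underlying the "market factor plus non-diversifiable idiosyncratic risk" decomposition can be checked numerically; the sketch below uses an arbitrary dimension and common correlation.

```python
# Numerical check of the eigenstructure behind the isotropic model: with common pairwise
# correlation rho, the n x n correlation matrix has one "market" eigenvalue 1 + (n-1)*rho
# and n-1 equal eigenvalues 1 - rho, so idiosyncratic risk does not vanish under
# diversification. The values of n and rho are arbitrary.
import numpy as np

n, rho = 500, 0.3
C = np.full((n, n), rho)
np.fill_diagonal(C, 1.0)

eigvals = np.sort(np.linalg.eigvalsh(C))[::-1]
print(eigvals[0], 1 + (n - 1) * rho)    # dominant "market" eigenvalue
print(eigvals[1], 1 - rho)              # the remaining n-1 identical eigenvalues

# equal-weight portfolio variance under unit volatilities: rho + (1 - rho)/n,
# which tends to rho (not zero) as n grows
w = np.full(n, 1.0 / n)
print(w @ C @ w, rho + (1 - rho) / n)
```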
- [22] arXiv:2505.18687 (replaced) [pdf, html, other]
Title: An AI Capability Threshold for Rent-Funded Universal Basic Income in an AI-Automated Economy
Comments: 9 pages, 3 figures, added more clarifications and refs
Subjects: General Economics (econ.GN); Artificial Intelligence (cs.AI); Computer Science and Game Theory (cs.GT)
We derive the first closed-form condition under which artificial intelligence (AI) capital profits could sustainably finance a universal basic income (UBI) without additional taxes or new job creation. In a Solow-Zeira economy characterized by a continuum of automatable tasks, a constant net saving rate $s$, and task-elasticity $\sigma < 1$, we analyze how the AI capability threshold--defined as the productivity level of AI relative to pre-AI automation--varies under different economic scenarios. At present economic parameters, we find that AI systems must achieve only approximately 5-6 times existing automation productivity to finance an 11%-of-GDP UBI, in the worst case situation where *no* new jobs or tasks are created.
Our analysis also reveals some specific policy levers: raising the public revenue share (e.g. profit taxation) of AI capital from the current 15% to about 33% halves the required AI capability threshold for attaining UBI to 3 times existing automation productivity, but gains diminish beyond a 50% public revenue share, especially if regulatory costs increase. Market structure also strongly affects outcomes: monopolistic or concentrated oligopolistic markets reduce the threshold by increasing economic rents, whereas heightened competition significantly raises it.
Overall, these results suggest two policy recommendations: maximizing the public revenue share up to the point at which operating costs are minimized, and strategically managing market competition; together, these can ensure AI's growing capabilities translate into meaningful social benefits within realistic technological progress scenarios.
- [23] arXiv:2507.17191 (replaced) [pdf, other]
Title: Quotas for scholarship recipients: an efficient race-neutral alternative to affirmative action?
Subjects: General Economics (econ.GN)
Since 2018, France's centralized higher education platform, Parcoursup, has implemented quotas for scholarship recipients, with program-specific thresholds based on the composition of each program's applicant pool. Using difference-in-differences methods, I find these quotas enabled scholarship students to access more selective programs, though intention-to-treat effects remain modest (at most 0.10 SD). Matching methods reveal that the policy improved the scholarship students' waiting-list positions relative to those of comparable non-scholarship peers. However, I detect no robust or lasting effects on the extensive margin of higher education access. Despite high policy salience, the quotas did not affect the application behavior or pre-college investment of scholarship students, even among high achievers. These findings align with research on affirmative action bans, suggesting that such policies primarily benefit disadvantaged students who access selective institutions, rather than expanding total enrollment. Nevertheless, scholarship quotas demonstrate that race-neutral alternatives can effectively promote socioeconomic diversity in prestigious programs.
- [24] arXiv:2408.02634 (replaced) [pdf, other]
Title: CLVR Ordering of Transactions on AMMs
Subjects: Computer Science and Game Theory (cs.GT); Mathematical Finance (q-fin.MF); Trading and Market Microstructure (q-fin.TR)
This paper introduces a trade ordering rule that aims to reduce intra-block price volatility in Automated Market Maker (AMM) powered decentralized exchanges. The ordering rule introduced here, Clever Look-ahead Volatility Reduction (CLVR), operates under the (common) framework in decentralized finance that allows some entities to observe trade requests before they are settled, assemble them into "blocks", and order them as they like. On AMM exchanges, asset prices are continuously and transparently updated as a result of each trade, and therefore transaction order has high financial value. CLVR aims to order transactions for traders' benefit. Our primary focus is intra-block price stability (minimizing volatility), which has two main benefits for traders: it reduces the transaction failure rate and allows traders to receive prices closer to the reference price at which they submitted their transactions. We show that CLVR constructs an ordering which approximately minimizes price volatility at a small computational cost and can be trivially verified externally.
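A greedy look-ahead heuristic in the spirit of the rule described above can be sketched as follows; the constant-product pool, trade sizes, and selection criterion are illustrative simplifications, not the paper's exact CLVR algorithm or its guarantees.

```python
# Greedy look-ahead sketch in the spirit of the ordering rule above: at each step, among the
# pending swaps, execute the one whose resulting constant-product AMM price stays closest to
# the block's reference price. Pool sizes and trades are made up; this is not the exact CLVR rule.
def simulate_swap(x_reserve, y_reserve, dx):
    """Constant-product swap: dx > 0 adds asset X to the pool, dx < 0 removes X."""
    new_x = x_reserve + dx
    new_y = (x_reserve * y_reserve) / new_x
    return new_x, new_y

def greedy_low_volatility_order(trades, x0, y0):
    reference_price = y0 / x0               # price of X in terms of Y at block start
    ordering, x, y = [], x0, y0
    pending = list(trades)
    while pending:
        # look ahead: pick the trade that leaves the pool price nearest the reference
        def deviation(dx):
            nx, ny = simulate_swap(x, y, dx)
            return abs(ny / nx - reference_price)
        best = min(pending, key=deviation)
        pending.remove(best)
        x, y = simulate_swap(x, y, best)
        ordering.append(best)
    return ordering

print(greedy_low_volatility_order([+50.0, -30.0, +20.0, -40.0], x0=1000.0, y0=1000.0))
```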