arXiv: 2505.07393

AI in Money Matters

Published: 11:03 AM

This paper investigates the cautious adoption of Large Language Models like ChatGPT in the Fintech industry through qualitative interviews, highlighting professionals’ optimism for routine task automation, concerns over regulatory inadequacies, and interest in bespoke models to ensure compliance and data control.

Keywords: Large Language Model, Generative AI, AI Ethics, Human-AI Interaction, AI in Finance, Trustworthy AI

Authors: Nadine Sandjo Tchatchoua, Richard Harper

Affiliations: Roskilde University, Lancaster University

Generated by grok-3

Background Problem

The rapid emergence of Large Language Models (LLMs) like ChatGPT since November 2022 has sparked widespread academic and public interest in their potential societal benefits and challenges. However, there is a significant gap in understanding how these models are perceived and adopted in regulated industries such as Fintech, where issues of reliability, accountability, and compliance are paramount. This paper addresses this gap by investigating the factors influencing LLM adoption in Fintech and how companies in this sector are reacting to these technological developments, focusing on professional viewpoints to counterbalance the hype surrounding generative AI.

Method

The study employs a qualitative approach using semi-structured interviews with representatives from six Fintech companies based in London, Stockholm, and Copenhagen. The methodology focuses on capturing detailed perspectives on current and potential uses of LLMs, particularly ChatGPT, within these organizations. Interviews, lasting between 36 and 63 minutes, covered three main areas: past usage of LLMs, future expansion plans and associated legal challenges, and the adequacy of current regulations like GDPR for governing LLM applications. Data analysis followed a general inductive approach, involving transcription, iterative categorization, and thematic grouping by both authors to identify key themes such as caution in adoption, regulatory concerns, and interest in bespoke models.

Experiment

The ‘experiment’ in this context is the empirical investigation through interviews rather than a controlled technical experiment. The setup involved opportunity and snowball sampling to recruit participants who either already used LLMs or intended to adopt them in their operations, ensuring relevance to the research question. The results revealed cautious optimism: Fintech professionals see potential in LLMs for routine tasks like documentation and customer service but are wary of using them for decision-making because of the risks of error, bias, and regulatory non-compliance. The respondents’ view of current regulation as inadequate, together with their interest in bespoke models, reflects practical concerns, though the small sample size limits broader applicability. The dismissal of environmental impacts by respondents contrasts with academic concerns, and the study does not critically evaluate whether this pragmatism overlooks long-term sustainability issues. Overall, the setup is reasonable for an exploratory study but lacks depth in assessing specific use cases or measurable outcomes of LLM adoption.

Further Thoughts

The paper’s focus on practitioner perspectives in Fintech offers a grounded counterpoint to the often speculative discourse on LLMs, but it raises the question of whether the industry’s caution might delay innovation, or whether it wisely mitigates the risks seen in earlier AI hype cycles (e.g., the unfulfilled promises of the 1980s Fifth Generation computing project). An interesting angle for future research could explore how bespoke LLM development in Fintech might intersect with federated learning approaches, where data privacy is preserved by training models locally without central data aggregation — a critical concern in regulated sectors. Additionally, the dismissal of environmental impacts by Fintech actors contrasts sharply with growing academic and policy emphasis on sustainable AI, suggesting a potential disconnect between industry priorities and broader societal goals. This tension could be investigated further by comparing Fintech attitudes with those in other regulated sectors like healthcare, where ethical considerations might weigh differently. Finally, linking this study to historical AI adoption patterns could indicate whether LLMs are on a path to practical integration or headed for another ‘trough of disillusionment’, in the terms of Gartner’s hype cycle.
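To make the federated-learning idea concrete: each organization trains on its own data and only model parameters are shared and averaged centrally, so raw customer data never leaves the institution. The following is a minimal toy sketch of federated averaging (FedAvg) on a linear model — all names and data here are hypothetical illustrations, not anything from the paper or from any Fintech deployment:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training step: plain gradient descent on a
    linear least-squares model. The raw data (X, y) never leaves the client."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def federated_average(global_w, client_datasets):
    """Server step: collect locally trained weights and average them,
    weighted by each client's dataset size (the FedAvg aggregation rule)."""
    updates = [local_update(global_w, X, y) for X, y in client_datasets]
    sizes = np.array([len(y) for _, y in client_datasets], dtype=float)
    return np.average(updates, axis=0, weights=sizes)

# Toy run: three hypothetical 'fintech clients' jointly fit y = 2x
# without ever pooling their data on the server.
rng = np.random.default_rng(0)
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 1))
    y = 2.0 * X[:, 0]
    clients.append((X, y))

w = np.zeros(1)
for _ in range(20):  # 20 communication rounds
    w = federated_average(w, clients)
print(w)  # converges toward the true coefficient, 2.0
```

Real deployments layer secure aggregation and differential privacy on top of this basic exchange, which is precisely what makes the approach attractive under GDPR-style constraints: the server sees only aggregated parameters, never transaction records.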


