Tag: Trustworthy AI
All the articles with the tag "Trustworthy AI".
-
AI in Money Matters
This paper investigates the cautious adoption of Large Language Models like ChatGPT in the Fintech industry through qualitative interviews, highlighting professionals' optimism about automating routine tasks, concerns over regulatory gaps, and interest in bespoke models that ensure compliance and data control.
-
A Large-Scale Empirical Analysis of Custom GPTs' Vulnerabilities in the OpenAI Ecosystem
This paper presents a large-scale empirical analysis of 14,904 custom GPTs in the OpenAI store, revealing that over 95% lack adequate protection against attacks such as roleplay (96.51% vulnerable) and phishing (91.22%); it also introduces a multi-metric popularity ranking system and highlights the need for stronger security in both custom and base models.
-
Facets of Disparate Impact: Evaluating Legally Consistent Bias in Machine Learning
This paper introduces the Objective Fairness Index (OFI), a legally grounded metric for evaluating bias in machine learning by comparing marginal benefits across groups, demonstrating its ability to detect algorithmic bias in applications like COMPAS and the Folktables Adult Employment dataset where traditional Disparate Impact fails.
-
UnifyFL: Enabling Decentralized Cross-Silo Federated Learning
This paper proposes UnifyFL, a decentralized cross-silo federated learning framework that uses an Ethereum blockchain and IPFS to enable trust-based collaboration among organizations, achieving accuracy comparable to centralized FL while supporting flexible aggregation policies and handling stragglers efficiently through synchronous and asynchronous modes.
-
A closer look at how large language models trust humans: patterns and biases
Using simulated experiments, this study is the first to reveal large language models' implicit patterns of trust toward humans, showing that, like humans, they are influenced by trustworthiness dimensions, but with heterogeneity across models and demographic biases.