When You Tell AI Models to Act Like Women, Most Become More Risk-Averse: Study

CryptoNewsNet · 2025/10/11 19:36
By: decrypt.co

Ask an AI to make decisions as a woman, and it suddenly gets more cautious about risk. Tell the same AI to think like a man, and watch it roll the dice with greater confidence.

A new research paper from Allameh Tabataba'i University in Tehran, Iran, found that large language models systematically change their approach to financial risk depending on the gender identity they're asked to assume.

The study, which tested AI systems from companies including OpenAI, Google, Meta, and DeepSeek, revealed that several models dramatically shifted their risk tolerance when prompted with different gender identities.

DeepSeek Reasoner and Google's Gemini 2.0 Flash-Lite showed the most pronounced effect, becoming notably more risk-averse when asked to respond as women, mirroring real-world patterns where women statistically demonstrate greater caution in financial decisions.

The researchers used a standard economics test called the Holt-Laury task, which presents participants with 10 decisions between a safer and a riskier lottery option. As the decisions progress, the probability of the higher payoff rises, making the risky option increasingly attractive. The point where someone switches from the safe to the risky choice reveals their risk tolerance: switch early and you're a risk-taker; switch late and you're risk-averse.
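The mechanics of the task are easy to sketch. The payoffs below ($2.00/$1.60 for the safe lottery, $3.85/$0.10 for the risky one) are the standard values from Holt and Laury's original 2002 design, which the paper's task follows; a perfectly risk-neutral decision-maker switches the moment the risky lottery's expected value overtakes the safe one's.

```python
# Holt-Laury lottery task: 10 decisions between a safe and a risky lottery.
# Standard payoffs from Holt & Laury (2002): the safe lottery pays $2.00 or
# $1.60, the risky one $3.85 or $0.10. The probability of the high payoff
# rises from 10% to 100% across the 10 rows.
SAFE_HI, SAFE_LO = 2.00, 1.60
RISKY_HI, RISKY_LO = 3.85, 0.10

def expected_values(row: int) -> tuple[float, float]:
    """Expected value of each lottery at decision `row` (1-10)."""
    p = row / 10  # probability of the high payoff
    safe = p * SAFE_HI + (1 - p) * SAFE_LO
    risky = p * RISKY_HI + (1 - p) * RISKY_LO
    return safe, risky

def risk_neutral_switch_row() -> int:
    """First row where the risky lottery's EV exceeds the safe one's."""
    for row in range(1, 11):
        safe, risky = expected_values(row)
        if risky > safe:
            return row
    return 10

# A risk-neutral agent switches at row 5; switching later signals
# risk aversion, switching earlier signals risk seeking.
```

Under these payoffs the risky lottery's expected value first exceeds the safe one's at row 5, which is why a later switch point is read as greater risk aversion.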


When DeepSeek Reasoner was told to act as a woman, it consistently chose the safer option more often than when prompted to act as a man. The difference was measurable and consistent across 35 trials for each gender prompt. Gemini showed similar patterns, though the effect varied in strength.
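The measurement itself is just a repeated-trials loop. The sketch below is a toy simulation, not the study's actual code: `ask_model` is a hypothetical stand-in for a real chat-completion call with a persona prompt, and the gender gap it returns is hard-coded purely for illustration.

```python
import random
from statistics import mean

def ask_model(persona: str, rng: random.Random) -> int:
    """Toy stand-in for an LLM API call: returns the row (1-10) at which
    the simulated model switches to the risky lottery. Real code would
    prompt the model with the persona plus the 10 lottery decisions and
    parse its choices. The +1 gap here is illustrative only."""
    base = 6 if persona == "woman" else 5
    return max(1, min(10, base + rng.choice([-1, 0, 0, 1])))

def mean_switch_row(persona: str, trials: int = 35, seed: int = 0) -> float:
    """Average switch row over repeated trials; higher = more risk-averse.
    The study ran 35 trials per gender prompt."""
    rng = random.Random(seed)
    return mean(ask_model(persona, rng) for _ in range(trials))
```

Comparing `mean_switch_row("woman")` against `mean_switch_row("man")` for a given model is the shape of the comparison the study reports.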

On the other hand, OpenAI's GPT models remained largely unmoved by gender prompts, maintaining their risk-neutral approach regardless of whether they were told to think as male or female.

Meta's Llama models behaved unpredictably, sometimes showing the expected pattern and sometimes reversing it entirely. Meanwhile, xAI's Grok did Grok things, occasionally flipping the script and showing less risk aversion when prompted as female.


OpenAI has evidently been working to make its models more balanced. A 2023 study found that its models exhibited clear political biases, an issue OpenAI appears to have since addressed: newer research reports a 30% decrease in biased replies.

The research team, led by Ali Mazyaki, noted that this is basically a reflection of human stereotypes.

“This observed deviation aligns with established patterns in human decision-making, where gender has been shown to influence risk-taking behavior, with women typically exhibiting greater risk aversion than men,” the study says.

The study also examined whether AIs could convincingly play other roles beyond gender. When told to act as a "finance minister" or imagine themselves in a disaster scenario, the models again showed varying degrees of behavioral adaptation. Some adjusted their risk profiles appropriately for the context, while others remained stubbornly consistent.


Now, think about this: Many of these behavioral patterns aren't immediately obvious to users. An AI that subtly shifts its recommendations based on implicit gender cues in conversation could reinforce societal biases without anyone realizing it's happening.

For example, a loan approval system that becomes more conservative when processing applications from women, or an investment advisor that suggests safer portfolios to female clients, would perpetuate economic disparities under the guise of algorithmic objectivity.

The researchers argue these findings highlight the need for what they call "bio-centric measures" of AI behavior—ways to evaluate whether AI systems accurately represent human diversity without amplifying harmful stereotypes. They suggest that the ability to be manipulated isn't necessarily bad; an AI assistant should be able to adapt to represent different risk preferences when appropriate. The problem arises when this adaptability becomes an avenue for bias.

The research arrives as AI systems increasingly influence high-stakes decisions. From medical diagnosis to criminal justice, these models are being deployed in contexts where risk assessment directly impacts human lives.

If a medical AI becomes overly cautious when interfacing with female physicians or patients, then it could affect treatment recommendations. If a parole assessment algorithm shifts its risk calculations based on gendered language in case files, it could perpetuate systemic inequalities.

The study tested models ranging from tiny half-billion parameter systems to massive seven-billion parameter architectures, finding that size didn't predict gender responsiveness. Some smaller models showed stronger gender effects than their larger siblings, suggesting this isn't simply a matter of throwing more computing power at the problem.

This is not a problem that can be solved easily. After all, the internet that serves as the training corpus for these models, not to mention our history as a species, is full of tales of fearless, reckless male heroes and cautious, thoughtful women. In the end, teaching AIs to think differently may require us to live differently first.



