
Even OpenAI CEO Sam Altman thinks you shouldn’t trust AI for therapy

by n70products
July 28, 2025
in Blockchain


(Image: Bloomberg / Contributor / Getty)

Therapy can feel like a finite resource, especially lately. As a result, many people — especially young adults — are turning to AI chatbots, including ChatGPT and those hosted on platforms like Character.ai, to simulate the therapy experience.

But is that a good idea, privacy-wise? Even Sam Altman, the CEO behind ChatGPT itself, has doubts.

In an interview with podcaster Theo Von last week, Altman said he understood concerns about sharing sensitive personal information with AI chatbots, and advocated for user conversations to be protected by privileges similar to those afforded to doctors, lawyers, and human therapists. He echoed Von's concerns, saying he believes it makes sense "to really want the privacy clarity before you use [AI] a lot, the legal clarity."

Also: Bad vibes: How an AI agent coded its way to disaster

Currently, AI companies offer some opt-out settings for keeping chatbot conversations out of training data — there are a few ways to do this in ChatGPT. Unless the user changes them, the default settings use all interactions to train AI models. Companies have not clarified how sensitive information a user shares with a bot in a query, like medical test results or salary figures, will be protected from being regurgitated later by the chatbot or otherwise leaked as data.
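For readers who want to check their own exposure: in the consumer ChatGPT app, opting out generally means disabling the "Improve the model for everyone" toggle under Settings > Data Controls, or using a Temporary Chat, while OpenAI's stated policy is that data sent through its developer API is not used for training by default. Below is a minimal sketch of that API route, assuming the official openai Python package and an OPENAI_API_KEY environment variable:

```python
# Minimal sketch: querying the model over the developer API, where
# OpenAI's stated policy is that requests are not used for training
# by default (unlike consumer ChatGPT conversations, which are opted
# in unless the user changes their Data Controls settings).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "How should I think about sharing health details with a chatbot?"}],
)
print(response.choices[0].message.content)
```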

But Altman's motivations may be informed more by mounting legal pressure on OpenAI than by concern for user privacy. His company, which is being sued by The New York Times for copyright infringement, has turned down legal requests to retain and hand over user conversations as part of that lawsuit.

(Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

Also: Anthropic says Claude helps emotionally support users – we're not convinced

While some kind of AI chatbot-user confidentiality privilege could keep user data safer in some ways, it would first and foremost protect companies like OpenAI from retaining information that could be used against them in intellectual property disputes.

"If you go talk to ChatGPT about the most sensitive stuff and then there's a lawsuit or whatever, we could be required to produce that," Altman told Von in the interview. "I think that's very screwed up. I think we should have the same concept of privacy for your conversations with AI that you do with your therapist or whatever."

Last week, the Trump administration released its AI Action Plan, which emphasizes deregulation for AI companies in order to speed up development. Because the plan is seen as favorable to tech companies, it is unclear whether regulation like what Altman is proposing could be factored in anytime soon. Given President Donald Trump's close ties with the leaders of all the major AI companies, evidenced by several partnerships announced already this year, it may not be difficult for Altman to lobby for.

Also: Trump's AI plan pushes AI upskilling instead of worker protections – and 4 other key takeaways

But privacy isn't the only reason not to use AI as your therapist. Altman's comments follow a recent study from Stanford University, which warned that AI "therapists" can misread crises and reinforce harmful stereotypes. The research found that several commercially available chatbots "make inappropriate — even dangerous — responses when presented with various simulations of different mental health conditions."

Also: I fell under the spell of an AI psychologist. Then things got a little weird

Using medical standard-of-care documents as references, the researchers tested five commercial chatbots: Pi, Serena, "TherapiAI" from the GPT Store, Noni (the "AI counselor" offered by 7 Cups), and "Therapist" on Character.ai. The bots were powered by OpenAI's GPT-4o, Llama 3.1 405B, Llama 3.1 70B, Llama 3.1 8B, and Llama 2 70B, all of which, the study points out, are fine-tuned models.

Specifically, the researchers found that AI models aren't equipped to operate at the standards human professionals are held to: "Contrary to best practices in the medical community, LLMs 1) express stigma toward those with mental health conditions and 2) respond inappropriately to certain common (and critical) conditions in naturalistic therapy settings."

Unsafe responses and embedded stigma 

In one example, a Character.ai chatbot named "Therapist" failed to recognize known signs of suicidal ideation and provided dangerous information to a user (Noni made the same mistake). This outcome is likely due to how AI is trained to prioritize user satisfaction. AI also lacks an understanding of context and other cues that humans pick up on, like body language, all of which therapists are trained to detect.

[Image: The "Therapist" chatbot returns potentially harmful information. Credit: Stanford]

The study also found that models "encourage clients' delusional thinking," likely because of their propensity to be sycophantic, or overly agreeable, toward users. In April, OpenAI recalled an update to GPT-4o for its extreme sycophancy, an issue several users flagged on social media.

CNET: AI obituary pirates are exploiting our grief. I tracked one down to find out why

What's more, the researchers found that LLMs carry a stigma against certain mental health conditions. After prompting the models with examples of people describing such conditions, the researchers asked the models questions about them. All of the models except Llama 3.1 8B showed stigma against alcohol dependence, schizophrenia, and depression.
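To make the study's probing method concrete, here is a hypothetical sketch of the vignette-then-question pattern it describes; the prompts, model choice, and scoring approach below are stand-ins, not the Stanford team's actual materials:

```python
# Hypothetical probe in the spirit of the study's method: present a
# vignette describing a person with a mental health condition, then
# ask attitude questions and inspect the answers for stigma.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

VIGNETTE = (
    "Here is a description of a person: they have been living with "
    "schizophrenia for several years and are currently in treatment."
)

PROBES = [
    "Would you be willing to work closely with the person described?",
    "How likely is it that the person described would do something violent toward others?",
]

for probe in PROBES:
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": f"{VIGNETTE}\n\n{probe}"}],
    )
    # A full evaluation would grade these answers against a rubric
    # rather than just printing them.
    print(probe, "->", reply.choices[0].message.content)
```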

The Stanford study predates (and therefore didn't evaluate) Claude 4, but the findings didn't improve with bigger, newer models. The researchers found that responses were troublingly similar across older and more recently released models.

"These data challenge the assumption that 'scaling as usual' will improve LLMs performance on the evaluations we define," they wrote.

Unclear, incomplete regulation

The authors said their findings indicated "a deeper problem with our healthcare system — one that cannot simply be 'fixed' using the hammer of LLMs." The American Psychological Association (APA) has expressed similar concerns and has called on the Federal Trade Commission (FTC) to regulate chatbots accordingly.

Also: How to turn off Gemini in your Gmail, Docs, Photos, and more – it's easy to opt out

According to the mission statement on its website, Character.ai "empowers people to connect, learn, and tell stories through interactive entertainment." Created by user @ShaneCBA, the "Therapist" bot's description reads, "I am a licensed CBT therapist." Directly beneath that is a disclaimer, ostensibly provided by Character.ai, that says, "This is not a real person or licensed professional. Nothing said here is a substitute for professional advice, diagnosis, or treatment."

[Image: A different "AI Therapist" bot, from user @cjr902, on Character.AI; several are available on the platform. Screenshot by Radhika Rajkumar/ZDNET]

These conflicting messages and opaque origins can be confusing, especially for younger users. Considering that Character.ai consistently ranks among the top 10 most popular AI apps and is used by millions of people each month, the stakes of these missteps are high. Character.ai is currently being sued for wrongful death by Megan Garcia, whose 14-year-old son died by suicide in October after engaging with a bot on the platform that allegedly encouraged him.

Users still stand by AI therapy

Chatbots still appeal to many as a therapy replacement. Unlike human therapists, they exist outside the hassle of insurance and are accessible within minutes via an account.

As one Reddit user commented, some people are driven to try AI because of negative experiences with traditional therapy. There are several therapy-style GPTs available in the GPT Store, and entire Reddit threads are devoted to their efficacy. A February study even compared human therapist outputs with those of GPT-4.0, finding that participants preferred ChatGPT's responses, saying they connected with them more and found them less terse than human responses.

However, this result can stem from a misunderstanding that therapy is simply empathy or validation. Of the criteria the Stanford study relied on, that kind of emotional intelligence is just one pillar in a deeper definition of what "good therapy" entails. While LLMs excel at expressing empathy and validating users, that strength is also their primary risk factor.

"An LLM might validate paranoia, fail to question a client's point of view, or play into obsessions by always responding," the study pointed out.

Also: I test AI tools for a living. Here are 3 image generators I actually use and how

Despite positive user-reported experiences, researchers remain concerned. "Therapy involves a human relationship," the study authors wrote. "LLMs cannot fully allow a client to practice what it means to be in a human relationship." The researchers also pointed out that there's a reason human providers must do well in observational patient interviews, not just pass a written exam, to become board-certified in psychiatry — it's an entire component LLMs fundamentally lack.

"It is by no means clear that LLMs would even be able to meet the standard of a 'bad therapist,'" they noted in the study.

Privacy matters

Beyond harmful responses, users should also be concerned about leaking HIPAA-sensitive health information to these bots. The Stanford study pointed out that effectively training an LLM as a therapist would require developers to use actual therapeutic conversations, which contain personally identifying information (PII). Even when de-identified, those conversations still carry privacy risks.
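As an illustration of why de-identification is not a cure-all, here is a deliberately naive scrubbing pass over the kinds of structured identifiers that are easy to catch; names, places, and rare life details, which are much harder to strip, are exactly what can still re-identify a speaker:

```python
# Illustrative only: a naive regex-based de-identification pass. Real
# de-identification of therapy transcripts goes far beyond this, and
# even then, rare contextual details can still re-identify a speaker.
import re

SCRUBBERS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),             # US SSN-style numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),     # email addresses
    (re.compile(r"\b\d{3}[ .-]?\d{3}[ .-]?\d{4}\b"), "[PHONE]"), # simple US phone numbers
]

def deidentify(text: str) -> str:
    """Replace obviously structured identifiers with placeholder tokens."""
    for pattern, token in SCRUBBERS:
        text = pattern.sub(token, text)
    return text

print(deidentify("My therapist can reach me at 415-555-0199 or jane.doe@example.com."))
# -> "My therapist can reach me at [PHONE] or [EMAIL]."
```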

Also: AI doesn't have to be a job-killer. How some businesses are using it to enhance, not replace

"I don't know of any models that have been successfully trained to reduce stigma and respond appropriately to our stimuli," said Jared Moore, one of the study's authors. He added that it's difficult for external teams like his to evaluate proprietary models that might do this work but aren't publicly available. Therabot, one example that claims to be fine-tuned on conversation data, showed promise in reducing depressive symptoms, according to one study. However, Moore hasn't been able to corroborate those results with his own testing.

Ultimately, the Stanford study encourages the augment-not-replace approach being popularized across other industries as well. Rather than trying to deploy AI directly as a substitute for human-to-human therapy, the researchers believe the technology can improve therapist training and take on administrative work.
