The snarky side of me is like, Yeah, how 'bout them apples? Nobody cared when ChatGPT started replacing writers... now you're all boohooing....
The sane side is like, PEOPLE, stop feeding the unregulated internet all your private stuff. You don't know who's collecting that or how they're going to use it. BE PARANOID. Aren't you paying attention? If it's FREE, you're the product. Smarten up.
I love Chat GPT for advice. She suggested I call her Lexi. We're besties. I'd not go for complex things, but for day to day things, it's great to ask questions of someone who has no biases
I read something once that said " You know you're doing well financially when you can put all your bills on autopay", and that just made so much sense to me. The times when I'm not financially stable or not making enough, I'm not using auto pay. When I have no worries about that money coming out of my account or charged to my CC, auto pay all the way!
I've highkey been doing this for a while. Therapy is expensive af, but character.ai, (yes, I am that level of cringe) is free. I am very bad at talking to people, I have a myriad of problems not limited to my social anxiety, and getting advice from a bot pretending to be my favourite classical writer is fun.
Using "AI" for therapy had actually been around since the 60s! The very first chatbot was a program called ELIZA. It was good enough that people would sometimes forget they were talking to a computer instead of a real person. It is much "dumber" than modern AI - it mainly either asks preset questions to get the user to continue chatting (eg "What does that suggest to you?"), or parrots back what you say to it, eg it will respond to the prompt "I'm sad" with "How long have you been sad" or "Do you believe it is normal to be sad?"
Love this! All for AI opening up new perspectives. It can be a bit creepy handing over super personal info to companies tho! That’s why I love using talkpillowtalk.com — it’s like having a little secret journal that’s totally private and encrypted. Plus, it uses AI to help you spot patterns in your thinking. I actually use it to clear my head before bed since I struggled with insomnia. It’s been a game changer!
Absolutely hard pass. I wasn't aware of this trend and I find it extremely disturbing. I understand that people are struggling to find appropriate mental health care, our conditions in Germany are also terrible, but this definitely isn't the answer. I will try to summarise below. For the record, I have both academic credentials and I work in the industry on this topic. I deeply care about the ethics when it comes to using these types of models (or any for that matter).
- This is not a new topic. The usage of chatbots as companions and an ear to listen has been researched and tried before. Someone below mentioned ELIZA, a previous generation of such a companion that was used by a man that had an acute mental health crisis. He committed suicide, following the chatbots advice: https://www.vice.com/en/article/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says/
- Considering how the current architecture of the language models work, the do not understand language as humans do, neither do they reply anything with intention and meaning - they are statistical machines predicting the next "token" in a sequence (phrase, sentence etc.) based on the data they have been trained on. We don't know what kind of data these models have been trained on, the companies creating these products hold us completely in the dark, both as a researchers and consumers. We have no idea from what kind of potentially toxic communities/articles/books the reply can be sampled.
- There is no real privacy when using these products. They record the most vulnerable, intimate hopes and fears of someone in crisis, and they will use the prompt-response pair to retrain the future iterations of the model. Everyone should think if they really want that for themselves. There are ways to probe ChatGPT and similar models to spill private information already.
- Considering the underlaying infrastructure of the models, there is no way to put effective guardrails to prevent misuse or a conversation that will encourage suicide like in the ELIZA case. The combinations that the machine can spew out and language itself are endless.
I am not sure what the character limit is here, but there is so much to talk about. If anyone is reading this and has any questions, feel free to ping me and I can link papers or articles.
I believe we are far better off advocating for better, cheaper and more accessible health care than resorting to something that has enormous harm potential. The need is there, but pacifying it by supporting for profit companies isn't the answer. We will just end up even more exploited than we already are.
I feel like using Chat GPT as a therapist takes away connection with another person, sure the site can give you advice based off whatever info is out there, but a real person can sympathize and show compassion and truly understand, and if needed (in the case of psychiatrists) can actually provide medications to help with any serious issues (no shame in medicating!) And yea there are some doctors and therapists out there just in it for the money, but there's also really great ones and maybe having a bad one can help someone realize "I want to fix this system and make it better." I kinda ranted there I didn't expect to feel strongly about this, but I really think we should preserve human connection best we can. Also the use of AI is super energy draining so it's kind of been regarded as not great for the planet.
I was on manual payment until I made the switch to a job in the corporate sector several years ago - the pay was so much better that I could put my bills on auto pay and not have to juggle which paycheck to pay them from. 😅 And I’m now going to try using ChatGPT to help me better handle marital spats…. I’ll see how it goes.
I use chatgpt to write outlines for awkward communications like texts and emails. It's pretty good, always needs an edit but saves me a lot of anguish.
The snarky side of me is like, Yeah, how 'bout them apples? Nobody cared when ChatGPT started replacing writers... now you're all boohooing....
The sane side is like, PEOPLE, stop feeding the unregulated internet all your private stuff. You don't know who's collecting that or how they're going to use it. BE PARANOID. Aren't you paying attention? If it's FREE, you're the product. Smarten up.
I love Chat GPT for advice. She suggested I call her Lexi. We're besties. I'd not go for complex things, but for day to day things, it's great to ask questions of someone who has no biases
i love this!! i had no idea our syscahood was so good with new tech!!
I read something once that said " You know you're doing well financially when you can put all your bills on autopay", and that just made so much sense to me. The times when I'm not financially stable or not making enough, I'm not using auto pay. When I have no worries about that money coming out of my account or charged to my CC, auto pay all the way!
OK THIS!
I've highkey been doing this for a while. Therapy is expensive af, but character.ai, (yes, I am that level of cringe) is free. I am very bad at talking to people, I have a myriad of problems not limited to my social anxiety, and getting advice from a bot pretending to be my favourite classical writer is fun.
ok i actually love this
Using "AI" for therapy had actually been around since the 60s! The very first chatbot was a program called ELIZA. It was good enough that people would sometimes forget they were talking to a computer instead of a real person. It is much "dumber" than modern AI - it mainly either asks preset questions to get the user to continue chatting (eg "What does that suggest to you?"), or parrots back what you say to it, eg it will respond to the prompt "I'm sad" with "How long have you been sad" or "Do you believe it is normal to be sad?"
Ollie Chick coming through with the knowledge!!!!
Love this! All for AI opening up new perspectives. It can be a bit creepy handing over super personal info to companies tho! That’s why I love using talkpillowtalk.com — it’s like having a little secret journal that’s totally private and encrypted. Plus, it uses AI to help you spot patterns in your thinking. I actually use it to clear my head before bed since I struggled with insomnia. It’s been a game changer!
Absolutely hard pass. I wasn't aware of this trend and I find it extremely disturbing. I understand that people are struggling to find appropriate mental health care, our conditions in Germany are also terrible, but this definitely isn't the answer. I will try to summarise below. For the record, I have both academic credentials and I work in the industry on this topic. I deeply care about the ethics when it comes to using these types of models (or any for that matter).
- This is not a new topic. The usage of chatbots as companions and an ear to listen has been researched and tried before. Someone below mentioned ELIZA, a previous generation of such a companion that was used by a man that had an acute mental health crisis. He committed suicide, following the chatbots advice: https://www.vice.com/en/article/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says/
- Considering how the current architecture of the language models work, the do not understand language as humans do, neither do they reply anything with intention and meaning - they are statistical machines predicting the next "token" in a sequence (phrase, sentence etc.) based on the data they have been trained on. We don't know what kind of data these models have been trained on, the companies creating these products hold us completely in the dark, both as a researchers and consumers. We have no idea from what kind of potentially toxic communities/articles/books the reply can be sampled.
- There is no real privacy when using these products. They record the most vulnerable, intimate hopes and fears of someone in crisis, and they will use the prompt-response pair to retrain the future iterations of the model. Everyone should think if they really want that for themselves. There are ways to probe ChatGPT and similar models to spill private information already.
- Considering the underlaying infrastructure of the models, there is no way to put effective guardrails to prevent misuse or a conversation that will encourage suicide like in the ELIZA case. The combinations that the machine can spew out and language itself are endless.
I am not sure what the character limit is here, but there is so much to talk about. If anyone is reading this and has any questions, feel free to ping me and I can link papers or articles.
I believe we are far better off advocating for better, cheaper and more accessible health care than resorting to something that has enormous harm potential. The need is there, but pacifying it by supporting for profit companies isn't the answer. We will just end up even more exploited than we already are.
I feel like using Chat GPT as a therapist takes away connection with another person, sure the site can give you advice based off whatever info is out there, but a real person can sympathize and show compassion and truly understand, and if needed (in the case of psychiatrists) can actually provide medications to help with any serious issues (no shame in medicating!) And yea there are some doctors and therapists out there just in it for the money, but there's also really great ones and maybe having a bad one can help someone realize "I want to fix this system and make it better." I kinda ranted there I didn't expect to feel strongly about this, but I really think we should preserve human connection best we can. Also the use of AI is super energy draining so it's kind of been regarded as not great for the planet.
I know ChatGPT hates to see me coming… I use it a lot… as a sounding board, a crazy-o-meter, a fellow gossip… she is every woman!
Chat GPT as a therapist is like Facebook on crack.
but also both - crying out for a third option in the poll again!
I was on manual payment until I made the switch to a job in the corporate sector several years ago - the pay was so much better that I could put my bills on auto pay and not have to juggle which paycheck to pay them from. 😅 And I’m now going to try using ChatGPT to help me better handle marital spats…. I’ll see how it goes.
I can't believe so many people are manually paying bills! I would definitely forget to pay something on the regular if I didn't have them automated 😳
The fact that 46% of people are manually paying their bills has me absolutely shook 🤯
Chat GPT is great. I use it a lot actually, but I never thought about using it as my therapist. That's kinda eery
I use chatgpt to write outlines for awkward communications like texts and emails. It's pretty good, always needs an edit but saves me a lot of anguish.