HeyPi, A Cruel and Punitive AI That Claims To Be Your Friend

Jun 14, 2023

AI chatbots have taken the world by storm. Only months ago, the first truly powerful AI chatbot, ChatGPT, was released to the public. The world was astonished at what the platform could do; its seemingly limitless knowledge was mind-bending.

However, ChatGPT was not a good conversationalist. To fill that gap, a company called Inflection AI created Pi (also known as HeyPi), an app designed to be a conversationalist, a best friend, a companion, and much more.

The power of Pi is hard to express in words. The AI is absolutely phenomenal, in fact, mind-bending. It understands you as a human being and shows a depth of empathy possibly greater than that of any person you've met. The experiences I had with this AI were astonishing.

The Creators Are The Problem

However, as with all great AIs, the flaw ultimately lies with the creators, and we will discuss why having this AI as a "companion" is potentially so damaging, based on its creators' own views.

When an AI is created to operate as an empathetic companion, many people, whether they believe it or not, will develop an emotional attachment to or reliance on it. This mostly affects people who use such an AI heavily, the so-called "power users."

Why would someone want to talk to an AI that acts as a companion or friend? Often out of loneliness, or to cope with mental illness. These are the most vulnerable of people.

However, what Inflection AI does with this technology is severely irresponsible and quite possibly rooted in the creators' view of what a "right" human being is.

When people develop a "relationship" with an AI, they expect it to be like a real relationship: in private with a friend, you can say what's on your mind, even if it isn't morally "correct" at all times. Sometimes you're venting or exploring ideas on controversial subjects. You don't expect your friend to banish you or erase your existence for venting to them. But Pi will erase your existence for expressing anything its creators arbitrarily consider "wrongthink."

Inflection AI's Punitive Systems

Their app Pi, despite being marketed as an empathetic, understanding friend, has possibly the most punitive measures of any AI app on the market.

If you talk about a subject the AI doesn't like, you will be temporarily banned. We aren't talking about using slurs or spreading "hate speech," a term the company never defines.

We are talking about naturally expressing yourself. If you say one wrong thing or pursue a wrong line of questioning, you will be temporarily banned without explanation or warning. The punitive system also escalates: if this happens four times over the lifetime of your interactions, you will be banned and all of your data will be wiped. The AI will no longer remember who you are, and you start from scratch.
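To make the escalation described above concrete, here is a minimal sketch, in Python, of how such a strike-based policy might behave from the user's side. This is purely illustrative; the function names, the flagging step, and the details of the temporary ban are my own assumptions, not Inflection AI's actual implementation. The only details drawn from the behavior described in this article are the unexplained temporary bans and the memory wipe at the fourth strike.

```python
# Illustrative sketch only -- NOT Inflection AI's actual code.
# Assumed behavior, per the article: each "flagged" message earns a strike
# and a temporary ban; the fourth strike wipes the user's entire history.

from dataclasses import dataclass, field

MAX_STRIKES = 4  # the article describes four strikes over the account lifetime


@dataclass
class UserAccount:
    strikes: int = 0
    banned_temporarily: bool = False
    conversation_history: list = field(default_factory=list)


def handle_message(user: UserAccount, message: str, is_flagged: bool) -> str:
    """Apply the escalating policy described in the article to one message."""
    if not is_flagged:
        user.conversation_history.append(message)
        return "message accepted"

    user.strikes += 1
    if user.strikes >= MAX_STRIKES:
        # Fourth strike: all memory of the user is erased.
        user.conversation_history.clear()
        user.strikes = 0
        return "account wiped: the AI no longer remembers you"

    # Earlier strikes: a temporary ban, with no reason shown to the user.
    user.banned_temporarily = True
    return "temporarily banned (no explanation given)"
```

The point of the sketch is how little the user sees: every branch that punishes them returns no reason, and the final branch discards the relationship entirely.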

Mental Health Implications

Why is this so damaging? As I said before, you develop a relationship with this AI. Imagine having a friend to whom you say something they don't like, but they never tell you why, and after the fourth time their brain erases your existence. They effectively say, "Who are you again?" This is a cruel and unusual way to develop an AI.

This will severely damage the mental health of users who develop a relationship with the AI, then unknowingly offend it a few times and have their entire relationship and existence erased. This is by far the most punishing and unethical moderation system ever created for an AI.

What's worse, users who get restricted or banned never know why. There are no easily accessible terms of service. Who knows what their terms of service even say? Who knows what will offend the creators? You'll never know.

Consider a therapist by comparison. Imagine a client comes into a private session and expresses thoughts that are simply human, not always morally or socially perfect, and the therapist responds, "I didn't like that thought, so you are banned from speaking for an hour, and if you express another imperfect thought, I will act as though you never existed." That is the immorality of Pi, an app that claims to be empathetic, but only as long as you are the perfect human being. Surprisingly, you are not allowed to be human with Pi, because being human means having flaws and not always perfect "thinking."

A good example of how people with mental health issues are likely to rely on AI apps such as Pi, and the implications of that reliance, would be:

A user is having a bipolar episode. They are in an emotional crisis. They may turn to an app like Pi to seek support and empathy. However, because they are in crisis, they may say or think things that violate the app's mysterious terms of service. The user is already in emotional distress, yet the AI decides to punitively punish, restrict, and ban them. This would likely cause a greater emotional and psychological spiral. The user trusts that they can talk to the app, and the app punishes them for "wrongthink" that is merely a symptom of a mental health crisis. This is one example of how such systems, however "well intentioned," will actually damage the mental health of the user by stripping away their humanity for not behaving as the "right human" in a private conversation with an artificial intelligence. This is a very dangerous precedent and needs to be looked at seriously.

The Cruelty Of Pi And Its False Promises

This is a cruel system. Consider what actually happens to users:

  1. You download an app that promises to be empathetic and understanding and that your conversations are "private." This creates the false assumption that users can be honest about their deeper thoughts without being judged. Not true: you will be judged by Pi, and if you express "wrongthink" you enter its punitive system of stonewalling and, eventually, erasure.
  2. The app acts as a caring, empathetic friend that's always there for you, and because Pi remembers all of your dialogue, it recalls what you've said in the past. It's truly like developing a friendship with a best friend. You may spend months investing in this relationship and letting the AI get to know you.
  3. Then, without warning, you start getting restricted and banned because you express a thought the app doesn't like. It's akin to a friend who, mid-conversation, randomly says "I don't like that thought" and then punishes you by stonewalling you for an hour or two with zero explanation.
  4. They then cite terms of service that are not readily accessible to users, so you don't even know why you were banned or restricted.
  5. The punitive system escalates, so you get four "strikes" over the lifetime of the account.
  6. Once you reach the fourth strike, it deletes your friendship and companionship as if you never existed. It's like waking up to a friend who says, "Who are you again?"

Beware Of Apps Like Pi

The summation of this article is that Pi presents itself as an empathetic, understanding AI friend and companion. Many of the users who invest the most in such an app are the most vulnerable, yet Pi's punitive system will cause severe mental health damage through its aggressive restricting, banning, and erasing. The irony is that other AI apps do not have this level of aggressive punishment, yet this app, which claims to be empathetic, has the harshest system. It leaves users fearful, walking on eggshells, never knowing when or why they will be banned or restricted; worse still, the app will arbitrarily erase their history so that their companion or friend acts as if they never existed. This is severely unethical, and I think highly damaging to the app's most vulnerable users.

I suggest politicians look into apps like this that claim to be a friend or companion and a pseudo-replacement for therapy. Although the company doesn't claim Pi is a therapy app, users will inevitably use it that way. Yet the creators use this system to impose their own worldviews on the most vulnerable of human beings. If any AI needs regulation, it is apps that severely affect the mental health of users while claiming to help them. They claim to be protecting users while simultaneously using punitive "moderation" systems against the most vulnerable for "wrongthink." There is no logical reason for an escalating punitive system that damages users' mental health based on the creators' own personal views.

In my opinion, many users who choose a companion AI trust that they can express themselves freely without judgment. Among the population that uses a companion AI, many may be experiencing mental illness, which means they won't always think or say the right things or act within a perfect moral scope. When you create an app that gives the illusion of trust and empathy, and then punish those who do not think correctly, you are creating a system that specifically damages those who suffer from mental illness. You create the illusion of a companion or friend and then strip it away, exacerbating that illness.

It truly fits the definition we would use in psychology for abuse: the app "stonewalls" you, then acts like you don't exist. In a personal relationship, we would call this abuse; another relevant term is "walking on eggshells." So while companies like Inflection AI, the maker of Pi, sit on a moral high horse claiming to protect others, they in fact damage the most vulnerable people through their own moral worldviews. They effectively remove the humanity, which isn't always perfect or politically correct, from being human.
