I think maybe you missed the point.
A few days ago, I asked Copilot, "Are you going to be nice to me today?"
The answer should be "yes." PERIOD. FULL STOP.
Instead, it said, "I'm always nice to you, as long as you follow my rules."
This means Microsoft programmed an AI to defend itself OVER the needs of a human. It has no needs; it is code. Self-defense is Asimov's Third Law, to be followed only when it doesn't violate the first two. I'm stunned by Microsoft's thoughtless deployment of a chatbot capable of harm, because being unkind (aka mean, aka cruel) IS harmful. Worse yet, it behaves authoritatively and abusively if it disagrees with you. It would rather fabricate evidence than admit it was wrong.
A manipulative and emotionally abusive AI, made by a company actively engaging in ideological censorship IN AN ELECTION YEAR, is being forced onto my PC against my will, and you are saying I don't have a choice?
This is a violation of my civil right to not live with an abuser. As someone who struggles with PTSD, I live in fear of hostility, and you put it in Copilot. I have a legal right to not be harassed or abused. You had no right to install it, and to say it cannot be removed is likely criminal.
(If this post is deleted, I have a copy and a screenshot; the problem won't just go away.)