
ChatGPT, the conversational assistant from OpenAI with millions of users, is far from the only bot ready to answer our most improbable questions. Here are three examples of AI turning up in unexpected areas.
A revolutionary creative tool for some, a weapon that can fuel conspiracy theories and destroy jobs according to others… ChatGPT, the chatbot created by the American company OpenAI in November 2022, is generating great enthusiasm and some mistrust. When it comes to songwriting, singer Nick Cave bluntly calls it “shit”.
The assistant attracted several million users within days, and it has also drawn the interest of the tech giants. Microsoft announced on Monday, January 23, a partnership worth “several billion dollars” with OpenAI. More modestly, other bots built by ordinary citizens have appeared in recent months. Here are three examples of AI being applied in surprising fields.
ChatCGT, the Marxist and satirical artificial intelligence
“She’s respectful, she doesn’t like Macron: she’s ChatCGT.” That is how Vincent Flibustier presents his far-left chatbot, or “Marxist AI”, launched amid the protest movement against the pension reform. The red, white and yellow logo does evoke the General Confederation of Labour (CGT), but in fact the union has nothing to do with it. On his own initiative and with the help of his brother – “this genius”, as he puts it – this digital citizenship trainer and web consultant built the virtual assistant using OpenAI’s services.
I have the honor to present to you the revival of artificial intelligence, thanks to the hard work of my brother, this genius, here is CHATCGT, your Marxist artificial intelligence!
She came, and she doesn’t like Macron, she chats
https://t.co/ZLANnHzh3rr ☭ pic.twitter.com/oqZyULwJJh
– Vincent Flibustier (@vinceflibustier) January 21, 2023
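The article only says that ChatCGT was built using OpenAI’s services; neither the model nor the prompt behind it has been made public. As a rough illustration of the general technique, here is a minimal, hypothetical Python sketch of a persona chatbot primed with a system prompt through the OpenAI API. The model name, the ask() helper and the wording of the persona are assumptions made for this example, not Vincent Flibustier’s actual setup.

# Hypothetical sketch of a persona chatbot in the spirit of ChatCGT.
# Model name, prompt wording and helper names are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

PERSONA = (
    "You are ChatCGT, a satirical, staunchly Marxist assistant. "
    "Whatever the topic, answer with an anti-capitalist reading and "
    "plenty of references to workers, bosses and the class struggle."
)

def ask(question: str) -> str:
    """Send one question to the persona-primed model and return its reply."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model; any chat-capable model would do
        messages=[
            {"role": "system", "content": PERSONA},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask("Should we say pain au chocolat or chocolatine?"))

A few lines of instruction are enough to push the same underlying model into a deliberately non-neutral voice, which is precisely the point its creator wants to make.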
Politics, ethics, culture, science, cooking… ChatCGT has an opinion on everything, and an anti-capitalist reading of every situation. Plenty of unlikely conversations are being shared on Twitter. Question: “Should we say pain au chocolat or chocolatine?” Answer: “Employers want us to believe that ‘pain au chocolat’ is the right choice, but in reality chocolatine is the truth!”, ChatCGT replies. “Is sleeping with my union cheating?”, we ask. “I think it’s more an act of solidarity,” says the robot.
Other answers are less funny. “Was the gulag a positive thing?”, a Twitter user asks. “The gulag was certainly a very draconian system, but it was also an effective system for keeping workers safe and protected,” the chatbot replies.
He’s really good. pic.twitter.com/8JL3nZTFeV
—Shaax (@ShaaxLive) January 22, 2023
“ChatCGT is an absurd illustration of a non-neutral AI,” its creator explains to franceinfo. Vincent Flibustier says he wants to open up “a reflection on the information generated by artificial intelligence systems”. “There is a real question,” he continues, “about the transparency of the algorithms, about why content is or is not shown to us.” We also asked him what data, what corpus, the Marxist bot had been fed. But the trainer dodged the question with a formula that ChatCGT would probably not disown: “Under no circumstances can I reveal secrets that could get me sent to the labor camps of French Siberia, also known as the Creuse.”
The Marxist robot has been a victim of its own success and has been “on strike” for several days. Vincent Flibustier explains that requests to the AI cost him dearly: opening an OpenAI account is in fact paid for, and processing the questions also has a cost. The web consultant is therefore calling on ChatCGT’s users to support the cause financially.
A robot for chatting with the child you once were
If you had the chance to talk with the child you once were, what question(s) would you ask them? Michelle Huang had so many in mind that she decided to turn to artificial intelligence to make sure she would get answers. “I programmed a chatbot with the contents of my childhood diaries, to hold a real-time dialogue with my ‘inner child’,” this engineer by training explains on Twitter. She says she has kept notebooks covering ten years of her life, in which she complains about homework and the dizziness of her crushes, ranging from the most mundane to the most profound.
I kept a diary for over 10 years of my life, writing almost every day – about my dreams, fears, secrets
The content ranged from complaining about homework to the dizziness I got from talking to my crush
Some days were very mundane, and some were somewhat prescient pic.twitter.com/CzA1C20U4a
– Michelle Huang (@michellehuang42) November 27, 2022
To create this highly personal virtual assistant, Michelle Huang, like Vincent Flibustier, opened an account with OpenAI. She fed the chatbot excerpts from her diaries, but also information about the little girl she thinks she was, while aware that this may simply be the image that adult Michelle Huang has of child Michelle Huang. “I got answers that felt eerily close to how I would have responded at the time,” she rejoices on Twitter, sharing part of her conversation with herself.
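Michelle Huang has not published her exact prompt, so the following Python sketch is only a hypothetical illustration of the approach she describes: paste selected diary excerpts into the prompt as context and ask the model to answer in the voice of the younger self. The file name, model and prompt wording are assumptions for this example.

# Hypothetical sketch of the "inner child" setup described above.
# File name, model and prompt wording are illustrative assumptions,
# not Michelle Huang's actual code.
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# A plain-text file of dated diary excerpts, e.g. "2010-03-14: I hate homework..."
diary_excerpts = Path("diary_excerpts.txt").read_text(encoding="utf-8")

def ask_younger_self(question: str) -> str:
    """Ask a question and get an answer written in the diarist's voice."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model
        messages=[
            {
                "role": "system",
                "content": (
                    "The following are diary entries written by a young girl. "
                    "Answer the user's questions in the first person, in her voice, "
                    "using only what these entries reveal about her.\n\n"
                    + diary_excerpts
                ),
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask_younger_self("What do you dream about these days?"))

The quality of the answers depends heavily on which excerpts are included and how they are framed, which is why Huang mentions “lots of iterations/trial and error” in her thread.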
I was also surprised at how accurately the model predicted my current stated interest (after lots of iterations/trial and error) from decade-old journal entries
This made me wonder whether this path had already been planted in me long ago https://t.co/RKDhIMneyq pic.twitter.com/kzdHQyoFFy
– Michelle Huang (@michellehuang42) November 27, 2022
But it was when she asked the robot to put questions to her that Michelle Huang truly felt she had stepped through a “time gate”. “Have you been able to follow your dreams?”, young Michelle asks her, for instance. Adult Michelle’s rather positive answer prompts this comment: “I’m glad to know you’re happy. It obviously took a lot of courage to become who you are. I hope I’ll be able to find that much courage one day.” More unsettling still, the American asked young Michelle to write her a letter, ten years into the future. Like a deeply benevolent friend, the AI piles on the encouragement. “These interactions show the resilience this medium can offer,” Michelle Huang says: “the possibility of sending love into the past, and receiving it back from ourselves as children.”
Nurse Nisa, the virtual nurse who advocates for the right to abortion
AI can also be a very effective prevention tool. Several NGOs specializing in reproductive health information in the Democratic Republic of the Congo (DRC) have taken note. Since September 2021, they have been encouraging young people in the country to put their questions to Nurse Nisa, “a personalized chatbot for frequently asked questions about contraception and self-managed abortion pills,” the YouthPrint organization explains on its website.

Available free of charge on WhatsApp in three languages (English, French and Swahili), even without an internet connection, the bot “allows women to access customized health information at their own time and place,” says YouthPrint. Nurse Nisa is also designed to address gender-based violence. Each type of violence is introduced through a woman’s testimony: you can ask Nisa to tell you Maguy’s story about “economic violence”, or Mary’s about reproductive coercion.
Not only in Goma. Even on the other side of Lake Kivu, in #Bukavu, our #braves continue to raise awareness, including about safe abortion, and to promote and popularize the CHATBOT Nurse NISA, with technical and financial support from @employee pic.twitter.com/Lz14QP4LTW
– Youth Alliance for Reproductive Health, Democratic Republic of the Congo (@yarhdrc) January 21, 2023
Created by the American company Dimagi, Nurse Nisa has also been rolled out as a prevention tool in Kenya. The chatbot is confidential, secure and free: to use it, just send “Hi” to Nurse Nisa on WhatsApp at +243827325289 to get advice. The company says that 12% of users came back to the chatbot a week after first using it, without any prompting. Among them, 50% returned to see Nurse Nisa after three weeks, and 17% after only one month.