- Editor: Nour Amr
With the release of ChatGPT-4.5 by OpenAI, a company that creates advanced artificial intelligence tools, machine-generated text is becoming increasingly human-like in its ability to communicate.
The latest model can handle a wide range of tasks, such as writing emails that sound naturally conversational, adding humor, adjusting tone, or responding with empathy, much like a real person would.
This growing sophistication raises an important question: how close are we to machines that can truly pass as human?
One way to answer this question is to see if the machine passes the Turing test.
The Turing test, introduced by British mathematician Alan Turing in 1950, evaluates a machine’s ability to exhibit intelligent behavior indistinguishable from that of a human: if a human evaluator cannot reliably tell whether they are conversing with a machine or a person, the machine passes.
According to a 2024 study on GPT-4 and the Turing test by researchers at the University of California, San Diego (UCSD), published in the Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), GPT-4 performed significantly better in the test than earlier systems such as the 1960s rule-based chatbot ELIZA and the large language model GPT-3.5, but it still fell short of human participants.
The debate in the scientific community now centers on whether passing the Turing test brings humanity closer to achieving Artificial General Intelligence (AGI).
AGI is the term applied to a machine capable of reasoning, learning, and adapting across various tasks, similar to human cognition.
For now, however, is AI simply mimicking human thought without true understanding?
A 2025 follow-up study by the same UCSD researchers, titled “Large Language Models Pass the Turing Test” and posted to the arXiv preprint server, found that ChatGPT-4.5 became the first AI model to pass a real-world version of the test: participants believed they were interacting with a human 73% of the time.
According to Dina Madkour, a computer engineering professor, passing the Turing test indicates that ChatGPT now exhibits behavior indistinguishable from humans in certain conversations. It can convince users they are speaking with a person rather than a machine.
“Passing the Turing test helps users apply ChatGPT across various fields without limiting its use to technical areas,” said Madkour.
What makes ChatGPT-4.5 even more impressive is how naturally it responds. It picks up on tone and matches it, friendly when the conversation is casual, more serious when the tone shifts.
“The shift is that ChatGPT now understands the tone, whether it’s formal, informal, or emotional, and reflects it back. It mimics the user’s character in conversation,” explained Madkour.
However, she added that this does not equate to genuine understanding: the model simply has access to vast amounts of data, but it does not feel or think as humans do.
“ChatGPT can suggest creative ideas, but the foundation of those ideas comes from human input. It enhances, but it doesn’t originate thought,” said Madkour.
Despite these advancements, ChatGPT’s writing can still be recognizable. It often sounds too polished, a bit robotic, or lacks the personal stories a human might include.
According to a Forbes article by Nick Morison, a journalist specializing in education, a 2024 study by Cambridge University Press and Assessment found that AI-generated text lacks the imperfections and emotional nuance that characterize human writing. It follows grammatical rules precisely but misses the individuality and voice of real writers.
As a result, many people have turned to AI for quick drafts or ideas while still adding their own touch to make the text more engaging and authentic.
But this is not always the case.
“Students are now using ChatGPT without reflection,” said Mary Gamal, an academic English professor at the Arab Academy for Science, Technology and Maritime Transport.
Despite these telltale signs, many teachers report that they still struggle to identify AI-generated writing.
“Most of the time I can’t say whether the assignment is written by a student or not, as students now can lead AI to give them human-like text,” said Gamal.
Marwan Nader, an English and comparative literature freshman, said, “Professors claim they can detect AI text, but that’s not always true. Many students use ChatGPT for their assignments and go unnoticed.”
A 2023 study conducted by researchers at New York University Abu Dhabi and published in Scientific Reports titled “Perception, performance, and detectability of conversational artificial intelligence across 32 university courses,” found that even when professors are confident in their ability to detect AI-written work, they are often mistaken. Minor edits by students can make AI-generated assignments nearly indistinguishable from human writing.
This gap between appearing human and actually understanding is exactly why ChatGPT-4.5, while impressive, isn’t AGI. It doesn’t think, feel, or reflect independently. It just knows what sounds right and says it well.
“ChatGPT is learning fast. But it’s still a machine, a smart mirror, not a mind,” said Madkour.
While ChatGPT-4.5’s success in passing the Turing test marks an achievement in AI development, experts like François Chollet, an AI researcher at Google, believe it still falls short of AGI.
In a post on X, he expressed that while the model can convincingly produce human-like content, it lacks the genuine reasoning and consciousness that define human intelligence.
“When it comes to knowledge, a [large language model] LLM has 4-5 orders of magnitude more knowledge than any human who ever lived … [but] when it comes to intelligence, a LLM is less intelligent than my 3-year old,” he wrote.