Brain versus Byte: ChatGPT – companion or competitor?

Neophobia, the fear of anything new, is a common thread in history: the ancient philosopher Socrates warned that the written word would destroy our ability to remember and communicate1, and the novelist Nicholas Carr worried that the internet was impeding our ability to think and concentrate2. These days, there is widespread worry about ChatGPT and the rapid rollout of an AI tool that appears to answer any question at the click of a button.

Although AI itself is not new (think Siri, Alexa, Google, even spellcheck), ChatGPT’s incredible flexibility sets it apart. ChatGPT is a clear communicator and an attentive listener (it even makes a somewhat decent pass at therapy); it personalises content and predicts individual consumer behaviour on the basis of reviews, social media content, and customer data; and it is a lightning-fast research assistant, providing summaries, lists of resources, and simple explanations of complex topics. If you are stuck for an answer, ChatGPT can almost certainly help, whatever the situation.

Is it just another way to cheat?

ChatGPT’s sophistication and flexibility allow it to generate human-like text – a cause for concern among employers and educators alike. Tools to detect AI-generated material are only sometimes accurate; we can no longer be confident that a piece of writing (an article, poem, exam answer, computer program, story) was authored by a human. But cheating existed long before ChatGPT. What drives cheaters is not the availability of tools so much as the perceived need to cheat. If not ChatGPT, there will be something else – just ask last year’s ghostwriters (now redundant thanks to ChatGPT). There is little point in trying to ban ChatGPT – prohibition rarely works, and educators who try will do themselves and their students (especially those who comply) a disservice. Instead, educators and employers need to take a closer look at how competency is defined and assessed, to ensure confidence in authorship (whether ChatGPT or a human), and to embrace the full potential of ChatGPT.

Indeed, the scary new things society has worried about often end up enhancing our abilities in unimaginable ways: writing extended knowledge beyond the very limited bounds of human memory; the internet put multiple sources of up-to-the-minute information at our fingertips without the need to leave the house. Because ChatGPT’s output depends so critically on its input, its involvement in everyday life might also give rise to innovation.

ChatGPT’s answers are shaped both by its training history and by how a question is asked. ChatGPT needs to be fed the right information, in the right way – a task well suited to a human brain. Case in point: ChatGPT was quite happy to provide Python code to my psychology grad student, but politely declined to provide such code to me, suggesting instead books and articles I could use to learn to do it myself. The limits of ChatGPT are to a large extent user-based, and its answers are only as good as the questions you put in.

Garbage in garbage out

If I can ask ChatGPT the right questions, does it matter that I myself didn’t come up with the answer? Rarely is one person a jack of all trades, and if ChatGPT can achieve a result faster, better, or cheaper, then why settle for our own second-best, long-winded research processes, which can often miss key information? Being a good user of ChatGPT is a bit like being a successful manager of people – the skill is in getting someone (or something) else to contribute their very best in an area where you don’t necessarily have in-depth expertise, rather than doing the job yourself.

Even with well-crafted questions, ChatGPT can still make catastrophic mistakes with disarming confidence, give biased answers, and be discriminatory. ChatGPT will willingly revise its answers (without getting sulky or defensive), but spotting the need for revision requires reasoning, memory, problem-solving, and critical thinking at a level AI is not presently capable of.

If ChatGPT takes care of the onerous or peripheral aspects of a task, our own time and mental energy are freed up. Psychologically, people gravitate toward doing the things that are most appealing, and procrastinate when the options are limited to less enthralling stuff. ChatGPT – if used well – could mean fewer procrastination triggers, and more time and scope to exercise critical thinking, problem-solving, and similar higher-level skills.

If I had a hammer

Despite widespread agreement about ChatGPT’s potential to innovate, there is also well-justified caution. ChatGPT raises serious ethical issues around privacy and misinformation, and if not managed properly stands to worsen socio-economic and linguistic divides. A hammer can be used to build or destroy, but will do neither of these things if left to its own devices. Likewise, ChatGPT can help or hinder education and business, depending on how it is used. As with any tool, the skill and intent of the operator – the human – remains critical to its impact. ChatGPT’s future as a corruptor or companion to our everyday activities will depend critically on the framework it exists in. Creating such a framework is a human task, influenced heavily by educators and businesses as well as individuals.  

Dr Sarah Cowie is a Senior Lecturer in psychology at the University of Auckland. Sarah is the Director of The Behaviour Lab, a research group that looks at how decisions and actions are influenced by experience and by the world around us.

  1. Plato. (1997). Phaedrus (A. Nehamas & P. Woodruff, Trans.). Hackett Publishing.
  2. Carr, N. (2011). The Shallows: What the Internet is doing to our brains. W.W. Norton & Company.
