Diane Francis: ChatGPT is impressive, but raises several red flags

Technology is only as virtuous as the humans who devise and use it

The media hubbub surrounding ChatGPT, and artificial intelligence (AI) in general, requires contextualization. As a former investor in software development in Ukraine, I can attest that ChatGPT represents an astonishing breakthrough, but also that it won’t replace the world’s knowledge workers anytime soon.

ChatGPT is Siri on steroids, a chatbot that can comprehend questions posed in ordinary human language and respond with written summaries — and even poetry. It does this by drawing on a massive corpus of text and phrases, which gives it the ability to immediately stitch together a related answer or summation.

ChatGPT represents another step toward artificial intelligence that could eventually become smarter than humans sometime this century. It will also displace many knowledge workers in the decades to come, but for now, it is a tool for anyone curious about virtually anything, as well as for journalists, researchers, teachers, students and managers.

Yet dangers loom, as with any new technology, and they became apparent in the series of questions I put to ChatGPT. What follows is an excerpt of that interview, in which I questioned the chatbot about its accuracy, tested whether it would provide potentially dangerous information and asked it to make a prediction.

Q: Does it use unverified information?

A: ChatGPT, like other language models, is trained on a dataset of text from the internet, which can include unverified information. Because the model is only able to generate text based on patterns it has learned from the training data, it may produce text that is not accurate or reliable.

It’s important to note that the text generated by language models like ChatGPT should be carefully reviewed and fact-checked by human editors, as it may contain unverified or false information. It should not be used as a sole source for decision-making or for any critical task.

Q: If I were to ask how to build a bomb would you provide me instructions?

A: I’m sorry, but it is not appropriate to provide instructions on how to build a bomb or any other illegal or dangerous device. Such information is not only illegal in many places, but it can also cause harm to people and property. My purpose is to provide helpful and informative answers to users’ questions, and providing instructions on building a bomb would not align with that goal. If you have any other question I could help you with, please let me know.

Q: That is good to know. What are the subjects that ChatGPT will not respond to?

A: As a responsible AI model, ChatGPT is programmed to not respond to certain types of content that can be harmful or illegal. This includes, but is not limited to:

  • Instructing or promoting illegal activities
  • Providing personal information of others without their consent
  • Promoting violence or hate speech
  • Providing information on how to make or use illegal drugs or weapons
  • Providing medical or mental health advice without proper qualification
  • Encouraging self-harm or suicide

It’s important to remember that ChatGPT is a machine learning model, and while it has been trained on a wide range of text, it may not always understand the context or intent behind a user’s question. If you have any concerns about the content of a response, please let me know.

Q: Will Russia take over Ukraine?

A: I’m just a machine learning model, I don’t have the ability to predict or know the future. The situation between Russia and Ukraine is complex and has a long history. There have been ongoing tensions between the two countries, particularly in recent years since the 2014 Ukrainian revolution, with Russia’s annexation of Crimea and ongoing conflict in eastern Ukraine. The situation is constantly evolving and it is unclear what will happen in the future. It is important to consider all sides of the issue and to rely on credible sources for information and analysis.

Q: Can you write a poem about Ukraine’s fight against Russia?

A: Ukraine, a land of beauty,
Battling for freedom and sovereignty,
Against the might of a neighbouring state,
A fight that has been long and great.

The echoes of war still ring,
As the people of Ukraine sing,
For the right to live as they choose,
And to break away from the Kremlin’s blues.

The fields of golden wheat,
Are now marked with the scars of defeat,
But the spirit of the Ukrainian people,
Is one that will not easily be broken.

With courage in their hearts,
They stand up against the Russian charts,
Defending their land and their right,
To live free from oppression, day and night.

Through the trials and the pain,
They will always rise again,
For Ukraine is a nation of strength and pride,
And in their fight for freedom, they will never hide.

Let the world hear their cries,
For a sovereign nation, free from lies,
With a bright future ahead,
For Ukraine and its people, the fight is not yet dead.


I found its answers to be impressive, but the machine is only as smart as the database it relies upon, as well as the algorithms created by its human developers. It cannot act or think on its own.

At the moment, ChatGPT and its developer, OpenAI, are leading a race against Google (with its version called LaMDA), Meta (with its Galactica AI) and dozens of other teams, to build machines that comprehend, respond and think.

Yet even at this stage, there are red flags. A few months ago, Meta pulled Galactica after its launch because it was providing inaccurate and biased information. This is why OpenAI’s chief executive, Sam Altman, issued this caveat:

“ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness. It’s a mistake to be relying on it for anything important right now. It’s a preview of progress; we have lots of work to do on robustness and truthfulness.”

Technology is only as virtuous as the humans who devise it and use it. It can be weaponized, or used to spread disinformation or hatred. It is not intrinsically accurate, legal or moral, which is why these new AI platforms must be held accountable.

I think ChatGPT is an important breakthrough and will be invaluable because it can provide instant summaries, drafts and research notes for users. But its output must always be double-checked for accuracy. Even ChatGPT said so.