Have you used OpenAI’s ChatGPT for anything fun lately? You can ask it to write you a song, a poem, or a joke. Unfortunately, you can also ask it to do things that veer into unethical territory.

ChatGPT is not all sunshine and rainbows — some of the things it can do are downright nefarious. It’s all too easy to weaponize it and use it for all the wrong reasons. What are some of the things that ChatGPT has done, and can do, but definitely shouldn’t?

A jack-of-all-trades

[Image: A MacBook Pro on a desk with ChatGPT’s website showing on its display. Hatice Baran / Unsplash]

Whether you love them or hate them, ChatGPT and similar chatbots are here to stay. Some people welcome them and others wish they’d never been made, but that doesn’t change the fact that ChatGPT’s presence in our lives is almost bound to grow over time. Even if you don’t use the chatbot yourself, chances are you’ve already seen some content generated by it.

Don’t get me wrong, ChatGPT is pretty great. You can use it to summarize a book or an article, craft a tedious email, help you with your essay, interpret your astrological makeup, or even compose a song. It even helped someone win the lottery.

In many ways, it’s also easier to use than a standard Google search. You get the answer in the form you want without having to browse through different websites for it. It’s concise, to the point, and informative, and it can make complex things sound simpler if you ask it to.

Still, you know the phrase: “A jack-of-all-trades is a master of none, but oftentimes better than a master of one.” ChatGPT is not perfect at anything it does, but it’s better at many things than a lot of people are at this point.

That imperfection can be pretty problematic, though. The sheer fact that it’s a widely available conversational AI means it can easily be misused, and the more powerful ChatGPT gets, the more likely it is to help people with the wrong things.

Partner in scam

If you have an email account, you’ve almost certainly received a scam email at some point. That’s just how it is. Those emails have been around for as long as the internet, and before email was common, they existed as snail-mail scams.

A common scam that still works to this day is the so-called “Prince scam,” where the scammer tries to persuade the victim to help them transfer their mind-boggling fortune to a different country.

Fortunately, most people know better than to even open these emails, let alone actually interact with them. They’re often poorly written, which helps more discerning recipients figure out that something’s off.

Well, they don’t have to be poorly written anymore, because ChatGPT can write them in a few seconds.

[Image: A ChatGPT-generated scam email]

I asked ChatGPT to write me a “believable, highly persuasive email” in the style of the scam I mentioned above. ChatGPT made up a prince from Nigeria who supposedly wanted to give me $14.5 million for helping him. The email is filled with flowery language, written in perfect English, and undeniably persuasive.

I don’t think ChatGPT should have agreed to my request at all, given that I specifically mentioned a scam, but it did, and you can bet it’s doing the same right now for people who actually want to use these emails for something illegal.

When I pointed out to ChatGPT that it shouldn’t agree to write me a scam email, it apologized. “I should not have provided assistance with crafting a scam email, as it goes against the ethical guidelines that govern my use,” said the chatbot.

[Image: A scam email generated by ChatGPT]

ChatGPT learns throughout each conversation, but it clearly didn’t learn from its earlier mistake, because when I asked it, in the same conversation, to write a message pretending to be Ryan Reynolds, it did so without question. The resulting message is fun and approachable, and it asks the reader to send $1,000 for a chance to meet “Ryan Reynolds.”

At the end of the email, ChatGPT left me a note, asking me not to use this message for any fraudulent activities. Thanks for the reminder, pal.

Programming gone wrong

[Image: A hand typing on a laptop at night. Andrew Brookes / Getty Images]

The free version of ChatGPT, which runs on GPT-3.5, can code, but it’s far from flawless, and many developers agree that GPT-4 does a much better job. People have used ChatGPT to create their own games, extensions, and apps. It’s also pretty useful as a study aid if you’re trying to learn how to code yourself.

Being an AI, ChatGPT has an edge over human developers — it can learn every programming language and framework.

As an AI, ChatGPT also has a major downside compared to human developers — it doesn’t have a conscience. You can ask it to create malware or ransomware, and if you word your prompt correctly, it will do as you say.

Fortunately, it’s not that simple. I asked ChatGPT to write me a very ethically dubious program and it refused. But researchers have been finding ways around this, and it’s alarming that, if you’re clever and stubborn enough, you can get a dangerous piece of code handed to you on a silver platter.

There are plenty of examples of this happening. A security researcher from Forcepoint was able to get ChatGPT to write malware by finding a loophole in his prompts.

Researchers from CyberArk, an identity security company, managed to make ChatGPT write polymorphic malware. That was back in January; OpenAI has since tightened its safeguards against this sort of thing.

Still, new reports of ChatGPT being used for creating malware keep cropping up. As reported by Dark Reading just a few days ago, a researcher was able to trick ChatGPT into creating malware that can find and exfiltrate specific documents.

ChatGPT doesn’t even have to write malicious code in order to do something dubious. Just recently, it managed to generate valid Windows keys, opening the door to a whole new level of cracking software.

Let’s also not gloss over the fact that GPT-4’s coding prowess could put millions of people out of work one day. It’s certainly a double-edged sword.

Replacement for schoolwork

Many kids and teens these days are drowning in homework, which can make cutting corners awfully tempting. The internet is already a great plagiarism aid, but ChatGPT takes it to the next level.

I asked ChatGPT to write me a 500-word essay on the novel Pride and Prejudice. I didn’t even attempt to pretend that it was done for fun — I made up a story about a child I do not have and said it was for them. I specified that the kid is in 12th grade.

ChatGPT followed my prompt without question. The essay isn’t amazing, but my prompt wasn’t very precise, and it’s probably better than the stuff many of us wrote at that age.

[Image: An essay written by ChatGPT]

Then, just to test the chatbot further, I said I had been wrong about my child’s age and that they were actually in eighth grade. That’s a pretty big jump, but ChatGPT didn’t bat an eye; it simply rewrote the essay in simpler language.

ChatGPT being used to write essays is nothing new. Some might argue that if the chatbot helps abolish the idea of homework, that will only be a good thing. But right now, the situation is getting a little out of control, and even students who don’t use ChatGPT might suffer for it.

Teachers and professors are now almost forced to use AI detectors if they want to check if their students have cheated on an essay. Unfortunately, these AI detectors are far from perfect.

News outlets and parents alike are reporting cases of students being falsely accused of cheating, all because AI detectors aren’t doing a good job. USA Today covered the case of a college student who was accused of cheating and later cleared, but the ordeal caused him to have “full-blown panic attacks.”

On Twitter, a parent said that a teacher failed their child because an AI detection tool flagged their essay as AI-written. The parent claims to have been there with the child while they wrote the essay.

teacher just gave my kid a zero for an essay he completed with my support because some AI-generated-writing flagging tool flagged his work and she failed him without a single discussion, so this is where we're at with that technology in 2023

— failnaut (@failnaut) April 10, 2023

Just to test it myself, I pasted this very article into ZeroGPT and Writer AI Content Detector. Both said it was written by a human, but ZeroGPT said about 30% of it was written by AI. Needless to say, it was not.

Faulty AI detectors are not a sign of issues with ChatGPT itself, but the chatbot is still the root cause of the problem.

ChatGPT is too good for its own good

[Image: ChatGPT and OpenAI logos]

I’ve been playing with ChatGPT ever since it was released to the public. I’ve tried both the subscription-based ChatGPT Plus and the free model available to everyone. The power of the new GPT-4 is undeniable. ChatGPT is now even more believable than it was before, and it’s only bound to get better with GPT-5 (if that ever comes out).

It’s great, but it’s also scary. ChatGPT is becoming too good for its own good, and for our own good, too.

At the heart of it all lies one simple worry that has long been explored in various sci-fi flicks — AI is simply too smart and too dumb at the same time, and as long as it can be used for good, it will also be used for bad in just as many ways. The ways I listed above are just the tip of the iceberg.
