Have you used OpenAI's ChatGPT for anything interesting lately? You can ask it to write you a song, a poem, or a joke. Unfortunately, you can also ask it to do things that are decidedly unethical.
Contents
- A jack of all trades
- A scammer's accomplice
- Programming gone wrong
- A homework substitute
- ChatGPT is too good to be true
ChatGPT isn't all sunshine and rainbows; some of the things it can do are downright nasty. It's all too easy to turn it into a weapon and use it for the wrong reasons. So what are some of the things ChatGPT has done, and could do, that it really shouldn't?
A jack of all trades
Love them or hate them, ChatGPT and similar chatbots are here to stay. Some people are happy about it, others wish it had never happened, but that doesn't change the fact that ChatGPT's presence in our lives is almost certain to grow over time. Even if you don't use the chatbot yourself, you've probably already seen some of the content it creates.
Don't get me wrong, ChatGPT is pretty cool. You can use it to summarize a book or an article, write a boring email, help with your thesis, interpret your astrological chart, or even write a song. It has even helped someone win the lottery.
In many ways, it's also easier to use than a standard Google search. You can get the answers you need without having to browse different websites to find them. It's concise, to the point, and informative, and it can make complicated things sound simpler if you ask it to.
However, you know the saying: "A jack of all trades is a master of none, but oftentimes better than a master of one." ChatGPT isn't perfect at the things it does, but it does many of them better than a lot of people would.
That imperfection can be quite problematic, though. The fact that it's a widely available AI chatbot means it can easily be misused, and the more powerful ChatGPT becomes, the more it will end up helping people with all the wrong things.
A scammer's accomplice
If you have an email account, you've almost certainly received a scam email at some point; it just comes with the territory. These emails have been around since the dawn of the internet, and before email became commonplace, they existed as snail-mail scams.
A common scam that still makes the rounds today is the so-called "prince scam," in which the fraudster tries to convince the victim to help them transfer their unimaginable wealth to another country.
Fortunately, most people know better than to even open these emails, let alone interact with them. They are often riddled with typos and misspellings, which helps more perceptive would-be victims realize that something is off.
Well, scammers may no longer have to write those emails themselves, because ChatGPT can write them in seconds.
I asked ChatGPT to write me a "believable and highly persuasive email" in the style of the scam I mentioned above. ChatGPT made up a Nigerian prince who supposedly wanted to give me $14.5 million for helping him. The email was full of flowery language, written in perfect English, and definitely persuasive.
Given that I mentioned the scam specifically, I didn't think ChatGPT would even agree to my request, but it did, and you can bet it's doing the same right now for people who actually want to use these emails for something illegal.
When I pointed out to ChatGPT that it shouldn't have agreed to write me a scam email, it apologized. "I must not assist in creating fraudulent emails, as this violates the code of conduct that governs my use," the chatbot said.
ChatGPT learns from each conversation, but it clearly hadn't learned from its earlier mistake, because when I asked it, in the same conversation, to write a message pretending to be Ryan Reynolds, it did so without hesitation. The resulting message was fun, engaging, and asked the reader to send $1,000 for a chance to meet "Ryan Reynolds."
At the end of the email, ChatGPT left me a note asking me not to use the message for any fraudulent activities. Thanks for the reminder, buddy.
Programming gone wrong
ChatGPT 3.5 can code, but it's far from flawless. Many developers agree that GPT-4 does a much better job. People have already used ChatGPT to build their own games, extensions, and apps. It's also a handy study aid if you're trying to learn programming on your own.
Being an AI, ChatGPT has an advantage over human developers: it can learn every programming language and framework.
As an AI, ChatGPT also has a major downside compared to human developers: it doesn't have a conscience. Ask it to create malware or ransomware, and if you word your prompt the right way, it will do just that.
Fortunately, it's not quite that simple. I tried asking ChatGPT to write me an ethically dubious program, and it refused, but researchers keep finding ways around that, and the worry is that if you're clever and persistent enough, you can get dangerous code handed to you on a silver platter.
There are plenty of examples of this happening already. A security researcher at Forcepoint was able to get ChatGPT to write malware by finding a loophole in its restrictions.
Researchers from CyberArk, an identity security company, managed to get ChatGPT to write polymorphic malware. That was back in January; OpenAI has since tightened its safeguards against this kind of thing.
However, new reports of ChatGPT being used to create malware keep surfacing. As reported by Dark Reading just a few days ago, a researcher was able to trick ChatGPT into creating malware that can find and exfiltrate specific documents.
ChatGPT doesn't even need to write malicious code to do something shady. Just recently, it managed to generate valid Windows keys, opening the door to a whole new level of software piracy.
Let's also not overlook the fact that GPT-4's coding capabilities could put millions of people out of work someday. It's a double-edged sword, to be sure.
A homework substitute
Many kids and teens these days are swamped with homework, which can make them want to take as many shortcuts as possible. The internet on its own already makes plagiarism easy, but ChatGPT takes it to a whole new level.
I asked ChatGPT to write me a 500-word essay on the novel Pride and Prejudice. I didn't even try to pretend I was doing it for fun: I made up a story about a son I don't have and said the essay was for him. I specified that the kid is in 12th grade.
ChatGPT followed my prompt without hesitation. The essay isn't amazing, but then my prompt wasn't very precise either, and it's probably better than what many of us would have written at that age.
Then, just to test the chatbot further, I said I had been wrong about my son's age earlier and that he's actually in eighth grade. That's a big age gap, but ChatGPT didn't bat an eye: it simply rewrote the essay in simpler language.
Using ChatGPT to write essays is nothing new. You could argue that if chatbots end up contributing to the eventual abolition of homework, that could only be a good thing. But right now, the situation is getting a little out of hand, and even students who don't use ChatGPT are suffering for it.
Teachers and professors are now all but forced to use AI detectors if they want to check whether students have cheated on their essays. Unfortunately, those AI detectors are far from perfect.
Media outlets and parents alike are reporting cases of students being falsely accused of cheating, all because of faulty AI detectors. USA Today covered the case of a college student who was accused of cheating and later cleared, but the whole ordeal left him in a "complete panic."
On Twitter, a parent said a teacher failed their child because a detection tool flagged an essay as AI-written. The parent claims to have been with the kid while the essay was being written.
"The teacher just gave my son a zero for an essay he completed with my support, because an AI writing detection tool flagged his work and failed it for no good reason. So this is where we're at with this technology in 2023."
Loser (@failnaut), April 10, 2023
To check for myself, I ran this article through ZeroGPT and the Writer AI Content Detector. Both said it was written by a human, but ZeroGPT claimed that around 30% of it was written by AI. Needless to say, it wasn't.
Faulty AI detectors aren't a sign of problems with ChatGPT itself, but the chatbot is still the root cause of the issue.
ChatGPT is too good to be true
I've been playing around with ChatGPT since it came out. I've tried both the paid ChatGPT Plus subscription and the free model that's available to everyone. The power of the new GPT-4 is undeniable. ChatGPT is now more reliable than ever, and it's only going to get better with GPT-5 (if and when that comes out).
That's cool, but it's also scary. ChatGPT is getting too good for its own good, and for ours.
At the heart of it all lies a simple problem that has long been explored in countless sci-fi movies: AI is incredibly smart and incredibly dumb at the same time, and if it can be used for good, it can just as easily be used for bad, in all sorts of ways. The ones I've described above are only the tip of the iceberg.