The artificial intelligence app ChatGPT is being praised and feared for its apparent ability to generate content. But should bloggers succumb to the temptation?
Some schools and colleges have already banned it. Some workers fear it might just end up getting them a pink slip. But ChatGPT, an artificial intelligence-based app, can supposedly write like a human.
The GPT in ChatGPT stands for “generative pre-trained transformer.” Its creators say it can write anything from letters to research papers, recipes to essays, and even software code.
Think you’d be able to tell the difference between an article written by a person and one written by a machine? It may not be as easy as we might like to assume.
CBS Sunday Morning covered the platform, and correspondent David Pogue asked the software to write a limerick about AI’s effect on humanity. Here’s what it came up with:
There once was AI so grand,
It could help us with tasks at hand,
But it also might do harm,
If its actions lack charm,
So we must use it with care and command.
I’m not sure it’d win any award in a limerick contest, but the software did complete its assignment. Perhaps it was even a little too honest about the potential threat it poses.
When I attempted to visit the app for a demo, I received an error message.
“ChatGPT is at capacity right now,” the message stated. But to the side, it did offer rap lyrics as a demonstration of its abilities. Here’s a sample:
Yeah, yo, what’s up ChatGPT fam
A lot of people trying to jam
But don’t worry, we got your back
Just check back soon, we’ll get on track
But there’s real concern about more serious applications the software might be able to tackle.
Microsoft is reportedly discussing the possibility of building it into its Microsoft Word and Outlook platforms. But at least so far, Microsoft’s plans seem to revolve around providing “more useful search results when Outlook email customers look for information in their inboxes.”
People concerned about misinformation are worried about it
You might assume that any technology that can compose things as easily as this platform apparently can would at least be able to fact-check itself. But Sunday’s report included a request for an article about former Sen. Hillary Clinton.
That didn’t go all that well, it turns out. Somehow, the artificial intelligence fell victim to misinformation, reporting that the former first lady was actually elected president in 2020, beating Donald Trump to the White House.
Given how much misinformation is out there, you can almost understand the software being pranked. But that same abundance of misinformation should make it look harder. Surely official sources like the White House’s website and bona fide media outlets would have convinced the software that Donald Trump had, in fact, beaten Clinton in 2016.
It boggles the mind how a slip-up that tremendous could happen.
Teachers are already worried about it
Think back to your high school or college days. Your professor assigns you a term paper on a subject you care little about. (Isn’t that how most term papers were?) But this time, you don’t worry. You get home, sign in to an app, tell it the subject and sit back. It does the term paper for you.
They say students are already using it to get out of classwork. Maybe, just maybe, it’ll be a way for our society to make itself even less educated than it already is. Wouldn’t that be something to look forward to?
That brings me to bloggers
Do I think bloggers should turn to an option like ChatGPT — or any similar service — to create their content?
I can’t say that I can answer with a simple yes or no.
On the one hand, I don’t think there’s a problem with using artificial intelligence as a resource for research purposes.
But on the other hand, where do you draw the line if AI starts writing the post for you? How much of the writing should be done by a human? Personally, I think the clear majority should be.
If I learned a blog I regularly visit was written by AI, there’s a near certainty that I’d no longer regularly visit.
I found it interesting that OpenAI, ChatGPT’s creator, released this statement to CBS News:
We don’t want ChatGPT to be used for misleading purposes – in schools or anywhere else. Our policy states that when sharing content, all users should clearly indicate that it is generated by AI “in a way no one could reasonably miss or misunderstand” and we’re already developing a tool to help anyone identify text generated by ChatGPT.
I’m not sure what kind of “tool” they’d deploy to help anyone identify text the software generates. If there’s some sort of code involved, the person who copies and pastes the final product could always just delete it, right?
But anyone who uses it without disclosing that the writing isn’t their own is being dishonest.
If a blogger were to use it and not identify the content as written by artificial intelligence, that would be a major credibility issue.
And honestly, if I saw a blog that did identify its content as machine-produced, why would I want to read it?
I think when we reach a point that we need to let machines do our blogging for us, I’ll probably shut this one down. It would have lost its appeal by that point for me.