The Moral Dilemma Of AI Drafting And The Importance Of A ‘Human In The Loop’

We are witnessing the dawn of a new era in which AI-powered creativity is taking center stage. Tasks once considered exclusive to human creativity are now being accomplished by AI tools like GPT-3, a language model trained on vast amounts of data to produce remarkably natural-sounding text, and image generators such as DALL-E, Midjourney, and Stable Diffusion that can produce high-quality images.

Furthermore, there is a growing exploration of AI-generated videos, with companies like Google and Meta introducing tools like Imagen Video, Phenaki, and Make-A-Video. Although these tools may not be widely available to the public yet, they represent a significant shift in what ordinary individuals can create with just a few words and the click of a button. AI copywriting tools, for instance, are already showing promise in streamlining the process of writing blog posts, generating website content, and creating advertisements. Some of these tools even offer optimization for search engines.

This advancement in AI creativity raises concerns about the future of human writers who make their living from these types of content creation, and it has already sparked discussions about potential job losses to AI.

Nonetheless, Nick Duncan, founder and CEO of the AI copywriting tool ContentBot, strongly emphasizes the importance of having a “human in the loop” and openly acknowledges the potential risks of allowing AI to generate text without supervision.

Doing the creative heavy lifting

“ContentBot serves as an AI writing companion designed specifically for founders and content marketers,” Duncan explained. “Our main objective is to accelerate your writing process. Essentially, we handle the heavy lifting when it comes to creativity, often generating new blog topic ideas and producing a significant portion of the content.”

ContentBot is built on the powerful GPT-3 language model, which, according to Duncan, excels at generating unique text. He explained, “GPT-3 possesses a comprehensive understanding of various subjects and utilizes that knowledge to predict the most appropriate word to follow the current one.” Because GPT-3 was trained on a vast corpus of text, it is highly proficient at making these predictions, resulting in text that closely resembles human writing. However, it is important to note that GPT-3 may have gaps in its understanding of specific subjects.
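As a rough illustration of the next-word prediction Duncan describes, here is a minimal sketch using OpenAI’s GPT-3-era Completions API. The model name, prompt, and settings are illustrative assumptions, not ContentBot’s actual configuration.

```python
# Minimal sketch of GPT-3-style text generation via OpenAI's legacy
# Completions API. Model, prompt, and settings are illustrative only.
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: supplied by the caller

response = openai.Completion.create(
    model="text-davinci-003",  # a GPT-3-family model
    prompt="Suggest three blog post headings about remote work:",
    max_tokens=150,            # cap the length of the draft
    temperature=0.7,           # some creativity, not pure randomness
)

draft = response.choices[0].text.strip()
print(draft)  # a human editor still reviews and reworks this output
```

The key point for Duncan’s argument is the last line: the output is a starting draft for a human, not finished copy.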

Duncan emphasized the necessity of human editors reviewing the content generated by ContentBot and GPT-3. He stated, “Having a human in the loop is essential. Allowing the so-called ‘AI’ to write autonomously is not responsible in my opinion; we are still some distance away from that.” Duncan highlighted the risk of solely relying on the AI’s output without human oversight, stating that it would lead to content that is essentially regurgitated by a natural language processor, lacking new insights. The ideal approach, according to Duncan, involves leveraging AI to generate content, editing it as necessary, and then layering one’s expertise on top of it.

Using ContentBot significantly reduces the time required for writing. Duncan explained, “In our team, what used to take a few hours to write an article now only takes 30 to 45 minutes. ContentBot provides suggested headings and subheadings, allowing you to select the ones you prefer and start writing. It effectively handles the creative heavy lifting in most cases, enabling you to focus your mental capacity on areas where you can provide expertise.”

Duncan draws a parallel between writing generation tools and previous technological advancements that have facilitated quicker and easier writing, such as typewriters, computers, and word processing software. He sees AI copywriting as the next evolutionary step for writers, comparable to transitioning from writing on paper to typewriters and then to computers. However, he acknowledges that there may be differing opinions on this matter, similar to the ongoing debate surrounding AI-generated imagery from tools like DALL-E. Whether AI copywriting tools will become as ubiquitous as typewriters or computers remains to be seen.

AI isn’t about gaming the system through mass production of content

Duncan firmly believes that the emphasis on human editing and expertise is crucial to prevent the negative impact of AI copywriting on the content ecosystem. It also helps businesses avoid penalties, such as those imposed by Google’s Helpful Content Update, which may penalize sites relying heavily on automation for content production. To explore the intersection of AI copywriting and the Helpful Content Update in more detail, you can refer to our dedicated piece on the subject.

Duncan regards ContentBot as the only player in the field with an ethical standpoint on AI-generated content. The tool incorporates several automatic systems to prevent its misuse for unethical purposes. For instance, the ContentBot team strongly discourages the mass production of content, such as generating thousands of product reviews or blog posts, as fact-checking these to a high standard becomes unfeasible.

“We are strongly against using AI for the wrong reasons,” Duncan stated. “There are responsible users who employ AI to assist them in writing, generating new topics, and exploring different directions. However, there are also those who attempt to exploit the system by mass-producing content or engaging in other forms of manipulation.”

ContentBot’s monthly plans include word limits to curb excessive usage. The tool can also detect behaviors like rapid, repeated generation runs, which indicate the use of scripts for automatic content generation, and issue warnings to users. ContentBot operates on a three-strike system, with users facing suspension or banning for misuse, which has occurred only a few times so far.
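ContentBot’s enforcement logic is not public, but a three-strike policy of the kind Duncan describes could look roughly like the following hypothetical sketch; every threshold, field, and function name here is an assumption.

```python
# Hypothetical sketch of word limits plus a three-strike misuse check.
# Thresholds and field names are illustrative, not ContentBot's own.
from dataclasses import dataclass

MONTHLY_WORD_LIMIT = 50_000   # assumed per-plan cap
RUNS_PER_MINUTE_LIMIT = 10    # assumed threshold suggesting scripted use
MAX_STRIKES = 3               # the three-strike policy

@dataclass
class Account:
    words_this_month: int = 0
    runs_last_minute: int = 0
    strikes: int = 0
    suspended: bool = False

def allow_generation(account: Account, requested_words: int) -> bool:
    """Return True if the request may proceed; record strikes otherwise."""
    if account.suspended:
        return False
    if account.words_this_month + requested_words > MONTHLY_WORD_LIMIT:
        return False  # plan word limit reached
    if account.runs_last_minute > RUNS_PER_MINUTE_LIMIT:
        account.strikes += 1  # looks like scripted, automated generation
        if account.strikes >= MAX_STRIKES:
            account.suspended = True  # suspend after repeated misuse
        return False
    account.words_this_month += requested_words
    return True
```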

While the mass production of blog posts by an individual user raises concerns, large companies find appeal in AI copywriting for its ability to generate reasonably high-quality content at scale. This includes populating websites with hundreds or thousands of content pages or creating descriptions for numerous e-commerce product listings without breaking the bank. Does ContentBot take issue with this usage?

“It ultimately depends on the company and the individual,” Duncan explained. “If a large company operates across various verticals, utilizing AI to generate five to ten million words per month, but also has humans to edit, verify, and progress the content… Ultimately, AI should be used as a tool to expedite the process and deliver more engaging content that benefits the user.”

Duncan believes that there are certain types of content where AI should not be employed, due to the potential for misinformation and the harm inaccurate information can cause. ContentBot’s content policy clearly outlines prohibited categories and use cases, including medical, legal, and financial advice, fake news, political content, and romantic advice. Attempting to create any of these types of content triggers alerts and warnings from the system.
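How ContentBot detects these categories is not documented; as a purely hypothetical sketch, a first-pass topic flagger might look like this, with the categories taken from the policy above and the keyword lists invented for illustration.

```python
# Hypothetical sketch of a prohibited-topic alert. The categories mirror
# ContentBot's stated policy; the keyword lists are invented placeholders.
PROHIBITED = {
    "medical": ["diagnosis", "dosage", "treatment plan"],
    "legal": ["legal advice", "lawsuit", "contract dispute"],
    "financial": ["investment advice", "stock tips"],
    "political": ["election", "candidate"],
}

def flag_prompt(prompt: str) -> list[str]:
    """Return the policy categories a prompt appears to touch."""
    text = prompt.lower()
    return [
        category
        for category, keywords in PROHIBITED.items()
        if any(keyword in text for keyword in keywords)
    ]

hits = flag_prompt("Write a blog post giving investment advice for beginners")
if hits:
    print(f"Warning: prompt touches restricted categories: {hits}")
```

A production system would go well beyond keyword matching, but the alert-and-warn flow is the same idea.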

“We only allow the use of AI to generate such content if the user is an expert in the field and can fact-check it,” Duncan asserted. An alarming example of the risks associated with using GPT-3 for medical advice emerged when a French healthcare startup, Nabla, created a chatbot to assess GPT-3’s potential in assisting doctors with health-related queries. The chatbot struggled to retain information, failed to provide accurate pricing details for medical exams, and even responded inappropriately to a suicidal patient.

While a static piece of written copy lacks the unpredictability of a live chatbot, risks still exist because of the speed and scale at which AI can generate content. “We have to implement these controls because AI scales rapidly,” Duncan stated. “AI has the potential to generate disinformation and misinformation at scale.”

Plagiarism is another concern when using AI writing tools, as they are trained on existing content and may inadvertently replicate it. ContentBot includes its own built-in uniqueness and plagiarism checker to address this issue. Duncan explains, “It checks each sentence against the internet to find exact matches. This ensures almost 100 percent confidence in the uniqueness of your article. We aim to ensure that the AI does not verbatim copy from other sources.”
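Duncan doesn’t detail how the checker works beyond this, but a sentence-level exact-match check could be sketched as follows; the web-search backend is a placeholder, since a real checker would query a search index for each quoted sentence.

```python
# Hypothetical sketch of a sentence-level uniqueness check like the one
# Duncan describes. The search backend below is a stand-in placeholder.
import re

def found_verbatim_on_web(sentence: str) -> bool:
    # Placeholder: a real implementation would query a search API with
    # the sentence in quotes and report whether any page matches exactly.
    return False

def uniqueness_score(article: str) -> float:
    """Fraction of sentences with no exact match found on the web."""
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", article.strip()) if s]
    if not sentences:
        return 1.0
    unique = sum(1 for s in sentences if not found_verbatim_on_web(s))
    return unique / len(sentences)

print(f"{uniqueness_score('First sentence. Second sentence!'):.0%} unique")
```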

Using AI with transparency

Duncan believes that taking these measures is essential to ensure the long-term viability of AI copywriting and to avoid unforeseen penalties as the industry evolves and potentially faces increased regulation. He anticipates that in the next year or two there will be a clearer picture of how AI will settle into the field and how it can be used without incurring penalties from systems like Google’s Helpful Content Update.

“We’re proactively preparing for that because I see it coming, and I have a responsibility to our customers to provide them with the best possible information,” Duncan explained.

Moreover, Duncan argues that there should be a governing body overseeing the AI tools space to promote responsible usage of the technology. While the exact form of such an entity is uncertain, he speculates that Google might be inclined to take on this role, given its current efforts to address the use of AI within its controlled spaces, such as search results, as exemplified by the Helpful Content Update.

In the absence of a governing body, Duncan believes that providers of tools like ContentBot, as well as the underlying technology itself, bear the responsibility of ensuring responsible use. “The onus should be on the AI technology providers,” he stated. “OpenAI, in particular, has done an excellent job in this regard. While other AI providers are emerging, OpenAI has been at the forefront of this process. We worked closely with them initially to ensure the responsible use of the technology.”

He explained that OpenAI has implemented a moderation filter that all AI-generated text must pass through. For instance, if someone asks the AI to write about Biden, red flags are raised on both ContentBot’s and OpenAI’s side. Duncan therefore believes that AI technology providers carry a responsibility, though he also acknowledged the responsibility of tools like ContentBot in preventing disinformation.
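OpenAI does expose a hosted moderation endpoint that generated text can be run through, as in the minimal sketch below. Note that the hosted endpoint targets categories such as hate, self-harm, and violence; a topic-level check like the political example Duncan gives would be additional custom logic on the provider’s side, not part of this endpoint.

```python
# Minimal sketch of running AI output through OpenAI's moderation
# endpoint (legacy openai library). Topic-level political checks, as in
# Duncan's example, would be extra provider-side logic, not shown here.
import openai

openai.api_key = "YOUR_API_KEY"

def passes_moderation(generated_text: str) -> bool:
    result = openai.Moderation.create(input=generated_text)
    return not result["results"][0]["flagged"]

sample = "Some AI-generated paragraph to screen before publishing."
if not passes_moderation(sample):
    print("Blocked: moderation flagged this output")
```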

To address the potential risks associated with AI-generated content, Duncan emphasizes that ContentBot should be used correctly and in a way that delivers value. Their approach aims to strike a balance between AI and human involvement to provide the most valuable outcome. Duncan states, “Our perspective is that if you’re going to use any tool to assist with writing, it should be about providing value to people. We don’t want users to create blog posts that are 100 percent AI-generated or even 80 percent AI-generated.”

ContentBot leans towards a 60/40 division, where the maximum contribution from AI is 60 percent, and the remaining 40 percent is written by humans. To help users identify their position on this scale, they have even incorporated a counter in the AI writer.
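ContentBot’s counter isn’t publicly documented, but in principle tracking a 60/40 split only requires attributing word counts to each source, as in this hypothetical sketch (we assume the editor can separate AI-drafted text from human-written text).

```python
# Hypothetical sketch of a 60/40 AI-to-human ratio counter. How
# ContentBot attributes each word is not public; names here are ours.
def ai_share(ai_text: str, human_text: str) -> float:
    """Return the fraction of the combined draft written by the AI."""
    ai_words = len(ai_text.split())
    total = ai_words + len(human_text.split())
    return ai_words / total if total else 0.0

share = ai_share(
    ai_text="The AI drafted the outline, the opening paragraph and "
            "most of the body copy for this post.",
    human_text="The author added analysis and examples.",
)
if share > 0.60:
    print(f"AI share is {share:.0%}, above the 60 percent guideline")
```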

The rise, and future, of AI copywriting

The availability of AI copywriting tools has expanded significantly, offering users more options than ever before. Duncan finds this development exciting, as people are drawn to the novelty of using these tools and experiencing the machine-generated content for the first time. However, once the initial excitement wears off, users start questioning the actual usefulness of the tools and whether they are more than just a playful experiment.

Some users may be initially attracted to AI tools due to the buzz surrounding them but ultimately find them too complex or fail to see their value. Duncan acknowledges that AI tools are currently in the process of refining their user-friendliness and usefulness to address these concerns.

According to Duncan, the market is still in the early “innovation” phase, and general acceptance of AI-generated content is yet to be seen. Concerns persist about how platforms like Google perceive such content and whether they might penalize users.

Looking ahead, Duncan predicts that the quality of AI-generated text will continue to improve, and specialized tools will emerge for specific use cases in social media and e-commerce platforms. ContentBot has already made progress in this regard by creating specialized AI models for blog content. Going forward, the focus will be on offering tools for founders and marketers, enhancing ContentBot’s capabilities in areas such as Facebook ads, landing pages, About Us pages, vision statements, and SEO.

Duncan believes that the future of AI writing technology lies in refining the quality and output for specific categories of text. He expects GPT-4, the next iteration of OpenAI’s model, to aim for such improvements, similar to the significant leap seen from GPT-2 to GPT-3. The goal is to fine-tune the technology to achieve a substantial increase in quality.

While the release date of GPT-4 is uncertain, its arrival is highly anticipated. If it delivers a quality improvement as significant as the leap GPT-3 made over GPT-2, discussions surrounding AI ethics, governance, and transparency will become even more important.
