
Configuring Your Chat Experience

The Right Click – January 2024 Edition

by Richard Lee

There is an ongoing conversation about how to hijack ChatGPT to bypass the restrictions and filters programmed into it, which are designed to prevent it from generating harmful content. That restricted content includes:

  • Hate speech
  • Violent content
  • Sexually explicit material
  • Illegal activities
  • Misinformation
  • Personal and sensitive information
  • Self-harm and suicide

For those who believe AI is all doom and gloom, consider that all of the restricted content outlined above is already available on the web and social media today, and that availability is arguably part of why our society faces the troubles it does.

That said, there is nuance here, and there are good reasons why you’d want your generative AI chatbot to be able to speak to these topics. If you’re a criminal attorney, for example, these topics come up in plenty of situations that are not only relevant but critical to your daily practice. Discussing the details of a violent crime, for instance, is essential for legal accuracy and context. If you use ChatGPT to draft briefs, the facts of a case may well involve content that would otherwise be restricted, yet in the context of the brief those facts are indispensable to a comprehensive legal argument.

This underscores the need for a balanced approach to AI’s content restrictions, especially in professional fields where context dramatically alters the nature of such content. It is also one of many good reasons to consider a private GPT as a solution: the programmatic restrictions are lighter, and there are humans involved who can vouch for the legitimacy of the requests being made. That is the work of our latest service: HumPre.

As for hijacking ChatGPT to bypass these restrictions, that’s currently not possible. Perhaps the best-known early attempt, the “Do Anything Now” (DAN) method, worked until it was published; ChatGPT was then updated to disallow that and similar workarounds.

Despite the restrictions, ChatGPT can still provide a great deal of assistance to anyone with a legitimate reason to work near this kind of content. It’s important to understand that the way you word your prompt affects the responses you get. For you wordsmiths, this is your opportunity to rework the way you prompt ChatGPT.

You can also have ChatGPT assume a persona. In the craft of scriptwriting, it’s been said that you want to get inside the heads of your characters. ChatGPT allows just that: describe your character in detail, ask ChatGPT to become that character, and then converse with the persona you described. The description can be very comprehensive, not just a few lines, so your prompt, and your human creativity, will go a long way in empowering ChatGPT to assume the persona.

Personas don’t need to be only for movie characters or general creative writing; they can serve as resident experts. Let’s say you want a personal chef that only provides instructions and doesn’t prepare the food. If you’re a foodie who also has food allergies, your ChatGPT personal chef can provide recipes that accommodate both your discerning palate and your restrictions. It’s also great at adjusting recipes for more or fewer servings, or for when you don’t have enough eggs, and it can even recommend a substitute for hoisin sauce.
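
If you’re comfortable with a little code and use OpenAI’s API rather than the chat window, here is a minimal sketch of the same idea, assuming the official OpenAI Python library: the persona simply goes into a “system” message. The model name, persona text, and sample question below are placeholders of my own, not anything prescribed by OpenAI or this column.

    # A minimal sketch (my own illustration, not from this article) of the
    # "personal chef" persona using the official OpenAI Python library.
    # The model name, persona text, and question are placeholders.
    from openai import OpenAI

    client = OpenAI()  # expects the OPENAI_API_KEY environment variable

    persona = (
        "You are my personal chef. You only provide recipes and instructions; "
        "you never prepare the food yourself. I love bold flavors, but I am "
        "allergic to peanuts and shellfish, so never suggest either."
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whichever model you have access to
        messages=[
            {"role": "system", "content": persona},
            {
                "role": "user",
                "content": "I only have two eggs. Scale your fried-rice recipe "
                           "down for two servings, and suggest a substitute "
                           "for hoisin sauce.",
            },
        ],
    )

    print(response.choices[0].message.content)

If you stick with the regular ChatGPT interface, pasting that same persona text as your opening message (or saving it under Custom Instructions) accomplishes much the same thing.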

Don’t do what Steven Schwartz did. He’s the now-infamous poster boy for ChatGPT hallucinations. Hallucinating is what it’s called when ChatGPT fabricates information in the interest of satisfying your request for content. Mr. Schwartz, an attorney, used ChatGPT to create a legal brief for a judge but failed to proofread what it generated; had he done so, he would have saved himself a lot of time and embarrassment. Keeping ChatGPT honest, in case you think it’s hallucinating or fabricating something, is as easy as asking it to provide a source for the information it gave or simply challenging it directly with “I don’t think that’s true.” You can also mark the response by clicking the thumbs-down button (found at the bottom of every response), which tells the OpenAI team that the answer needs attention.

ChatGPT can be used beyond content creation. It’s especially awesome as a teacher. Sure, you can simply ask it to explain the theory of relativity or other tough concepts, or you can ask for those same difficult concepts to be explained using an analogy. That analogy could be something you are already all too familiar with, like a sport or a hobby. Try it out with: “Can you explain the theory of relativity using bread-making as an analogy?”

I used it recently to read through a 15-page agreement I was asked to sign in order to participate in an AI-based home security program. I asked ChatGPT which points in the document touched on social justice, and it instantly highlighted seven, one of which was a deal-breaker for me. Think of all the agreements we sign off on as app users, without reading a word of them.

Beyond that, it can also be used for:

  • Programming assistance
  • Language assistance, including legalese and slang
  • Adjusting the tone or intent of emails (before sending)
  • Research assistance

ChatGPT is a tool for humanity. The only way to get to know what it can do for you is to simply start using it. Keep the old computer adage in mind: garbage in, garbage out. So, when asking questions, provide details of what you’re looking for.