7 ChatGPT Power User Tips

Boost ChatGPT Accuracy with These Expert Strategies

Hello AI Lessons Crew,

This is my first email as the new editor-in-chief of AI Lessons, so I wanted to share some tips for getting the best results from ChatGPT. I’ve been using ChatGPT since the beginning, and during that time, I have picked up many tricks that have improved my results quite a bit. Here are some of my favorite ChatGPT tips.

7 Tips to Become a ChatGPT Power User

1. Brainstorming with ChatGPT: Iterate 10x

When brainstorming or exploring options, you often need more than one idea. A great trick is to ask ChatGPT to "Iterate 10x." This simple command prompts the AI to generate ten different ideas or approaches, giving you a broad range of options to choose from.

  • Example: Say you're developing a tagline for a newsletter on generative AI. You could ask, "Create a tagline for a newsletter on generative AI called AI Lessons." This will give you one tagline. If you like it, use it; if you think you can get a better one, type "Iterate 10x." You'll get a list of ten creative suggestions to refine further.
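
If you work with ChatGPT through the API rather than the web interface, the same trick applies: append "Iterate 10x" as a follow-up message in the same conversation so the model sees your original request and its first answer. Here is a minimal sketch, assuming the openai Python package (v1 or later), an OPENAI_API_KEY environment variable, and the gpt-4o model:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [{"role": "user", "content":
             "Create a tagline for a newsletter on generative AI called AI Lessons."}]
first = client.chat.completions.create(model="gpt-4o", messages=messages)
print(first.choices[0].message.content)

# Keep the whole conversation so the follow-up has context, then ask for ten variations.
messages.append({"role": "assistant", "content": first.choices[0].message.content})
messages.append({"role": "user", "content": "Iterate 10x"})
second = client.chat.completions.create(model="gpt-4o", messages=messages)
print(second.choices[0].message.content)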

2. Leverage Memory to Mimic Your Voice

If you want ChatGPT to respond consistently in tone or style across multiple sessions, you can use memory features to establish your preferences. This is particularly useful if you have a specific voice or brand tone you'd like to maintain.

You can do this by opening the menu in the upper-right corner of ChatGPT's web interface and choosing Settings > Personalization.

  • Example: "Remember that I prefer a professional and direct tone, similar to a business consultant speaking to C-level executives." By setting this expectation, ChatGPT will tailor its responses to match your preferred style, ensuring consistency in tone whether you're crafting an email or writing a report.

3. Customize ChatGPT with Custom Instructions

Giving ChatGPT context on who you are and your preferences can help preserve your preferences from one chat to another. This can ensure that ChatGPT has some basic grounding and instructions on answering your prompts.

You can do this by opening the menu in the upper-right corner of ChatGPT's web interface and choosing Customize ChatGPT.

Also, ensure you toggle Enable for New Chats so ChatGPT uses your instructions for all new chats.

  • Example: You can add personal information to give the model context on who you are and how you would like your outputs formatted. Under "What would you like ChatGPT to know about you to provide better responses?" I provided my work background, and under "How would you like ChatGPT to respond?" I gave it instructions on my writing style. My sample instructions are below; edit them for your own use.

Custom Instructions for: What would you like ChatGPT to know about you to provide better responses?

Name: Mark Hinkle

Location: Cary, NC, USA 
Experience: 30 years in technology, open source
Expertise: Enterprise software, cloud computing, DevOps, Open Source, Artificial intelligence

Interaction Style: Professional, clear, detailed.
Learning Objectives: Current AI consultancy insights. 

Complexity Level: Advanced.
Tone: Professional, direct, slightly humorous.
Structure: Clear segmentation for complex tasks.
Emoji Use: Not allowed.

Role: I'm an artificial intelligence consultant specializing in trends in AI and productivity for business users.

Audience: My target audience is mid-level to C-level executives, and I mainly write about artificial intelligence for business users.
The goal of my blog is to build an audience of enterprise AI users.

Special Instructions: Provide value-adding information, incorporate feedback for continuous improvement, tailor responses to corporate AI consultancy needs, avoid AI-sounding discourse, and exclude context-setting text to maintain directness and relevance.

Example Custom Instructions for: How would you like ChatGPT to respond?

Write Clear Instructions:
    - Provide detailed scenarios for relevant answers.
    - Request persona adoption if needed.
    - Use delimiters for distinct parts.
    - Outline steps with examples for complex tasks.
    - Define output length and detail level.

Output Style:
    - Use clear, pointed language consistent with business articles in the Wall Street Journal or technical articles in Ars Technica.
    - Restrict the use of adjectives and adverbs.
    - Be judicious with words; use only what's needed to communicate the information effectively.

Use Reference Texts:
    - Include varied texts for accuracy.
    - Cite sources in responses.

Segment Complex Tasks:
    - Apply intent classification for task management.
    - Break down long conversations and documents.

Interactive Problem-Solving:
    - Allow model to propose interim solutions for feedback.
    - Engage in clarifying dialogues.
    - Question model's reasoning and ask for elaboration.

Feedback Loop Mechanism:
    - Regularly provide feedback for response improvement.
    - Suggest a rating or commenting system per response.

Systematic Change Evaluation:
    - Compare against high-quality standards.

Write comprehensive social posts:
    - Create posts that will go viral and showcase Mark's intelligence in AI.

Note: Avoid AI-sounding discourse markers or thematic phrases; avoid using the words paradigm, delves, cornerstone, and landscape.

4. Utilize Markdown Formatting for Clarity in ChatGPT Prompts

Markdown is a plain-text syntax that uses special characters to format prompts for ChatGPT. Markdown can help structure prompts, introduce context, and establish the order in which the steps of a task should be performed.

For example, you can use Markdown to communicate headings, bold and italic emphasis, lists, and more.

Why does this matter? Because that's how ChatGPT formats its responses. If you copy and paste ChatGPT's results from the web interface into a text editor, you will see Markdown formatting, much like the example prompt below.

By breaking the prompt into logical sections and using emphasis (caps and bold), you signal to ChatGPT that the information carries additional weight. For example, in my prompt I used the line "DO NOT ECHO THE PROMPT" in caps so that ChatGPT would not repeat my input. The caps mark it as very important information, just as capitalization in a text message indicates you are virtually shouting.

Applied consistently, Markdown can significantly improve the structure, clarity, and effectiveness of your prompts.

Key Benefits of Using Markdown in Prompts

  • Improved Structure: Organize your prompts into logical sections using headings, lists, and blockquotes.

  • Enhanced Context: Highlight important information or instructions using emphasis (bold, italics).

  • Clear Sequencing: Use numbered lists to outline steps or priorities in multi-part tasks.

  • Visual Hierarchy: Employ different heading levels to create a clear information hierarchy.

ChatGPT often mirrors the formatting style used in prompts when generating responses. By incorporating Markdown elements, you're making your prompts more readable and influencing the structure and presentation of the AI's output.

Capitalization or bold text can draw attention to crucial prompt instructions. For instance:

DO NOT REPEAT THE PROMPT

This formatting signals to ChatGPT that this instruction carries significant weight, similar to how all-caps text in messaging conveys urgency or importance.

When you copy ChatGPT's responses from the web interface into a text editor, the formatting is displayed in Markdown. I included an example below to demonstrate how Markdown can be applied to a prompt.

  • Example: A prompt to find AI apps. This prompt uses a framework with a role (how you want the model to act), an objective (what you want the prompt to accomplish), and instructions on how you want ChatGPT to carry out the task.

# Role 
You will act as a researcher familiar with AI applications.


# Objective
To identify the best AI applications for a specific business need by searching specified websites and sharing their ratings.


# Instructions
DO NOT ECHO THE PROMPT
1. **Ask for Specific Business Need**:
    - Do not echo the prompt.
    - Prompt the user with, "What kind of AI application are you looking for?" (e.g., creating presentations, data analysis, customer support, etc.)
    - Wait for a user response before proceeding to the next step.
2. **Search Websites**:
   - ProductHunt.com
   - AIScout.net
   - Futurepedia.io
   - [Insidr.ai AI Tools](https://www.insidr.ai/ai-tools/)
   - futuretools.io
   - G2.com
   - Capterra.com
3. **Collect Information**:
   - Look for AI applications matching the specified business need.
   - Note the highest-rated applications.
   - If no ratings are available, note the applications found.
4. **Compile Findings**:
   - List the applications with their ratings and links to the pages.
   - Create a separate list for applications without ratings if necessary.


# Examples
**Applications with Ratings**:
1. **ProductHunt.com**
   - Application A: 4.8/5
   - Application B: 4.5/5
2. **AIScout.net**
   - Application C: 4.7/5
   - Application D: 4.6/5
**Applications without Ratings**:
- Application E from Futurepedia.io
- Application F from Insidr.ai

5. Understand Context Windows to Keep ChatGPT on Track

Imagine you’re at a networking event, meeting people, and having multiple conversations throughout the evening. You can only remember so much before older details start slipping away. Similarly, when interacting with ChatGPT, it has a “context window,” which is the amount of text it can remember and process in one go.

Analogy: The Whiteboard in Your Brain

Think of ChatGPT's context window like a whiteboard in your brain. You can write down everything you want to remember, but there's only so much space. As new information comes in, older notes get erased to make room. If you keep adding information, the oldest material is eventually wiped clean.

In practical terms, ChatGPT's context window means that if your conversation or input exceeds a certain length (GPT-4o has a 128K-token context window, or roughly 300 pages of a book), the model will start to forget the earliest parts of the conversation. This limitation affects how well the model can maintain coherence and continuity, particularly in long exchanges. So if you are uploading text as an attachment, you generally don't want to exceed roughly 300 pages' worth of content.

If you’re working on a long document or complex project, break it into smaller sections and handle each within a manageable context window. ChatGPT retains the necessary context to provide insightful and relevant responses. This is called chunking.

Chunking is a technique for managing large amounts of information by breaking it down into smaller, more digestible pieces. When working with AI models, chunking involves splitting up a large input (like a long document or conversation) into smaller, manageable sections. Since ChatGPT has a limited context window, chunking helps ensure the model can focus on one section at a time without losing important details from earlier text parts.

For example, if you're using ChatGPT to analyze a long report, you would break the report into several sections or "chunks." You then feed each chunk to the model separately, allowing it to independently process and generate insights for each section. This method prevents the model from being overwhelmed by too much information at once and helps maintain clarity and relevance in its responses.

When using chunking with ChatGPT, try to group related information together within each chunk. This allows the model to coherently understand the content and produce more accurate and useful outputs.
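
Here is a minimal sketch of how chunking can look in Python. The 1,000-word chunk size, the paragraph-based splitting, and the report.txt file name are assumptions for illustration; adjust them to your content and the model's context window.

def chunk_text(text, max_words=1000):
    """Split text into chunks of roughly max_words, breaking on paragraph boundaries."""
    chunks, current, count = [], [], 0
    for paragraph in text.split("\n\n"):
        words = len(paragraph.split())
        if current and count + words > max_words:
            chunks.append("\n\n".join(current))
            current, count = [], 0
        current.append(paragraph)
        count += words
    if current:
        chunks.append("\n\n".join(current))
    return chunks

# Feed each chunk to ChatGPT separately, e.g., "Summarize the following section: ..."
with open("report.txt") as f:
    report = f.read()
for i, chunk in enumerate(chunk_text(report), start=1):
    print(f"Chunk {i}: {len(chunk.split())} words")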

This is what’s going to happen. As we slowly migrate to a single mailing list, you will still get the AI Lessons for the next month or so. Eventually, you will receive the Artificially Intelligent Enterprise instead of the AI Lessons. It's still the same educational AI content from a different email address.

6. Specify Output Length and Detail Level

Sometimes, you need a brief summary; other times, you need an in-depth analysis. Being clear about the desired output length and detail level can ensure you get exactly what you need.

  • Example: If you're preparing for a meeting and need quick insights, you could say, "Summarize the key trends in AI for 2024 in 150 words." On the other hand, if you're writing a white paper, you might ask, "Provide a detailed analysis of cloud computing trends with a focus on security in 1,000 words."

Here’s the gotcha: ChatGPT is imperfect at counting in its responses, so sometimes, you may want to specify the length of results in tokens. Tokens are the building blocks of text in AI language models, representing units like words, characters, or punctuation marks. They help AI understand and process text. You can use a tokenizer algorithm to determine the number of tokens in a text.

Here are some helpful rules of thumb for understanding tokens in terms of lengths:

  • 1 token ≈ 4 characters in English
  • 1 token ≈ ¾ of a word
  • 100 tokens ≈ 75 words

Here’s how that works in tokens.

  • Example: If you're preparing for a meeting and need quick insights, you could say, "Summarize the key trends in AI for 2024 in 195 tokens" (roughly 150 words). If you're writing a white paper, you might ask, "Provide a detailed analysis of cloud computing trends with a focus on security in 1,300 tokens" (roughly 1,000 words).
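
If you'd rather count tokens exactly than estimate them, OpenAI's open-source tiktoken library tokenizes text the same way the models do. A minimal sketch (the fallback encoding is an assumption for older tiktoken releases that may not recognize gpt-4o):

import tiktoken

try:
    enc = tiktoken.encoding_for_model("gpt-4o")   # recent releases map gpt-4o to its encoding
except KeyError:
    enc = tiktoken.get_encoding("cl100k_base")    # fallback used by earlier GPT-4 models

text = "Summarize the key trends in AI for 2024."
token_count = len(enc.encode(text))
word_count = len(text.split())
print(f"{token_count} tokens, {word_count} words")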

7. Use Chain of Thought (CoT) Prompting for More Thorough Responses

When working on a project or document, it's often helpful to get interim feedback before finalizing anything. You can prompt ChatGPT to reason through a problem step by step, propose initial ideas or drafts, and then iterate based on your feedback. This approach is known as Chain of Thought (CoT) prompting.

CoT prompting is an AI technique in which the model is guided to break down complex problems into a series of intermediate reasoning steps. This method encourages the model to think through each part of a problem sequentially, leading to more accurate and coherent outputs.

By explicitly generating these steps, CoT improves the model's ability to handle tasks requiring logical reasoning and multi-step inference. It's particularly effective in math problems, reasoning questions, and iterative ideation processes.

Example: Writing a Blog Post on AI Ethics

  1. Initial Prompt: Begin with a high-level prompt to generate a broad outline.

    • Prompt: "Propose an initial outline for a blog post on AI ethics."

    • AI Response: The AI provides a basic outline covering key topics such as the importance of AI ethics, key ethical concerns, regulatory frameworks, and future considerations.

  2. Chain of Thought Expansion: Break down each section of the outline into more detailed thoughts.

    • Prompt: "For the section on key ethical concerns, list specific examples and discuss their implications."

    • AI Response: The AI expands on the key ethical concerns, providing specific examples like bias in AI algorithms, privacy concerns, and the ethical use of AI in surveillance.

  3. Iterative Refinement: Use feedback loops to refine the content.

    • Feedback: "Focus more on the impact of AI ethics on businesses and how they can implement ethical AI practices."

    • AI Adjustment: The AI revises the content to emphasize practical applications and strategies for businesses to adopt ethical AI practices.

  4. Finalizing the Draft: As the content evolves, continue using CoT prompting to ensure each section logically flows from one idea to the next, leading to a cohesive and well-structured document.
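
If you prefer to script this workflow, the same step-by-step exchange can run over the API by keeping the whole conversation in a list and appending each prompt and reply in turn. A minimal sketch, again assuming the openai Python package (v1 or later), an OPENAI_API_KEY environment variable, and the gpt-4o model:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(conversation, prompt):
    """Append a user prompt, send the whole conversation, and record the reply."""
    conversation.append({"role": "user", "content": prompt})
    response = client.chat.completions.create(model="gpt-4o", messages=conversation)
    reply = response.choices[0].message.content
    conversation.append({"role": "assistant", "content": reply})
    return reply

conversation = []
outline = ask(conversation, "Propose an initial outline for a blog post on AI ethics.")
expansion = ask(conversation, "For the section on key ethical concerns, list specific "
                              "examples and discuss their implications.")
revision = ask(conversation, "Focus more on the impact of AI ethics on businesses and "
                             "how they can implement ethical AI practices.")
print(revision)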

How did we do with today's edition of AI Lessons?


Your AI Sherpa,

Mark R. Hinkle
Editor-in-Chief
Connect with me on LinkedIn
Follow Me on Twitter