5 NEW ChatGPT Settings & Features To Get 10x Better Responses (don’t skip)

🔧 Change #1 — Prompt GPT‑5 differently (Use the Prompt Optimizer)

One of the most important things to understand is that GPT‑5 cannot be prompted the same way you prompted GPT‑4 or earlier models. GPT‑5 introduces more powerful agentic workflows, longer context windows, and different internal heuristics. If you keep using the same casual prompts you used with older models, you'll miss out on dramatic improvements, or even get weaker answers than before.

OpenAI provides a free Prompt Optimizer built specifically for GPT‑5. Use it; the URL is listed in the Resources section at the end of this article.

How the Prompt Optimizer helps:

  • It analyzes your raw prompt and turns it into an expert-level prompt.
  • It shows what you did wrong and explains why each change improves the prompt.
  • It gives you the final optimized prompt so you can reuse (or save) it.

Quick step-by-step:

  1. Open the Prompt Optimizer URL above (you’ll need to be signed into your OpenAI account).
  2. Paste your initial prompt — for example, “Make me a script for a TikTok video about AI agents.”
  3. Click “Optimize.”
  4. Review the recommended edits and the reasoning behind each change.
  5. Copy the final optimized prompt into GPT‑5 or save it as a project.

Why this matters: GPT‑5 expects structure. The optimizer enforces best practices like defining the audience, desired tone, expected length, output format, and step-by-step constraints. Without those constraints, GPT‑5’s agentic workflows may attempt multi-step reasoning or side-actions that don’t match your intent.

Practical trick: build an in‑chat prompt optimizer

If you don’t want to open the external site every time, do this:

  1. Take a screenshot or copy the optimizer’s recommended prompt structure.
  2. Open a new chat with GPT‑5 and say: “Apply this structure to the following prompt to optimize it,” then paste the structure and your raw prompt.
  3. GPT‑5 will apply the same optimizations internally and return an expert-level prompt you can use immediately.

Sample before / after (illustrative):

  • Before: “Write a TikTok about AI agents.”
  • After (optimized): “You are a friendly, high-energy social media strategist. Create a 45–60 second TikTok script about ‘AI agents’ aimed at creators who know basic AI concepts but aren’t developers. Include: hook (3 seconds), 3 key benefits, one short demo idea, and CTA. Output as timestamped script lines.”

That extra structure makes the output predictable and usable—no guesswork, no rambling.
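Once you have an optimized structure like the one above, you can make it reusable. Here's a minimal sketch of that idea in Python; `build_prompt` is a hypothetical helper (not part of any OpenAI SDK) that fills the structured template with your topic and audience:

```python
def build_prompt(topic, audience, length_s=(45, 60)):
    """Assemble a structured prompt from the pieces the optimizer enforces:
    role, audience, length, required sections, and output format."""
    return (
        "You are a friendly, high-energy social media strategist. "
        f"Create a {length_s[0]}-{length_s[1]} second TikTok script about "
        f"'{topic}' aimed at {audience}. "
        "Include: hook (3 seconds), 3 key benefits, one short demo idea, "
        "and CTA. Output as timestamped script lines."
    )

print(build_prompt("AI agents",
                   "creators who know basic AI concepts but aren't developers"))
```

Swap in your own role, sections, and output format; the point is that every prompt you send carries the same explicit structure instead of being retyped from scratch.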

💬 Change #2 — Enable Follow‑Up Suggestions in Chat

There’s a small toggle that has a huge positive impact on your workflow: “Show follow‑up suggestions in chat.” Turning it on gives you short, actionable next-step suggestions automatically after each reply. It’s subtle but transformative.

Where to find it: Settings > General > Show follow-up suggestions in chat (turn it on).

What it does:

  • Produces suggested follow-up prompts after each response (for example: “Do you want a sample TikTok script or just the framework?”).
  • Keeps the conversation moving forward and helps non-expert users know what to ask next.
  • Works very well with voice mode — you can just say the suggestion and continue the flow.

Why it matters: Without follow-up suggestions, you frequently stall or end up asking the wrong follow-up question. With it, GPT nudges you toward clarifying, deepening, or executing the output—turning one-shot answers into iterative, higher-quality deliverables.

🧠 Change #3 — Understand and Manage the Context Window (Up to 200k tokens)

GPT‑5 has an enormous context window—up to 200,000 tokens. That’s powerful, but it introduces new failure modes if you don’t manage it properly.

Quick conversion to help you reason: roughly 100 tokens is about 75 words (a ballpark figure). With a 200k-token limit you can fit very long documents, entire books, or multiple long conversations. But that also means it’s possible to accidentally fill the context window and cause the model to “forget” earlier parts of the conversation.
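The conversion above is easy to turn into two tiny helper functions so you can sanity-check sizes before pasting. This is just the same ballpark ratio in code, nothing model-specific:

```python
def words_to_tokens(words):
    """Ballpark: ~100 tokens per 75 words."""
    return round(words * 100 / 75)

def tokens_to_words(tokens):
    """Inverse of the same ballpark ratio."""
    return round(tokens * 75 / 100)

print(tokens_to_words(200_000))   # a 200k-token window holds roughly 150,000 words
print(words_to_tokens(1_500))     # a 1,500-word article costs roughly 2,000 tokens
```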

How the context window works (simple metaphor)

Think of the context window as a single sheet of paper two people write on back and forth. At first the sheet is blank. With each message, more of the sheet gets filled. Once the sheet fills up, the top lines start getting erased so only the newest lines remain. GPT works the same way: older exchanges get evicted when the token budget fills.
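The sheet-of-paper metaphor can be sketched in a few lines of Python. This is an illustrative model of the eviction behavior, not how ChatGPT is actually implemented, and it reuses the rough 100-tokens-per-75-words estimate from above:

```python
from collections import deque

def estimate_tokens(text):
    # crude heuristic: ~100 tokens per 75 words
    return round(len(text.split()) * 100 / 75)

class ContextWindow:
    """Oldest messages are evicted once the token budget fills,
    like the top lines of the shared sheet being erased."""
    def __init__(self, budget):
        self.budget = budget
        self.messages = deque()   # (text, token_cost) pairs, oldest first
        self.used = 0

    def add(self, text):
        cost = estimate_tokens(text)
        self.messages.append((text, cost))
        self.used += cost
        while self.used > self.budget:        # evict from the front
            _, old_cost = self.messages.popleft()
            self.used -= old_cost
```

Run a long conversation through an object like this and you can watch your earliest instructions fall off the front, which is exactly the failure mode described next.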

Why this causes bad responses

  • If you keep using the same chat for many unrelated tasks, the chat fills with tokens and older instructions get pushed out, so the model “forgets” your original goals or constraints.
  • If you feed large files or long documents into one chat without summarizing them, you consume tokens quickly and reduce the model’s effective memory for the rest of the chat.
  • If you allow the model to ramble (no length constraints), you waste tokens on verbose answers that could be concise and more useful.

Practical ways to manage the context window

  • Use one chat per project/task. Avoid one-chat-for-everything.
  • Explicitly constrain output length: ask for “a 3-paragraph summary” or “bullet list of 6 points.”
  • Periodically summarize long chat histories into a short context note and then start a new chat with that summary.
  • Avoid pasting whole books or huge PDFs into a single chat—chunk and reference only the relevant sections.
  • Use the token estimate prompt after a long exchange: “Please give me a rough calculated estimate of tokens used so far in this conversation based on the text length of all our exchanges.”

That token-estimate prompt is extremely useful. Run it after long sessions and when you’re unsure if the model has started to forget earlier instructions.
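The “chunk and reference only the relevant sections” tip is easy to script. Here's a minimal sketch, assuming the same rough token-to-word ratio used earlier; split a big document once, then paste only the chunk you actually need:

```python
def chunk_text(text, max_tokens=2000):
    """Split a long document into pieces that each fit a token budget,
    using the rough estimate of ~75 words per 100 tokens."""
    words_per_chunk = int(max_tokens * 75 / 100)   # tokens -> words
    words = text.split()
    return [" ".join(words[i:i + words_per_chunk])
            for i in range(0, len(words), words_per_chunk)]
```

In practice you'd chunk on paragraph or section boundaries rather than raw word counts, but even this naive version keeps any single paste from swallowing your whole context budget.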

🧾 Change #4 — Set Up and Audit Your Memory Properly

GPT’s memory feature can be incredibly helpful. When set up correctly, memory allows ChatGPT to personalize responses across chats by remembering persistent facts about you: your role, preferences, ongoing projects, etc. But memory can also store incorrect, outdated, or irrelevant facts—so you need to manage it actively.

Where to go: Settings > Personalization > Manage memories (and make sure personalization is enabled).

What to enable and check:

  • Enable personalization and memory toggles in settings.
  • Add or edit your custom instructions so the model consistently knows how to behave by default.
  • In Advanced, enable memory for new chats if you want persistent personalization across sessions.
  • Open Manage Memories and review stored items periodically (weekly or bi-weekly is a good rhythm).

Common memory issues I see:

  • The model remembers incorrect facts about you (e.g., associating you with someone else’s niche or channel).
  • Outdated project details remain in memory and lead to irrelevant or wrong answers.
  • Memory is filled with trivial items that provide no real benefit and clutter personalization.

Example from my own usage: I had a memory entry that my Instagram was “holistic health.” That was wrong, so I deleted it. If you never audit memory, you’ll eventually get answers that seem like the model “doesn’t know you” — it does, but it’s recalling the wrong things.

Important new detail: Memory can be used to personalize queries to external search providers like Bing. That means not only what you tell ChatGPT but also other integrated inputs may influence personalization. Be aware of this when debugging odd results.

🔌 Change #5 — Connect Your Apps with Connectors

One of the most powerful shifts is that ChatGPT now integrates with your apps through connectors: think Gmail, Google Calendar, Google Drive, Notion, HubSpot, GitHub, Dropbox, and soon Canva. When you connect these apps, ChatGPT can access and search their data to provide context-aware responses.

Why connectors change the game:

  • ChatGPT can draft emails by pulling real data from your Gmail.
  • It can check your calendar and schedule tasks or suggest times.
  • It can pull documents from Drive or Notion to summarize or suggest edits.
  • It enables more powerful AI agents that act on real, live data in your systems.

How to set connectors for the best experience:

  1. Open Connectors (or Settings > Connected Apps).
  2. Add the apps you use: Gmail, Google Calendar, Google Drive, Notion, HubSpot, GitHub, Dropbox, etc.
  3. For each connector, check that the connection mode is set to “auto” under Connected apps > Added source. When set to “auto,” the connector will be searched automatically without you needing to mention it explicitly.
  4. If a connector is not set to “auto,” you must explicitly ask the model to use that source in your prompt.

Security & privacy note: Only connect apps you trust and review the permissions requested by each connector. Consider using read-only permissions where possible and periodically audit connected apps.

Putting it all together: a recommended workflow

Here’s a step-by-step workflow that uses all five improvements so you get consistent, high-quality outputs:

  1. Start a new chat per project or major task to avoid filling the context window.
  2. Before you ask anything substantive, paste a short project brief (2–4 sentences) and specify the output format and length constraints.
  3. If you’re building creative content (e.g., social media, emails), run your base prompt through the Prompt Optimizer or use your saved optimizer structure inside chat.
  4. Turn on follow-up suggestions so ChatGPT prompts you with helpful next steps automatically.
  5. If personalization matters, make sure memory is enabled and that your custom instructions are accurate. Audit Manage Memories weekly for stale facts.
  6. If relevant, ensure connectors are connected and set to “auto” so the model can pull contextual data from your apps.
  7. After a long session, ask: “Please give a rough estimate of tokens used so far” to check whether the context window is becoming an issue.

Example end-to-end scenario

Use case: You want a 60‑second promotional TikTok script and a follow-up email to send to collaborators.

  1. Start a new chat titled “TikTok promo + collab email.”
  2. Paste a one-paragraph project brief: audience, tone, length, CTA.
  3. Paste the rough prompt: “Write a 60-second TikTok script and a short email to request collaboration. Use a confident, conversational tone.”
  4. Run the prompt through the Prompt Optimizer or apply your saved optimization template.
  5. Enable connectors: if the email needs to reference calendar availability, ensure Google Calendar is connected and set to auto.
  6. Get the script. Use follow-up suggestions to prompt revisions like “Make the hook stronger” or “Include a demo idea.”
  7. Save the final outputs and summarize the chat in one paragraph for future reference; optionally store it in Notion or Drive and delete irrelevant memories.

Practical example prompts and templates you can copy

Below are copyable prompt templates optimized for GPT‑5. Use the Prompt Optimizer if you want to refine them further.

1) 60‑second TikTok script (social style)

“You are a high-energy social media strategist. Create a 60-second TikTok script about [TOPIC]. Target audience: [e.g., indie creators, marketing managers]. Structure: 3-second hook, 3 main points with examples, 10-second demo idea, 5-second CTA. Output as timestamped lines.”

2) Email draft using Gmail connector

“You are a professional outreach specialist. Draft an email to [Recipient name] asking about collaboration on [project]. Tone: friendly-professional. Include: one-sentence intro referencing a recent accomplishment of theirs, two short value points about how we can help, proposed calendar times (pull from my Google Calendar availability), and a brief signature. Output the email only.”

3) Project brief summary (for long chat cleanup)

“Summarize the following conversation into a 3-sentence brief that includes objective, deliverables, and next steps. Make it neutral and saveable to Notion.”

4) Token check

“Please give me a rough calculated estimate of tokens used so far in this conversation based on the text length of all our exchanges.”

5) Memory audit instruction

“List any saved memories you have about me and flag those that might be outdated or incorrect. Suggest deletions or edits if a memory seems irrelevant to my current projects: [list current projects].”

Common mistakes people make (and how to fix them)

  • Prompting GPT‑5 like GPT‑4: Fix: Use the Prompt Optimizer and include structure (audience, format, constraints).
  • Single chat for everything: Fix: Create a new chat per project and summarize old chats.
  • Not auditing memory: Fix: Review Manage Memories weekly and delete wrong items.
  • Not using connectors: Fix: Connect Gmail, Calendar, Drive, etc. and set to “auto” for seamless context.
  • Ignoring token usage: Fix: Use the token estimate prompt and constrain output lengths.

FAQ 🤔

Q: Do I need to pay for GPT‑5 to use these settings?

A: Access to GPT‑5 may require a paid plan depending on OpenAI’s pricing and rollout. Some features like the Prompt Optimizer are free to use on OpenAI’s platform, but model availability is governed by OpenAI’s subscription tiers.

Q: How often should I audit my memories?

A: Weekly to bi-weekly is a good cadence. If you’re running a lot of projects or have frequent changes, check once a week. For casual users, once per month may suffice.

Q: What does “set connector to auto” mean?

A: When a connector is set to “auto,” ChatGPT will search that data source automatically when appropriate. If it’s not set to auto you must explicitly request the connector in your prompt.

Q: Is the token estimate exact?

A: No — the token estimate prompt provides a rough calculated estimate based on text length. It helps you know if you’re approaching the context limit but isn’t a precise accounting tool.

Q: Will connecting apps leak my data?

A: Connectors require permissions. Only connect apps you trust, review permission scopes (read vs read/write), and periodically remove unused connections. Use organization-level security features if connecting business accounts.

Q: Can GPT‑5 access private calendars, emails, or docs without permission?

A: No. Connectors must be authorized explicitly by you. Don’t share credentials; use the connector authorization flow provided by the platform.

Resources and further reading

  • OpenAI Prompt Optimizer: https://platform.openai.com/chat/edit?models=gpt-5&optimize=true
  • AI Automation School: https://www.skool.com/ai-automation-school/about
  • If you want to follow industry analysis, look for reports by major banks and consultancies—Goldman Sachs and others publish periodic AI impact reports.

Conclusion & Call to Action 🚀

If you want to get 10x better responses from ChatGPT right now, don’t just hope for better outputs—change three things: your prompting style, your settings, and your tooling. Enable follow-up suggestions, manage context windows and memories, use the Prompt Optimizer, and connect the apps you rely on. These five changes will make ChatGPT more reliable, faster, and far more useful.
