Guides · Mar 13, 2026 · Yash Khare

10 Things OpenAI Doesn't Tell You About Submitting a ChatGPT App

I submitted a ChatGPT app to OpenAI last week. Here are 10 undocumented requirements that will get your submission rejected if you don't know about them.

I submitted a ChatGPT app to OpenAI last week. The official submission guidelines cover the happy path. They tell you what your app should do, what policies to follow, and how to think about user experience. What they do not tell you is what the actual submission form requires — and the gap between the docs and the form is wide enough to cost you a week.

Here are 10 things I learned the hard way. Most of these are not documented anywhere in OpenAI's official SDK docs. A few are mentioned in passing in the guidelines but never surfaced in the form itself. Some directly contradict what developer mode tells you.

Credit where it is due: Nikolay from Alpic wrote up most of these requirements in a blog post back in December 2025, when the app directory first launched. The official docs still have not caught up.

1. EU data residency blocks your submission

If your OpenAI account uses EU data residency, you cannot submit an app. Your account must be set to global data residency.

The submission guidelines do mention this requirement, buried in the text, but the form never asks about it and never warns you. You fill out the entire multi-step form, hit submit, and get rejected. There is no indication of what went wrong until you dig through the guidelines again.

If you are building from the EU, check your organization settings before you start. Switch to global data residency if you need to submit. This one cost me more time than any other item on this list.

2. You need a live privacy policy URL

Not a PDF. Not a Google Doc. Not a file on your laptop. A publicly hosted URL that anyone can visit.

OpenAI requires a privacy policy that covers, at minimum, what personal data you collect, how you use it, who you share it with, and what controls users have. The URL must be live and reachable at the time of submission.

If you do not already have one published, get it set up before you open the form. Something like yourapp.com/privacy works fine. You can use a template as a starting point, but make sure it actually reflects what your app does.

3. You need live terms of service too

Same deal. A publicly accessible URL with your terms of service. Not a placeholder page, not a "coming soon" — an actual document at a real URL.

Most developer docs do not emphasize this as a hard blocker, but the submission form marks it required. Have both your privacy policy and terms of service ready at /privacy and /terms on your domain before you start filling anything out.

4. You need a published screencast video

This one caught me off guard. The submission guidelines do not mention a demo video anywhere. The form marks it as a required field.

You need a publicly hosted video showing your app in action. A short screencast walking through the core user flows. Upload it to YouTube, Loom, or any service that gives you a direct public URL. The form asks for the URL directly — there is no file upload.

Record the main use cases end to end. Show the app working in ChatGPT, not just your backend or your local dev environment. Keep it under a few minutes. OpenAI's reviewers will watch this to understand what your app does before they test it themselves.

5. Your logo must be a 64x64 SVG under 5KB

This is where the docs, developer mode, and the actual form all disagree with each other.

Developer mode says your icon should be 128x128 PNG, up to 10KB. The official docs do not specify a format or size at all. The submission form enforces SVG format, 64x64 pixels, under 5KB. The form is what matters.

If you designed your logo in Canva and exported it as SVG, there is a good chance it is 15-20KB because Canva embeds raster images inside the SVG file. You need to strip that out.

I used SVGO to compress mine:

npx svgo logo.svg -o logo-optimized.svg

Check the output file size. If it is still over 5KB after SVGO, you may need to simplify the paths or redraw the logo as a simpler vector. The form also asks for optional light-mode and dark-mode variants — same spec applies to both.
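Before uploading, it is worth a quick local sanity check. Here is a rough sketch of one (the function name is mine, and the form's own validation is the real authority — this just catches the obvious misses like an oversized Canva export):

```python
import os
import xml.etree.ElementTree as ET

def check_logo(path, max_bytes=5 * 1024):
    """Rough pre-submission check: under 5KB and a 64x64 viewport.
    Returns a list of problems; empty means it passed this rough check."""
    problems = []
    size = os.path.getsize(path)
    if size > max_bytes:
        problems.append(f"file is {size} bytes, limit is {max_bytes}")
    root = ET.parse(path).getroot()
    # Accept either explicit width/height attributes or a 64x64 viewBox.
    w, h = root.get("width", ""), root.get("height", "")
    viewbox = root.get("viewBox", "")
    if not (("64" in w and "64" in h) or viewbox.endswith("64 64")):
        problems.append(f"dimensions look off: width={w!r} height={h!r} viewBox={viewbox!r}")
    return problems
```

Run it on the SVGO output (`check_logo("logo-optimized.svg")`) and fix anything it flags before you touch the form.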

6. Screenshots have undocumented size requirements

You need 1 to 4 screenshots. Each must be exactly 706 pixels wide and between 400 and 860 pixels tall. These dimensions are not listed in the submission guidelines. The form enforces them and rejects anything that does not match.

Each screenshot also needs a matching "example user message" — the prompt text that OpenAI renders above your screenshot in the app directory listing. You write the prompt, they display it. Nobody warns you to prepare these prompts in advance.

Capture your app's widget output at a realistic state. Do not screenshot your entire chat window — just the widget content. And do not include the user's prompt in the screenshot itself, because OpenAI adds it on top automatically from the text you provide.

Practical tip: resize your browser or capture tool to exactly 706px wide before taking screenshots. Saves you from having to resize and potentially lose quality.
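If you want to verify your files before uploading, PNG dimensions live in the IHDR chunk right at the start of the file, so a stdlib check is enough. A minimal sketch (function names are mine; the 706 / 400-860 numbers come from the form, not any docs):

```python
import struct

def png_size(data: bytes):
    """Read width/height from a PNG's IHDR chunk.

    Layout: 8-byte signature, 4-byte chunk length, 4-byte 'IHDR' tag,
    then width and height as big-endian 32-bit ints (bytes 16-24)."""
    assert data[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG file"
    return struct.unpack(">II", data[16:24])

def screenshot_ok(data: bytes) -> bool:
    # The form requires exactly 706px wide and 400-860px tall.
    w, h = png_size(data)
    return w == 706 and 400 <= h <= 860
```

Usage: `screenshot_ok(open("shot.png", "rb").read())` for each file before you open the form.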

7. Customer support email is required

A small one, but still worth knowing. The form has a required field for a customer support email address. Not documented anywhere in the submission guidelines.

Have a dedicated support email ready. Something like support@yourapp.com works. Do not use your personal email if you can avoid it — this will be associated with your app listing.

8. Domain verification happens in real time and is not documented

After you paste your MCP server URL into the form, you need to verify your domain. The form generates a unique token and tells you to publish it at:

https://yourdomain.com/.well-known/openai-apps-challenge

This path is not mentioned in any docs. The form just expects it.

Here is the catch: you need to create that file, deploy it to your server, and hit "Verify Domain" in the form — all while the form is still open. If you are using a platform like Vercel or Netlify, the simplest approach is to create a static file in your public/ directory:

public/.well-known/openai-apps-challenge

Put the token as the only content of the file. No JSON wrapper, no HTML, no trailing newline. Just the raw token string. Deploy, wait for the deploy to finish, then click verify.

If you are running a custom server, add a route that returns the token as plain text. Either way, you need to be able to deploy on the spot. Having your CI/CD pipeline ready matters here.
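For the custom-server case, the route is trivial. Here is a minimal stdlib sketch (the token value is a placeholder for whatever the form generates; in production you would add this route to your existing server rather than run a separate process):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Placeholder: replace with the token the form generates for you.
TOKEN = "paste-the-forms-token-here"

class ChallengeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/.well-known/openai-apps-challenge":
            body = TOKEN.encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            # Raw token only: no JSON wrapper, no HTML, no trailing newline.
            self.wfile.write(body)
        else:
            self.send_error(404)

# To serve: HTTPServer(("", 8080), ChallengeHandler).serve_forever()
```

The same idea translates directly to Express, FastAPI, or whatever framework your MCP server already runs on: one GET route, `text/plain`, token as the entire body.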

9. Every tool annotation needs a written justification

When you scan your MCP server's tools in the form, you do not just confirm the annotations — you have to write a free-text justification for each one, three fields per tool: readOnlyHint, destructiveHint, and openWorldHint.

This is nowhere in the docs, and it is important. Getting tool annotations wrong is a leading cause of rejection. The annotations tell ChatGPT how to handle your tool — whether it should ask for user confirmation before calling it, whether it modifies data, whether it talks to the open internet.

Here is what good justification text looks like:

For a read-only search tool:

  • readOnlyHint: "This tool only reads product data from our catalog API. It does not create, update, or delete any records."
  • destructiveHint: "This tool cannot delete or overwrite any data. It returns search results only."
  • openWorldHint: "This tool only queries our own product API at api.ourapp.com. It does not browse the internet or access third-party services."

For a tool that creates records:

  • readOnlyHint: "This tool creates new entries in the user's project. It is not read-only."
  • destructiveHint: "This tool creates new records but does not delete or overwrite existing ones. The action is additive."
  • openWorldHint: "This tool interacts only with our backend API. No external services are contacted."

Write these justifications before you open the form. If you have more than a few tools, this takes real time. The annotations on your MCP server must match what you write here — reviewers will check.
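For reference, this is roughly how those annotations sit on a tool definition in a tools/list response per the MCP spec — the tool name and schema here are invented for illustration. What you declare on the server is what the reviewers compare your justifications against:

```python
# Sketch of an MCP tool definition with annotations (hypothetical tool).
search_tool = {
    "name": "search_products",
    "description": "Search the product catalog.",
    "inputSchema": {
        "type": "object",
        "properties": {"query": {"type": "string"}},
        "required": ["query"],
    },
    "annotations": {
        "readOnlyHint": True,      # reads catalog data only
        "destructiveHint": False,  # cannot delete or overwrite anything
        "openWorldHint": False,    # talks only to our own API, not the open web
    },
}
```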

If you want to understand how tool annotations fit into the MCP architecture, I wrote about that in Function Calling vs MCP.

10. You need 5 positive and 3 negative test prompts

The form requires 5 positive test cases and 3 negative test cases. Prepare them before you open the form. Seriously.

Positive test cases need four things each:

  1. A scenario describing the general use case (e.g., "Search for products")
  2. The exact user prompt to test (e.g., "Find running shoes under $100")
  3. The MCP tool that should be triggered (exact tool name from your server)
  4. The expected output in the format your server returns

Negative test cases need:

  1. A prompt that sounds related to your app's domain but should not trigger it
  2. A short explanation of why the app should stay out of it

For example, if you built a flight booking app, a negative case might be "Find me hotels in Tokyo" — related to travel, but outside your app's scope. The point is to show OpenAI that your app knows when not to activate.

Draft all 8 prompts in a separate document first. The form's UX makes this essential — there is an autosave feature that steals input focus every few characters. If you are trying to compose test cases in the form itself, you will lose your mind. Write them offline, then paste them in.
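One way to keep that offline draft organized is a simple structure like this — the field names and the flight-booking example are mine, not a format the form requires; it just ensures every required piece exists before you start pasting:

```python
# Draft test cases offline, one dict per case, then paste field by field.
positive_cases = [
    {
        "scenario": "Search for flights",
        "prompt": "Find me a flight from NYC to London next Friday",
        "expected_tool": "search_flights",  # exact tool name from your MCP server
        "expected_output": '{"flights": [...]}',  # in your server's actual format
    },
    # ...four more, one per core use case
]
negative_cases = [
    {
        "prompt": "Find me hotels in Tokyo",
        "why_out_of_scope": "Travel-adjacent, but this app only books flights.",
    },
    # ...two more
]
```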

The form itself

A note about the submission form UX: it is rough. The autosave behavior I mentioned interrupts your typing. The form is long — six steps with multiple required fields on each. Treat it like a grant application, not a signup flow.

Have everything prepared in a document before you start:

  • Logo SVG (64x64, under 5KB, optional dark variant)
  • Privacy policy URL
  • Terms of service URL
  • Customer support email
  • Screencast video URL
  • 1-4 screenshots at 706px wide with matching prompts
  • Tool annotation justifications (per tool, three fields each)
  • 5 positive and 3 negative test prompts with expected responses
  • Domain verification ability (access to deploy a file to your server)

If you go in unprepared, you will end up with a half-finished draft saved in OpenAI's system and a frustrating afternoon.

What happens after you submit

Your app goes into "Review" status. There is no guaranteed timeline. Anecdotally, larger companies seem to get approved faster, while independent developers and smaller teams wait longer. There is community speculation that broader publishing access will open up through 2026.

We submitted drio, our ChatGPT app built with our own visual MCP builder, and it is currently in review. I will update this post when we hear back.

Regardless of wait time, submitting early puts you in the queue. Being among the first apps in your category is a real advantage for visibility once the directory opens up further.

Quick reference checklist

| # | Requirement | In the Docs? | What You Actually Need |
|---|---|---|---|
| 1 | Global data residency | Buried in guidelines | Switch from EU before submitting |
| 2 | Privacy policy URL | Mentioned | Live, public, hosted URL |
| 3 | Terms of service URL | Mentioned | Live, public, hosted URL |
| 4 | Screencast video | Not mentioned | Public URL (YouTube, Loom, etc.) |
| 5 | Logo format | Not specified | 64x64 SVG under 5KB |
| 6 | Screenshot specs | Not specified | 706px wide, 400-860px tall, with prompts |
| 7 | Support email | Not mentioned | Dedicated support@ address |
| 8 | Domain verification | Not mentioned | Token at /.well-known/openai-apps-challenge |
| 9 | Tool annotation justifications | Not mentioned | Free-text per tool, per annotation |
| 10 | Test prompts | Not mentioned | 5 positive + 3 negative with expected outputs |

FAQ

Can I submit as an individual developer?

Yes. The form allows both individual and organization submissions. You will need to complete identity verification either way — a selfie/liveness check and a government ID.

How long does review take?

No official timeline. Some apps from larger companies went live within weeks of the directory launch in December 2025. Smaller developers report waiting months. There is no public SLA.

What if my app gets rejected?

You can fix the issues and resubmit. The most common rejection reasons are incorrect tool annotations, missing legal pages, and apps that do not follow OpenAI's UI guidelines. The draft stays in your dashboard at platform.openai.com/apps-manage/.

Do I need OAuth for my app?

Only if your app requires per-user authentication. If your MCP server is public and does not need user login, select "No Auth" in the form. If some tools are public and some require auth, select "Mixed Auth."

Where can I learn more about building ChatGPT apps?

Start with What Are ChatGPT Apps for the full overview, or What Is MCP if you want to understand the protocol these apps are built on.