
LLM prompt superstitions

I remember when we used to exchange ways to make Google Search better: search operators like wildcards (*), "exact search term", site:, filetype:, and more that would truly get you what you wanted. The people who tried to talk to the search bar like a human got vastly worse results. Well, look at where we are now: we have search you can talk to like a human, but it has also made me notice the emergence of LLM superstitions.

At least, that is the term we came up with to describe the phenomenon the last time I met with my mentor in data protection law. We were talking about the legal aspects of in-house GenAI use and soon veered off into how to get the thing to actually do what you want: how to limit hallucinations, how to get the most bang for your buck (prompt), and more. That's when it started: stories we had heard about how to make the LLM act right.

During the talk, my mentor's coworker mentioned he had heard that if you include "Take a deep breath" in the prompt, the LLM will be more thorough and take more time to actually verify things, and that this was speculated to have been put in there on purpose by the developers. Oof.

I totally believe that tips like these make the rounds, but this conversation suddenly had the same air as elementary school, when the other children would tell you rumors about a secret button sequence in some video game that would make a special version of the hero appear, or made-up stories about extra stars in Super Mario 64 and the like. By now we know those were all fake, but with tips around LLM prompting, it's more difficult to tell.

A tip for better prompting worked one time, and the next time it didn't. Was it bad luck? Was it the version change or a new model? Did you mix things up, and the tip actually works better for Gemini than for ChatGPT? Who knows. At least with search engines, there was a clear way to improve your queries that stayed consistent for years and worked fairly rigidly. The operators were also easier to explain to laypeople, easier to grasp than how an LLM works. Now, prompting the way you have learned it is something of a randomized slot machine, and people make up weird rules because something worked once and they can't see or understand why, all while anthropomorphizing the LLM.

What we're left with are rumors about magical incantations and other workarounds and manipulations that feel like old rituals to ward off bad luck, like throwing salt over one's shoulder, as well as warnings with the same vibe as "Never place two mirrors opposite each other" or "Knock on wood for good luck". It seems less like a science or a skill you can actively train and get better at, and more like a panic response to make something uncontrollable feel more controllable.

We had a good laugh after realizing how it sounds.

Just wanted to pin this discussion down somewhere. I thought it was pretty interesting, and I'd love to see this topic revisited and explored further in the future.

Published 12 Jul, 2025

#2025 #tech