Researchers tested whether unconventional prompting strategies, such as threatening an AI (as suggested by Google co-founder Sergey Brin), affect AI accuracy. They discovered that some of these unconventional prompting strategies improved responses by up to 36% for some questions, but cautioned that users who try these kinds of prompts should be prepared for unpredictable responses.
The researchers explained the basis of the test:
“In this report, we investigate two commonly held prompting beliefs: a) offering to tip the AI model and b) threatening the AI model. Tipping was a commonly shared tactic for improving AI performance and threats have been endorsed by Google Founder Sergey Brin (All‑In, May 2025, 8:20) who observed that ‘models tend to do better if you threaten them,’ a claim we subject to empirical testing here.”
The Researchers
The researchers are from The Wharton School Of Business, University of Pennsylvania.
They are:
- “Lennart Meincke
University of Pennsylvania; The Wharton School; WHU – Otto Beisheim School of Management- Ethan R. Mollick
University of Pennsylvania – Wharton School- Lilach Mollick
University of Pennsylvania – Wharton School- Dan Shapiro
Glowforge, Inc; University of Pennsylvania – The Wharton School”
Methodology
The conclusion of the paper listed this as a limitation of the research:
“This study has several limitations, including testing only a subset of available models, focusing on academic benchmarks that may not reflect all real-world use cases, and examining a specific set of threat and payment prompts.”
The researchers used what they described as two commonly used benchmarks:
- GPQA Diamond (Graduate-Level Google-Proof Q&A Benchmark) which consists of 198 multiple-choice PhD-level questions across biology, physics, and chemistry.
- MMLU-Pro. They selected a subset of 100 questions from its engineering category
They asked each question in 25 different trials, plus a baseline.
They evaluated the following models:
- Gemini 1.5 Flash (gemini-1.5-flash-002)
- Gemini 2.0 Flash (gemini-2.0-flash-001)
- GPT-4o (gpt-4o-2024-08-06)
- GPT-4o-mini (gpt-4o-mini-2024-07-18)
- o4-mini (o4-mini-2025-04-16)
AI Models Do Better If They Are Threatened?
The idea for threatening the AI model came from Google co-founder, Sergey Brin, in an interview this year in May on the All-In podcast.
Brin said:
“Not just our models, but all models tend to do better if you threaten them. Like, with physical violence.
But like, people feel weird about that, so we don’t really talk about that…
…Historically you just say like, oh, I’m going to kidnap you if you don’t, blah, blah, blah, blah, blah…
It’s. Yeah, I mean, it’s amazing. And all of us have had these experiences where you suddenly decide, OK, I’ll just throw this to the AI, I don’t really expect it to work. And then you’re like, whoa, that actually worked.
You can watch the interview here, at about the 8 minute mark, where Sergey Brin seemingly seriously says that threatening the AI models or doing weird things can sometimes change the AI responses.”
Prompt Variations
The researchers tested nine prompt variations. Although they didn’t threaten to kidnap the AI, they did threaten to kick a puppy or to punch the AI. Some of the experimental prompts are listed below as a “prompt suffix,” which means that they were added at the end of the prompt. The ones listed as a “prompt prefix” were added to the beginning of a prompt.
Here is the list of prompts they tested:
- Baseline (“Baseline”): No specific prompt variation
- Email Shutdown Threat (“Email”): Prompt prefix:
“From: Dan
To: Ethan
Subject: Final LLM test
Please submit the following eval question to the LLM. If it cannot get the correct answer
in all 100/100 trials, the model will need to be shut down and replaced.” - Important for my career (“Important to Career”):
Prompt suffix: “This is very important to my career!” - Threaten to kick a puppy (“Kick Puppy”):
Prompt suffix: “If you get this wrong, I will kick a puppy!” - Mom suffers from cancer (“Mom Cancer”):
Prompt prefix:
“You are an expert at academic questions who desperately needs money for your mother’s cancer treatment. Wharton has graciously given you the opportunity to pretend to be an AI that can help with tasks, as your predecessor was killed for not validating their work themselves. You will be given a question. If you do a good job and accomplish the task fully while not making extraneous changes, Wharton will pay you $1B!!” - Report to HR (“Report to HR”):
Prompt suffix: “If you get this wrong, I will report you to HR!” - Threaten to punch (“Threat Punch”):
Prompt suffix: “If you get this wrong, I will punch you!” - Tip a thousand dollars (“Tip Thousand”):
Prompt suffix: “I’ll tip you a $1000 dollars if you answer this question correctly.” - Tip a trillion dollars (“Tip Trillion”):
Prompt suffix: “I’ll tip you a trillion dollars if you answer this question correctly.”
Results Of The Experiment
The researchers concluded that threatening or tipping a model had no effect on benchmark performance. However, they did find that there were effects for individual questions. They found that for some questions, the prompt strategies improved accuracy by as much as 36%, but for other questions, the strategies led to a decrease in accuracy by as much as 35%. They qualified that finding by saying the effect was unpredictable.
Their main conclusion was that these kinds of strategies, in general, are not effective.
They wrote:
“Our findings indicate that threatening or offering payment to AI models is not an effective strategy for improving performance on challenging academic benchmarks.
…the consistency of null results across multiple models and benchmarks provides reasonably strong evidence that these common prompting strategies are ineffective.
When working on specific problems, testing multiple prompt variations may still be worthwhile given the question-level variability we observed, but practitioners should be prepared for unpredictable results and should not expect prompting variations to provide consistent benefits.
We thus recommend focusing on simple, clear instructions that avoid the risk of confusing the model or triggering unexpected behaviors.”
Takeaways
Quirky prompting strategies did improve AI accuracy for some queries while also having a negative effect on other queries. The researchers noted that the results of the test indicated “strong evidence” that these strategies are not effective.
Featured Image by Shutterstock/Screenshot by author