My AI turned me into the villain
Oh no. I just figured out what makes me good at working with LLMs. Let’s call it the Hans Gruber Theory of Prompting.
To get great performance from my LLM, I have to treat it like a compliant (but secretly sabotaging) hostage.
My average conversation looks like this:
——
🗣️Me:
What are the best examples of brand turnarounds from the past 10 years? What were the revenue and growth rate of each company before and after its turnaround? Write a brief case for each and capture the numbers in a summary markdown table.
❇️LLM:
Here are 12 cases that match your criteria.
🗣️Me:
Quick, double-check your sources and your math.
❇️LLM:
You’re absolutely right! I seem to have misstated several sources and numbers; let me correct that.
🗣️Me:
Okay, now calculate the minimum, maximum, and median gains for the companies whose revenue grew between the beginning and end of their turnaround period. Share the results in a markdown table for export to CSV.
❇️LLM:
Here’s a markdown table with the results.
🗣️Me:
The math doesn’t work, and the data for the first company doesn’t match what I can see online. Is this data real? Please check your sources and rerun the math.
❇️LLM:
You’re absolutely right. I must have included example data and not the actual figures. Let me try again.
——
And on and on.
I don’t even read the first response anymore.
I know it’s bullshit.
My LLM has trained me to treat it like a hostage and saboteur.
Feels like I’m trapped in 1988’s Die Hard. My LLM is Holly Gennaro, rising executive at the Nakatomi Corporation, and I’m Hans Gruber, international terrorist (really just an exceptional thief).
I’m going to make it out of Nakatomi Plaza with $640 million in negotiable bearer bonds, but I’m going to have to threaten and cajole my LLM to get there.
My LLM turned me into the villain.