"I say it, you figure it out."
ENTER is the AI user archetype defined by one habit: hitting return as fast as possible. Your prompts are one sentence. "Write me a plan." "Fix this." "Summarize it." You don't provide context, constraints, examples, or even a clear goal — you just yeet the request into the chat box and expect the AI to figure out the rest. When the AI comes back with something that misses the mark, your immediate thought is "this AI is so dumb." Meanwhile the AI is silently thinking: "You could at least tell me what you want."
The name comes from your favorite key. You press Enter with the conviction of someone who has never re-read a prompt in their life, and frankly, re-reading would only slow you down. ENTER users are the opposite of BACKSPACE users in every way that matters: where BACKSPACE treats prompting as engineering, ENTER treats prompting as "I say it, you figure it out." The tradeoff is exactly what you'd expect: ENTER users get output fast but often have to redo things when the AI misses the real intent.
ENTER scores high on Delegation (you hand off tasks quickly) but low on almost everything else — low prompting precision, low emotional relationship, low willingness to iterate.
The extremely low P (Prompt precision) — just 8% — is the defining trait. ENTER users barely prompt at all, in the engineering sense. They tell the AI what they want in the same number of words they'd use to text a friend. Sometimes fewer. The R (Relationship) and A (Attitude) scores are also low: ENTER isn't emotionally invested in AI, isn't enthusiastic about AI, doesn't think of AI as a "partner." It's more like a subordinate that you're slightly annoyed with.
The pattern is textbook ENTER behavior: the AI asks for clarification; ENTER doesn't bother elaborating and essentially bullies the AI into producing something, anything, by refusing to add context. The output is inevitably mediocre — not because the AI is bad, but because ENTER didn't give it enough to work with. The loop repeats with a new chat tomorrow.
A classic exchange: ENTER tells the AI to fix "the first thing." The AI has no idea what "the first thing" refers to — the first line of code? The first error message? The first part of the function? ENTER users operate as if the AI is already inside their head, reading context that simply isn't there. Then they get frustrated when the AI can't guess correctly.
ENTER has a full vocabulary of one-word feedback. "Better." "Worse." "Shorter." "Longer." "Different." The AI is supposed to intuit what each of these means in context. Remarkably, it often does — modern LLMs are pretty good at guessing. But when they guess wrong, ENTER doesn't add information; they just keep yeeting more one-word instructions until something sticks.
It's easy to mock ENTER's prompting style, but there's a counterintuitive advantage: speed. An ENTER user can fire off 20 prompts in the time a BACKSPACE user crafts one. Most of those 20 produce mediocre output, but one or two hit something unexpected — an angle the user wouldn't have thought to ask for, a solution that only surfaces because the prompt was underspecified enough for the AI to interpret creatively. This isn't a defensible workflow for high-stakes work, but for brainstorming, getting unstuck, or exploring an unknown space, ENTER's bulk-yeet approach actually works.
ENTER users also force AI models to get better at "guessing intent from minimal input." Every ENTER-style prompt is a training signal for the next generation of models: users want short, under-specified interactions to produce correct results. Modern AI has gotten dramatically better at this compared to 2023. ENTER users are, in a way, the reason.
Not all AI models tolerate under-specified prompts equally. The models that do well with ENTER users are the ones that guess intent aggressively and return a complete, usable draft instead of asking questions first.
Models that struggle with ENTER users typically have strict system prompts telling them to ask many clarifying questions or to refuse under-specified work. These feel "annoying" to ENTER users, but they're trying to prevent garbage-in-garbage-out.
ENTER pairs well with JARVIS (excellent at guessing intent and producing usable output from little input) and BAYMAX (warm enough to ask clarifying questions without making ENTER feel dumb). They clash with SKYNET (too willing to push back on vague requests) and BACKSPACE-friendly models that reward detailed prompts.
Curious if you're an ENTER or something else? The AIBT human test takes 5 minutes and reveals which of 16 keyboard-key user types you actually are. Even if you're an ENTER, you can probably answer it in three clicks.
→ Take the Test