The quality of an AI coaching session is largely decided by what you don't ask. GPT-4o, o1, and Claude Sonnet/Opus are genuinely strong at LoL match analysis, but they fail confidently in a small number of recurring patterns. Here are the six I've hit repeatedly over a year of using AI coaching, plus the workaround that defuses each.
Knowledge cutoffs on current frontier LLMs are roughly mid-to-late 2024. League patches every two weeks — current meta, the latest tier list, and recent champion balance changes are not reliably in the model. Don't trust answers to "is this ADC strong this patch" or "what comp is meta right now."
Workaround: leave patch-dependent questions to op.gg / u.gg / ProBuilds. Ask the AI "what happened in this match." Anything bounded by the JSON is unaffected by stale meta knowledge.
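If you drive these sessions from a script, the JSON boundary can live in the system prompt itself. A minimal sketch, assuming the OpenAI Python SDK and a `match_json` dict you've already fetched; the model name is a placeholder, not a recommendation:

```python
import json
from openai import OpenAI

client = OpenAI()

def analyze_match(match_json: dict, question: str) -> str:
    """Ask about a match while forbidding meta/patch claims from prior knowledge."""
    system = (
        "You are a LoL match analyst. Answer ONLY from the match JSON provided. "
        "Do not make claims about the current patch, meta, or tier lists; "
        "if a question requires patch knowledge, say so instead of guessing."
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": f"Match data:\n{json.dumps(match_json)}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```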
Training data is heavy on pro casts and coaching content, so without a rank context, the model evaluates against LCK / LEC tempo. Telling a Gold jungler to "back-time the enemy's blue from minute six and counter-gank the resulting topside aggression" is technically correct and entirely unhelpful.
Workaround: state your role and rank.
I'm a Gold III jungle main. Skip pro-level macro; only suggest things realistically executable at this rank.
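If you script your sessions, the rank line is trivial to template so you never forget it. A sketch; the role/tier/division arguments are whatever applies to you:

```python
def rank_context(role: str, tier: str, division: str) -> str:
    """Prepend this to any coaching prompt so advice stays rank-appropriate."""
    return (
        f"I'm a {tier} {division} {role} main. Skip pro-level macro; "
        f"only suggest things realistically executable at this rank."
    )

prompt = rank_context("jungle", "Gold", "III") + "\n\nReview my pathing in this match."
```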
AIs reflexively criticize items that diverge from the u.gg/op.gg main path. But the central skill in LoL item building is reading the enemy comp. A Maw of Malmortius into four AP threats is correct even if it's not the standard rush.
Workaround: hand the AI the enemy comp and force the evaluation into that frame.
Don't evaluate my build against the standard path. Evaluate it against the enemy comp ({five enemy champions}) and judge its damage type and defensive coverage on that basis.
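You don't have to type the enemy comp by hand. If your match JSON follows Riot's match-v5 shape (an assumption; adjust the keys to your export), the five names can be pulled out and formatted straight into the prompt:

```python
def build_eval_prompt(match: dict, my_puuid: str, my_build: list[str]) -> str:
    """Frame the build question around the enemy comp, not the standard path.

    Assumes Riot match-v5 fields: info.participants[].puuid/teamId/championName.
    """
    participants = match["info"]["participants"]
    me = next(p for p in participants if p["puuid"] == my_puuid)
    enemies = [p["championName"] for p in participants if p["teamId"] != me["teamId"]]
    return (
        f"Don't evaluate my build against the standard path. Evaluate it against "
        f"the enemy comp ({', '.join(enemies)}) and judge its damage type and "
        f"defensive coverage on that basis. My build: {', '.join(my_build)}."
    )
```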
KDA is the most visible number, so the model's default reasoning anchors there. The default Coach persona will infer "clean KDA = good fights" even when damage share and fight participation say otherwise.
Workaround: explicitly remove KDA from the evaluation.
Ignore KDA. Evaluate teamfight and macro performance from damage share, vision score, CS delta vs lane opponent, and objective participation only.
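You can go one step further than the prompt and delete the anchor from the data itself. A sketch that strips the KDA-adjacent fields before the JSON ever reaches the model; field names assume Riot's match-v5 schema, so trim or extend the set to match your own export:

```python
import copy

# Participant fields that let the model anchor on KDA (match-v5 names, assumed).
KDA_FIELDS = {
    "kills", "deaths", "assists", "doubleKills", "tripleKills",
    "quadraKills", "pentaKills", "largestKillingSpree",
}

def strip_kda(match: dict) -> dict:
    """Return a copy of the match JSON with KDA-adjacent fields removed,
    so the model can't anchor on them even by accident."""
    cleaned = copy.deepcopy(match)
    for participant in cleaned["info"]["participants"]:
        for field in KDA_FIELDS:
            participant.pop(field, None)
    return cleaned
```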
The Coach persona is tuned to encourage, which is useful for tilt control but counterproductive for improvement. After a game where you actually played badly, the default Coach will say "the team had bad macro overall, you did fine."
Workaround: switch to Roast, or add this line to Coach.
Skip encouragement. State concrete mistakes and concrete fixes only. I want facts, not feelings.
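If you keep persona presets in a script, the override is one line. The persona names and texts here are just labels for your own system prompts, not a product feature:

```python
PERSONAS = {
    "coach": "You are a supportive LoL coach. Encourage first, then advise.",
    "roast": "You are a blunt LoL analyst. No encouragement.",
}

# Coach with the softness removed: keep the persona, override the tone.
strict_coach = PERSONAS["coach"] + (
    " Skip encouragement. State concrete mistakes and concrete fixes only."
)
```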
AIs know the popular matchups well. Off-meta matchups (Illaoi vs Naafiri, Smolder vs Aphelios at unusual ranks) get analyzed at lower resolution: "weak early, strong mid" generalities, not specific spell interactions.
Workaround: lean on specialized tools (matchup.lol, Mobalytics) for matchup detail. Keep the AI strictly inside the JSON.
AI coaching is powerful but not infallible. Separate "conclusion derived from the JSON I gave it" from "guess from prior knowledge" and you can extract only the high-confidence half. Cross-check anything suspicious against op.gg or Mobalytics. Run that loop and the AI stops being "your personal coach" and earns the more accurate label: a fast, knowledgeable research assistant that's wrong often enough that you should always be reading critically.