LoL2LLM doesn't just dump the raw Riot API response — it shapes the JSON so an LLM can recover game context without confusion. Knowing what each top-level key means lets you pick the right checkboxes for the question you're actually asking, and lets you write prompts that reference fields directly ("compare laneOpponent.economy.csPerMin to mine"), which dramatically improves answer quality.
context.meta carries the queue (ranked solo, normal, ARAM), game length, patch, and result. Patch matters most — without it the AI might evaluate your champion against an outdated meta. Game length normalizes everything else: a 6/0/2 in a 20-minute stomp is a different game from a 6/0/2 in 40 minutes of teamfighting.
context.teamMacro includes bans, dragons and barons taken, and team-wide gold and kills. This is the team-level frame your individual stats sit inside. If your team has 30 kills and you have 5, that's a participation question, not a mechanics one.
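A minimal sketch of what the context block might carry, in Python dict form. The exact field names here are assumptions for illustration, not LoL2LLM's published schema:

```python
# Hypothetical sketch of the context block -- key names are illustrative
# assumptions, not LoL2LLM's exact schema.
context = {
    "meta": {
        "queue": "ranked_solo",   # queue type: ranked solo, normal, ARAM
        "durationMin": 32,        # game length, the normalizer for raw totals
        "patch": "14.19",         # patch anchors the meta the AI evaluates against
        "result": "win",
    },
    "teamMacro": {
        "bans": ["Zed", "Blitzcrank"],
        "dragons": 3,
        "barons": 1,
        "teamGold": 62000,
        "teamKills": 30,
    },
}

# Game length turns raw totals into comparable per-minute rates:
team_kills_per_min = context["teamMacro"]["teamKills"] / context["meta"]["durationMin"]
```

The per-minute conversion is the point of keeping duration in meta: a 30-kill game means something very different at 20 minutes than at 40.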
myPerformance holds your own stats, organized into four sub-categories: combat, economy, vision, loadout.
combat: KDA, damage to champions split by physical/magic/true, damage taken, CC score, largest spree, healing and shielding done. The healing/shielding fields are what make enchanter and tank evaluation possible.
economy: minion CS, jungle CS, gold earned and spent, final level. The CS split tells the AI whether you were a lane farmer or a roamer.
vision: wards placed, wards killed, control wards bought, vision score. Critical for separating "aggressive support" from "safe enchanter" play patterns.
loadout: full items, runes, and summoner spells. Lets the AI judge your build against the matchup, not in isolation.
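Put together, the four sub-categories might look like the sketch below. Field names and values are illustrative assumptions, not the tool's exact output:

```python
# Hypothetical shape of myPerformance -- names are assumptions for illustration.
my_performance = {
    "combat": {
        "kills": 6, "deaths": 2, "assists": 8,
        "damageToChampions": {"physical": 18000, "magic": 2400, "true": 900},
        "damageTaken": 21000,
        "ccScore": 34,
        "healingDone": 0,       # nonzero for enchanters
        "shieldingDone": 0,     # nonzero for shield supports
    },
    "economy": {
        "minionCS": 212, "jungleCS": 14,
        "goldEarned": 13200, "goldSpent": 12500,
        "level": 16,
    },
    "vision": {
        "wardsPlaced": 11, "wardsKilled": 4,
        "controlWardsBought": 3, "visionScore": 28,
    },
    "loadout": {
        "items": ["Kraken Slayer", "Berserker's Greaves"],
        "runes": ["Lethal Tempo"],
        "summoners": ["Flash", "Teleport"],
    },
}

# The CS split the article mentions: jungle CS as a fraction of total CS
# separates lane farmers (near 0) from heavy roamers (well above 0).
roam_ratio = my_performance["economy"]["jungleCS"] / (
    my_performance["economy"]["minionCS"] + my_performance["economy"]["jungleCS"]
)
```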
Allies and enemies are stored separately under participants. This looks redundant, but without it an AI cannot tell whether your 6/2/8 is good or bad — it has no peer comparison. The model gets to compute team-relative damage share, see which enemy carry actually carried, and contextualize your line.
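The team-relative damage share mentioned above is a one-liner once allies are in the payload. A sketch, assuming each participant entry exposes a total-damage field (the name `totalDamageToChampions` is an assumption):

```python
# Sketch: team-relative damage share. The field name is an illustrative
# assumption, not necessarily LoL2LLM's exact key.
allies = [
    {"name": "Player2", "totalDamageToChampions": 24000},
    {"name": "Player3", "totalDamageToChampions": 15000},
    {"name": "Player4", "totalDamageToChampions": 9000},
    {"name": "Player5", "totalDamageToChampions": 6000},
]
my_damage = 21000

team_damage = my_damage + sum(p["totalDamageToChampions"] for p in allies)
my_share = my_damage / team_damage  # fraction of the team's champion damage
```

Without the allies block, the model has no denominator for this fraction, which is exactly why the "redundant" participant data earns its place.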
laneOpponent contains the full stat block of the enemy player who shared your lane. For lane-phase analysis it's decisive. A prompt like "compute CS, damage, and death deltas vs laneOpponent and judge who won lane" gives you a metric-grounded verdict instead of a vibes-based one.
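The delta computation that prompt asks for can be sketched in a few lines. Field names here are assumptions for illustration:

```python
# Sketch of the lane-delta computation; key names are illustrative assumptions.
me = {"cs": 226, "damageToChampions": 21300, "deaths": 2}
lane_opponent = {"cs": 198, "damageToChampions": 17800, "deaths": 4}

deltas = {
    stat: me[stat] - lane_opponent[stat]
    for stat in ("cs", "damageToChampions", "deaths")
}
# Positive cs/damage deltas plus a negative death delta point to a won lane.
```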
Supports get the enemy support as their opponent. Comparing damage share and control-ward counts there reveals whether the engage support actually engaged or whether the enchanter actually peeled.
Turning on "hide usernames" rewrites every game name to Player1 through Player10. Useful when you want to discuss the game publicly (Reddit, Discord, X) without dragging a stranger's tag along. It does not change analysis quality — the LLM doesn't care who anyone is.
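The rewrite is a simple order-preserving mapping. A sketch of the idea, not the tool's actual implementation:

```python
# Sketch of the "hide usernames" rewrite: map each distinct game name to
# Player1..Player10 in order of first appearance. An assumed implementation,
# not LoL2LLM's code.
def anonymize(names):
    mapping = {}
    for name in names:
        if name not in mapping:
            mapping[name] = f"Player{len(mapping) + 1}"
    return [mapping[n] for n in names]
```

Because the mapping is consistent within a match, every cross-reference in the payload (laneOpponent, participants) still lines up after anonymization.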
Sending the full payload bloats the JSON and dilutes the AI's focus. Curated sets by question:
Lane-phase analysis: context.meta + myPerformance.combat + myPerformance.economy + laneOpponent
Teamfight / macro analysis: context.teamMacro + myPerformance.combat + participants.allies + participants.enemies
Build review: context.meta + myPerformance.loadout + myPerformance.combat + laneOpponent.loadout
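The three presets above amount to picking dotted paths out of the full payload. A sketch of that selection, under the assumption that the payload is a plain nested dict (the helper and preset names are hypothetical):

```python
# Sketch: curate the payload by question type. Preset paths mirror the
# article's three recommended sets; the helper names are assumptions.
PRESETS = {
    "lane": ["context.meta", "myPerformance.combat",
             "myPerformance.economy", "laneOpponent"],
    "macro": ["context.teamMacro", "myPerformance.combat",
              "participants.allies", "participants.enemies"],
    "build": ["context.meta", "myPerformance.loadout",
              "myPerformance.combat", "laneOpponent.loadout"],
}

def pick(payload, question):
    """Keep only the dotted paths the chosen question needs."""
    out = {}
    for path in PRESETS[question]:
        node, cursor = payload, out
        keys = path.split(".")
        for k in keys[:-1]:
            node = node[k]
            cursor = cursor.setdefault(k, {})
        cursor[keys[-1]] = node[keys[-1]]
    return out
```

Each preset keeps only what the question can use, which is the whole point: less irrelevant JSON, more focus on the fields the answer should cite.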
Once you understand what each field carries, you can match the export shape to the question. Specific input, specific question, specific answer — that's how you turn an LLM into a useful coach instead of a generic feedback machine.