BLM Stat Weights, Simulations, and more

Introduction

I originally wrote my simulator back during 2.1 because the question of how useful spell speed was for BLM was one that I’d been turning over in my mind for some time. This was prior to PuroStrider publishing any of his work, and I also saw it as a way to play around with C# some more, since I’d just picked up the language as part of my work. It started out fairly basic, with a lot of hard-coded assumptions about the formulas, and didn’t have any kind of user interface. Over time, it’s evolved into a combination simulator, stat weight calculator, and gearset tracker that I’ve used to plan out my gearsets since midway through 2.2.

The stat weights I’ve gotten out of the simulators I wrote via this framework for other jobs have lined up fairly closely with those Dervy presented, within a reasonable margin of error for differences in the way we calculate them. Given that, I’m fairly confident my methods are sound. Here, then, is the run-down of how I arrive at the numbers I use. I’ll describe how I simulate optimal BLM play, how that simulation is used to calculate the stat weights for a given gear set, and how I use those calculations to arrive at the BiS sets I recommend, as well as what those sets are for the current patch.

In this document, I will try to lay out the methods I use in such a way that those less familiar with the math and programming logic used can still follow it. Apologies in advance if it ends up being fairly verbose.

Simulating the Black Mage

Rather than build a complex statistical model to account for Firestarter and Thundercloud procs and how they interact with Astral/Umbral timers (and now Enochian timers as well), I opted to write a simulation loop that could account for all the various cooldowns, buff timers, cast times, GCD times, and animation lockouts, and select the next action to perform based on the same criteria an optimally played BLM would use under ideal circumstances. Each loop of the simulator advances the “time” by 1/1000th of a second, and any time something important happens, like a cast finishing, a cooldown being used, or a server tic occurring, it processes what happened at that time. It needs to be that granular to detect differences between single points of spell speed; otherwise it would end up treating the cast speed formula as a stepwise function instead of a linear one.
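The core of that loop can be sketched in a few lines. This is an illustrative Python reduction, not the actual C# code; the event table below is a hypothetical stand-in for the full cast/cooldown/server-tic bookkeeping:

```python
# Illustrative millisecond-stepped loop, not the author's actual C# code.
def run_sim(duration_ms, events):
    """events maps a time in ms to a list of callbacks to process then."""
    log = []
    time_ms = 0
    while time_ms <= duration_ms:
        # Process anything important that happens at this instant:
        # a cast finishing, a cooldown being used, a server tic, etc.
        for callback in events.get(time_ms, []):
            log.append(callback(time_ms))
        time_ms += 1  # advance "time" by 1/1000th of a second
    return log

# Hypothetical events: a cast resolving at 2.41s and a server tic at 3s.
log = run_sim(5000, {
    2410: [lambda t: f"cast resolved at {t} ms"],
    3000: [lambda t: f"server tic at {t} ms"],
})
```

Stepping by a single millisecond is what lets one point of spell speed shift a cast finish into an earlier or later slot instead of rounding away.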

The current iteration of the simulator will run through the standard Sharpcast F1 opener:

Sharpcast

Fire

Enochian

Firestarter

Ley Lines

Raging Strikes

Fire 4 x2

Fire

Fire 4 x2

Firestarter if available

Swiftcast Flare if not

Convert

Fire 4 (x2 if Firestarter used above)

Blizzard 3

Thunder 1

Blizzard 4

From there, it follows the typical F3 -> F4 x2 -> F1 -> F4 x2 -> B3 -> T1 -> B4 rotation. T1 is swapped with B4 if there isn’t enough time to cast it first and keep Enochian, or dropped altogether if there was also a fast mana tic and it can jump right back into fire phase.

During the rotation, Sharpcast is used on Fire 1 when restarting Enochian. Ley Lines is used as soon as possible after it comes off cooldown, unless Raging Strikes will be back soon. If so, it’ll hold both until starting a fresh Enochian to do a 6 Fire 4 burst rotation:

(From Umbral Ice III)

Sharpcast

Fire 3

Enochian

Fire 1

Ley Lines

Raging Strikes

Fire 4 x3

Firestarter

Convert

Fire 4 x3

Blizzard 3

Thunder 1

Blizzard 4

All of this is fairly standard, and the complexity comes with the logic behind using extra Firestarter and Thundercloud procs. In both cases, the main criterion for whether it’s OK to use a proc is that its use will not cause either Astral Fire or Enochian to fall off. Firestarters will generally be used just before swapping into Umbral Ice if there will be time to hit Blizzard 3 and Blizzard 4 afterwards. If there isn’t time, and if the Firestarter is still available when the Blizzard 4 is done, it will Transpose, then throw the Firestarter to get back into Astral Fire. Otherwise, it will let the Firestarter drop off, since losing Enochian is a greater DPS loss than skipping a Firestarter. The one exception to this rule is when the upcoming Umbral Ice phase is one in which we’re going to pop the job ability again and start the cycle over; in that case, that restriction is ignored.

Thundercloud usage is harder to simulate. Determining the best usage pattern for Thunderclouds back in 2.x was a secondary purpose of the original simulator. At that point I had several different sets of conditions, and I was constantly toying with them to see which set produced the highest DPS. The current set of conditions is:

1. Use a Thundercloud in place of the Thunder 1 during Umbral Ice, if available.
2. Use a Thundercloud in Astral Fire if:
   1. Enochian’s remaining duration is long enough to do Thundercloud, Blizzard 3, Blizzard 4, or it’s time to use the job ability this refresh, and
   2. Astral Fire’s duration is long enough that either a Fire 1 or a Blizzard 3 can be done afterwards without losing it.
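As a sketch, the Astral Fire check might look like the following. The cast and GCD times here are illustrative placeholders, not values pulled from the simulator:

```python
# Sketch of the Astral Fire Thundercloud conditions; the cast/GCD times
# here are illustrative placeholders, not the simulator's real values.
def can_use_thundercloud_in_fire(enochian_left, astral_fire_left,
                                 refresh_cycle_due,
                                 gcd=2.5, b3_cast=3.5, b4_cast=2.5,
                                 f1_cast=2.5):
    # Condition 1: Enochian must survive Thundercloud -> B3 -> B4,
    # unless we're restarting the whole cycle this refresh anyway.
    enochian_ok = (enochian_left >= gcd + b3_cast + b4_cast
                   or refresh_cycle_due)
    # Condition 2: Astral Fire must last long enough to follow up with
    # either a Fire 1 or a Blizzard 3 without dropping it.
    fire_ok = astral_fire_left >= gcd + min(f1_cast, b3_cast)
    return enochian_ok and fire_ok
```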

Mathematically speaking, Thunderclouds are higher potency per second than a Fire 4 as soon as two tics of the DoT have happened (once you factor out the tics being overwritten), and are higher total potency once you’re down to around 12 seconds of DoT duration left, depending on where the server tic lands. Practically speaking, these conditions result in usage that mimics optimal play closely enough that I haven’t gone to the extent of adding on-the-fly calculations of potency per second given remaining DoT duration as part of the condition for using the Thundercloud.

That’s essentially the logic behind the simulation. The actual conditions for what to use when, and which takes precedence over the others, are a little more complicated, but this is the distilled version of what it tries to accomplish. In order to reduce the impact of proc RNG variance on the overall end result, each simulation lasts for an effective 13 minutes of play time, and also repeats 25 times with different seeds for the random number generator(s) used, so it doesn’t just spit out the exact same numbers with every iteration. Potions and things like Battle Litany/Trick Attack/Foe Requiem are not taken into account since this is intended to describe BLM DPS in a vacuum, and their impact shouldn’t actually change the end result in a meaningful way.

A few notes to mention before I move on. The simulators all run off the same set of damage, crit, and speed formulas for ease of coding. While that does mean the numbers are likely off by a few percentage points in terms of telling you exactly what the optimal DPS for a given gearset is, the only part that matters for determining stat weights is DPS relative to other runs. So while the simulation may put out an overall DPS number that is skewed one way or the other, since every run comes out with that same skew, we don’t lose sight of the relative impact of changing one stat. For reference, here are the formulas I’m working from, updated to the latest BLM formulas provided by Dervy in response to my original posting:

Cast/GCD time: BaseTime + 0.01 - ((speed - 354) / 2641.0). Base time is the base cast time for spells, or 2.5 for GCD abilities (Scathe included).

Base damage: (WD * 0.0403757 + 1) * (INT * 0.1122727 - 1.9037031) * (DET * 0.0001331 + 1) * damageBonus. This last term covers things like Raging Strikes, Enochian, the spell speed scalar for DoT damage, and the bonus damage we get from the Magick and Mend/Maim and Mend traits.

Crit chance: (Crit - 354) / (858.0 * 5.0) + 0.05 + CritRateMod / 100.0 (Inner Release/Battle Litany go in to the CritRateMod piece)

Crit damage bonus: (Crit - 354) / (858.0 * 5.0) + 1.45

Speed scalar on DoT damage: 1 + (speed - 354) / 7722.0. This gets used as a multiplier against the base damage, which is what necessitates the “1 +”.

So, for example, to calculate the potency of a DoT tic, which uses all of these pieces, we’d use the following formula:

((WD * 0.0403757 + 1) * (INT * 0.1122727 - 1.9037031) * (DET * 0.0001331 + 1) * (Raging/Enochian modifier * 1.3 * (1 + (speed - 354) / 7722.0))) * (1 - ((Crit - 354) / (858.0 * 5.0) + 0.05 + CritRateMod / 100.0) + ((Crit - 354) / (858.0 * 5.0) + 1.45) * ((Crit - 354) / (858.0 * 5.0) + 0.05 + CritRateMod / 100.0)) * Potency / 100.0

This would give us the average crit-adjusted damage that a given tic would do. Thankfully we have computers to crunch the numbers for us!
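Transcribed into Python (the simulator itself is written in C#), the formulas and the tic calculation above look like this; the stat values in the example call at the bottom are made up:

```python
# The formulas above in Python; the simulator itself is written in C#.
def cast_time(base, speed):
    return base + 0.01 - (speed - 354) / 2641.0

def base_damage(wd, intel, det, damage_bonus=1.0):
    return ((wd * 0.0403757 + 1) * (intel * 0.1122727 - 1.9037031)
            * (det * 0.0001331 + 1) * damage_bonus)

def crit_chance(crit, crit_rate_mod=0.0):
    return (crit - 354) / (858.0 * 5.0) + 0.05 + crit_rate_mod / 100.0

def crit_damage_bonus(crit):
    return (crit - 354) / (858.0 * 5.0) + 1.45

def dot_speed_scalar(speed):
    return 1 + (speed - 354) / 7722.0

def average_tic(wd, intel, det, speed, crit, potency,
                damage_mod=1.0, crit_rate_mod=0.0):
    """Average crit-adjusted damage of one DoT tic."""
    # damage_mod covers Raging Strikes/Enochian; 1.3 is the trait bonus.
    dmg = base_damage(wd, intel, det,
                      damage_mod * 1.3 * dot_speed_scalar(speed))
    p = crit_chance(crit, crit_rate_mod)
    return dmg * ((1 - p) + crit_damage_bonus(crit) * p) * potency / 100.0

# Example call with made-up stats (WD 58, INT 700, the rest at base):
tic = average_tic(58, 700, 354, 354, 354, 40)
```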

Calculating Stat Weights

So now we can plug a given set of total stats into the simulator, run it, and get a reasonably accurate measurement of DPS out of it. How do we turn that into a set of stat weights for that gearset?

The basic way you calculate a stat weight is to get the DPS for your base set, the DPS for your base set plus one INT, and the DPS for base plus one of whatever stat you want the weight for. Once you have those numbers, plug them into this formula:

(StatDPS - BaseDPS) / (INTDPS - BaseDPS)

This tells you how worthwhile adding one point of your chosen secondary stat (or weapon damage) is versus just adding one point of INT (substitute STR or DEX for classes that use those). It works pretty well for those stats because their relative impact is fairly static. This totally breaks down for speed because of how minor differences in timing can mean large swings one way or another in terms of how many extra procs you can fit into a given Enochian refresh cycle or how many spells you get off under the effect of Raging Strikes, or even how long you have to wait for an MP tic at any point in time. A simple one-run-and-done method wasn’t going to work to get a weight for speed.
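As a quick illustration, here is that formula in code; the DPS figures are invented, not simulator output:

```python
# The basic weight formula as a function; DPS figures are invented.
def stat_weight(base_dps, int_dps, stat_dps):
    return (stat_dps - base_dps) / (int_dps - base_dps)

# If +1 INT is worth 0.50 DPS and +1 Crit is worth 0.11 DPS,
# Crit's weight comes out to 0.22:
crit_weight = stat_weight(1500.00, 1500.50, 1500.11)
```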

UPDATED: As Dervy and others pointed out, a large part of the skew I was seeing in my speed weight results was due to the large per-spell speed swings in damage you’d see when small changes in speed netted a change in attack count. The revised weight calculation method looks at DPS over a set number of rotations, rather than total damage over a set period of time. The simulation now ends once a certain number of Raging Strikes uses is reached, since that signifies the start of a new cycle, and the time elapsed is used in place of the GCD count to ensure the values move in the right direction. This produces more reliable results when run through the same normalization work as before. I’ll leave the rest of the description below alone, since it still describes the overall process well, but where I reference attack counts, I’m now using relative time elapsed instead, where we expect longer times when decreasing speed, and shorter times when increasing it.

To get around this issue, instead of trying to get the stat weight of a single point of spell speed, what I actually do is try to get the “stat weight” of getting an extra attack in during the simulation, and then link that to the amount of spell speed needed to cause that extra attack. Here’s how that works. Starting from the base amount of speed originally passed into the simulation, we get DPS values and total spell counts while incrementing the total spell speed each time until a simulation returns that it both did more damage and caused more attacks over the course of the simulation than the base one. It then repeats the process starting from that new baseline until it fulfills the same greater DPS and more attacks criteria again. This is to help keep small amounts of spell speed causing that first extra attack from skewing the final result too badly.
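A sketch of that search is below. The `fake_simulate` function is a stand-in for the real simulator whose DPS and attack count jump every 10 points of speed, purely to make the stepwise behavior visible:

```python
# Sketch of the upward/downward speed search described above.
def find_speed_step(simulate, base_speed, direction=+1, repeats=2):
    speed = base_speed
    base_dps, base_attacks = simulate(speed)
    for _ in range(repeats):
        # Step speed one point at a time until a run beats the current
        # baseline on BOTH damage and attack count (or loses on both,
        # when searching downward).
        while True:
            speed += direction
            dps, attacks = simulate(speed)
            if ((dps - base_dps) * direction > 0
                    and (attacks - base_attacks) * direction > 0):
                break
        base_dps, base_attacks = dps, attacks  # re-baseline and repeat
    return speed, base_dps, base_attacks

# Stand-in simulator: output changes every 10 points of speed.
def fake_simulate(speed):
    return (speed // 10 * 5, speed // 10)

step_up = find_speed_step(fake_simulate, 354, direction=+1)
step_down = find_speed_step(fake_simulate, 354, direction=-1)
```

Running the find-twice loop in both directions is what keeps a lucky single-point breakpoint from dominating the final weight.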

Once that’s done, it starts going the opposite direction from the original baseline, decrementing spell speed until we have both less damage and fewer attacks in the same manner that we did for the increased spell speed tests. Once we have the second lower DPS value and attack count, we also use that speed value to get a “plus one INT” DPS number. Now we have enough data to try to calculate a weight for speed. To recap, here are the values we’re using:

BaseDPS, BaseAttacks, INTDPS

SpeedUPDPS, SpeedUPAttacks, SpeedIncrease

SpeedDownDPS, SpeedDownAttacks, SpeedDownINTDPS, SpeedDecrease (this is a positive value, we’ll see why below)

For each grouping of speed data, I use this basic formula to try to get at the relative value of changing the number of attacks caused during the run vs. the amount of speed it took to cause that:

(SpeedDPS - BaseDPS) / (SpeedAttacks - BaseAttacks) / (INTDPS - BaseDPS) / SpeedChange

Using this formula, I generate three different weights: from base to higher speed, from lower speed to base, and from lower speed to higher speed:

SpeedUPWeight = (SpeedUPDPS - BaseDPS) / (SpeedUPAttacks - BaseAttacks) / (INTDPS - BaseDPS) / SpeedIncrease

SpeedDownWeight = (BaseDPS - SpeedDownDPS) / (BaseAttacks - SpeedDownAttacks) / (SpeedDownINTDPS - SpeedDownDPS) / SpeedDecrease

SpeedRangeWeight = (SpeedUPDPS - SpeedDownDPS) / (SpeedUPAttacks - SpeedDownAttacks) / (SpeedDownINTDPS - SpeedDownDPS) / (SpeedDecrease + SpeedIncrease)

The last step is to average all three weights together:

(SpeedUPWeight + SpeedDownWeight + SpeedRangeWeight) / 3.0

This gives the actual value of spell speed with respect to INT, and is the equivalent of the Crit/Det/WD weights we’re used to. There is still the chance, however, for fairly small changes in speed to cause an unusually large swing in damage values, so I also usually cap the weight of speed at twice the larger of Crit and Determination’s weights, just so I don’t end up seeing spell speed weights of .75 or .9 or even higher. This is mostly me hedging against weird behavior that I haven’t been able to detect in the simulations yet, but I could be convinced to strip this out in favor of the raw numbers.
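Putting the three weights and the final average together in code looks like the following; all of the input numbers are placeholders, and the cap described above is omitted:

```python
# The three speed weights and their average; all numbers are placeholders.
def speed_weight(dps_hi, dps_lo, attacks_hi, attacks_lo,
                 int_dps, int_base_dps, speed_change):
    # DPS per extra attack, relative to DPS per point of INT,
    # per point of speed spent to cause those attacks.
    return ((dps_hi - dps_lo) / (attacks_hi - attacks_lo)
            / (int_dps - int_base_dps) / speed_change)

def combined_speed_weight(base, up, down):
    up_w = speed_weight(up["dps"], base["dps"],
                        up["attacks"], base["attacks"],
                        base["int_dps"], base["dps"], up["delta"])
    down_w = speed_weight(base["dps"], down["dps"],
                          base["attacks"], down["attacks"],
                          down["int_dps"], down["dps"], down["delta"])
    range_w = speed_weight(up["dps"], down["dps"],
                           up["attacks"], down["attacks"],
                           down["int_dps"], down["dps"],
                           up["delta"] + down["delta"])
    return (up_w + down_w + range_w) / 3.0

weight = combined_speed_weight(
    base={"dps": 1000.0, "attacks": 100, "int_dps": 1000.5},
    up={"dps": 1010.0, "attacks": 101, "delta": 20},
    down={"dps": 990.0, "attacks": 99, "int_dps": 990.5, "delta": 20})
```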

Generating BiS Gear Sets

Once I had the capability to determine the stat weights of a given gear set, I then wanted this program to figure out for me which set of gear would produce the highest DPS, and then tell me what the strongest set that met a particular accuracy requirement was. The method I chose to do this was as follows:

1. Figure out the stat weights for the starting set of gear
2. For each slot, pick the strongest piece of gear given the current working stat weights.
3. If any adjustments to gear were made as part of step 2, recalculate the stat weights.
4. If the total value of this set is higher than the stored “best” set, both using the new set’s weights and using the best set’s weights, store this new set as the best.
5. If the total value of this set is higher than the stored best set while using the new stat weights, but the best set is stronger while using its own weights, average the two sets of stat weights together and store them as the stat weights for both sets. Whichever of the two sets is stronger is stored as the best set.
6. If at any point we arrive at a set we made before, set its stat weights to those of the current best set.
7. Break out of the loop if we stop getting better sets or if we start bouncing back and forth between two results.
8. The resultant gearset of the above steps will be the blind stat-weight optimized set, ignoring any accuracy or skill/spell speed breakpoints.
9. If there is a breakpoint in speed or accuracy to calculate, build a list of all potential gear pieces that have more of the needed stat than the “DPS BiS” item for that slot, then recursively build gearsets swapping in those pieces. Whichever set meets the accuracy and speed requirement that also has the highest total weight (stat weights are not recalculated during this process), return that set.
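The convergence loop in steps 1-7 can be condensed into a sketch like this. Step 5's weight averaging, step 6, and step 9's breakpoint pass are omitted, and all of the helper names and toy gear data are hypothetical:

```python
# Condensed sketch of the loop in steps 1-7. Step 5's weight averaging
# and step 9's breakpoint pass are omitted; all helpers are hypothetical.
def optimize(gear_by_slot, calc_weights, score, max_iters=50):
    current = {slot: pieces[0] for slot, pieces in gear_by_slot.items()}
    weights = calc_weights(current)          # Step 1
    best, best_weights = current, weights

    def total(gearset, w):
        return sum(score(piece, w) for piece in gearset.values())

    seen = [current]
    for _ in range(max_iters):
        # Step 2: strongest piece per slot under the working weights.
        new = {slot: max(pieces, key=lambda p: score(p, weights))
               for slot, pieces in gear_by_slot.items()}
        if new in seen:
            break                            # Step 7: converged/oscillating
        weights = calc_weights(new)          # Step 3: recalculate
        # Step 4: promote the new set only if it wins under both weight sets.
        if (total(new, weights) > total(best, weights)
                and total(new, best_weights) > total(best, best_weights)):
            best, best_weights = new, weights
        seen.append(new)
        current = new
    return best

# Toy demo: fixed weights, two slots, dict-based "pieces".
demo_weights = lambda gearset: {"int": 1.0, "crit": 0.22}
demo_score = lambda piece, w: sum(piece[s] * w[s] for s in piece)
best_set = optimize(
    {"head": [{"int": 50, "crit": 30}, {"int": 60, "crit": 10}],
     "body": [{"int": 80, "crit": 0}, {"int": 70, "crit": 40}]},
    demo_weights, demo_score)
```

Tracking previously seen sets is what lets the real version detect the bounce-back-and-forth case in step 7 instead of looping forever.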

Current Stat Weights

Updated for 3.3 as of 6/7:

Weapon Damage: 9.8186
Spell Speed: 0.3802
Crit Rate: 0.2028
Determination: 0.1647

Gear Set Recommendations

Gearset recommendations pending for 3.3

Change Log

6/7 - Updated formulas, speed weight calculation method