How can I smooth my data?
Average, dampen, or filter? Computers can record engine information at an incredible pace, but humans still absorb it slowly. To maximize a test’s value, raw data can be mathematically processed (in DYNO-MAX) by one or more of these methods.
Averaging: Takes anywhere from two to thousands of consecutive data points and combines them into a single data point. It is analogous to driving cross-country, taking MPH readings every second, then reporting this data as a single average speed for the entire trip (or alternatively as a daily MPH average).
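DYNO-MAX's internal routine isn't shown here, but the idea can be sketched in a few lines of Python: each block of consecutive readings is collapsed into one averaged point, so the output has fewer points than the input. The function name and sample values are illustrative, not from DYNO-MAX.

```python
def block_average(samples, block_size):
    """Collapse each run of `block_size` consecutive readings
    into a single averaged data point."""
    return [
        sum(samples[i:i + block_size]) / len(samples[i:i + block_size])
        for i in range(0, len(samples), block_size)
    ]

# Six one-second MPH readings reduced to two averaged points
speeds = [58, 62, 60, 64, 59, 61]
print(block_average(speeds, 3))
```

Note that averaging with a block size of 3 leaves you with a third as many points, which is exactly the data-reduction effect the paragraph above describes.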
Dampening: Unlike simple averaging, which reduces the quantity of data points, dampening does not condense data. Rather, it reduces the significance of any single data point by averaging it with one or more sets of adjacent points on either side of the center reading. This is equivalent to using an oil-filled pressure gauge to get a steady reading from a pulsating source.
Spike Filtering: Occasionally, large transient pulses (either real or artificially induced) creep into data. Such "noise" spikes appear as distracting needles in a graph or data listing, especially to novice operators. Filters use algorithms to "intelligently" strip these high-frequency pulses out of otherwise meaningful data.
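One widely used approach is a median-based filter: a reading that differs from the median of its neighbourhood by more than some threshold is treated as a spike and replaced. This is a generic sketch of the technique, not DYNO-MAX's proprietary algorithm, and the `threshold` value would need tuning to the channel being cleaned.

```python
def despike(samples, window=3, threshold=10.0):
    """Replace any reading that differs from the median of its
    local neighbourhood by more than `threshold` with that median,
    removing isolated 'needles' while leaving real trends alone."""
    out = list(samples)
    half = window // 2
    for i in range(len(samples)):
        lo = max(0, i - half)
        hi = min(len(samples), i + half + 1)
        neighbourhood = sorted(samples[lo:hi])
        median = neighbourhood[len(neighbourhood) // 2]
        if abs(samples[i] - median) > threshold:
            out[i] = median   # the spike is replaced, not smeared
    return out

# The 250 needle is stripped; the surrounding data is untouched
print(despike([100, 101, 250, 102, 99]))
```

Unlike averaging or dampening, a median filter leaves normal readings completely unaltered, which is why it suits occasional large transients rather than continuous noise.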