**Error bars** are used on graphs to indicate the error, or uncertainty, in a reported measurement. They give a general idea of how accurate a measurement is or, conversely, how far from the reported value the true (error-free) value might lie. Error bars often indicate one standard deviation of uncertainty, but may instead indicate the standard error. These quantities are not the same, so the measure selected should be stated explicitly in the graph or supporting text.
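To illustrate the difference between the two measures, the sketch below computes both the standard deviation and the standard error for a small set of hypothetical repeated measurements (the numbers are invented for illustration):

```python
import statistics

# Hypothetical repeated measurements of the same quantity
measurements = [9.8, 10.1, 9.9, 10.2, 10.0]

n = len(measurements)
mean = statistics.mean(measurements)
sd = statistics.stdev(measurements)  # sample standard deviation
sem = sd / n ** 0.5                  # standard error of the mean

print(f"mean               = {mean:.2f}")
print(f"standard deviation = {sd:.3f}")
print(f"standard error     = {sem:.3f}")
```

Because the standard error shrinks with the number of measurements while the standard deviation does not, error bars drawn from the standard error will generally be shorter, which is why a graph must say which one it shows.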

When graphing, the error bars should be equal to one standard deviation of uncertainty, and they are critical for determining the range of possible gradients. To find the maximum gradient, draw a straight line from the lowest possible value of the smallest measurement (the measured value minus its uncertainty) to the highest possible value of the largest measurement (the measured value plus its uncertainty). To find the minimum gradient, draw a straight line from the smallest measurement plus its uncertainty to the largest measurement minus its uncertainty.
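The max/min gradient construction above can be sketched numerically. This is a minimal example with invented data points, assuming every point carries the same uncertainty of one standard deviation and that the gradient is taken between the first and last points:

```python
# Hypothetical linear data set; every y value has the same uncertainty
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]
uncertainty = 0.3  # one standard deviation, assumed equal for all points

x_first, y_first = xs[0], ys[0]
x_last, y_last = xs[-1], ys[-1]
run = x_last - x_first

# Steepest line: lowest possible first point to highest possible last point
max_gradient = ((y_last + uncertainty) - (y_first - uncertainty)) / run

# Shallowest line: highest possible first point to lowest possible last point
min_gradient = ((y_last - uncertainty) - (y_first + uncertainty)) / run

# Gradient through the measured values themselves, for comparison
best_gradient = (y_last - y_first) / run

print(f"gradient range: {min_gradient:.2f} to {max_gradient:.2f}")
print(f"measured gradient: {best_gradient:.2f}")
```

The measured gradient always falls between the minimum and maximum, and the half-width of that range is often quoted as the uncertainty in the gradient itself.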