Respond to the discussion post below with YOUR educated opinion in 3-4 sentences WITH scholarly source backing it up
My company outputs Kodak and Epson proofs for color matching on printing presses. Each proof has to be measured to ensure it meets our quality standard of 2 delta E. (Delta E is a measure of the distance between two colors in a color space: a difference of more than 3 delta E can be a pretty big shift, and in some colors even 1 delta E is discernible by the human eye.) The industry standard is 3 delta E, so we hold ourselves to a tighter tolerance to ensure we meet and exceed our customers' expectations.

An ongoing record is kept of these measurements for every proof that comes out of our devices. If our proofers are maintained properly and the measuring devices are calibrated to the manufacturer's specifications, these measurements fall within a normal distribution. If measurements start to trend away from our standard (or the distribution starts to skew), it signals an issue that needs to be addressed. To ensure that we are seeing a true normal distribution, it is important that we use the same color space for every measurement (the L*a*b* color space) and keep our record-keeping current. Our ISO coordinator audits the measurements as a backup to our proofing operators to confirm that we are following process.
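For the respondent's benefit, the delta E check described above can be sketched in a few lines. The post does not say which delta E formula the shop uses, so this assumes the simplest one (CIE76, plain Euclidean distance in L*a*b* space); the color values and the 2.0 tolerance are illustrative, with the tolerance taken from the post.

```python
import math

def delta_e_76(lab1, lab2):
    """CIE76 delta E: Euclidean distance between two L*a*b* colors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

# Hypothetical target (press) and measured (proof) L*a*b* values.
target = (52.0, 38.0, 20.0)
proof = (51.2, 39.1, 20.5)

de = delta_e_76(target, proof)
print(f"delta E = {de:.2f}")
print("within 2.0 tolerance:", de <= 2.0)
```

Note that later formulas (CIE94, CIEDE2000) weight lightness and chroma differently to better match perceived difference, so a shop's measuring device may report slightly different numbers than this plain distance.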
Typical causes of measured variances are environmental factors (how hot, humid, cool, or dry our proofing room is), ink contaminants, the substrate type used in the proofers, the substrate source (sourcing one brand/type of stock from China can yield different results than sourcing the same brand/type of stock in the US), and substrate age. However, none of these factors should affect the normal distribution of results unless there is an error in data input or a calibration issue.
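The "distribution starts to skew" signal from the post can also be checked numerically. This is a minimal sketch, assuming a log of delta E readings is available as a list; the readings, the skewness formula choice (adjusted Fisher-Pearson, the common spreadsheet version), and the 0.5 "moderately skewed" rule of thumb are all illustrative, not from the post.

```python
import statistics

def sample_skewness(xs):
    """Adjusted Fisher-Pearson sample skewness of a list of readings."""
    n = len(xs)
    mean = statistics.fmean(xs)
    s = statistics.stdev(xs)  # sample standard deviation
    return (n / ((n - 1) * (n - 2))) * sum(((x - mean) / s) ** 3 for x in xs)

# Hypothetical daily delta E log: mostly well-behaved, a few drifting proofs.
readings = [1.1, 0.9, 1.3, 1.0, 1.2, 0.8, 1.1, 2.6, 2.9, 3.1]

skew = sample_skewness(readings)
print(f"skewness = {skew:.2f}")
if abs(skew) > 0.5:  # rough rule of thumb for "moderately skewed"
    print("distribution is skewing; check calibration / environment")
```

A run of high readings pulls the right tail out and pushes skewness positive, which is exactly the early-warning pattern the post describes before the mean itself drifts past tolerance.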