Physics: how accuracy depends on the number of measurements. Physical quantities

Measurement error is the deviation of the result of measuring a physical quantity (for example, pressure) from the true value of the measured quantity. The error arises from imperfection of the measurement method or of the technical means of measurement, insufficient allowance for the influence of external conditions on the measurement process, the specific nature of the measured quantities themselves, and other factors.

The accuracy of measurements is characterized by the closeness of their results to the true value of the measured quantity. A distinction is made between absolute and relative measurement error.

The absolute measurement error is the difference between the measurement result Q and the actual value X of the measured quantity:

ΔX = Q − X. (6.16)

The absolute error is expressed in units of the measured quantity (kgf/cm², etc.).

The relative measurement error characterizes the quality of the measurement result and is defined as the ratio of the absolute error ΔX to the actual value:

δX = ΔX / X. (6.17)

The relative error is usually expressed as a percentage.

Depending on the reasons leading to the measurement error, a distinction is made between systematic and random errors.

Systematic measurement errors are errors that manifest themselves in the same way when measurements are repeated under the same conditions, that is, they remain constant or their values change according to a definite law. Such measurement errors can be determined quite accurately.

Random errors are errors whose values change in an irregular, unpredictable way when repeated measurements of a physical quantity are made under the same conditions.

The error of instruments is assessed by verification, i.e., a set of operations aimed at comparing the readings of the instruments with the actual value of the measured quantity. When verifying working instruments, the value of exemplary measures or the indications of exemplary instruments is taken as the actual value of the measured quantity. When evaluating the error of exemplary measuring instruments, the value of reference measures or the readings of reference instruments is taken as the actual value.

The basic error is the error inherent in a measuring instrument under normal conditions (atmospheric pressure, air temperature 20 °C, humidity 50–80%).

Additional error is the error caused by the deviation of one of the influencing quantities (for example, the temperature of the measurement environment) outside normal conditions.

The concept of accuracy classes. An accuracy class is a generalized characteristic of measuring instruments, determined by the limits of permissible basic and additional errors, as well as by other properties of these instruments that affect their accuracy. The accuracy class is expressed by a number equal to the value of the permissible error.

An exemplary pressure gauge (sensor) of accuracy class 0.4 has a permissible error of 0.4% of the measurement limit; i.e., the error of an exemplary pressure gauge with a measurement limit of 30 MPa must not exceed ±0.12 MPa.

Accuracy classes of pressure measuring devices: 0.16; 0.25; 0.4; 0.6; 1.0; 1.5; 2.5.
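The arithmetic above can be sketched in a few lines. This is a minimal illustration, assuming (as in the gauge example) that the class number gives the maximum basic error as a percentage of the measurement limit; the function name is made up for this sketch.

```python
def permissible_error(accuracy_class: float, measurement_limit: float) -> float:
    """Largest permissible absolute error implied by an accuracy class,
    taken as class-percent of the measurement limit."""
    return accuracy_class / 100.0 * measurement_limit

# Class 0.4 gauge with a 30 MPa measurement limit:
print(permissible_error(0.4, 30.0))  # ≈ 0.12, i.e. readings must stay within ±0.12 MPa
```

The same sketch applies to any class in the series 0.16 … 2.5: a class 1.5 gauge with a 10 MPa limit would be allowed an error of up to ±0.15 MPa.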

The sensitivity of a device is the ratio of the displacement Δn of its pointer (the travel of the arrow) to the change in the measured quantity that caused this displacement. As a rule, the higher the accuracy of the instrument, the higher its sensitivity.

The main characteristics of measuring devices are determined in special tests, including calibration, in which the calibration characteristic of the device is determined, i.e., the relationship between its readings and the values of the measured quantity. The calibration characteristic is presented as a graph, formula, or table.

In the practical use of measurements, it is important to assess their accuracy. The term "measurement accuracy", that is, the degree to which the measurement results approach some real value, has no strict definition and is used for qualitative comparison of measurement operations. For quantitative assessment, the concept of "measurement error" is used (the smaller the error, the higher the accuracy).

An error is the deviation of the measurement result from the actual (true) value of the measured quantity. It should be borne in mind that the true value of a physical quantity is considered unknown and is used in theoretical research. The actual value of a physical quantity is established experimentally on the assumption that the result of the experiment (measurement) approaches the true value as closely as possible. Evaluation of the measurement error is one of the important measures for ensuring the uniformity of measurements.

Measurement errors are usually given in the technical documentation for measuring instruments or in regulatory documents. However, since the error also depends on the conditions under which the measurement is carried out, on the experimental error of the methodology, and on the subjective characteristics of the person when he participates directly in the measurements, one can speak of several components of the measurement error, or of the total error.

The number of factors affecting the measurement accuracy is quite large, and any classification of measurement errors (Fig. 2) is to a certain extent arbitrary, since various errors, depending on the conditions of the measurement process, appear in different groups.

2.2 Types of errors

Measurement error is the deviation of the measurement result X from the true value X_t of the measured quantity. When determining measurement errors, the actual value X_d is used in practice instead of the true value X_t of the physical quantity.

Depending on the form of expression, a distinction is made between absolute, relative and reduced measurement errors.

The absolute error is defined as the difference Δ = X − X_t or Δ = X − X_d, and the relative error as the ratio δ = ±Δ / X_d · 100%.

The reduced error is γ = ±Δ / X_N · 100%, where X_N is the normalizing value of the quantity, for which the measuring range of the device, the upper measurement limit, etc. is used.
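The three forms of error just defined can be illustrated with a short sketch. All numbers here are hypothetical readings invented for the example, and the function names are not from any library:

```python
def absolute_error(x, x_d):
    return x - x_d                      # Δ = X − X_d

def relative_error(x, x_d):
    return abs(x - x_d) / x_d * 100.0   # δ, percent of the actual value

def reduced_error(x, x_d, x_n):
    return abs(x - x_d) / x_n * 100.0   # γ, percent of the normalizing value X_N

x, x_d, x_n = 10.2, 10.0, 50.0          # made-up reading, actual value, range
print(absolute_error(x, x_d))           # ≈ 0.2, in units of the measured quantity
print(relative_error(x, x_d))           # ≈ 2.0 %
print(reduced_error(x, x_d, x_n))       # ≈ 0.4 %
```

Note how the same absolute error of 0.2 looks small relative to the 50-unit range (γ) but larger relative to the actual value (δ).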

In the case of multiple measurements of the parameter, the arithmetic mean is used in place of the true value:

X̄ = (1/n) Σ X_i,

where X_i is the result of the i-th measurement and n is the number of measurements.

The quantity X̄, obtained in one series of measurements, is a random approximation to the true value. To assess its possible deviations, the estimate of the standard deviation of the arithmetic mean is determined:

S(X̄) = √[ Σ (X_i − X̄)² / (n(n − 1)) ].

To assess the scatter of individual measurement results X_i relative to the arithmetic mean, the sample standard deviation is determined:

σ = √[ Σ (X_i − X̄)² / (n − 1) ].

These formulas are used provided that the measured value is constant during the measurement.

These formulas agree with the central limit theorem of probability theory, according to which the arithmetic mean of a series of measurements always has a smaller error than each individual measurement:

S(X̄) = σ / √n.

This formula reflects the fundamental law of the theory of errors. It follows that to increase the accuracy of the result (with the systematic error excluded) by a factor of 2, the number of measurements must be increased by a factor of 4; to increase the accuracy by a factor of 3, the number of measurements must be increased by a factor of 9, and so on.

It is necessary to distinguish clearly between the quantities S and σ: the first is used when assessing the error of the final result, the second when assessing the error of the measurement method. The most probable error of a single measurement is approximately 0.67S.
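The statistics above can be sketched on a hypothetical series of repeated readings (the numbers are invented; the function names are made up for this illustration):

```python
import math

def mean(xs):
    return sum(xs) / len(xs)

def sample_std(xs):
    """σ: scatter of individual results about the mean."""
    m = mean(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

def std_of_mean(xs):
    """S(X̄) = σ / sqrt(n): error of the final (averaged) result."""
    return sample_std(xs) / math.sqrt(len(xs))

readings = [9.8, 10.1, 10.0, 9.9, 10.2]   # hypothetical repeated measurements
print(mean(readings))                      # ≈ 10.0
print(sample_std(readings))                # σ ≈ 0.158
print(std_of_mean(readings))               # S(X̄) ≈ 0.071, smaller by 1/sqrt(5)
```

Because S(X̄) falls only as 1/√n, halving the error of the final result requires four times as many readings, in line with the fundamental law stated above.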

Depending on the nature of the manifestation, the causes of occurrence and the possibilities of elimination, a distinction is made between systematic and random measurement errors, as well as gross errors (slips).

The systematic error remains constant or changes regularly with repeated measurements of the same parameter.

The random error changes randomly under the same measurement conditions.

Gross errors (slips) arise due to erroneous actions of the operator, malfunction of measuring instruments or abrupt changes in measurement conditions. As a rule, gross errors are revealed as a result of processing the measurement results using special criteria.

The random and systematic components of the measurement error appear simultaneously, so that when they are independent the total error is equal to their sum.

The value of the random error is not known in advance; it arises due to many unrefined factors. It is impossible to exclude random errors from the results, but their influence can be reduced by processing the measurement results.

For practical purposes, it is very important to be able to correctly formulate the requirements for the measurement accuracy. For example, if Δ = 3σ is taken as the permissible manufacturing error, then by increasing the requirements for accuracy (for example, up to Δ = σ), while maintaining the manufacturing technology, we increase the probability of rejection.
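The rejection-probability claim can be made concrete under the usual assumption that the manufacturing error is normally distributed with zero mean; the probability of exceeding a tolerance of k·σ is then computable with the standard normal CDF (here via `math.erf` from the Python standard library):

```python
import math

def reject_probability(k: float) -> float:
    """P(|error| > k·σ) for a zero-mean normally distributed error."""
    return 1.0 - math.erf(k / math.sqrt(2.0))

print(reject_probability(3.0))  # ≈ 0.0027: tolerance Δ = 3σ rejects ~0.27 % of parts
print(reject_probability(1.0))  # ≈ 0.3173: tightening to Δ = σ rejects ~32 %
```

This shows why tightening the accuracy requirement without changing the technology sharply increases the rejection rate.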

As a rule, it is believed that systematic errors can be detected and eliminated. However, in real conditions, it is impossible to completely eliminate these errors. There are always some non-excluded residues that need to be taken into account in order to assess their boundaries. This will be the systematic measurement error.

In other words, in principle, the systematic error is also random and the indicated division is due only to the established traditions of processing and presenting measurement results.

In contrast to the random error, which is considered as a whole regardless of its sources, the systematic error is considered by components, according to the sources of its occurrence. A distinction is made between the subjective, methodological, and instrumental components of the error.

The subjective component of the error is associated with the individual characteristics of the operator. Typically, this error arises from mistakes in reading the indications (approximately 0.1 of a scale division) and from poor operator skills. For the most part, however, the systematic error arises from the methodological and instrumental components.

The methodological component of the error is due to the imperfection of the measurement method, methods of using measuring instruments, incorrect calculation formulas and rounding of results.

The instrumental component arises from the intrinsic error of the measuring instruments, determined by the accuracy class, the influence of the measuring instruments on the result, and the limited resolution of the measuring instruments.

The expediency of dividing the systematic error into methodological and instrumental components is explained by the following:

To increase the measurement accuracy, limiting factors can be identified, and, therefore, a decision can be made to improve the methodology or choose more accurate measuring instruments;

It becomes possible to determine the component of the total error, which increases over time or under the influence of external factors, and, therefore, purposefully carry out periodic checks and certification;

The instrumental component can be assessed before the development of the methodology, and the potential accuracy of the chosen method will be determined only by the methodological component.

2.3 Measurement quality indicators

The uniformity of measurements, however, cannot be ensured merely by agreement of errors. When making measurements, it is also important to know the measurement quality indicators. The quality of measurements is understood as the set of properties that ensure results with the required accuracy characteristics, in the required form, and on time.

The quality of measurements is characterized by such indicators as accuracy, correctness, and reliability. These indicators are determined on the basis of estimates, which must satisfy the requirements of consistency, unbiasedness, and efficiency.

The true value of the measured quantity differs from the arithmetic mean X̄ of the observation results by the value of the systematic error Δc, i.e., X = X̄ − Δc. If the systematic component is excluded, then X = X̄.

However, because of the limited number of observations, the quantity X̄ also cannot be determined exactly. One can only estimate its value and indicate, with a certain probability, the boundaries of the interval in which it lies. An estimate of a numerical characteristic of the distribution law of X, represented by a point on the number axis, is called a point estimate. Unlike the numerical characteristics themselves, estimates are random variables, and their values depend on the number of observations n. A consistent estimate is an estimate that converges in probability to the estimated value as n → ∞.

An unbiased estimate is an estimate whose mathematical expectation is equal to the estimated value.

An effective estimate is one that has the smallest variance, σ² = min.

The listed requirements are satisfied by the arithmetic mean X̄ of the results of n observations.

Thus, the result of a single measurement is a random variable. Measurement accuracy is then the closeness of the measurement results to the true value of the measured quantity. If the systematic components of the error are excluded, the accuracy of the measurement result is characterized by the degree of scatter of its values, i.e., by the dispersion. As shown above, the variance of the arithmetic mean is n times smaller than the variance of an individual observation result.

Figure 3 shows the distribution density of the individual and of the averaged measurement result. The narrower shaded area corresponds to the probability density of the distribution of the mean. The correctness of measurements is determined by the closeness of the systematic error to zero.

The reliability of measurements is determined by the degree of confidence in the result and is characterized by the probability that the true value of the measured quantity lies within the indicated vicinity of the actual one. These probabilities are called confidence probabilities, and the boundaries (of the vicinity) are called confidence limits. In other words, the reliability of a measurement is the closeness to zero of its non-excluded systematic error.

The confidence interval with boundaries (confidence limits) from X̄ − Δd to X̄ + Δd is the interval of values of the random error that, with a given confidence probability P_d, covers the true value of the measured quantity:

P_d (X̄ − Δd ≤ X ≤ X̄ + Δd).

With a small number of measurements (n < 20), the normal law cannot be used to determine the confidence interval, since the normal distribution law describes the behavior of a random error, in principle, only for an infinitely large number of measurements.

Therefore, for a small number of measurements the Student distribution, or t-distribution (proposed by the English statistician Gosset, who published under the pseudonym "Student"), is used; it makes it possible to determine confidence intervals with a limited number of measurements. The boundaries of the confidence interval are determined by the formula:

Δd = t · S(X̄),

where t is the Student's distribution coefficient, which depends on the given confidence level P d and the number of measurements n.

As the number of observations n increases, the Student distribution rapidly approaches the normal one and coincides with it already at n ≥ 30.
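The formula Δd = t · S(X̄) can be sketched for a small series. The t coefficients below are copied from a standard two-sided Student table for confidence probability P = 0.95 (keyed by the number of measurements n, i.e. n − 1 degrees of freedom); the readings and function name are made up for the example:

```python
import math

# Two-sided Student coefficients for P = 0.95, keyed by n (df = n - 1).
T_95 = {2: 12.71, 3: 4.30, 4: 3.18, 5: 2.78, 10: 2.26}

def confidence_half_width(xs, t_table=T_95):
    """Δd = t · S(X̄) for a small series of measurements xs."""
    n = len(xs)
    m = sum(xs) / n
    sigma = math.sqrt(sum((x - m) ** 2 for x in xs) / (n - 1))  # σ
    s_mean = sigma / math.sqrt(n)                               # S(X̄)
    return t_table[n] * s_mean                                  # Δd

readings = [10.1, 9.9, 10.0, 10.2, 9.8]      # hypothetical series, n = 5
print(confidence_half_width(readings))        # ≈ 0.197: X lies in X̄ ± Δd with P ≈ 0.95
```

For the same scatter, a normal-law factor of about 1.96 would give a noticeably narrower (and, for n = 5, unjustified) interval, which is exactly why the Student coefficients are used at small n.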

It should be noted that measurement results that do not have reliability, that is, a degree of confidence in their correctness, are of no value. For example, a sensor of a measuring circuit can have very high metrological characteristics, but the influence of errors from its installation, external conditions, methods of registration and signal processing will lead to a large final measurement error.

Along with such indicators as accuracy, reliability, and correctness, the quality of measurement operations is also characterized by the convergence and reproducibility of results. These indicators are used most often in assessing the quality of tests and characterize their accuracy.

It is obvious that two tests of the same object by the same method do not give identical results. An objective measure of their quality can be statistically substantiated estimates of the expected closeness of the results of two or more tests obtained in strict adherence to the test method. Convergence and reproducibility are taken as such statistical estimates of the consistency of test results.

Convergence is the closeness of the results of two tests obtained by the same method, on identical installations, in the same laboratory. Reproducibility differs from convergence in that the two results must be obtained in different laboratories.



§ 32. ACCURACY AND ERRORS OF MEASUREMENT.

No measurement can be made with absolute precision. There is always some difference between the measured value of a quantity and its actual value, which is called the measurement error. The smaller the measurement error, the naturally higher the measurement accuracy.

Measurement accuracy characterizes the error that is inevitable even when working with the most accurate measuring tool or device of a given type. It is influenced by the properties of the material of the measuring tool and by the design of the tool. Full measurement accuracy can be achieved only if the measurement is made according to the rules.

The main reasons for lowering the measurement accuracy can be:

1) unsatisfactory condition of the tool: damaged edges, dirt, incorrect position of the zero mark, malfunction;

2) careless handling of the tool (impact, heating, etc.);

3) inaccuracy in the installation of the tool or the measured workpiece relative to the tool;

4) deviation of the temperature at which the measurement is made (the normal temperature at which measurements should be made is 20 °C);

5) poor knowledge of the device or inability to use the measuring tool; wrong choice of measuring instrument.

The degree of measurement accuracy of a device depends on its care and correct use.

An increase in measurement accuracy can be achieved by repeated measurement, followed by taking the arithmetic mean of the results of the several measurements.

When starting to measure, it is necessary to know well the measuring instruments, the rules of handling the instrument and master the techniques of using it.


Part one

Estimation of measurement errors. Recording and processing of results

In the exact sciences, in particular in physics, particular importance is attached to the problem of assessing the accuracy of measurements. That no measurement can be absolutely exact is a fact of general philosophical significance: in the course of an experiment we always obtain an approximate value of a physical quantity, only approaching its true value to one degree or another.

Measurements, measurement accuracy indicators

Physics, as one of the natural sciences, explores the material world around us using the physical method of research, the most important component of which is the comparison of data obtained by theoretical calculation with experimental (measured) data.

An essential part of teaching physics at the university is laboratory work, in the course of which students measure various physical quantities.

When measured, physical quantities are expressed as numbers that indicate how many times the measured quantity is greater or less than another quantity whose value is taken as a unit. That is, measurement is understood as "a cognitive process consisting in comparing a given physical quantity, by means of a physical experiment, with a known physical quantity taken as the unit of measurement."

Measurements are performed using measures and measuring instruments.

A measure is a physical embodiment of a unit of measurement, or of a fraction or multiple of its value (a weight, a measuring flask, boxes of electrical resistances or capacitances, etc.).

A measuring instrument is a device that makes it possible to read the value of the measured quantity directly.

Regardless of the purpose and principle of operation, any measuring device can be characterized by four parameters:

1) The measurement limits indicate the range of the measured quantity accessible to the device. For example, a vernier caliper measures linear dimensions in the range from 0 to 18 cm, a milliammeter measures currents from -50 to +50 mA, etc. On some devices the measurement limits can be changed (switched). Multi-range instruments can have several scales with different numbers of divisions. The reading should be taken on the scale whose number of divisions is a multiple of the upper limit of the device.

2) The division value C determines how many units of measurement (or fractions of a unit) are contained in one (smallest) division of the instrument scale. For example, the scale division of a micrometer is C = 0.01 mm/div (or 10 μm/div), and that of a voltmeter C = 2 V/div, etc. If C is the same over the entire scale (a uniform scale), the division value is found by dividing the measurement limit x_nom of the device by the number N of scale divisions: C = x_nom / N.

3) The sensitivity α of an instrument shows how many minimum scale divisions fall on a unit of the measured quantity (or a part of it). From this definition it follows that the sensitivity is the reciprocal of the division value: α = 1/C. For example, the sensitivity of a micrometer is α = 1/0.01 = 100 div/mm (or 0.1 div/μm), and that of a voltmeter α = 1/2 = 0.5 div/V, etc.

4) The accuracy of the device characterizes the absolute error obtained when measuring with this device.

The accuracy of measuring instruments is characterized by the limiting calibration error Δx_cal. The scale or data sheet of the device gives the maximum absolute or relative calibration error, or indicates the accuracy class, which determines the systematic error of the device.

In order of increasing accuracy, electrical measuring instruments are divided into eight classes: 4.0; 2.5; 1.5; 1.0; 0.5; 0.2; 0.1, and 0.05. The number denoting the accuracy class is marked on the scale of the device and shows the largest permissible value of the basic error as a percentage of the measurement limit x_nom:

accuracy class = ε_pr = (Δx / x_nom) · 100%. (2)

There are devices (mainly of high accuracy), the accuracy class of which determines the relative error of the device in relation to the measured value.

If neither the device nor its data sheet gives an accuracy class or a formula for calculating the error, the instrumental error should be taken equal to half the scale division of the device.
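The instrument parameters discussed above fit into a short sketch. The voltmeter here (limit 100 V, 50 scale divisions) is a hypothetical example, not a specific device:

```python
x_nom = 100.0   # measurement limit, V (assumed for the example)
n_div = 50      # number of scale divisions (assumed)

C = x_nom / n_div        # division value: V per division
alpha = 1.0 / C          # sensitivity: divisions per volt
half_div_error = C / 2   # fallback instrumental error when no class is given

print(C, alpha, half_div_error)  # 2.0 V/div, 0.5 div/V, ±1.0 V
```

If the same voltmeter instead carried an accuracy class of 1.5, formula (2) would give a permissible basic error of 1.5% of 100 V, i.e. ±1.5 V.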

Measurements are divided into direct and indirect. In direct measurements, the desired physical quantity is determined directly from experiment: the value of the measured quantity is read from the scale of the device, or the number and value of measures, weights, etc. are counted. Direct measurements include, for example, weighing on a balance, determining the linear dimensions of a regular body with a caliper, and measuring time with a stopwatch.

In indirect measurements, the measured quantity is determined (calculated) from the results of direct measurements of other quantities that are related to it by a certain functional dependence. Examples of indirect measurements are determining the area of a table from its length and width, or the density of a body by measuring its mass and volume.

The quality of measurements is determined by their accuracy. In direct measurements, the accuracy of the experiments is established from the analysis of the accuracy of the method and instruments, as well as from the repeatability of the measurement results. The accuracy of indirect measurements depends both on the reliability of the data used for the calculation and on the structure of the formulas connecting these data with the desired value.
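How the structure of the formula affects the error of an indirect measurement can be sketched for the table-area example (S = a·b). In the simplest, worst-case estimate, the relative errors of the direct measurements of a product add; all numbers below are hypothetical:

```python
a, da = 2.00, 0.01    # length and its absolute error, m (made-up values)
b, db = 1.00, 0.01    # width and its absolute error, m (made-up values)

area = a * b
rel = da / a + db / b   # relative error of a product: simplest (additive) estimate
abs_err = area * rel    # converted back to an absolute error

print(area)             # ≈ 2.0 m²
print(rel * 100)        # ≈ 1.5 %
print(abs_err)          # ≈ 0.03 m²
```

The sketch makes the point of the paragraph above concrete: even with equal absolute errors in the direct measurements, the quantity measured with the smaller relative error (here the length) contributes less to the error of the indirect result.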

The accuracy of measurements is characterized by their error. The absolute measurement error is the difference between the value x_meas found in the experiment and the true value x_true of the physical quantity:

Δx = x_meas − x_true. (3)

To assess the accuracy of any measurements, the concept is also introduced relative error.

The relative measurement error is the ratio of the absolute measurement error to the true value of the measured quantity (it can be expressed as a percentage):

δ = Δx / x_true. (4)

As follows from (3) and (4), in order to find the absolute and relative measurement errors, one must know not only the measured value but also the true value of the quantity of interest. But if the true value is known, there is no need to measure. The purpose of measurement is always to find the previously unknown value of a physical quantity: if not its true value, then at least a value that differs little from it. Therefore formulas (3) and (4), which define the magnitudes of the errors, are unsuitable in practice. Instead of x_true, the arithmetic mean over multiple measurements is often used:

x̄ = (1/n) Σ x_i,

where x_i is the result of a single measurement and n is the number of measurements.